Rising threat of audio deepfakes in global politics: The new frontier of information warfare

Robocalls featuring a fabricated voice resembling US President Joe Biden

Surinder Singh Oberoi

Illustrative image of Joe Biden and Donald Trump engaging in a war of words

New Delhi: The office of New Hampshire's attorney general in the United States is investigating potential voter suppression after receiving complaints about robocalls featuring a fabricated voice resembling US President Joe Biden.

The calls, which circulated on the day of the state's first-in-the-nation primary, urged Democratic voters to abstain, falsely asserting that voting would aid the Republicans in re-electing Donald Trump.

Media reports said the White House had confirmed that the robocall was fraudulent, emphasising the challenges posed by emerging technologies.

White House press secretary Karine Jean-Pierre highlighted the President's concerns about the risks associated with deepfakes, citing the potential for fake images and misinformation to be amplified through advanced technologies.

This incident underscores a broader trend of audio deepfakes being used in information warfare, particularly during significant election periods globally.

In September, NewsGuard uncovered a network of TikTok accounts masquerading as credible news outlets, disseminating conspiracy theories and political misinformation. Notably, deepfake voice-overs, including one of former President Barack Obama, garnered widespread attention, generating hundreds of millions of views, and were seemingly created using ElevenLabs' tool.

The misuse of synthetic audio to influence politics and elections has been observed in various countries, including the UK, Nigeria, Sudan, Ethiopia, and Slovakia throughout 2023.

Prime Minister Modi's Concerns

In response to the rising threat of deepfakes, Prime Minister Narendra Modi expressed concerns about the potential misuse of AI in social media. PM Modi emphasised the need for caution in adopting new technologies like AI, particularly addressing the misleading and potentially dangerous nature of deepfake videos.

He called for a global framework to regulate AI technology, advocating for measures to verify the authenticity of videos and images before acceptance.

During the virtual G20 summit on November 22, PM Modi reiterated the need for global regulations on AI to ensure the safety and societal benefits of these technologies.

His stance reflects the growing recognition of the impact that AI can have on political processes and the urgency to establish safeguards against misuse.

AI-powered voice-cloning technologies, once limited, are becoming more accessible online, raising alarm among researchers.

Experts attribute the rise of audio deepfakes to the availability of affordable and efficient AI tools from companies like ElevenLabs, Resemble AI, Respeecher, and Replica Studios.

In November 2023, a deepfake video featuring actress Rashmika Mandanna went viral, leading to the arrest of the main perpetrator by Delhi Police.

The market for text-to-speech tools has experienced rapid growth, with companies such as Voice AI providing free apps, and others like Replica Studios and Respeecher charging nominal fees for various creative purposes.

The Financial Times reported that Microsoft's research division announced last year the development of VALL-E, an AI model capable of cloning a voice from just three seconds of recorded speech.

ElevenLabs, co-founded by former Google and Palantir employees Piotr Dabkowski and Mati Staniszewski, offers a range of AI audio generation tools, from free basic versions to more sophisticated subscriptions.

Despite this, the origins of politically motivated deepfakes often remain elusive, raising concerns about potential abuses in an unregulated space.

US intelligence agencies have reported a significant uptick in personalized AI scams, prompting political experts to sound the alarm about the potential proliferation of viral deepfake audio clips for use in robocalling or political campaigns.

Recognising the gravity of the situation, some companies are taking proactive measures to combat disinformation. Microsoft has issued an ethical statement for its AI audio tool, ElevenLabs has developed detection tools for its audio recordings, and Resemble is exploring the incorporation of inaudible watermarks.

The Financial Times reported that during the 2023 elections in Nigeria and Slovakia, AI-manipulated clips were deployed to falsely implicate political candidates.

The challenge of detecting AI-created audio, compared to video, is a significant obstacle. An emerging market for technology-assisted detection is underway, with companies like McAfee developing tools such as Project Mockingbird to identify fake audio. Online platforms, including Meta and TikTok, are investing in labelling and detection capabilities.

The escalating use of deepfakes in politics has become a pressing concern, prompting an urgent call for policymakers to establish protections against this evolving threat.
