AI in politics: How lines between reality and 'deepfake' are blurring
It is foreseeable that the use of AI and deepfakes in the political realm, as well as the confusion regarding any content’s authenticity, will increase manifold in the days ahead, with major polls scheduled in the US, India, European Union, Russia, Taiwan and beyond in 2024
Back in December last year, Pakistan Tehreek-e-Insaf (PTI) held an online campaign rally featuring a video of former Prime Minister Imran Khan sitting in front of a Pakistani flag, addressing his supporters and urging them to vote in large numbers for his party.
But in reality, Khan was detained at the time, and remains so, making it impossible for him to appear before the public in person.
Instead, the four-and-a-half-minute video was a deepfake: an AI-generated likeness of Khan, paired with a cloned voice that closely resembled his own, to give the illusion of authenticity.
Interestingly, Khan recently stirred yet another debate over AI, claiming that an essay published under his name by The Economist a fortnight ago was AI-generated.
In the essay – which went viral on social media and drew objections from the caretaker government – the former Pakistan Prime Minister had expressed apprehensions that forthcoming general elections may not take place at all.
However, he later retracted this stance, acknowledging that he had indeed "verbally dictated" the article.
Such contradictions in Khan's words and actions further heighten concerns about the use of AI in politics. We have finally reached a point where it is no longer possible, at least not easily, to distinguish reality from the simulation of reality.
If only Jean Baudrillard could live to see this day!
Meanwhile, this is also a fascinating case study of how AI intersects with democracy, believes Sabhanaz Rashid Diya, founding board director at Tech Global Institute, a tech policy think tank focused on advancing equity for the Global South.
"AI poses risks to public trust, which is a key element of a functional democracy," she told The Business Standard.
According to her, it is easy to point to any content and claim it is AI-generated, thereby shifting the burden of proof onto journalists and civil society groups trying to hold power to account.
As a result, it is foreseeable that the use of AI and deepfakes in the political realm, as well as the confusion regarding any content's authenticity, will increase manifold in the days ahead, with major polls scheduled in the US, India, European Union, Russia, Taiwan and beyond in 2024.
Policymakers around the world are worried over how AI-generated disinformation can be harnessed to manipulate voters and stoke divides in the lead-up to significant elections.
Syed Nazmus Sakib, an AI expert currently pursuing a PhD in machine learning at the University of Idaho, notes that AI could be used to create and spread misinformation and disinformation targeting specific candidates or issues. AI chatbots, he adds, could also incite extreme emotions by analysing which side the majority is taking on delicate issues.
In May last year, OpenAI CEO Sam Altman was also asked in a US Senate hearing if political organisations could use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways.
Altman acknowledged that he was indeed concerned that certain people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.
Indian Prime Minister Narendra Modi, too, recently said that deepfake videos are a "big concern," and his government warned social media firms, including Facebook and YouTube, to repeatedly remind users that posting deepfakes and content that propagates misinformation or obscenity is against the law in the country.
According to Sagar Vishnoi, an India-based political campaigner and communication specialist, AI is going to be game-changing for Indian elections, whether in communications, messaging or surveys.
Companies in India already offer sentiment analysis of leaders' speeches, while startups provide AI-based, real-time voice cloning and dubbing of any national leader's speech into multiple regional languages.
"Imagine any national leader speaking in Hindi and his message getting delivered in Bangla in West Bengal, Punjabi in Punjab, and that too in real time with the same tonality, lip sync and texture!" Vishnoi said.
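The sentiment analysis Vishnoi mentions can be illustrated with a deliberately simple sketch. The word lists and scoring rule below are hypothetical, invented for this example; commercial tools rely on trained language models rather than keyword counting:

```python
# Hypothetical sentiment word lists; real vendors use trained models.
POSITIVE = {"growth", "progress", "unity", "prosperity", "hope"}
NEGATIVE = {"crisis", "corruption", "failure", "fear", "decline"}

def speech_sentiment(text):
    """Score a speech excerpt in [-1, 1] by counting sentiment-bearing
    words; a positive score suggests an optimistic tone overall."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

excerpt = "We reject fear and decline; we choose growth, unity and hope."
print(round(speech_sentiment(excerpt), 2))
```

Even this toy version shows why the approach scales: scoring thousands of speeches is cheap once the scoring function exists, which is what makes campaign-wide analysis attractive.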
He noted that deepfakes hold the power to manipulate people's emotions, and since elections run on emotions, they pose a real danger: harming a leader's image, creating riot-like situations through hate speech, and other unforeseen harms.
Hence, the Indian government is proactive about the dangers deepfakes pose and is coming out with an advisory and guidelines in the coming week so that the technology cannot destroy the social fabric of the nation.
"Not only will the advisory be about regulation of content on social media, but defining deepfakes, creating awareness about deepfakes among people and putting disclaimers on synthetically produced media," Vishnoi explained.
It is believed that political campaigning could now be revolutionised by using a personalised language model to generate innumerable customised messages for each voter. Applying reinforcement learning, the model could adapt its messages to become increasingly effective at swaying voter preferences.
Moreover, such a model may also evolve its approach based on user responses and accumulated knowledge throughout a campaign, engaging in dynamic and personalised "conversations" with millions of voters.
Such a technology would represent a potent fusion of automation and behavioural targeting, amplifying the scale and potential impact of political messaging in ways beyond current algorithms.
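As a rough illustration of the adapt-from-responses loop described above, the sketch below uses a simple epsilon-greedy bandit over three hypothetical message framings. The framings, response rates and voter simulation are all invented for the example; this is not any campaign's actual system:

```python
import random

def choose_message(stats, epsilon=0.1):
    """Pick a message variant: explore at random with probability
    epsilon, otherwise exploit the best-performing variant so far."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: highest observed positive-response rate.
    return max(stats, key=lambda m: stats[m]["positive"] / max(stats[m]["shown"], 1))

def update(stats, message, responded_positively):
    """Record one voter interaction for the chosen variant."""
    stats[message]["shown"] += 1
    if responded_positively:
        stats[message]["positive"] += 1

# Three hypothetical message framings with unknown appeal.
stats = {m: {"shown": 0, "positive": 0} for m in ("economy", "security", "healthcare")}

# Simulated campaign: voters secretly respond best to "healthcare".
true_rates = {"economy": 0.2, "security": 0.3, "healthcare": 0.6}
random.seed(42)
for _ in range(2000):
    msg = choose_message(stats)
    update(stats, msg, random.random() < true_rates[msg])

best = max(stats, key=lambda m: stats[m]["shown"])
print(best)
```

The loop discovers and then concentrates on whichever framing draws the most positive responses; swapping the three fixed strings for messages generated per voter by a language model is what turns this toy into the scenario the article describes.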
"The use of approaches like AI-generated ads and AI-assisted cold calls for surveys may become more prevalent in the near future. In essence, we are envisioning a future where the more data one will have, the bigger advantage they will get," said Sakib.
Polling shows the concern about AI doesn't just come from subject-matter experts, politicians or tech giants: the general public also appears increasingly worried about how the tech could confuse or complicate things during the already divisive 2024 cycle.
A recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy revealed that 6 in 10 adults (58%) in the US think AI tools will increase the spread of false and misleading information during the 2024 presidential elections.
Notably, high-tech fakes have already affected elections around the globe. For example, just days before Slovakia's recent elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were shared as real across social media regardless.
A December report in the London-based Financial Times documented deepfake usage in Indonesia, Pakistan, India and Bangladesh. Several deepfake videos had been promoted on social media through paid advertising in the lead-up to Bangladesh's national elections on 7 January.
Two deepfake videos circulating on 7 January in Bangladesh were confirmed as fake by two fact-checking organisations, said Minhaj Aman, research lead at Dismislab, an online platform specialising in media research and verification.
"Most notably, both victims were female candidates running in the national election," Aman said.
He noted, however, that the propagation of deepfake deception in countries like Bangladesh is still at an experimental phase.
"It means that if you have some media literacy and notice some anomalies like lip syncing, blinking, moving fingers of hands in the video, you can identify it as a fake," he said.
Diya also made similar remarks. According to her, while AI reduces the cost of producing misinformation, it does not always perform well in all Global South languages, particularly concerning AI-generated video and audio.
Nevertheless, "it poses growing risks to the information ecosystem and how national elections intersect with geopolitics," said Diya.
Meanwhile, Aman added that as the technology advances rapidly, such videos, phone calls and images could be deployed in more sensitive ways in the future, as has already happened during elections in other nations.
Diya concluded by saying that the cost of digital forensics is prohibitively expensive, while most off-the-shelf AI detection models perform poorly and produce inaccurate results in non-native English and non-English content.
"The burden of debunking is being disproportionately shifted to fact-checkers and news organisations, who currently do not have the resources," she added.