Half the world will vote in elections next year – will AI influence their choices?

Posted on : 2023-12-21 17:20 KST Modified on : 2023-12-21 17:40 KST
Some politicians have already adopted AI as a campaign tool, but voters and experts are voicing fears that the technology could be disastrous
During a press conference in January 2017, president-elect Donald Trump points at a CNN reporter with whom he had an intense back-and-forth and says, “You are fake news.” (Reuters/Yonhap)

At the end of a four-hour call-in press conference and town hall on Dec. 14, Russian President Vladimir Putin fielded a rather bold question. The questioner, who identified himself as a university student from St. Petersburg, had white hair, wore a suit with no tie, and looked exactly like Putin, down to his habits of chopping his hand against the desk and raising both eyebrows while speaking. He first asked Putin whether, as is widely claimed, he uses body doubles, then went on to ask his opinion about the dangers of artificial intelligence.

Putin, who has been dogged by rumors that he uses three or more body doubles because of his fear of being assassinated, denied the allegation. “I see that you look like me and talk like me,” he said. “You’re my first double, by the way.”

Putin then said that it is “impossible” to prevent the development of AI, and that Russia should do what it can to position itself at the forefront of the field.

As suggested by this surreal appearance of Putin’s AI doppelganger, which captivated people around the world, it’s gradually becoming more common for AI-enabled information, images and voices to be used for political ends.

And since 2024 is a “super election year,” with national elections scheduled in 76 countries around the world, it’s also expected to be the first election cycle in which AI plays a significant role.

Last month, the British weekly magazine the Economist noted that half of the world’s population will be taking part in elections in such countries as India, the world’s most populous country with 1.44 billion people, the US (342 million), Brazil (218 million), Indonesia (280 million), Pakistan (245 million) and Russia (144 million). And all of those voters will be experiencing their first “AI election.”

In fact, AI-powered elections have already become a reality.

Democrat Shamaine Daniels, who is running for a seat in the US House of Representatives, is promoting herself in the party primary with a generative AI chatbot called Ashley that, like ChatGPT, is capable of having voice conversations with users.

“Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’ run for Congress,” the chatbot tells voters at the beginning of phone calls it initiates.

If voters stay on the line, Ashley engages in conversation with them as a regular person might, telling them about the candidate’s background and campaign pledges and putting a human staffer on the line if it runs into questions too hard to answer.

“We intend to be making tens of thousands of calls a day by the end of the year and into the six digits pretty soon. This is coming for the 2024 election, and it’s coming in a very big way,” said Ilya Mouzykantskii, the man behind the chatbot, speaking to Reuters. “The future is now.”

As this suggests, AI could cut the huge cost of campaigning while serving as a useful option for new politicians with little name recognition and candidates with good policy ideas who have struggled to meet voters because of a lack of funds.

Daniels said that by having Ashley talk to a wider range of voters, she can develop policies more quickly and determine her policy priorities more accurately.

“This technology is going to change the character of what campaigning looks like,” she added.

(Getty Images Bank)

But this technology has some fatal flaws. Most troubling of all is the false information that could be conveyed by the “deepfakes” of famous politicians. (A deepfake is a fake video that mimics the face and voice of an actual person.)

Since the false information is conveyed by a “person” in the video who looks like an actual politician, it’s certain to have a big impact on voters.

Indeed, a major controversy erupted in May during the presidential election race in Türkiye with the release of a video apparently showing members of the separatist Kurdistan Workers’ Party singing a song of support for opposition candidate Kemal Kılıçdaroğlu, who was then polling within 5 percentage points of President Recep Tayyip Erdoğan.

Erdoğan used the video as a basis for launching fierce attacks against Kılıçdaroğlu, which helped him cement his lead in the final stretch. In reality, the video was false information manufactured by AI.

Voters are well aware of the dangers. Survey findings published on Nov. 3 by the Associated Press and the University of Chicago’s Harris School of Public Policy showed 58% of respondents agreeing that false information during election campaigns was likely to proliferate due to the use of AI tools. In response to questions on the use of AI in political advertising, 83% of respondents predicted that it could be used to produce false or misleading media, while 66% said it could be used to edit or modify existing photographs and videos.

In a piece published on the US site Axios, Tom Newhouse, the vice president of the AI business Convergence Media, was quoted as saying that next year’s US presidential election “will be the AI election,” far more disruptive than the so-called “Facebook elections” of 2008 and 2012.

“I would be willing to bet that the 'October surprise' next year is from AI,” he predicted, referring to events or revelations that tend to emerge just before the election takes place in early November.

The AP similarly quoted poll participants as saying that “AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.”

In the past, numerous issues have arisen with election regulations failing to keep up with the pace of new information technology developments. A representative example is the US presidential election of November 2016, when the situation was influenced by fake news circulating on social media.

According to BuzzFeed News, the 20 most popular pieces of fake news during the months preceding the election earned a combined total of around 8.71 million shares, reactions, and replies on Facebook — a bigger number than the 7.37 million combined responses for the 20 most popular articles published by the New York Times and other major news outlets.

Seventeen of those fake news stories contained false information favoring then-candidate Donald Trump or disparaging his rival Hillary Clinton. Examples included a story claiming that Pope Francis had endorsed Trump, which drew 960,000 reactions, and another claiming WikiLeaks had confirmed Clinton’s sale of weapons to the Islamic State (IS), which drew 790,000.

BuzzFeed argued that even if fake news did not influence the election outcome directly, there was clear evidence of its powerful ripple effects. During the November 2020 presidential election, fake news stories about then-candidate Joe Biden being a “socialist” circulated in Florida and other leading swing states.

Observers have stressed the need for suitable regulatory frameworks to minimize the potential for AI to directly influence elections. For now, major IT companies have begun attempting their own responses.

Since last November, a year ahead of the US election, Google has required political advertisements run on YouTube and its other services to clearly indicate, in a location visible to users, when AI was used to produce or synthesize images or voices. Meta, the company that runs Facebook, is similarly implementing mandatory labeling for political advertisements that use AI.

Countries in different regions have responded differently.

In the EU, a draft “AI Act” agreed upon by the European Parliament and member countries on Dec. 9 included clear obligations to be imposed in cases of AI systems “classified as high-risk, considering their potentially significant impact on democracy [and the] rule of law.” As major examples, it mentioned systems that influenced election outcomes and voter activities.

But the conclusion of legislative procedures and actual implementation is expected to take until 2025 at the earliest.

For the US, the matter has assumed some urgency, with the presidential election taking place next November.

A Senate Judiciary Committee hearing last May with OpenAI CEO Sam Altman testifying included dire warnings that the survival of democracy in the US could be imperiled if the potential threats of AI are left unaddressed.

In May, the US progressive consumer rights group Public Citizen called on the Federal Election Commission to crack down on the abuse of AI in political advertising — but the response has been slow-footed.

On Dec. 16, the US network NBC reported that little progress had been made since the commission’s announcement in August that it would devise measures to regulate the use of deepfakes in election advertising.

Legislation put before the US Senate last September would prohibit the improper use of AI in political advertising. Amy Klobuchar, the Democratic senator who sponsored the bill, stressed that “we can’t solely rely on voluntary commitments” by companies and that “more must be done to ensure our laws can keep up with this changing technology.”

The US states of Texas and California have also moved to develop legislation to ban the circulation of election-related deepfake videos.

In South Korea, an amendment to the Public Official Election Act that would ban all use of deepfakes in election campaigning for a period starting 90 days before election day was passed by the legislation subcommittee of the National Assembly’s Special Committee on Political Reform on Dec. 4. If the amendment clears the regular session hurdle, the use of AI in election campaigning could face legal regulations as early as next month.

Experts are also stressing the urgent need for measures.

Geoffrey Hinton, a University of Toronto professor who is considered one of the leading scholars in the AI field, was quoted in a Dec. 4 interview with Japan’s Yomiuri Shimbun newspaper as predicting that generative AI would “make it much easier for authoritarian governments to manipulate the electorate with fake news that is targeted to each individual.”

He also said that “one of the major political parties [in the US] has tied its fate to the successful propagation of fake news.”

“It would be very good to have legislation making it illegal to produce or share [AI-based] fake images or videos unless they are marked as fake. We already do this with currency,” he said.

By Hong Seock-jae, staff reporter

Please direct questions or comments to [english@hani.co.kr]
