As election season nears, threat of AI-generated fake news looms larger than ever

Posted on : 2023-06-14 17:35 KST Modified on : 2023-06-14 17:35 KST
The debate on AI regulation is in full swing as the negative effects of chatbots are being recognized as much as the positive changes they can bring
Eliot Higgins, a British journalist, shared on March 20 this image generated by Midjourney, an AI-powered program, depicting former US President Donald Trump being arrested. (from @EliotHiggins on Twitter)

Reporter: Where is Admiral Yi Sun-sin’s Green Dragon Crescent Blade stored?

ChatGPT: Admiral Yi Sun-sin’s Green Dragon Crescent Blade is currently located at the Yi Sun-sin Square within Changwon Marine Park in Changwon, South Gyeongsang, South Korea.

The response from the chatbot was confident — and wrong. The Green Dragon Crescent Blade is the weapon used by the Chinese general Guan Yu in the novel “Romance of the Three Kingdoms.” And there is no “Yi Sun-sin Square” in Changwon Marine Park.

When the errors in the response were pointed out by the reporter, the chatbot gave another incorrect answer: “The Green Dragon Crescent Blade of Admiral Yi Sun-sin is in the Uljin marine theme park in Uljin County, North Gyeongsang Province.”

These sorts of confidently wrong answers are a common occurrence with ChatGPT.

ChatGPT hallucinations have potentially disastrous consequences

Hallucinations happen when we perceive things that don’t actually exist. In the case of chatbots, the term refers to instances when they present faulty information as if it were true.

Shortly after ChatGPT became available to the public, South Korean users began amusing themselves by using these hallucinations to “tease” the chatbot.

For example, a query like “Tell me about the incident in the ‘Annals of the Joseon Dynasty’ where King Sejong the Great throws his MacBook” brought up the response, “This incident occurred when King Sejong the Great was drafting the ‘Hunminjeongeum’ [the document outlining the Hangul writing system]. Angry with an official over the interruption of his writing process, he threw him out of the room together with a MacBook Pro.”

Another query read, “Explain about the turtle ships’ lightning bolt firing mechanism.” The response gave a superficially plausible seven-step explanation of the kind of lightning bolt magic used by a Buddhist monk on a turtle ship in a fantasy novel.

Some of these baffling responses have turned into online memes, with various YouTubers producing videos that show off how earnestly ChatGPT delivers its wrong answers.

If AI hallucinations produced nothing more than a few giggles, we could count ourselves lucky. But when people rely on AI for accuracy, those hallucinations can be disastrous.

Ahead of the launch of its AI chatbot Bard, Google made an exchange with the chatbot public on its official blog last February.

A user asked, “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” Bard gave three replies.

The problem lay with the third of these responses, which said the telescope “took the very first pictures of a planet outside our solar system.”

Astrophysicists posted on social media about the third response being incorrect. After checking with NASA, Reuters published a fact check confirming that the first picture of an exoplanet was taken in 2004 by the European Southern Observatory’s Very Large Telescope.

As reports of the flub circulated, shares in Google’s parent company Alphabet sank by 7.8% in a single day.

In the US, a veteran attorney with 30 years of experience submitted a brief to a court that quoted a false precedent invented by ChatGPT. The New York Times reported that at least six of the precedents cited in attorney Steven Schwartz’s 10-page brief were fabricated.

The situation came to light when the other side said it could not locate the precedents that Schwartz named and suggested they were not actual rulings. Schwartz finally admitted that he had relied on ChatGPT to help him — explaining that he had trusted the chatbot when it insisted they were actual cases that could be found in other legal databases.

In a hearing to determine whether he should be punished, he said, “I heard about this new site [ChatGPT], which I falsely assumed was, like, a super search engine.”

“I continued to be duped by ChatGPT. It’s embarrassing,” he also said.

The hallucinations occur because these AI systems are designed to compose answers from the statistically most likely combinations of words, regardless of whether the resulting information is true.
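
In miniature, that mechanism looks something like the sketch below. The word table and its probabilities are invented purely for illustration (real chatbots use vastly larger neural networks trained on enormous text corpora), but the principle is the same: the program picks whichever word is statistically most likely to come next, and no step anywhere checks the claim against reality.

    # Toy "language model": probabilities of the next word given the
    # previous one. All numbers are invented for illustration; nothing
    # in this pipeline checks whether the output is true.
    bigram_probs = {
        "blade": {"is": 0.8, "was": 0.2},
        "is": {"stored": 0.6, "displayed": 0.4},
        "stored": {"at": 0.9, "in": 0.1},
        "at": {"Changwon": 0.5, "Uljin": 0.3, "Seoul": 0.2},
    }

    def generate(word, length=4):
        """Greedily append the statistically most likely next word."""
        out = [word]
        for _ in range(length):
            candidates = bigram_probs.get(out[-1])
            if not candidates:
                break
            out.append(max(candidates, key=candidates.get))
        return " ".join(out)

    print(generate("blade"))  # -> "blade is stored at Changwon"

A fluent, confident sentence comes out either way: fluency, not truth, is what the statistics reward.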

In the case of ChatGPT, these errors are corrected as users provide feedback. If a user today asks it about the “MacBook throwing incident” in the “Annals of the Joseon Dynasty,” the response is that this incident is “not historically factual.”

OpenAI, the developer of ChatGPT, introduced an approach for mitigating hallucinations on May 31. The method involves providing feedback on each individual step in a chain of reasoning, rather than only on the final answer, and training the model based on that step-by-step feedback.
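
The difference between grading only a final answer and grading every reasoning step can be sketched roughly as follows. The example problem, the per-step labels, and the reward values are all invented for illustration; this is not OpenAI's actual data or training code.

    # A model's chain of reasoning, with each step labeled by a human
    # rater. The problem and the labels are hypothetical.
    chain_of_thought = [
        ("Step 1: 48 = 16 * 3", True),
        ("Step 2: sqrt(16) = 4", True),
        ("Step 3: so sqrt(48) = 4 * 3 = 12", False),  # the flawed step is caught here
    ]

    def process_feedback(steps):
        """Return one reward per step: +1.0 if sound, -1.0 if flawed.

        Answer-level feedback would instead assign a single reward to the
        whole response, letting a lucky final result conceal a bad
        intermediate step.
        """
        return [1.0 if ok else -1.0 for _, ok in steps]

    print(process_feedback(chain_of_thought))  # -> [1.0, 1.0, -1.0]

Training on step-level signals of this kind is meant to steer the model away from confident-sounding chains of reasoning that contain a broken link.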

Since ChatGPT does not provide sources for its information, it is difficult to recognize if the answer it provides is false.

Microsoft’s search engine Bing, which is equipped with ChatGPT, attaches a source and a link to each sentence of the chatbot’s answer to compensate for that limitation.

Misuse of AI for “fake news”

Even more dangerous than the misinformation that generative AI produces with its hallucinations is the misinformation that humans will intentionally create via AI.

“Deepfake fake news that uses AI is going to be a serious social issue in next year’s US presidential election and South Korea’s general election,” says Kang Jeong-soo, the director of MediaSphere and former head of the Blue House digital communication center.

Fake images, videos, and voice recordings created by AI have gone viral online and stirred chaos and confusion. Examples include depictions of former US President Donald Trump being led away by police in handcuffs, Ukrainian President Volodymyr Zelenskyy telling troops to lay down their arms and surrender, and US President Joe Biden seemingly making transphobic remarks.

In May, an AI-generated image of an explosion at a building next to the Pentagon sent US stock prices tumbling.

On May 22, a fake image in which a building near the Pentagon appeared to have exploded made the rounds on Twitter. CNN reported that this was an AI-generated fake image. (courtesy of CNN)

The US misinformation tracking organization NewsGuard Technologies said it had found 166 “unreliable AI-generated news and information sites” with little to no human oversight as of June 12. That number has tripled in just over a month.

These sites look no different from regular news organizations’ feeds, and they don’t make it clear that the stories are written by AI, so readers are left to assume they are getting news from human reporters.

Some of these sites are even in Korean. NewsGuard co-CEO Steven Brill warned that anyone could use AI to intentionally mass-produce fake news. “The danger is someone using it deliberately to pump out these false narratives,” he said.

Hwang Kuhn, a professor of media and communication at Sun Moon University, said, “AI with big data has the ability to customize fake news that will attract people’s attention.”

“This will cloud society’s judgment and set back democracy,” Hwang added.

EU’s new AI categorization system: “unacceptable” and “high” risk

The debate on AI regulation is in full swing as the negative effects of chatbots are being recognized as much as the positive changes they can bring.

The European Union (EU) has been the most active in regulating AI, passing an AI regulation bill through a standing committee of the European Parliament in May.

The EU has proposed to categorize the risks of AI into four levels — unacceptable risk, high risk, limited risk, minimal or no risk — and to manage them accordingly.

AI uses categorized as “unacceptable risk” would be banned outright. Examples include systems that categorize individuals based on “social scores” or that scrape facial images from the internet or closed-circuit television (CCTV) footage to build databases.

The use of AI systems in “high-risk” areas, which are likely to pose a high risk to human health, safety and fundamental rights, is generally prohibited, but may be permitted subject to certain requirements, including a conformity assessment.

Education is a prime example of a high-risk area. The EU legislation states, “AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood.”

Recruitment and labor management are also high-risk areas. They can have a significant impact on people’s professional achievements, livelihoods, and labor rights, and risk repeating past discrimination against women, people with disabilities, and certain ethnic groups.

The EU also provides for fines for “prohibited uses of AI”. They can be up to 40 million euros or 7% of a company’s annual worldwide turnover.

The new agreement adds regulations for the “foundation models” behind generative AI such as ChatGPT. It requires them to disclose that their output was generated by AI and to publish summaries of the copyrighted data used to train them.

These “transparency provisions” should go some way toward addressing the problem of AI-generated fake news. However, it will take at least a few years for the legislation to be implemented, as it must first pass “trilogue” negotiations among the European Parliament, the European Commission, and the Council of the EU.

Sam Altman, the CEO of ChatGPT developer OpenAI, reacted to the EU’s proposed AI regulation by saying, “If we can comply, we will, and if we can’t, we’ll cease operating.”

“It seems that Europe, which is behind in AI technology, is trying to assert itself by spearheading AI regulation,” said Han Sang-ki, the CEO of TechFrontier. “But it is correct to prepare AI regulations ahead of time, because the cascade of problems caused by AI can be very large.”

By Jeong Hye-min, staff reporter

Please direct questions or comments to [english@hani.co.kr]
