As elections near, Meta will require political advertisers to disclose AI use

Posted on : 2023-11-10 17:13 KST Modified on : 2023-11-10 17:13 KST
With the US presidential election only a year away and Korea’s general election set for April, efforts are intensifying to regulate the use of AI in political advertising on platforms
Meta CEO Mark Zuckerberg leaves the US Senate building after partaking in an AI forum there on Sept. 13. (Yonhap)

How are we to control artificial intelligence technology that can manipulate data to trick the public into believing that an event that never occurred did indeed happen, and make it seem as if people said things that they didn’t during an election year?

A US Big Tech company has set out to establish new rules on its own platforms to address this complicated issue, which is vexing countries with major elections on the horizon.

Starting in 2024, Meta, the operator of Facebook and Instagram, will reject political ads that use AI technology without disclosing that fact. From the new year, advertisers who want to run ads related to politics, elections, or social issues on Facebook and Instagram will have to disclose whether videos and images have been manipulated using AI.

The decision by the American tech leader, which comes a year ahead of the US presidential election, is expected to have an impact on South Korea, which is grappling with its own AI problems ahead of its general election in April 2024.

On Wednesday (local time), Meta said on its blog that it will require advertisers to disclose the use of AI technology in political and social ads from next year.

“We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI,” the company wrote in the blog article, explaining that the policy will “go into effect in the new year and will be required globally.”

Political ads subject to this self-reporting requirement are those in which sound, images, or video have been manipulated by AI for a specific purpose.

Meta stated that advertisers will have to disclose whether AI was used if advertisements “depict a real person as saying or doing something they did not say or do; or depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Ironically, Meta offers AI ad creation tools for its advertisers. Because of this, Meta says that changes such as “image size adjusting, cropping an image, color correction, or image sharpening” need not be disclosed, unless “such changes are consequential or material to the claim, assertion, or issue raised in the ad.”

The company plans to add disclaimers to ads that disclose the use of AI. Advertisers who repeatedly fail to comply with the requirements could face penalties in addition to having their ads rejected, the company said.

South Korea’s National Election Commission holds a seminar on the effects of generative AI on elections and legal and institutional response measures at the FKI Hall in Yeouido, Seoul, on Aug. 30. (courtesy of the National Election Commission)

The announcement of Meta’s ad policies comes a year ahead of the US presidential election, which is scheduled for November 2024. Meta, which has 2 billion users on Facebook and Instagram and derives more than 90 percent of its revenue from advertising, went without political ads for four months leading up to the 2020 US presidential election. This was after Facebook was accused of turning a blind eye to Russian advertisements that interfered in the 2016 US presidential election.

South Korea is also deeply concerned about how to deal with AI manipulation ahead of the general election in April 2024. In July, the National Election Commission of the Republic of Korea established the “Operational Standards for Election Campaigns Using Generative AI,” which allows the use of AI but prohibits manipulation of information, and in August, it organized a 500-member AI task force to respond to falsehoods and slander.

However, even the National Election Commission acknowledges that “if sophisticated false information created by AI technology is spread during a short election period, it is expected to be difficult to determine the illegality of the information, such as verifying its authenticity, and to take measures to quickly delete the false information.”

In May, lawmaker Lee Sang-heon of the Democratic Party introduced a bill to partially amend the Contents Industry Promotion Act to require content created using AI to be labeled as such.

By Lim Ji-sun, staff reporter

Please direct questions or comments to [english@hani.co.kr]

