Google announced on Wednesday that it will require political advertisements on its platforms to disclose when images and audio have been altered or created using tools such as artificial intelligence (AI). The policy change, set to take effect in November, aims to address growing concerns that generative AI could be used to deceive voters. With the 2024 US presidential election approaching, Google’s decision comes at a crucial moment in the fight against misinformation and digital manipulation.

The move by Google represents an expansion of its existing transparency measures for election ads. The internet giant’s spokesperson explained, “Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated.” By implementing these additional requirements, Google aims to ensure that voters are aware of any manipulated media in political ads.

In June, a campaign video from Ron DeSantis attacking former President Donald Trump illustrated the need for such vigilance. Images in the video appeared to show Trump embracing Anthony Fauci, a prominent member of the US coronavirus task force, and kissing him on the cheek. An AFP Fact Check team, however, determined that the images had been altered using AI. The incident is a stark reminder of how digitally manipulated content can shape the public’s perception of political figures.

Google’s ad policies already prohibit manipulating digital media to deceive or mislead people about political matters, and the company forbids demonstrably false claims that could undermine participation or trust in the electoral process. Google also requires political ads to disclose who paid for them and maintains an online ads library where users can look up information about those ads.

Under the new policy, election-related ads containing “synthetic content” that depicts real or realistic-looking people or events must prominently disclose this fact. Google specifies that disclosures of digitally altered content should be “clear and conspicuous” and placed in locations where they are likely to be noticed. Examples of content that would warrant a disclosure label include synthetic imagery or audio portraying individuals saying or doing things they did not do, or depicting events that did not occur.

To ensure transparency, Google suggests labels such as “This image does not depict real events” or “This video content was synthetically generated” for election ads containing manipulated content. By clearly labeling such content, Google aims to help users distinguish authentic media from altered media and make more informed decisions.

Google’s decision to mandate disclosure of altered political ads reflects its commitment to transparency and to combating misleading content. With the 2024 US presidential election on the horizon, proactive measures against the potential misuse of generative AI are imperative. By expanding its ad policies, Google aims to protect the integrity of the electoral process and give voters the information they need to make informed decisions.
