Weekly Business Insights from Top Ten Business Magazines | Week 312 | Shaping Section

Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 312 | September 1-7, 2023

How worried should you be about AI disrupting elections?

The Economist | August 31, 2023

Extractive Summary of the Article

In the past, disinformation has always been created by humans. Advances in generative artificial intelligence (AI)—with models that can spit out sophisticated essays and create realistic images from text prompts—make synthetic propaganda possible. The fear is that disinformation campaigns may be supercharged in 2024, just as countries with a collective population of some 4bn—including America, Britain, India, Indonesia, Mexico and Taiwan—prepare to vote. How worried should their citizens be?

What could large language models change in 2024? One thing is the quantity of disinformation: if the volume of nonsense were multiplied by 1,000 or 100,000, it might persuade people to vote differently. A second concerns quality. Hyper-realistic deepfakes could sway voters before false audio, photos and videos could be debunked. A third is microtargeting. With AI, voters may be inundated with highly personalised propaganda at scale. Networks of propaganda bots could be made harder to detect than existing disinformation efforts are.

This is worrying, but there are reasons to believe AI is not about to wreck humanity’s 2,500-year-old experiment with democracy. Many people think that others are more gullible than they themselves are. In fact, voters are hard to persuade, especially on salient political issues.  

Tools to produce believable fake images and text have existed for decades. Although generative AI might be a labour-saving technology for internet troll farms, it is not clear that effort was the binding constraint in the production of disinformation. Big-tech platforms, criticised both for propagating disinformation in the 2016 election and taking down too much in 2020, have become better at identifying suspicious accounts (though they have become loth to arbitrate the truthfulness of content generated by real people). The agency regulating elections in America is considering a disclosure requirement for campaigns using synthetically generated images. Some in America are calling for a Chinese-style system of extreme regulation.

Although it is important to be mindful of the potential of generative AI to disrupt democracies, panic is unwarranted. Before the technological advances of the past two years, people were quite capable of transmitting all manner of destructive and terrible ideas to one another.

3 key takeaways from the article

  1. In the past, disinformation has always been created by humans. Advances in generative artificial intelligence (AI) make synthetic propaganda possible. The fear is that disinformation campaigns may be supercharged in 2024, just as countries with a collective population of some 4bn prepare to vote.
  2. What could large language models change in 2024? One thing is the quantity of disinformation: if the volume of nonsense were multiplied by 1,000 or 100,000, it might persuade people to vote differently. A second concerns quality. Hyper-realistic deepfakes could sway voters before false audio, photos and videos could be debunked. A third is microtargeting. With AI, voters may be inundated with highly personalised propaganda at scale.
  3. Although it is important to be mindful of the potential of generative AI to disrupt democracies, panic is unwarranted. Before the technological advances of the past two years, people were quite capable of transmitting all manner of destructive and terrible ideas to one another. 

Full Article

(Copyright lies with the publisher)

Topics: Artificial Intelligence, Democracies, Propaganda
