The Rising Role of AI in Spreading Fake News and Misinformation

Written and Fact-Checked by 1440 Editorial Staff

According to Professor Rayid Ghani at Carnegie Mellon University, artificial intelligence (AI) is “the ability of machines and computers to perform tasks that would normally require human intelligence.” Rather than being explicitly programmed for each task, the system learns to identify information and make decisions for itself.

While AI is changing countless industries, it’s also being used for nefarious purposes. Learn more about the role AI plays in the dissemination of fake news and how to identify it.

How AI Can Be Used to Create Misinformation

AI tools can generate news articles and spread them within minutes, which is why cybercriminals are so eager to use them. They can create fake news and share it instantly, potentially drowning out accurate stories or causing confusion. AI tools don’t just generate written copy; they can also create fake images and videos.

AI tools have already been used to create misinformation in a range of real-world incidents.

No business, school, or government office is impervious to AI disinformation campaigns and scams.

AI Tools Used for Creating Misinformation

One of the main reasons AI-generated misinformation is increasingly common is that machine learning tools are free or easily accessible. Text generators, image and video generators, and voice-cloning software are all designed to be helpful, but any of them can be misused.

Most people use these tools to create informational content. Others, however, use the same technology to manipulate images and video, or to generate text meant to mislead.

How AI Can Be Used to Disseminate Misinformation

Trolls use AI to spread misinformation in a few common ways.

Troll farms and cybercriminals can pass themselves off as legitimate and build large followings to spread misinformation.

AI Tools Used for Disseminating Misinformation

Whether trolls are spreading misinformation or disinformation, they have a variety of tools at their fingertips. Here are a few ways cybercriminals can spread fake news:

  • Advertising platforms: Trolls can boost fake news through promoted posts on platforms like X, Facebook, and Instagram.
  • Sponsored content: Many publications get their revenue from sponsored content. Trolls can share AI-generated content through these paid posts.
  • Social media scheduling tools: Some trolls even take advantage of tools like Buffer or Hootsuite to schedule posts throughout the day.

Similar to AI content creation, seemingly innocuous tools can become dangerous in the wrong hands.

Examples of Propaganda, Disinformation, and Misinformation Using AI

As AI-generated fake news becomes more common, internet users are more likely to encounter propaganda, disinformation, and other false information created with these tools.

AI-generated propaganda doesn’t have to be complex. A few basic images can spread across the web and confuse the average internet user.

Goals of AI Misinformation

There are several reasons why bad actors use AI to spread misinformation. The goals depend on who creates the content and the victims they plan to target.

  • Political destabilization: Fake news erodes public trust in the media and the government. 
  • Foreign influence: Some governments use misinformation to sway elections, like Russian bots that supported Donald Trump in the 2020 election. OpenAI technology has also been linked to firms such as Stoic, a group that uses the technology to promote pro-Israel viewpoints abroad in hopes of shaping opinion on the Israel-Hamas war. Chinese disinformation networks have likewise used the technology to spread false claims and disrupt elections in Taiwan.
  • Profit: Many AI scams target companies or individuals to steal their money. One common method is for scammers to create fake audio recordings to deceive banks and businesses. AI has also been used to create fake celebrity videos, where a famous person’s likeness is used to sell a product without their consent.
  • Defamation: Deepfakes of celebrities or public figures are meant to attack their reputations.
  • Sexualization: AI-generated explicit images of influencers, celebrities, and even peers are a form of sexual abuse and can be used to blackmail victims.

One person may use AI misinformation to target another, or an organization may deploy it on an extreme scale in an attempt to destabilize the largest countries in the world.

How AI Has Developed

Two key factors have contributed to the rise of AI in misinformation campaigns. The first is the advancement of AI tools themselves. The early days of AI featured robotic voices and systems that couldn’t handle complex ideas. In the modern era, machine learning allows AI to learn faster and create more realistic content, including voices that sound human and mimic the speech patterns of celebrities and politicians.

The second factor is ease of access. Nearly 60% of Americans are familiar with ChatGPT, which had an estimated 25 million daily users from around the world as of November 2023. Someone who wants to use AI to spread misinformation for the first time could run a quick search and easily find the tools to do it.

How To Identify AI Misinformation

Average web users need to get better at spotting AI-generated content and misinformation. Here are a few things to look out for:

  • Check for distorted features. AI tools often have a difficult time replicating fine details such as hands. Also look for blurry backgrounds and garbled text.
  • Avoid sensationalism. Overdramatic headlines and unbelievable news stories are often exaggerated, misleading, or incorrect. 
  • Run a reverse image search. Check whether the image has been shared on other websites or has already been discredited as fake news; a quick way to do this is shown in the example below.
  • See if reputable outlets are running the story. News is often shared on multiple pages. Fact-check one source by seeing if it’s on another. 
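
For the reverse image search step, the short script below is a minimal sketch of how a reader could automate the lookup for a publicly hosted image. The Google and TinEye query URLs it builds are assumptions based on those services’ public search pages (Google may redirect the request to Google Lens), so treat it as a starting point rather than an official interface.

    # Minimal sketch: open reverse image search results for an image URL.
    # The query URL formats below are assumptions and may change or redirect.
    import sys
    import webbrowser
    from urllib.parse import quote

    def reverse_image_search(image_url: str) -> None:
        """Open Google and TinEye reverse image search results in the default browser."""
        encoded = quote(image_url, safe="")
        # Google's classic search-by-image endpoint (may redirect to Google Lens).
        webbrowser.open(f"https://www.google.com/searchbyimage?image_url={encoded}")
        # TinEye accepts the image URL as a query parameter.
        webbrowser.open(f"https://tineye.com/search?url={encoded}")

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            print("Usage: python reverse_search.py <image-url>")
            sys.exit(1)
        reverse_image_search(sys.argv[1])

Checking more than one service side by side makes it easier to see whether an image predates the event it supposedly depicts.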

You can also decrease your chances of encountering misinformation and fake news by choosing reliable news sources. This will allow you to fact-check content if you’re ever unsure about what you’re reading.

Future Outlook for AI Capabilities

AI tools will only improve over time, which is why people need to learn to identify misinformation and fake news now. Generators are already getting better at rendering hands and correcting their other telltale mistakes. Fake news doesn’t just cause confusion for individuals; it can affect elections and the actions of governments.

Either governing bodies will need to regulate the use of AI, or social media sites will need to limit the spread of misinformation. It cannot entirely be left up to individual users to do so.
