Social media contributes to polarization in several ways. Some people end up in echo chambers where they hear only opinions they already agree with, while others cannot tell when content on a site is fake and AI-generated. Here is a closer look at the ways social media can fuel polarization.
1. Echo Chambers
People like to believe their news outlets are unbiased and accurate, but research shows that may be untrue. The popularly shared Media Bias Chart ranks outlets based on their political leanings and accuracy, revealing there are very few neutral outlets. When people only turn to media outlets that reflect their political leanings, they risk entering an echo chamber.
An echo chamber occurs when you surround yourself with people who share your point of view. You start to believe everyone thinks the way you do, which seems to confirm that your beliefs are correct. Echo chambers fuel polarization because people are exposed to only one idea while opposing views are immediately shot down. They create a stark “us versus them” mentality that leaves no room for nuance or compromise.
2. Easier Access to Extreme Groups
Research shows that more extreme opinions get more likes and comments, which then increases their visibility. This makes it easier for people to find extreme opinions and the groups of people who believe them.
This creates a snowball effect. When extreme ideas gain traction, accounts and channels are more likely to share similar content to keep engagement rates high, which gradually normalizes extreme beliefs.
Before social media, people had to seek out niche forums or attend in-person meetings to find groups that shared their beliefs. Now, users can engage with people who hold extreme views in just a few clicks.
3. Online Anonymity
There may be a direct correlation between online anonymity and the rise of hate speech on the web. People are more likely to feel safe saying cruel and harmful things when they can hide behind a mask. For example, a man might feel comfortable posting misogynistic ideas online that he would never share with his mother or wife. Another person could share racist comments that they would never make in front of their boss or coworker.
These comments can normalize hateful ideas and prompt others to act on what they read in anonymous posts.
Online anonymity can also affect America on a national scale. Troll farming, in which governments deploy hundreds of fake accounts to spread misinformation, influenced previous presidential elections and worsened the COVID-19 pandemic by pushing vaccine misinformation. Many web users don’t realize that these profiles are actually run by foreign entities or bots.
4. Lack of Conversational Context
A significant share of communication on the web happens via text, in the form of posts and comments. This can lead to miscommunication because text struggles to carry the nuance of nonverbal messaging.
Researcher Albert Mehrabian found that word choice accounts for only 7% of a message, while vocal cues carry 38% and nonverbal cues 55%. Your facial expressions and tone of voice have a much bigger impact than your actual words. This becomes a problem on the internet, where nonverbal signals cannot be shared. The missing context can make someone seem more argumentative than intended or prevent users from picking up on sarcasm and jokes.
Emojis can help make up for this missing context, but they are not always appropriate to use. As a result, people are more likely to argue online or take a fake story seriously.
5. AI-Generated Content
Artificial intelligence is increasingly used to generate and disseminate news stories quickly. However, it can also deepen polarization when AI tools create fake news and target it at specific audiences. People in echo chambers may see false stories that reflect their existing beliefs, making them less likely to question whether those stories are true.
One example is Grok, the AI chatbot built into X. The tool picked up on tweets criticizing NBA player Klay Thompson for “throwing bricks,” slang for missing a lot of shots. Because it could not interpret the slang, the bot generated a news story claiming Thompson had thrown bricks through windows across Sacramento.
6. Faster Spread of Information
News stories also travel faster than ever because of social media. A fact-checker can work to disprove a story, but by the time they have the facts, the news could have thousands of shares and millions of comments.
One study showed that fake news travels faster than real information, often because fake content is more sensational and attracts more viewers, or because it is amplified by bots and troll farms.
7. Poor Media Literacy
Media literacy is the ability to analyze news for misinformation and bias. It helps people filter out incorrect information and recognize articles designed to push them toward extremist views. However, not everyone has the media literacy skills needed to spot fake news.
Many educators are working to improve media literacy in students. They teach critical thinking skills so students question any information they encounter. They also emphasize the importance of news literacy and assessing the credibility of sources in media.
These skills can help students throughout their lives no matter how the media landscape changes. Adults can also benefit from brushing up on their critical thinking abilities.
8. Strategic Interference With Information Flow
Fake news is hard to stop because it is often strategic. Groups push their own agendas by spreading misinformation and attacking factual reporting, and individuals, organizations, and governments can all act as disseminators.
During the COVID-19 pandemic, trolls and foreign governments spread a significant amount of misinformation about vaccines and masks. These false beliefs were eventually adopted by mainstream groups and further polarized the nation.