How to Spot Misinformation

Much content online is designed for high engagement, not for accuracy.

by Brian Dunning

Filed under Logic & Persuasion

Skeptoid Podcast #910
November 14, 2023


Way back in 2007, I did an episode on How to Spot Pseudoscience, which was mainly a checklist of logical fallacies you could use to get a good sense of whether a particular claim was based on sound science or on nonsense. That was a good list and it still stands, but since 2007, the type of claims we typically want to evaluate has changed. The rise of social media algorithms has increased the number of ways that scientific misinformation can be spun, and has also broadened the types of misinformation you're likely to be exposed to. It's no longer just homeopathy and 9/11 Truth claims; today it's political and social claims intended to outrage you and get you to share the content. So in 2022 I did an episode on How to Spot Fake News, and that was also a good list that holds up, but it focused mostly on stories published on news websites (and quasi-news websites). So today we have something optimized for the current landscape: how to recognize general misinformation, and, just as importantly, how you can help reduce its spread.

One thing that has really emerged and taken a front row seat in the 21st century is something called affective polarization. Affective polarization is the tendency for people with partisan feelings to actively dislike people from the opposing political party. Sociologists are actively studying why it has become such a prominent feature in the world, nowhere better demonstrated than among Democrats and Republicans in the United States.

The cause is likely multifactorial, but one driver has certainly been the rise of social media algorithms, whose growth has closely tracked that of affective polarization. The basic dynamic (and this is not just random conjecture; it has been the subject of much study) is that articles on social media platforms get promoted, meaning shown to more people, when they get high engagement. If I post a picture of a potted plant, nobody reacts to it and the algorithm ignores it; if I post a picture of children being sacrificed by cultists, it triggers tremendous outrage, and everyone clicks a reaction, reposts it, or comments. The algorithm then promotes that post to even more people, triggering even more reactions, and people spend more time on the platform exercising their outrage. More time on the platform means more exposure to advertisements, and thus more revenue.
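To make that mechanism concrete, here is a minimal sketch in Python of the kind of engagement-weighted ranking described above. Every name and weight here is a hypothetical illustration, not any platform's actual code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int  # any reaction click, including "angry"
    comments: int   # even hostile comments count as engagement
    reshares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and reshares keep users on the
    # platform longer than a one-click reaction, so they count for more.
    return post.reactions + 2 * post.comments + 3 * post.reshares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-engagement posts are shown first, and to more people.
    # Note there is no term anywhere for accuracy.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Photo of my potted plant", reactions=2, comments=0, reshares=0),
    Post("Outrage bait about the other party", reactions=900, comments=400,
         reshares=250),
])
print([p.text for p in feed])  # the outrage post ranks first
```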

This system has taken huge advantage of affective polarization. Look at recent Presidential elections, Supreme Court cases, and social and religious divisions: high affective polarization means more outrage in social media posts, and thus far higher engagement. If you've ever clicked the "angry" reaction or reshared an article revealing some horrible new thing the opposing political party is up to, chances are you were shown that post because the algorithm knew, from your past behavior, that your political polarization made you very likely to react to it. And the extra minutes you spent on the site made someone some money.
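The targeting side works the same way: content is ranked not just by raw engagement, but by each individual user's predicted likelihood of reacting, estimated from past behavior. Another hypothetical sketch, continuing the illustration above:

```python
def predicted_reaction_rate(history: dict[str, float], topic: str) -> float:
    # Fraction of past posts on this topic the user reacted to,
    # with a low base rate for topics the user has never seen.
    return history.get(topic, 0.05)

def personalized_score(base_engagement: float,
                       history: dict[str, float], topic: str) -> float:
    # Posts you personally are predicted to react to get boosted
    # in your feed specifically.
    return base_engagement * predicted_reaction_rate(history, topic)

my_history = {"opposing-party-outrage": 0.9, "potted-plants": 0.01}
print(personalized_score(1550, my_history, "opposing-party-outrage"))  # large
print(personalized_score(1550, my_history, "potted-plants"))           # tiny
```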

When so many people around the world are at the mercy of such an effective influence, it's no surprise that world governments have used it to sow division and instability in each other's elections. Perhaps the most famous example is Russia Today, a propaganda news agency founded by the Russian government in 2005 to plant divisive articles as fodder for the social media networks, with millions of bots and fake accounts separately created to amplify that content online. But that's only one example; to some degree, virtually every nation does this to its enemies. Everyone has gotten a lot more sophisticated since the days of Tokyo Rose and Voice of America; even Samuel Adams, during the American Revolution, had five stated objectives for his anti-British propaganda, one of which was to outrage the masses by creating hatred for the enemy.

The net result of all of this is a vast amount of online information shared both to advocate and to oppose just about anything you can imagine, especially anything that shocks and outrages.

And here is a very important point: these online articles and posts seem highly believable to us regardless of their accuracy, thanks to the flip side of the affective polarization coin, which is the tendency to automatically like and trust people of our own political party, the very people who originally posted the articles the algorithm shows us. Because we see all this content coming from trusted sources (the people we follow online), we automatically take it as fact.

Because of this, misinformation is harder to recognize than ever before. But it's not impossible. So without further ado, let's dive right into the checklist.

Is it a divisive issue that casts some group as the villain?

This is perhaps the biggest red flag: the article may well be propaganda, anywhere from exaggerated to spun to outright false. Is it a negative article about some horrible new action by a group, nation, or demographic you already dislike?

Real news articles are not designed to be divisive. They report on important events. Sometimes those events include crimes or international conflicts, but real, unbiased news sites understand that all international conflicts are nuanced and complex, so they generally won't present a one-sided perspective that casts one combatant as the bad guy.

If the article seems to fit your preconceptions a little too perfectly, take it as a warning that an algorithm showed you something it knew you'd react to.

Does the headline blame a divisive political figure?

The classic divisive misinformation article calls out some politician, whether it's a governor, Congressperson, or the sitting President, and is all about some outrageous, unbelievable new thing they are trying to push through. Algorithms love to push these stories because so many people share them, adding comments about how outraged they are.

Now there are two sides to this coin. Highly partisan politicians will often use divisive terminology and point at bogeymen in order to keep their base fired up, leveraging that affective polarization to stay popular. But not all of them do it all the time. Often you'll find that a report of their outrageous behavior contains no more than a morsel of truth, that there is much more to the story, and that their real comments, in context, were not outrageous at all.

So just be aware that the divisive politicians you like may be doing the former, and the outrageous stories about the divisive politicians you hate may be examples of the latter. There is much more sanity in the world than insanity; it's just the algorithms that would have you think otherwise.

Search for it on an unbiased news site.

If your article is low-quality information intended to spark divisive outrage, then you will probably not find that story at all on high-quality, unbiased news sites. So how do you find those? Which news sources are both reliable and unbiased? Fortunately, I can give you my favorite four right now: the Associated Press, Reuters, United Press International, and the BBC. That's according to my preferred source, the Interactive Media Bias Chart from Ad Fontes Media (which, incidentally, ranks Skeptoid as both highly reliable and unbiased).
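If you want to make that search a one-keystroke habit, a site-restricted web search against each of those four outlets does the job. A minimal sketch in Python; the four domains are the outlets' real ones, but the helper itself is my own hypothetical convenience:

```python
import webbrowser
from urllib.parse import quote_plus

# The four outlets named above, by their domains.
RELIABLE_SOURCES = ["apnews.com", "reuters.com", "upi.com", "bbc.com"]

def search_reliable_sources(claim: str) -> None:
    """Open one site-restricted search tab per trusted outlet."""
    for domain in RELIABLE_SOURCES:
        query = quote_plus(f"site:{domain} {claim}")
        webbrowser.open(f"https://www.google.com/search?q={query}")

search_reliable_sources("governor signs outrageous new bill")
```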

Look for it on a fact-checking website.

If the story's bogus, someone else has almost certainly already done the work for you. Check it out. Search for it on a couple of your favorites among these top four fact-checking websites: Snopes, PolitiFact, FactCheck.org, and BBC Reality Check. (And for those of you springing to your keyboards right now to tell me how incredibly biased those sites are and how I could be so gullible, spare yourself the effort; it's you who have already been fooled.)
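For those who prefer to script this step, Google operates a Fact Check Tools API that aggregates claim reviews from fact-checkers, including some of the sites above. The sketch below assumes that API's claims:search endpoint and response fields as I understand them; verify the details against the current documentation and supply your own API key:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; obtain your own key from Google

def search_fact_checks(claim: str) -> list[dict]:
    # Endpoint and parameters per Google's Fact Check Tools API
    # (v1alpha1 at the time of writing; treat as an assumption to verify).
    params = urllib.parse.urlencode({"query": claim, "key": API_KEY})
    url = f"https://factchecktools.googleapis.com/v1alpha1/claims:search?{params}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("claims", [])

for claim in search_fact_checks("example outrageous claim"):
    review = claim.get("claimReview", [{}])[0]
    print(claim.get("text"), "->", review.get("textualRating"), review.get("url"))
```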

Always do a quick double-check on the source.

Is the article from a familiar news site that you know to be legitimate? If not, you'd better do a quick check to see whether the site is for real, or whether it's a parody or satire site, or just some garbage site thrown together recently, without a trustworthy provenance.
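One quick, scriptable provenance check is the domain's registration age: a "news" site registered a few weeks ago deserves extra suspicion. A minimal sketch, assuming a Unix-like system with the standard whois command-line tool installed; WHOIS field names vary by registry, so this is only a heuristic:

```python
import subprocess

def domain_creation_date(domain: str) -> str | None:
    """Return the first WHOIS line mentioning a creation date, if any."""
    result = subprocess.run(
        ["whois", domain], capture_output=True, text=True, timeout=30
    )
    for line in result.stdout.splitlines():
        if "creation date" in line.lower() or "registered on" in line.lower():
            return line.strip()
    return None

# Hypothetical domain, purely for illustration:
print(domain_creation_date("example-breaking-news-site.com"))
```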

What to do when you find misinformation

Whenever you see a post on social media that you recognize as algorithm-driven propaganda, starve it of oxygen. Hide it. Block or mute the sender when appropriate. If it's posted by a friend, send the friend a private message saying it looks like algorithm-driven propaganda. Make sure it's a private message and not a comment on the post itself, because any comment, even a negative one, counts as engagement and boosts the article even more.

Finally, I recommend cleaning up your own sources. If you use a tool like Apple News or some other news-alert service to bring you the day's headlines, scrub it of any biased sources. Whenever an alert pushes me an article from a source that's off center, I block that source, relying mainly on content from Reuters and the AP. It may take a while to get used to, since many of us have come to enjoy our favored echo chambers and look forward to each day's outrageous news about a hated political figure. But once you do, you'll find it becomes easier and easier to spot the stuff your friends are sharing as algorithm-driven misinformation.
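The "scrub your sources" step amounts to an allowlist over whatever your news tool feeds you. A minimal sketch of the idea; the outlets kept follow the chart discussed earlier, and the sample headlines are invented:

```python
# Outlets to keep, per the Ad Fontes chart discussed earlier.
ALLOWLIST = {"Associated Press", "Reuters"}

def scrub(alerts: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Keep only (source, headline) pairs from allowlisted outlets."""
    return [(source, headline) for source, headline in alerts
            if source in ALLOWLIST]

todays_alerts = [
    ("Reuters", "Central bank holds interest rates steady"),
    ("OutrageDaily.example", "You won't BELIEVE what they did now"),
    ("Associated Press", "Storm system moves up the coast"),
]
print(scrub(todays_alerts))  # the outrage-bait source is gone
```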

I would like to close by sharing a personal thought I've had these past few years. It's something I first noticed during the COVID-19 pandemic: a tendency I caught in myself of prejudging people. I would see a stranger in a store or on the street and make a judgment based on their clothes, their car, their mannerisms, or even something overt such as a slogan printed on their shirt; and based on that alone, I thought I knew all I needed to know about whether they followed government-mandated COVID restrictions or disregarded them in favor of their own personal freedom or their own research. I made snap judgments about people and decided whether I liked and trusted them or disliked and distrusted them. The moment I realized I was doing this, I realized I was an active part of the problem, a contributor to the worldwide rise in affective polarization. I chose instead to be part of the solution, to the degree I was able; now, when I see a person I'm inclined to dislike, I try to see something we share, to find some common ground. Even though I'm unlikely to have any interaction with that person, I still force myself to see them in a positive light. The more I do it, the easier it becomes, and the less susceptible I am to online misinformation.




 



Cite this article:
Dunning, B. "How to Spot Misinformation." Skeptoid Podcast. Skeptoid Media, 14 Nov 2023. Web. 21 Nov 2024. <https://skeptoid.com/episodes/4910>

 

References & Further Reading

Brady, W. "Social media algorithms warp how people learn from each other, research shows." The Conversation. The Conversation US, Inc., 21 Aug. 2023. Web. 7 Nov. 2023. <https://theconversation.com/social-media-algorithms-warp-how-people-learn-from-each-other-research-shows-211172>

CUNY. "Websites for Fact-Checking." CSI Library. City University of New York, 1 Dec. 2020. Web. 7 Nov. 2023. <https://library.csi.cuny.edu/c.php?g=619342&p=4310783>

Editors. "How to Spot Fake News." Resource Center. AO Kaspersky Lab, 26 Sep. 2021. Web. 7 Nov. 2023. <https://usa.kaspersky.com/resource-center/preemptive-safety/how-to-identify-fake-news>

Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., Westwood, S. "The Origins and Consequences of Affective Polarization in the United States." Annual Review of Political Science. 10 Dec. 2018, Volume 22: 129-146.

Kiely, E., Robertson, L. "How to Spot Fake News." FactCheck.org. The Annenberg Public Policy Center, 18 Nov. 2016. Web. 7 Nov. 2023. <https://www.factcheck.org/2016/11/how-to-spot-fake-news/>

Lelkes, Y., Sood, G., Iyengar, S. "The Hostile Audience: The Effect of Access to Broadband Internet on Partisan Affect." American Journal of Political Science. 1 Jan. 2015, Volume 61, Number 1: 5-20.

Menczer, F. "Facebook whistleblower Frances Haugen testified that the company’s algorithms are dangerous – here’s how they can manipulate you." The Conversation. The Conversation US, Inc., 7 Oct. 2021. Web. 7 Nov. 2023. <https://theconversation.com/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420>

Torcal, M., Reijan, A., Zanotti, L. "Editorial: Affective polarization in comparative perspective." Frontiers in Political Science. 23 Jan. 2023, Volume 5: 1-3.

 
