Lies, Damned Lies, and Polls

Watch out next time you take a poll... is someone trying to learn about you, or manipulate you?

by Brian Dunning

Filed under General Science, Logic & Persuasion

Skeptoid Podcast #883
May 9, 2023
Podcast transcript


Your cell phone rings, and it's an unrecognized number. Let's say you're one of those people who live on the edge and allow unrecognized numbers to ring through, and so you answer it. Surprisingly, this one is not a scam call, but it's the next closest thing: a pollster, calling with a telephone survey. There might be one question, there might be ten. There might be a lightning round of demographic questions at the end of the call. And then you hang up and wonder: what just happened? Because whatever you think the purpose behind that survey may have been, there's a very good chance that you're wrong.

The difference between a poll and a survey is not really a hard and fast one. Both terms refer to the same thing, the questioning of some target group, but a poll is usually short, often with only a single question, while a survey is usually longer, with multiple questions. There is one other important difference, outside the formal definitions, and it's the one we're interested in today: how they can be used or misused. The proper, intended use of polls and surveys is to learn something. A poll is a quick way to find out where people stand on some important issue, while a survey is a way to collect deeper information.

Let's say you're designing a new car, and you want to know what features you should put into it. You're going to want to know what people like and dislike these days; you're going to want to know what they can afford; you're going to want to know what demographic will be interested in it. So you'd probably do a survey to ask all kinds of questions so you can find trends in the data and get a solid handle on what you should be building, at what price, and for whom. Surveys are best for when you really want to learn something, especially nuanced information. You might have to pay people a few bucks to take the survey, but that's OK, because the knowledge is worth it to you.

But let's say you're putting in a new sports stadium, and you've got the choice to build it in town or outside of town. There's no point in a long survey with a bunch of questions; you just want a simple vote from as many people as you can get. That's a job for a poll. You probably don't need to pay anyone to answer your poll, not only because it's so quick and easy, but because there are plenty of people who want their vote heard and are happy to give their opinion.

A poll is also a fine way to learn something. It's like a hammer. Boom, you get one data point, very quickly. But, also like a hammer, you have to hit it square or you'll bend your nail, or skew your results. That means you have to word the question very precisely. Here's a recent example. In April 2023, Navigator Research conducted a poll asking Americans whether they support or oppose Donald Trump being criminally indicted. 52% supported it, and 40% opposed it. But when they asked the same question with the details of what he was indicted for included, support went up to 54% and opposition went down to 39%. What does Navigator Research do with that information? Which results should they report?

There are other subtleties to even the simplest of polls. You can't just ask "Do you prefer Candidate 1 or Candidate A?" without care, because even the order of the options matters. And guess what: we thought we had a very simple thing to do (ask a single question) and now, all of a sudden, we're already mired in science. When there are multiple options, pollsters have to account for primacy bias and recency bias: the tendencies for respondents to select one of the first or last options, respectively. So the order must be randomized across respondents. Primacy bias is the stronger of the two, so when there are only two options, the first one offered will be selected more often, all else being equal. If you want the most accurate data, you need to put Candidate A first just as often as Candidate 1. You have to randomize.
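As a concrete illustration (mine, not from the episode), here's a minimal Python sketch of how a pollster's software might randomize option order independently for each respondent, so that primacy and recency effects cancel out across the whole sample:

```python
import random

def present_options(question, options):
    """Return the question with its answer options freshly shuffled.

    Because each respondent gets an independent shuffle, every option
    appears first (and last) about equally often across the whole sample,
    so primacy and recency effects cancel out in the aggregate results.
    """
    return question, random.sample(options, k=len(options))

# Each respondent sees the candidates in an independently randomized order.
question, order = present_options("Who do you prefer?", ["Candidate 1", "Candidate A"])
print(question, order)
```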

But that's just one small issue with polls, and it assumes that the pollster is trying to collect accurate data. Are there any other purposes for polls and surveys, particularly in the way their results can be used? Of course there are. Learning something is not necessarily the reason many polls are conducted. Consider who hired the pollster, and why they were hired.

All of the complications that can skew the results of a survey or poll are problems that knowledge-seeking survey designers have to be aware of and account for; but to the spin doctor, or the political campaign, or the think tank, or anyone else in the propaganda business, they make up a toolbox of nifty little tricks to get the data to say what they want it to say. Primacy bias is just the beginning. Here are a couple of others.

  • Acquiescence bias: People tend to go for the friendly answer, answering "Yes" to a yes/no question or "Agree" to an agree/disagree question, even when that acquiescence does not reflect their true feelings. Simply agreeing requires less thought, it seems like you're being nicer to the interviewer, and we tend to perceive authority in the questioner and assume they know more than we do. An unbiased question will present the actual choices, rather than asking whether or not you agree with one of them.

  • The context effect: The order of the questions can matter, because some may contain information that skews our perception of later questions. For example, one survey might open with the question of how strongly we approve of the job the President is doing; while another survey might make that the third question, after asking what we thought of that time he bombed civilians in Iraq and what we thought of the economy taking a giant dive. This second survey has primed us with negative information about the President before asking what we think of him; one standard way pollsters detect this is sketched below.
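Here's a minimal sketch (my illustration, with hypothetical question names) of a split-ballot design, a standard technique for detecting question-order effects: respondents are randomly assigned one of two question orders, and the pollster compares the answers to the shared question between the two groups:

```python
import random

# A split-ballot design: each respondent is randomly assigned one of two
# question orders, and the pollster compares answers to the shared question
# (here, presidential approval) between the two groups.
ORDER_A = ["presidential_approval", "bombing_opinion", "economy_opinion"]
ORDER_B = ["bombing_opinion", "economy_opinion", "presidential_approval"]

def assign_ballot():
    """Randomly assign a respondent to one of the two question orders."""
    return ORDER_A if random.random() < 0.5 else ORDER_B

# If approval rates differ meaningfully between the two groups, the earlier
# questions are priming respondents: that's a context effect.
print(assign_ballot())
```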

Typically, unbiased surveys use what's called probability sampling to determine who to poll. This means that the sample you select will, statistically, match the population at large. Two factors are needed to do this correctly:

    1. Every person in the population must have a nonzero chance of being selected.

    2. Each person's chance of being selected must be known, so that responses can be weighted when those chances aren't equal.

If both of these are done, then you can randomly choose which people to poll — via any method that actually is random — and be assured that your answers will be a true representation of the overall population's views.
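For instance, simple random sampling from a complete list of the population satisfies both conditions at once: everyone's chance of selection is the same, and it's known exactly. A minimal sketch (my illustration, not the episode's):

```python
import random

population = [f"person_{i}" for i in range(100_000)]  # the complete sampling frame
n = 1_000                                             # desired sample size

# Simple random sampling: every person has the same, known chance of
# selection (n / N), which satisfies both conditions at once.
respondents = random.sample(population, k=n)

print(f"Each person's chance of selection: {n / len(population):.2%}")
```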

For example, in the 2016 Brexit Referendum, pollsters failed to do this. They ended up oversampling younger and more educated people who favored remaining in the European Union, and undersampled older and less educated people who favored leaving. The resulting expectation was that remaining in the EU would probably win, though by a slim margin, when in fact the referendum went the other way.
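When a realized sample is skewed like this, pollsters try to repair it by weighting each respondent according to how over- or underrepresented their group is relative to known population shares, a technique called post-stratification. Here's a sketch of the mechanism with invented numbers (not the actual referendum data):

```python
# Post-stratification weighting, with invented numbers: these are NOT the
# actual 2016 referendum figures, just an illustration of the mechanism.
population_share = {"young": 0.40, "old": 0.60}  # known, e.g. from census data

# Each respondent is (age group, answer); young voters are oversampled here.
sample = ([("young", "remain")] * 55 + [("young", "leave")] * 15 +
          [("old", "remain")] * 10 + [("old", "leave")] * 20)

sample_share = {g: sum(1 for grp, _ in sample if grp == g) / len(sample)
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(1 for _, ans in sample if ans == "remain") / len(sample)
weighted = (sum(weights[grp] for grp, ans in sample if ans == "remain")
            / sum(weights[grp] for grp, _ in sample))

print(f"Raw 'remain' share:      {raw:.1%}")       # 65.0%, skewed by oversampling
print(f"Weighted 'remain' share: {weighted:.1%}")  # 51.4%, corrected
```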

Nonprobability surveys are those in which you're not trying to measure the whole population, but a deliberately targeted subset. You might want to find out what features are important to pickup truck owners, so obviously you would want to limit your population to pickup truck owners.

While there are countless legitimate cases for limiting your survey population, it's another obvious way that pollsters with ulterior motives can produce desired results instead of actual results. If I go to churches to ask whether people are religious, I'll be able to report that nearly everyone polled is religious. If I work for the Japanese Sumo Federation's marketing department, I might go to a sumo tournament to ask people what their favorite sport is, and guess what survey result I'll be able to publish.

This type of population selection is called sampling bias, and it's used to fool the public all the time. A political candidate looking to report that his campaign is favored might have his pollsters go to one of his own rallies to conduct the poll. Recently a religious anti-abortion group reported the results of a survey that found women do not actually want the right to choose; it turned out the survey was seen and taken only by visitors to their own website.

There's also something called the mode effect, which refers to the mode, or method, by which the poll is administered: telephone, mail, Internet, in person, etc. When you're face to face with a person, you might be less likely to give honest and open answers to certain sensitive questions. Pew Research conducted an experiment in which they asked people about their financial status. When asked online, 20% admitted it was poor; but when asked in person, only 14% did. That six-percentage-point difference was a mode effect. When they asked a question that was not quite so personal, like what people thought of healthcare laws, no mode effect appeared.

The mode effect can be used deliberately to take advantage of another common bias seen in survey respondents:

  • Social desirability bias, also called the Bradley effect: Named after Tom Bradley, the longtime Black mayor of Los Angeles who lost the 1982 California governor's race to a white candidate despite leading in the polls. People tend to give pollsters answers that are more socially acceptable, such as indicating a willingness to vote for a non-white candidate. Then, when they get behind the anonymity of the voting machine curtain, they vote the way they truly feel, social desirability thrown to the wind.

Let's say you're a pollster hired by a political campaign that wants to report that Americans don't care very much about social justice issues. You're well aware of the mode effect and of social desirability bias, so your best bet is to ask your survey questions in an impersonal way, like via mail or a website, so that people don't feel pressured to give more socially desirable responses. But if your goal is the opposite — say, to report that social justice is more important to Americans than ever — you should probably ask these questions face to face, because fewer people are likely to admit in person that they don't care about women's rights, LGBT rights, and so on.

There is an even darker side to the polling business, and it's called the push poll. A push poll is one where the poll itself is little more than a ruse; probably nobody's ever going to even look at the responses. The most common type of push poll is one designed to trash a political candidate. The pollster might ask "Would you vote for Joe Biden knowing that he'd be the oldest man ever to take office?" Nobody cares how people answer; the whole point is just to plant doubts about Joe Biden's fitness in the respondent's mind.

One such smear campaign was conducted against candidate John McCain in 2000. Telephone pollsters asked probable McCain voters, primarily white Christians, "Would you be more likely or less likely to vote for John McCain for president if you knew he had fathered an illegitimate Black child?" He hadn't, of course; it was just an attempt to scare away his voters.

This episode is not a comprehensive exposé of the many ways surveys and polls can be used and abused to both inform and deceive us. My hope is that it is at least enough of a spark to prompt you to investigate further on your own. The more people learn to recognize when a survey is legit and when it's not, the better informed we'll all be, and the less effective the misinformers' tools will be. Just remember: whenever anyone approaches you with a poll, your first reaction should always be skepticism.


Cite this article:
Dunning, B. "Lies, Damned Lies, and Polls." Skeptoid Podcast. Skeptoid Media, 9 May 2023. Web. 28 Mar 2024. <https://skeptoid.com/episodes/4883>

 

References & Further Reading

Kennedy, C. "Mode effects." Methods 101. Pew Research, 7 Feb. 2019. Web. 5 May 2023. <https://www.pewresearch.org/methods/2019/02/07/methods-101-mode-effects/>

Kennedy, C. "Survey Question Wording." Methods 101. Pew Research, 21 Mar. 2018. Web. 5 May 2023. <https://www.pewresearch.org/methods/2018/03/21/methods-101-video-question-wording/>

Kuru, O., Pasek, J. "Improving social media measurement in surveys: Avoiding acquiescence bias in Facebook research." Computers in Human Behavior. 1 Apr. 2016, Volume 57: 82-92.

Sabato, L. "When push comes to poll." Washington Monthly. 1 Jun. 1996, Volume 28, Number 6: 26-31.

Schuman, H., Presser, S., Ludwig, J. "Context Effects on Survey Responses to Questions About Abortion." Public Opinion Quarterly. 1 Jul. 1981, Volume 45, Number 2: 216-223.

Stuart, G., Grimes, D. "Social desirability bias in family planning studies: A neglected problem." Contraception. 1 Jan. 2009, Volume 80, Number 2: 108-112.

 

©2024 Skeptoid Media, Inc. All Rights Reserved. Rights and reuse information

 

 

 

Donate

 

 


Shop: Apparel, books, closeouts

 

 

Now Trending...

Tartaria and the Mud Flood

The Truth About Remote Viewing

The UFO Rogues Gallery Takes Over America, Part 1

Environmental Working Group and the Dirty Dozen

The UFO Rogues Gallery Takes Over America, Part 2

The Siberian Hell Sounds

On Railroad Tracks and Roman Chariots

Foo Fighters

 

Want more great stuff like this?

Let us email you a link to each week's new episode. Cancel at any time: