
Don't Try It Before You Knock It

Try it before you knock it… unless you want to know if it really works.

Skeptoid Podcast #533
Filed under Logic & Persuasion


by Brian Dunning
August 23, 2016

One of the biggest challenges in science writing when discussing unproven or implausible therapies and products is that people tend to trust their own personal perceptions more than any other source of information. We tend to go by what we've experienced ourselves, rather than by what other people say they've experienced. Consequently, people are rarely moved by the results of testing and experimentation if the results contradict their own experience. "I know it works," they tend to say, "because it worked for me." And so often this gets projected back onto me: because I have not tried the product myself, but only reported the results of testing, I should not comment on it. "Don't knock it until you've tried it," I am told. Today I'm going to explain exactly why avoiding a personal experience — as the best experimenters do — is not only appropriate, but actually a better way to learn about something. Today I say "Don't try it before you knock it."

Here is the main reason we have that rule, and it's the reason that experimenters seldom include a personal experience when they evaluate something. Human beings are wonderful at appraising experiences, but we are terrible at collecting data. Our senses are faulty, prone to error, and everyone's are calibrated differently. We all bring preconceived notions and expectations. We have personal biases. We are subject to all manner of perceptual errors. We interpret our experiences, and we all interpret them differently. We have moods, up days, down days, personalities, tendencies. To expect any one random person's assessment of an experience — filtered through his own prejudices and biases and preferences — to be a truly objective and factual representation is a fool's errand.

The scientist knows this; he knows his own experience is a worthless indicator, and he knows that an objective evaluation should not be tainted with it.

So what do we do instead of trying it? We rely on controlled testing. When we want to know if something works as advertised, we design a test. A properly designed test employs controls and randomization that cancel out all of the biases and other weaknesses we've discussed. If you want to know whether — for example — listening to a binaural beat audio file will make you fall asleep, a true science fan knows not to bother trying it to see. She knows her sleepiness varies throughout every day, she knows that the expectation that it's supposed to make her sleepy skews her perception. Instead, she looks at properly controlled testing that's been done. Those subjects didn't know what they were listening to, they didn't know what it was supposed to do to them, and some of them unknowingly listened to a placebo recording. She knows the difference between real, statistically-sound data and one person's anecdotal experience.
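
To make that concrete, here is a minimal sketch in Python of the kind of analysis a controlled trial supports. The subjects, numbers, and outcome below are simulated purely for illustration, not data from any real study: half of the pretend subjects are randomly assigned the binaural-beat recording and half a placebo recording, and a standard two-sample t-test then asks whether any difference between the groups is bigger than chance alone would produce.

    # Illustrative sketch only: all numbers below are simulated, not real trial data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_subjects = 200

    # Randomly assign each subject to the binaural-beat file or a placebo recording.
    # Neither the subjects nor the person scoring the results knows which is which.
    assignment = rng.permutation(np.repeat(["binaural", "placebo"], n_subjects // 2))

    # Hypothetical outcome: minutes until each subject falls asleep. We simulate
    # no real effect (both groups draw from the same distribution), so any apparent
    # difference is exactly the kind of noise an anecdote mistakes for efficacy.
    minutes_to_sleep = rng.normal(loc=25, scale=8, size=n_subjects)

    binaural = minutes_to_sleep[assignment == "binaural"]
    placebo = minutes_to_sleep[assignment == "placebo"]

    # Two-sample t-test: is the difference between group means larger than chance?
    t_stat, p_value = stats.ttest_ind(binaural, placebo)
    print(f"binaural mean: {binaural.mean():.1f} min, placebo mean: {placebo.mean():.1f} min")
    print(f"p-value: {p_value:.3f} (a large p-value means no evidence of any effect)")

Because the assignment is random and blinded, individual expectations and moods are spread evenly across both groups and cancel out in the comparison; that is exactly the discipline one person's "I tried it and it worked for me" can never supply.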

So whenever we hear:

"Don't knock it until you've tried it."

We know a personal experience is about the worst way to learn about something, for all the reasons discussed above; but even more than that, having tried it personally skews your ability to objectively interpret the data. This doesn't just go for products like new medical treatments, but also for experiences, like watching a psychic give supposed "readings" to audience members, or seeing an apparition believed to be a ghost. In cases like these, the personal experience can be very powerful — especially for those not familiar with the underlying science — and can make it extraordinarily difficult to objectively understand what the data say.

If you truly want to learn about a phenomenon by following the scientific method, there are precious few cases where you should ever include a personal experience in your analysis.

"I was a skeptic until I tried it."

No. The skeptic would have known not to try it, for the reasons we just covered. The scientifically literate skeptic would have chosen a far more disciplined protocol for learning about this thing. Simply handing over money to obtain a worthless personal anecdote means that you were gullible, not skeptical.

"I know it works, because it worked for me."

This is perhaps the single most common misunderstanding: misinterpreting a personal experience as a universal one. One person's subjective assessment of their own feeling at one moment certainly doesn't mean others will make the same assessment, or even that that same person would experience the same thing at a different time under different circumstances.

Look at the 2016 Rio Olympics: many athletes from the USA (a world leader in unscientific alternative treatments) and even a few from other countries proudly sported cupping bruises or brightly colored elastic kinesio tape, two alternative therapies lacking both supporting evidence and any plausible theoretical foundation. Yet despite a solid evidential basis showing that these things do not help in any way, world-class athletes use them, and their professional world-class trainers administer them. Why? Because their personal experiences told them they worked. In the Olympics, an athlete is given excellent nutrition, regimented rest, massage, icing, and all manner of professional attention to every detail. In such an environment, any non-functional addition to this regimen is going to be correlated with recovery and maximum performance. Michael Phelps would have still won his 23 gold medals if I danced around him and shook a rattle; and if his trainer told him it was part of his therapy, he might have become a believer in that too.

Crediting something with efficacy because it appeared to work when you tried it is a perfectly rational conclusion for an intelligent person to make. It just happens to also be unscientific, and no more likely to be true than your mood being the cause of today's weather.

Here's another popular way that people place personal experience above empirical evidence. Have you ever heard a Young Earth Creationist ask of a scientist who says the Earth is billions of years old:

"Were you there?"

The obvious implication is that if the scientist was not there personally observing the Earth billions of years ago, he couldn't know anything about it. This also shows ignorance of how science works. In fact, scientific conclusions are never based simply on personal reports, but upon direct measurements of testable evidence. Nobody's been to the Sun, either, but we know a great deal about it because we can directly measure and analyze the various types of radiation it puts out. Challenging a scientist with "Were you there?" is like assessing his expertise on thermodynamics based on whether he has watched a YouTube video made by a perpetual motion crank. The science of thermodynamics exists independently of YouTube; the geological evidence of the age of the Earth exists independently of what observers were or weren't there.

Here's a question I get nearly every time I do an episode on some spin-the-wheel-and-invent-a-new-alternative-therapy product:

"What's the harm if it makes someone feel better?"

The harm is that the new thing probably isn't what made them feel better; but now that they think it is, they'll spend their time and money on it instead of on something that might actually help. This is the basic danger of anecdotal thinking: it encourages us not only to embrace the unreal, but to abandon the real.

"I know what I saw."

No, you don't. You know your brain's current interpretation of whatever part of the experience it abstracted and stored away. We know for a fact that all our memories change dramatically over time, and that they were incomplete to begin with. And there's no telling how good the data your brain had to work from was in the first place. Lighting conditions came into play, perhaps movement, distractions, and backgrounds; expectations of what should be seen, possible misidentifications, and perceptual errors all had a part in building your brain's experience.

Scientific conclusions are never based on one person's visual sighting. That is anecdotal evidence, and the value of anecdotal evidence is to suggest a direction for research. If you know for certain that you saw the Loch Ness Monster in Urquhart Bay, apologies, but the scientific community is not going to call the mystery solved based on that alone. However, it might make sense to then go look in Urquhart Bay to see if we can find anything that can be collected and tested. Then we'll know for sure what we've got, regardless of how closely it may or may not match what you know you saw.

The scientific method's unwillingness to accept anecdotes as evidence often turns into a mudslinging contest of who's open-minded and who's closed-minded, thus:

"Science is closed minded."

You wouldn't try it because your mind is closed, the charge goes; or it didn't work for you because your mind is closed. This popular accusation against scientists has always boggled me. Closed-mindedness is not just an unwillingness to consider new ideas; more importantly, it is the stubborn refusal to change your mind no matter how much evidence piles up proving you wrong. Open-mindedness should be the willingness to change your mind when you discover you're wrong; yet take virtually any alternative belief that has mountains of absolutely conclusive disproof, like homeopathy or the Flat Earth or the vaccine-autism link, and the adherents who reject that disproving evidence — shutting out all but their preferred belief — are the ones who accuse the science-based perspective of being closed-minded. It's a bizarre charge.

People who say this generally regard science not as a process, but as a rigid set of assertions, dogmatic edicts from on high, from which no departure is tolerated — or as one astrologist recently put it, "Knowledge rubber stamped by some Orwellian Ministry of Truth." It's a straw man caricature of science, of course. It's hard to criticize a process that emphasizes thoroughness and verifiability, but it's very easy to criticize an authoritarian set of doctrines. Such a set would be closed-minded. But a process that encourages constant change and improvement — like science — is, by its very nature, open-minded.

It is a stubborn insistence on remaining open-minded that compels the followers of the scientific method to emphasize only the best data, and to avoid our personal interpretations and preferences. This is why we do not "try it before we knock it". We'll test it before we knock it, embrace it, or react however the test results prescribe; but there is a huge difference between scientific testing and personal dalliance. Most assuredly, we need not personally sample a pseudoscience in order to form a well-informed opinion of it.



Cite this article:
Dunning, B. "Don't Try It Before You Knock It." Skeptoid Podcast. Skeptoid Media, 23 Aug 2016. Web. 21 Dec 2024. <https://skeptoid.com/episodes/4533>

 

References & Further Reading

Beyerstein, B. "Why Bogus Therapies Often Seem to Work." Quackery Related Topics. Quackwatch, 24 Jul. 2003. Web. 22 Aug. 2016. <http://www.quackwatch.org/01QuackeryRelatedTopics/altbelief.html>

Clark, J., Clark, T. Humbug! The Skeptic's Field Guide to Spotting Fallacies in Thinking. Brisbane: Nifty Books, 2005.

Damer, T.E. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments. Belmont, CA: Wadsworth Publishing Company, 3rd edition, 1995. 224.

Morier, D., Keeports, D. "Normal science and the paranormal: The effect of a scientific method course on students' beliefs." Research in Higher Education. 1 Jul. 1994, Volume 35, Number 4: 443-453.

Porter, B.F. The Voice of Reason: Fundamentals of Critical Thinking. New York: Oxford University Press, 2002.

Sagan, C. The Demon-Haunted World: Science as a Candle in the Dark. New York: Random House, 1995.

Shermer, M. Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time. New York: Henry Holt and Company, LLC, 1997. 63-123.

Walton, D. Informal Logic: A Pragmatic Approach. New York: Cambridge University Press, 2008.

 
