In Part I, I discussed why anecdotes are convincing, and why we might not want to simply assume that they represent general reality. In this part, I will discuss an example of how anecdotal evidence, even when considered “mainstream”, can turn out to be very wrong. On the flipside, I will also go over some cases where the “authorities” — namely, the FDA and CDC — make use of anecdotal reports to help them detect potential trends of adverse reactions, which can be very useful in saving lives before a drug or practice becomes too widespread.
Do anecdotes predict the evidence?
Continuing from the set of comments that inspired this article:
Defining anecdotal evidence as “what really happens to real people and they tell you about it,” the editors of the Diabetic Reader (Fall/Winter 1996) stated, “We find anecdotal evidence, especially in diabetes, is often ahead of the scientific pack and is often right!”
As best I can tell, “The Diabetic Reader” was a newsletter published by June Biermann and Barbara Toohey, neither of whom was a doctor. Rather, June suffered from diabetes, and their co-authored books and newsletters were meant to provide advice, personal stories, and so on to fellow sufferers (obviously a good thing). It’s possible that the authors included citations for the specific claim about diabetes, but the only reason that the Diabetic Reader — or anybody, for that matter — can make such a claim (assuming the quote is accurate) about the evidence being “ahead” or “right” is because of subsequent research that establishes its correctness. This has not occurred with homeopathy, acupuncture, reiki, or the supposed dangers of aspartame. In fact, the opposite has occurred. When individual studies and, more importantly, systematic reviews are performed, they find effects no better (or worse) than a placebo.
So it may well be true that “anecdotal evidence is often ahead of the scientific pack and often right”. But that is not the same as “this anecdotal evidence is right”. Anecdotes can be a precursor to evidence. And one would think they would generally be compatible with the evidence once it has been found (if the anecdotes are about the cause of things). But sometimes anecdotes (which can easily take on the form of urban legends and “old wives’ tales”) are flat out wrong.
Even mainstream anecdotal beliefs can be wrong
Scientific skepticism obviously spends a lot of time pointing out the lack of evidence for alternative medicine or product claims (both positive and negative), but that is not the only time that anecdotes can turn out to be wrong. Let’s take a look at some cases where the received anecdotal wisdom turned out to be quite wrong.
Ulcers were famously thought for a very long time to be caused by stress. No doubt many people still believe this. But in the early 1980s Barry Marshall of Australia and his colleague Robin Warren stumbled on the fact that H. pylori could survive in stomach acid and hypothesized that this could be the cause of ulcers (prior to that, it was accepted wisdom that no bacteria could live in the stomach). After experimenting (including self-experimentation!) on both cause and cure, they had demonstrated their case well. Now it is accepted knowledge, with good evidence, that peptic ulcers have very little to do with stress.
A portion of an interview in Slate magazine (actually part of a larger series on being wrong in general) with Marshall is illuminating:
[Marshall] .. this tradition emerged that ulcers were caused by stress or turmoil in one’s life. I don’t know where the data came from, but there was this idea that stress caused high acid levels; maybe there was a small amount of evidence for that, although I haven’t been able to find it when I’ve looked. Anyway, all those things added up to convince people that ulcers were caused by stress. There was no proper data of any kind….
[Slate] Are you saying that there was basically no empirical evidence to support the stress-and-acid hypothesis?
You can always find stress in someone’s life if you want to. You ask a few questions and eventually it’s, “Yes, I admit, I was worried about something recently.” So they tried to find evidence for stress causing ulcers, and whenever they had an experiment which worked, it would just be blown out of all proportion, and everyone would get so much publicity out of it that you would think, “Ah, at last, it’s proven.” But the data was very bad. And in fact there was plenty of evidence showing that stress didn’t make much difference.
This sounds very similar to what I have encountered in my research into aspartame. You have a lot of people who really, honestly, believe that aspartame is/was the cause of their ills, and they’ll jump on any study that could even be remotely interpreted to support their case. The same applies to homeopathy: there is no plausible theoretical explanation for its supposed power, but adherents will jump on any study that demonstrates some “strange” effect in water, regardless of its relation to the supposed mechanism of homeopathy. Or take acupuncture — the theory holds that there are very specific “meridians” associated with parts of the body, but adherents will point to studies that use random points, or don’t even pierce the skin, and somehow think it scores a point for them. And so with aspartame, folks will jump onto any study about methanol or formaldehyde and claim it “proves” aspartame will kill you or give you MS or fibromyalgia.
There is also the widely held belief that drinking milk will produce a bunch of mucus when you have a cold, and should therefore be avoided. I certainly believed this until recently, and could have even told you that I “remembered” it happening to me. But when it was looked at in actual studies, there turns out to be no relationship between production of mucus and consumption of dairy products during an infection.
But are anecdotes actually ignored by the scientific and governmental authorities?
The bulk of this article has discussed anecdotes in terms of how they cannot really be treated as hard data. But is it fair to say that “They” ignore anecdotes entirely? No, it’s not. In fact, follow-through and statistical analysis of anecdotal reports can help identify potential issues or trends related to vaccines, drugs, and other products.
FDA Adverse Event Reporting System (FAERS)
There are two systems which help support the “surveillance” of drug and vaccine effects: the FDA Adverse Event Reporting System (FAERS) and the Vaccine Adverse Event Reporting System (VAERS). These systems exist so that individual consumers, or their doctors, can report any side effects that they believe occurred as a result of a vaccine, drug, or food additive. Note that the data collected here is not the same as a study that demonstrates actual side effects. Rather, like all anecdotes, it can serve as a precursor to actual evidence. If, for example, thousands of people report a skin rash after taking some particular medicine, then the FDA can look deeper into that particular issue, or order the manufacturer to do so, and determine whether it’s a general problem or specific to a subset of the population. At that point, there would be evidence — not before. You can actually download the FAERS raw data. The FDA releases quarterly (or so) lists of “potential signals” based on the submitted data. But they caution:
The appearance of a drug on this list does not mean that FDA has concluded that the drug has this listed risk. It means that FDA has identified a potential safety issue, but does not mean that FDA has identified a causal relationship between the drug and the listed risk.
For example, the most recent report, from June 2012, identified some potential for fatal reactions to codeine sulfate after certain pediatric surgeries. Looking deeper at the data (three deaths at the time), the FDA identified that the deaths seemed to involve patients who were “ultra-rapid metabolizers” of codeine. A full page of reports for 2012 is available as well. Because the relative numbers were so small, the effect was unlikely to be found in the smaller clinical trials. But unlikely events can affect real people when the sample is large enough.
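The kind of “potential signal” screening described above can be illustrated with a toy sketch. Everything here is invented for illustration — the real FAERS quarterly files have a different structure, and real signal detection uses far more sophisticated disproportionality statistics — but the basic idea of counting co-reported drug/reaction pairs and flagging frequent ones for follow-up looks something like this:

```python
from collections import Counter

# Toy adverse-event reports as (drug, reaction) pairs -- invented data,
# standing in for real FAERS records.
reports = [
    ("drug_a", "skin rash"), ("drug_a", "skin rash"),
    ("drug_a", "headache"), ("drug_b", "nausea"),
    ("drug_a", "skin rash"), ("drug_b", "skin rash"),
]

# Count how often each (drug, reaction) pair was reported together.
pair_counts = Counter(reports)

# Flag pairs reported more often than an arbitrary threshold as
# "potential signals" -- candidates for deeper investigation,
# NOT evidence of a causal relationship.
THRESHOLD = 2
signals = [pair for pair, n in pair_counts.items() if n > THRESHOLD]

print(signals)  # -> [('drug_a', 'skin rash')]
```

Note that the flagged pair is exactly where the follow-up work begins, not where it ends — matching the FDA’s caution that a listed drug has only a potential safety issue, not an established risk.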
So it is unfair to say that the mainstream community ignores anecdotal reports that have not been collected via “gold standard” double blind, placebo controlled trials or epidemiological studies. The FDA and CDC take very seriously any reports of adverse effects from drugs and food additives. Even though drugs go through quite a few studies before being marketed, the general availability during post-marketing is when rare effects may stand out.
Vaccine Adverse Event Reporting System (VAERS)
As noted, a similar system exists for vaccines: VAERS. There are in fact acknowledged side effects for vaccines, which you can clearly read on the packaging. The nurse will usually also tell you that you might get a rash or a minor cold, depending on the particular vaccine. But despite the anecdotal belief prevalent on the Internet that vaccines “cause” autism (or other neurological disorders), there simply is no evidence of this when looked at in epidemiological studies or even when analyzing VAERS data. The VAERS data (without identifying information) is available for free to the public: http://vaers.hhs.gov/data/index
But, as with FAERS, the VAERS data must be taken with a grain of salt (emphasis mine) (Varricchio 2004):
However, it is essential for users of VAERS data to be fully aware of the strengths and weaknesses of the system. VAERS data contain strong biases. Incidence rates and relative risks of specific adverse events cannot be calculated. Statistical significance tests and confidence intervals should be used with great caution and not routinely. Signals detected in VAERS should be subjected to further clinical and descriptive epidemiologic analysis. Confirmation in a controlled study is usually required.
The CDC has said the same (emphasis again mine) (Zhou 2003):
VAERS is a passive surveillance system: reports of events are voluntarily submitted by those who experience them, their caregivers, or others. Passive surveillance systems (e.g., VAERS) are subject to multiple limitations, including underreporting, reporting of temporal associations or unconfirmed diagnoses, and lack of denominator data and unbiased comparison groups. Because of these limitations, determining causal associations between vaccines and adverse events from VAERS reports is usually not possible.
Even with all those limitations, the FDA takes the VAERS data quite seriously (from same CDC/Zhou document):
FDA medical officers review all reports of death and other serious events, and they also look each week for clusters within the same vaccine lot. In addition, FDA medical officers evaluate reporting rates of adverse events by lot, as needed, looking for unexpected patterns. During the 11 years, no lot needed to be recalled on this basis.
(Note, regarding the reports of death: earlier in the document they note that all except one were traced to SIDS, consistent with the normal statistics for the time of year, and those reports declined along with the drop in SIDS following the “Back to Sleep” campaign.)
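The “lack of denominator data” limitation quoted above is worth making concrete. A toy calculation (all numbers invented) shows why raw report counts, without knowing how many doses were actually given, cannot be compared as if they were rates:

```python
# Invented numbers illustrating the denominator problem in passive
# surveillance data like VAERS.
reports_vaccine_x = 50        # adverse-event reports received
reports_vaccine_y = 10

doses_vaccine_x = 1_000_000   # doses actually administered (the denominator)
doses_vaccine_y = 20_000

# Raw counts alone make vaccine X look "worse" (50 vs 10 reports)...
# ...but the reporting rate per dose tells the opposite story:
rate_x = reports_vaccine_x / doses_vaccine_x   # 0.00005 reports per dose
rate_y = reports_vaccine_y / doses_vaccine_y   # 0.0005 -- ten times higher

print(rate_x < rate_y)  # -> True
```

And even the per-dose reporting rate is not an incidence rate, since passive systems suffer from underreporting and unconfirmed diagnoses — which is exactly why the quoted guidance says confirmation in a controlled study is usually required.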
Anecdotes, both individual and more generally believed, can turn out to be quite wrong when actually studied. But this does not mean that anecdotes are always wrong, or that they are ignored by scientific and medical authorities. If enough people report an issue and further investigation reveals that a drug they took is the likely cause, the FDA and CDC can issue warnings for the affected population or even pull the drug entirely. But it is important that proper investigations and statistics are actually performed, rather than simply assuming that your individual experience is accurate, or that a community of people’s experiences are accurate (recall Part I’s “Availability cascade”).
Schulz, Kathryn. “Stress Doesn’t Cause Ulcers! Or, How to Win a Nobel Prize in One Easy Lesson.” Slate.com. Published 9/9/2010. Visited February 2013.
US FDA. “2012 Drug Safety Communications”. Updated 12/19/2012.
Varricchio, Frederick, John Iskander, Frank Destefano, Robert Ball, Robert Pless, M. Miles Braun, and Robert T. Chen. “Understanding vaccine safety information from the vaccine adverse event reporting system.” The Pediatric infectious disease journal 23, no. 4 (2004): 287-294.
Zhou, Weigong, Vitali Pool, John K. Iskander, Roseanne English-Bullard, Robert Ball, Robert P. Wise, Penina Haber et al. “Surveillance for safety after immunization: vaccine adverse event reporting system (VAERS)—United States, 1991–2001.” MMWR Surveill Summ 52, no. 1 (2003): 1-24.
Pinnock, Carole B., Neil M. Graham, Arul Mylvaganam, and Robert M. Douglas. “Relationship between milk intake and mucus production in adult volunteers challenged with rhinovirus-2.” American Journal of Respiratory and Critical Care Medicine 141, no. 2 (1990): 352-356.
Wüthrich, Brunello, Alexandra Schmid, Barbara Walther, and Robert Sieber. “Milk consumption does not lead to mucus production or occurrence of asthma.” Journal of the American College of Nutrition 24, no. suppl 6 (2005): 547S-555S.