Lie Detection
Polygraphs are no better than a roll of the dice at determining whether you're lying.
Skeptoid Podcast #422 by Brian Dunning
A lot of people, like police officers and gamblers, think they can tell when a person is lying. But what we've always longed for is hard data: testable, mechanical proof that a subject is telling the truth or lying. For a long time, the standard has been the polygraph machine. Unfortunately it's also widely believed to be unreliable and to be inadmissible in a court of law, so today we're going to look at the hard data to see what polygraphs can and cannot do, what other lie detection techniques may be on the immediate horizon, and how they fare in comparison. So put out that fire on your pants, and sit back.

Polygraph machines haven't changed much since the earliest versions were introduced at the beginning of the twentieth century. They combine readings of blood pressure, respiration, pulse rate, and skin conductance, graphing them out with moving needles on a paper scroll. The idea is that these readings will change based on your stress level as you tell a lie. While that basic concept is sound, the problem — and it's a big one — is that any real effect is lost under a sea of other variables. Not only can the subject manipulate all of those readings with simple actions (biting the tongue, poking oneself with a hidden sharp object or fingernail, or even clenching the anal sphincter muscle), but the results are highly dependent upon the interaction between the subject and the polygrapher.

A large part of a polygraph test consists of the presentation. The machine is intended to be intimidating, as are all the wires and sensors attached to the subject's body, and as are actions by the polygrapher such as marking the scroll with a pen at mysterious intervals. The polygrapher always begins by making you feel that you are very easy to read; for example, by asking you to lie in answer to an innocent question, like whether you're wearing blue jeans, and then looking at the results and reacting as if you are comically easy to read. The whole show is designed to make you anxious about lying, so that if you do lie during the test, your stress will hopefully rise high enough above the noise level to give a useful reading. If you go in knowing all of this, knowing that you're not overmatched and that this is a fair fight, you've got a great chance of yielding no useful results, whether you have anything to hide or not.

But more than that, the reading of polygraph results is completely subjective. There was a famous case in 1978 of a man named Floyd "Buzz" Fay, who was arrested for a murder he had nothing to do with and convicted based on a polygrapher's analysis of a lie detector test. Fay's appeal included reports from four other polygraphers who examined the same charts and concluded there was no evidence of any deception. Fay was ultimately released when other investigations found the true killer, and he then became a central figure in the fight against the use of polygraph tests in courts.

Fay was not the only data point. In 1983, soon after Fay was released, the U.S. Congress Office of Technology Assessment published Scientific Validity of Polygraph Testing: A Research Review and Evaluation. This technical memorandum found:
Since 1993, the United States Federal Rules of Evidence have followed what's called the Daubert standard, which requires a judge to admit expert testimony only when it rests on scientifically valid methods. The fallout from this allowed polygrapher testimony on a case-by-case basis. Whereas lie detector tests had been virtually unheard of in courtrooms since 1923, the 1996 appellate decision in United States v. Scheffer used Daubert to allow a defendant to present polygraph data based on his Sixth Amendment rights, in a case where the court could not conclusively show this particular polygraph test to have been unscientific! In short, the Scheffer decision allows the Daubert standard to be used in exactly the opposite way it was intended; somewhat along the lines of Mark Twain's comment that "The first and last aim and object of the law and lawyers was to defeat justice."

But in other cases, the government stood firm on the science. In 1988, the Employee Polygraph Protection Act was enacted to prevent most private employers from requiring employees and potential employees to take lie detector tests for any reason. And in 2003, the National Research Council published The Polygraph and Lie Detection, 416 pages of research analysis pertaining to the use of polygraphy in security screening, which concluded:
In response to such blows, the lie detection industry has turned to other technologies. Perhaps the worst is voice pitch analysis, used over the telephone by some insurance companies. Software variously called "Layered Voice Analysis" or "Voice Risk Analysis" looks for changes in the voice pitch of the customer on the other end of the line. The vendors of such software point to reductions in fraudulent claims. But in a 2009 paper, researchers determined that any benefit realized was simply the result of the customers being informed that lie detection technology was in place. Said one of its authors, any reduction in fraud "is no proof of validity, just a demonstration that it is possible to take advantage of a bluff." Social psychologists refer to this tendency for people to be more honest when they believe they are being monitored as the "bogus pipeline" effect.

With existing methodologies for detecting lies essentially all discredited, an arms race began, with the lie detection industry (and all who might benefit from it) hot in pursuit of a reliable technology. Tracking of eye movements and pupil dilation has been studied for a number of years now, based on the theory that your brain has to work harder when it's lying. This workload, called cognitive load, keeps the brain busy and results in a corresponding reduction in the number of random eye movements. In one 2012 study, researchers assigned test subjects to watch a video of a crime and then either answer questions about it truthfully, make up an unrehearsed lie about it, or retell a rehearsed lie. They used discriminant analysis, a statistical method for sorting objects into two or more classes, and achieved 69% accuracy at identifying which of the three groups a given subject belonged to (a rough illustration of this kind of three-way classification appears below). That's double what random chance would predict, but still wrong about a third of the time; not reliable enough for most real-world applications.

But the Holy Grail for lie detection is to look directly into the subject's brain to see definitively whether they're telling the truth or not. For many, this suggests the use of fMRI (functional magnetic resonance imaging), which can show where blood oxygen usage is most active within the brain in real time. The hope is that lies and truths will show different areas of the brain being used, but this is a complex prospect. First, we don't understand the brain well enough to make any predictions about what we'd expect to see; second, everyone's brain is different; and third, there's no reason to suspect that fMRI lie detection would be any more immune to countermeasures than polygraphs are.

Nevertheless, neuroscientists have been working with this idea for nearly as long as we've had functional magnetic resonance technology. A 2013 review published in Frontiers in Human Neuroscience sought to determine whether we might reasonably expect answers to these questions, including broader ones such as whether it will ever be politically or socially acceptable to allow direct intrusion into our brains — the ultimate loss of privacy. But it seems we might not even get to that point, as even the underlying science, at least so far, has been shaky. The authors wrote:
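To make those numbers concrete, here is a minimal, purely illustrative sketch of the kind of three-way discriminant analysis described in the eye-tracking study above. Everything in it is an assumption for illustration: the Python and scikit-learn tooling, the feature names, and the synthetic numbers are stand-ins, not the researchers' actual method or data. The point is only that a classifier sorting subjects into three groups is judged against a chance rate of one in three, which is why 69% counts as double chance while still being wrong about a third of the time.

```python
# Purely illustrative: a three-class linear discriminant analysis on
# made-up "eye movement" features, showing how accuracy compares with
# the one-in-three chance baseline. Not the study's code or data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_group(center, n=60):
    # Hypothetical per-subject features (e.g. fixation count and mean
    # pupil dilation), drawn from an invented distribution per group.
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

X = np.vstack([
    make_group([10.0, 3.0]),  # group 0: truthful
    make_group([8.5, 3.6]),   # group 1: unrehearsed lie
    make_group([9.2, 3.3]),   # group 2: rehearsed lie
])
y = np.repeat([0, 1, 2], 60)

# Discriminant analysis fits linear boundaries separating the classes;
# cross-validated accuracy is then compared against chance (1/3).
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"accuracy: {scores.mean():.0%}  vs. chance: {1/3:.0%}")
```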
In particular, they noted a 2003 paper published in Cerebral Cortex in which investigators found initially encouraging results, that rehearsed lies registered quite differently from spontaneous lies:
But then they went on to note some discouraging qualifications to this apparent success:
And the more research done, the more disheartening the results have been. A study published in NeuroImage in 2011 gave participants the task of trying to defeat the fMRI lie detection using techniques as simple as wiggling a finger or toe in association with a given stimulus. Without the countermeasures and with cooperating subjects, the researchers were able to discriminate between lies and truths up to 100% of the time after practice on any given single subject; but once the subjects used these simple countermeasures, accuracy dropped to 33%, significantly below random chance. The authors made three concluding points:
An understatement, it would seem. This is never going to be an easy problem. We can measure a heart rate. We can tell whether a bone is broken or not. But determining deception is not a binary question, and never will be, because lies cover spectrums in multiple directions. As the Greek statesman Demosthenes said, "A man is his own easiest dupe, for what he wishes to be true he generally believes to be true." The gradations among deceptions will always be as complex as every human mind, augmented with the subtleties of every situation and every story. It seems that, for the foreseeable future of our understanding of the mind, reliable lie detection will remain a fool's errand. For our episode on lie detection via body language, tells, and cues, go here.