The Transportation Security Administration (TSA) has been criticized for engaging in what many people call “security theater” – elaborate security protocols that give the superficial impression of security but are not very effective. The result is a significant inconvenience to passengers (they make you take off your shoes, rummage through your luggage, and confiscate your honey), but it is not clear how much of an inconvenience it is to would-be terrorists.
Over the past 10 years the TSA has invested $900 million in a technique called behavior analysis – picking out potential terrorists from the crowd by analyzing their behavior. A recent Government Accountability Office (GAO) report on the SPOT program (Screening of Passengers by Observation Techniques) concluded that the program does not work and that future funding should be limited. (http://www.gao.gov/assets/660/658923.pdf)
“Meta-analyses and other published research studies we reviewed do not support whether nonverbal behavioral indicators can be used to reliably identify deception. While the April 2011 SPOT validation study was a useful initial step and, in part, addressed issues raised in our May 2010 report, it does not demonstrate the effectiveness of the SPOT indicators because of methodological weaknesses in the study. Further, TSA program officials and BDOs (Behavioral Detection Officer) we interviewed agree that some of the behavioral indicators used to identify passengers for additional screening are subjective. TSA has plans to study whether behavioral indicators can be reliably interpreted, and variation in referral rates raises questions about the use of the indicators by BDOs.”
The idea is not bad in theory. When someone is consciously lying or engaging in deception they have to think about it, which should cause cognitive cues. Further, their emotional state is likely to be affected, potentially causing emotional cues.
In fact the literature shows that such cues exist. For example, eye movements with respect to “critical images” relating to a deception are statistically different between deceivers and non-deceivers.
In practice, however, using behavior to determine deception or ill intent does not work well. The overall success rate of using non-verbal cues to detect deception, according to the GAO review, is about 54%, which is only slightly better than chance.
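The base-rate problem makes that 54% figure even worse than it sounds. Here is a rough Bayesian sketch of why: the accuracy figure comes from the GAO review, but the threat base rate and the assumption that the 54% applies symmetrically to liars and truth-tellers are mine, for illustration only.

```python
# Illustrative arithmetic, not figures from the GAO report: assume the
# screener correctly classifies 54% of deceivers (sensitivity) and 54%
# of honest travelers (specificity), and that 1 traveler in 100,000 is
# an actual threat (an assumed base rate).
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(actual threat | flagged by screener), via Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.54, 0.54, 1e-5)
print(f"P(threat | flagged) = {ppv:.7f}")
# Only about 1 in 85,000 flagged travelers would be a real threat.
```

Even a much better detector would still flag overwhelmingly innocent travelers at realistic base rates; a near-chance one drowns screeners in false positives.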
So what’s the problem?
The difficulty lies in the fact that human cognition and behavior are incredibly complex. There is therefore a tremendous amount of behavioral noise in the system, making it very difficult to tease out any signal. In addition, as the GAO review above notes, there is often subjectivity in the evaluation, which introduces bias.
All of these problems also plague lie detectors using physiological cues, which is why their performance is also only slightly better than chance.
Part of the noise is that different people have different baseline behaviors. Some people are just “fidgety,” for example, or anxious. Different people have different levels of comfort with eye contact and personal space. There are also cultural differences.
Further, it is possible with training and practice to erase the cognitive and emotional cues of deception – to beat any lie detection system. Getting into character, like any good actor, and behaving “as if” one were just another businessperson on a routine trip, might be enough to beat the system.
The literature shows that there is no one “Pinocchio” response in humans – no one behavior that most or all people display that betrays lying.
In brief, while there are statistical differences in behavior when lying, these differences are lost in a sea of behavioral noise, variability among people, cultural differences, and good acting. Therefore, in practice, such systems are of extremely limited utility – so much so that they may be counterproductive.
However, I would not say that the entire endeavor should be scientifically abandoned. The process of deception detection is just an order of magnitude more difficult than was previously suspected, but that does not necessarily mean it is impossible. What I think it will take, however, is a computer algorithm analyzing multiple detailed behavioral parameters, backed by a large amount of statistical information.
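A minimal sketch of what such an algorithm might look like: combining several individually weak indicators into a single statistical score. The indicator names, weights, and bias below are entirely invented for illustration; a real system would have to learn them from large, validated datasets, and would measure each cue relative to a person's own baseline to handle the individual variability discussed above.

```python
import math

# Hypothetical indicators and weights (invented for illustration; a real
# system would learn these from large amounts of statistical data).
WEIGHTS = {
    "gaze_aversion": 0.4,
    "response_latency": 0.7,
    "fidgeting": 0.2,
}
BIAS = -1.5  # prior pulling scores down: most travelers are not deceptive

def deception_score(indicators):
    """Logistic combination of indicator values, each expressed as a
    z-score relative to the individual's own baseline behavior."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in indicators.items())
    return 1 / (1 + math.exp(-z))  # score in (0, 1)

score = deception_score(
    {"gaze_aversion": 1.2, "response_latency": 0.5, "fidgeting": -0.3}
)
```

The point of the design is that no single cue is decisive – no “Pinocchio response” – but many weak, baseline-adjusted cues pooled statistically might, in principle, rise above the noise.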
What the GAO report indicates, it seems to me, is that the TSA prematurely tried to implement a system that was not scientifically ready. They would have been far better off investing some of that $900 million in additional basic research, or in very limited translational research.
Perhaps eventually we will have systems that are 80-90% effective in picking out high-probability suspects for further detailed screening. I doubt any system can approach 100%, because there are likely always to be some people who are just too good at acting innocent, keeping their deception signal below noise levels.
Steven Novella, M.D. is the JREF's Senior Fellow and Director of the JREF’s Science-Based Medicine project.