I watch a lot of crime shows and listen to a lot of true crime podcasts, so naturally I’ve spent a lot of time with charismatic FBI profiler characters.
It’s really hard to overstate how much this character archetype has penetrated pop culture. Just one profiler, John Douglas, is reportedly the basis for at least four fictional characters: Jack Crawford from the Hannibal novels/movies/TV show; both the Mandy Patinkin and Joe Mantegna characters on Criminal Minds; and Mindhunter’s Holden Ford (played by Jonathan Groff). Manhunt: Unabomber focuses on Jim Fitzgerald (played by Sam Worthington and based on the profiler of the same real name).
TNT’s The Alienist, which I still need to see, features a psychological profiler working in 1896 in New York City. We hardly knew anything about human psychology in 1896!
And the trouble is, while we know a lot more now, we don’t know enough. It’s a real, honest-to-God bummer, but criminal profiling doesn’t appear to work. At all. Even if it did, it’d be a misallocation of intellectual energy.
Malcolm Gladwell made this case, in his trademark narrative and somewhat elliptical way, back in 2007 (I don’t mean that as a dig — it’s a great piece and informed a lot of this post, but it’s also long and New Yorker-y). The research literature is genuinely strange. The consensus is that profiling isn’t very effective, and even profiling-sympathetic people are reduced to arguing that criminal profiles written by professionals are marginally more accurate than ones written by completely untrained people off the street.
And here’s the thing: They’re not much better than random people off the street! A 2007 meta-analysis by criminologists Brent Snook, Joseph Eastwood, Paul Gendreau, Claire Goggin, and Richard Cullen examined four studies in which self-described criminal profilers were tasked with analyzing crime scene data and coming up with a profile, and compared their predictions to those of other groups, like ordinary detectives or students.
They found that profilers do only slightly better than random people at predicting traits of offenders. “We contend that, in any field, an ‘expert’ should decisively outperform nonexperts (i.e., lay persons),” the authors write. They didn’t find that. They conclude that profiling is a “pseudoscientific technique,” of limited value, if any, to investigators.
A group of researchers at the University of Liverpool, led by the psychologist Laurence Alison, has taken a different approach: evaluating the central assumption of profiling, that characteristics of a crime and crime scene can predict useful traits about a criminal. In a bracingly blunt 2002 journal article called “Is offender profiling possible?” Alison and his co-author Andreas Mokros conclude, basically, “No.”
They looked at 100 British rapists: all men, all targeting women 16 and older, and all rapists who attacked strangers rather than acquaintances or significant others. Were people who committed crimes similarly, with similar modi operandi, likely to be similar demographically, too? Nope, not at all. “Neither age, socio-demographic features nor previous convictions established any links with offence behaviour,” Mokros and Alison concluded.
In other words, the central assumption of criminal profiling is nonsense. You can’t look at a crime scene and conclude stuff like, “The offender is a 25- to 34-year-old white man who dropped out of high school.”
But criminal profiling also has an opportunity cost: There are a lot of really hard problems in the world that progress in psychology would help address, and from which criminal profiling might be a distraction.
Mental health struggles are an obvious example, but there are less obvious ones too, like getting better at predictions. Philip Tetlock at the University of Pennsylvania has been, for decades, studying how experts and laypeople make predictions about future events, and holding tournaments to isolate the factors that lead to good, accurate forecasts.
The social consequences of being able to forecast the future better are immense. “If we could improve the judgement of government officials facing high-stakes decisions — reducing their susceptibility to various biases, or developing better methods of aggregating expertise — this could have positive knock-on effects across a huge range of domains,” Jess Whittlestone notes. “For example, it could just as well improve our ability to avert threats like a nuclear crisis, as help us allocate scarce resources towards the most effective interventions in education and healthcare.”
This is even clearer if you look to the past. If the European powers had been able to foresee an intractable bloody stalemate as the consequence of joining Austria’s war against Serbia in 1914, they almost certainly wouldn’t have jumped in as enthusiastically; maybe Austria would’ve restrained itself, too. If investment banks had more accurate forecasting models of the mortgage market in the mid-2000s, or knew enough to listen to accurate models that housing bubble bears were making, perhaps the financial crisis could’ve been averted. World War I and the mortgage crisis were huge, complicated events, but they were also, in part, forecasting errors.
So imagine you’re a psychology Ph.D. student and, instead of working on that, or instead of trying to advance our understanding of what causes schizophrenia or major depression, you decide you want to catch serial killers using the power of your mind. Does that really feel like the highest use of your talents? Few psychologists, to be fair, do this now; most go into clinical practice or do basic research as academics. But we’ve allocated a weird amount of cultural capital to this especially pointless subset of the discipline.
In Alec Wilkinson’s profile of Thomas Hargrove, a remarkable data journalist who has built an algorithm that can help identify serial killers based on similar locations, MOs, etc., Wilkinson notes that the FBI thinks less than 1 percent of annual homicides are committed by serial killers. Hargrove thinks it’s higher. But there were 19,362 homicides in 2016. Even if 2 percent of those victims were killed by serial killers, that’s about 387 people a year.
By comparison, about 480,000 to 540,000 people die in the US every year due to cigarettes, about 88,000 due to alcohol, and between 3,000 and 49,000 due to the flu. Closer to the world of psychiatry, more than 40,000 Americans die annually from suicide; given that we know severe mental illness increases non-suicide mortality too, the true death toll of depression and other mood disorders is significantly higher.
Maybe increasing clearance rates for serial killers is more tractable, an easier lift than bringing those numbers down. But I have my doubts. And that’s just thinking about the US. If distributing bednets through the Against Malaria Foundation saves a life for every $3,687 spent (a rough number to be sure), and 2 percent of US murders are committed by serial killers, then for only $1.4 million a year you can save as many lives with bednets in Africa as you would by ending serial killing in the US entirely. It’s impossible to imagine ending serial killing for only $1.4 million a year.
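If you want to check the back-of-envelope math, here it is in a few lines of Python, using only the figures cited above (all of them are rough estimates, not precise data):

```python
# Figures from the article; all are rough estimates.
homicides_2016 = 19_362          # total US homicides in 2016
serial_share = 0.02              # generous 2% serial-killer share
cost_per_life_bednets = 3_687    # rough AMF cost per life saved, in USD

serial_victims = homicides_2016 * serial_share
budget_to_match = serial_victims * cost_per_life_bednets

print(round(serial_victims))     # ≈ 387 victims a year
print(round(budget_to_match))    # roughly $1.4 million
```

In other words, the equivalent of the entire US serial-killing death toll could be saved abroad for less than the cost of a single TV episode about catching serial killers.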
I don’t mean this as a knock on Hargrove personally. Spending all day catching serial killers sounds absolutely awesome, and it’s cool as hell to do it with big data — and more to the point, even if it’s not the biggest problem in the world, it’s big enough that having one really smart person working full-time on it probably makes sense.
I just wish all the super brilliant, talented scientists and FBI agents from my favorite shows would move to Philadelphia and help Philip Tetlock forecast world events, rather than hanging out in Quantico and trying to catch Hannibal Lecter.