by Roger Koppl
I’ve been railing against epistemic monopolies for a while now, particularly in forensic science. This project complements Peart and Levy’s work on experts. (See their symposium in the Eastern Economic Journal, 2008, vol. 38, starting at page 103.) I keep insisting that we need redundancy to reduce error rates. Economists, forensic scientists, and philosophers have all pressed me for data on error rates. How big a problem is this, really?
We can construct some estimates. A 2005 Science study found false or misleading testimony by forensic scientists in 27% of the first 86 Innocence Project exonerations, and forensic science testing errors in 63% of them. Professor Michael Risinger has shown that the “wrongful conviction rate for rape-murders in the 1980’s” is at least 3.3%, and likely higher. If only 2% of the 1.15 million felony convictions of 2004 were false, then there were approximately 23,000 false felony convictions that year. Combined with the percentages from the Science study, that estimate implies at least 6,210 false convictions attributable at least in part to false or misleading testimony by forensic scientists, and at least 14,490 false convictions per year attributable at least in part to forensic science testing errors. Conservatively, then, we probably have at least 5,000 false felony convictions per year from false or misleading forensic testimony alone, and at least 10,000 per year from some combination of such testimony and forensic science testing errors.
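The arithmetic behind these figures is easy to check. Here is a minimal sketch using only the post’s own assumed inputs (the 2% false-conviction rate and the two Science-study percentages):

```python
# Back-of-the-envelope check of the estimates above.
# All inputs are the post's own assumptions, not new data.

felony_convictions_2004 = 1_150_000  # felony convictions in 2004
assumed_false_rate = 0.02            # assume only 2% were false convictions

false_convictions = round(felony_convictions_2004 * assumed_false_rate)
print(false_convictions)  # 23000

# Rates from the 2005 Science study of the first 86 Innocence Project exonerations
testimony_rate = 0.27  # false or misleading forensic testimony
testing_rate = 0.63    # forensic science testing errors

from_testimony = round(false_convictions * testimony_rate)
from_testing = round(false_convictions * testing_rate)
print(from_testimony, from_testing)  # 6210 14490
```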
We can estimate error rates in forensic science; we cannot, strictly speaking, measure them. Some philosophers, economists, and forensic scientists nevertheless want a measurement. Eric Sahota’s comments on an earlier post exemplify such requests. He says, “it would be nice to know how big the problem is before making procedural changes. We just don’t have that information.” But it doesn’t really make sense to measure, rather than estimate, the error rate in forensic science. If we had an epistemic device that could tell us whether any given forensic science analysis was wrong, we could rely on that device rather than on forensic science. But only forensic science can perform a forensic science analysis. Some errors are exposed anyway: a clear video of the accused stabbing his victim in front of several eyewitnesses may expose the error in an exculpatory DNA analysis. But there is no generally applicable check on forensic science besides more of it; there is no general end run around it. This impossibility of getting around forensic science creates a kind of endogeneity problem, whereby our judgment of the truth is endogenous to our method of discovery. In many fields we can “triangulate,” applying complementary methods that converge on the truth. But when we have only one method, we get the endogeneity problem.
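The value of triangulation can be put in simple probabilistic terms. A hedged sketch, with illustrative numbers and an independence assumption that are mine, not the post’s: if two complementary methods err independently, the chance that both endorse the same wrong conclusion is at most the product of their individual error rates, which is why a second independent method is such a powerful check.

```python
# Illustration of why redundancy reduces error (numbers are assumptions).
# With a single method, the error rate is p1. With two methods that err
# independently, a conclusion both endorse is wrong with probability at
# most p1 * p2 -- much smaller than either rate alone.

p1 = 0.10  # assumed error rate of method 1
p2 = 0.10  # assumed error rate of method 2

single_method_error = p1
both_agree_on_error = p1 * p2  # relies on the independence assumption

print(single_method_error, both_agree_on_error)
```

With only one method available, the second factor disappears and we are back to the bare single-method error rate, with no independent way to detect it.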
When you combine the endogeneity problem with monopoly epistemics, everything comes to depend on trust. Do you believe the expert’s opinion? Should you believe the expert’s opinion? I’m reminded of a Buddhist lesson. “If I tell you that I have a gem hidden in the folded palm of my hand,” explains Walpola Rahula in What the Buddha Taught (pp. 8-9 of the 1974 Grove Press edition), “the question of belief arises because you do not see it yourself. But if I unclench my fist and show you the gem, then you see it for yourself, and the question of belief does not arise.”