The Technical Obsolescence of Forensic Fraud

December 6, 2008

by Roger Koppl

I gave a webcast yesterday on “How to Improve Forensic Science.”  Online questioners challenged me on a point that I now recognize to be underappreciated: The “ACE+V methodology” of fingerprint examination lets you shop your verifications.  Let me explain.

Fingerprint examinations are performed using the “ACE+V methodology.”  The acronym stands for Analysis, Comparison, Evaluation, and Verification.  The system requires “verification” of any “individualization.”  (An “individualization” is just their word for a match.)  The official guidelines say “all individualizations shall be verified.”  They permit, but do not require, blind verification.  In practice, most fingerprints are subject to non-blind verification.  (One survey of 42 labs found only one lab, the FBI lab, conducting blind verifications, and that in only about 5% of the cases.)  Typically, then, we have non-blind verifications, which are provided by colleagues in the same facility and requested only when a match is thought to exist.  Many people have seen that this is not much of a check against error.  There have nevertheless been cases of examiners shopping their verifications, and the official guidelines provide at best inadequate discouragement of the practice.  Here are the two examples I know of.

A Seminole County fingerprint scandal erupted in Spring 2007 when latent print examiner Tara Williamson issued a memo accusing her co-worker Donna Birks of misbehavior and incompetence.  One of her specific charges concerned verification shopping.  Birks could not get verification for a particular print from two persons she approached in the lab.  “The print was then sent to a retired [fingerprint] examiner [from the same office] who one year earlier medically retired early and had admittedly lost his eye for latent prints.  This examiner should not have been deemed ‘competent’ and no allow  (sic) to verify the print for such reasons (see SWGFAST Quality Assurance Guidelines for Latent Print Examiners. Page 5. 4.2.4).” (The underlining was in the original memo.) Notice that the problem was seeking verification from someone who was not competent.  The problem was not shopping the verification!

The second example comes from an official report on the case of Brandon Mayfield, whom the FBI mistakenly identified as the source of a print left at the scene of the Madrid train bombing.  On page 115, the report says:  

“The [FBI’s Latent Print Unit] Quality Assurance Manual provided that if the second examiner reached a different conclusion, the matter “must be referred to the supervisor and/or the Unit Chief for resolution.” No formal statistics regarding the frequency of this occurrence have been maintained by the LPU, but LPU witnesses interviewed by the OIG stated that a refused verification was an extremely unusual event. One option available to the supervisor was to select another verifier if the first verifier declined to confirm the identification. In that instance, there was no policy requiring that the first verifier’s disagreement be documented in the case file.”

The report does not suggest that there was verification shopping in the Mayfield case.  But it does reveal that it was considered just fine to shop your verifications.  The official guidelines say, “When examiners have conflicting conclusions, a quality review shall be conducted.”  They do not say, however, that the failed verification must be included in the case file.  The guidelines go on to say that a “quality review shall be documented and include” several items, the first of which is a “review of case documentation.”  Apparently, such documentation is not meant for the case file, where no “review of case documentation” would be required.

The logic of verification shopping perfectly fits the argument of Susan Feigenbaum and David M. Levy (1996, “The Technical Obsolescence of Scientific Fraud,” Rationality and Society 8: 261-276).  The examiner who can shop his or her verifications has a powerful tool to exclude competing views and thus protect an epistemic monopoly on the interpretation of the evidence.
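
To see how much shopping can matter, consider a toy model of my own (nothing like it appears in the SWGFAST documents): suppose each non-blind verifier, knowing a colleague has already called the print a match, wrongly confirms an erroneous individualization with some probability p.  If the examiner may keep asking until someone says yes, the chance the error survives “verification” grows quickly with the number of verifiers asked.  Here is a minimal Python sketch, with purely illustrative numbers:

```python
# Toy model of verification shopping. The confirmation probability is an
# illustrative assumption, not a measured error rate for any laboratory.

def p_error_confirmed(p_confirm: float, verifiers_asked: int) -> float:
    """Chance that at least one of `verifiers_asked` non-blind verifiers
    confirms an erroneous individualization, assuming independent verifiers
    and an examiner who keeps asking until someone agrees: 1 - (1 - p)^k."""
    return 1.0 - (1.0 - p_confirm) ** verifiers_asked

if __name__ == "__main__":
    p = 0.30  # assumed chance one non-blind verifier rubber-stamps an error
    for k in (1, 2, 3, 5):
        print(f"verifiers asked: {k} -> chance the error gets 'verified': "
              f"{p_error_confirmed(p, k):.0%}")
    # With p = 0.30 the chance rises from 30% (one verifier, no shopping)
    # to 51% with two, 66% with three, and 83% with five.
```

On this toy model, blind verification and a no-shopping rule attack different parameters: blindness lowers p, while a requirement that the first refusal be documented in the case file effectively caps the number of verifiers at one.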

12 Responses to “The Technical Obsolescence of Forensic Fraud”

  1. Boyd Baumgartner Says:

    I think a few things need clarification.

    1) ACE-V is a specialized form of hypothesis testing that applies specifically to fingerprint comparisons. The Verification phase is analogous to peer review in the scientific methodology. Peer reviews are not blind, as they are a scrutiny of the methods used to arrive at a conclusion to make sure they are reproducible and justified. (Read Ashbaugh, Champod, or search the JFI for more on ACE-V as a scientific methodology.)

    2) Because sufficiency thresholds are based upon qualitative and quantitative data, they are unique to the examiner and are based upon training, experience and skill.

    Therefore situations of conflict can arise where a less experienced examiner could be asked to verify a case completed by a more skilled/experienced/trained examiner, which results in the verifier not being able to reach a threshold of sufficiency.

    Conflict resolution is not a ‘shopping of verifications’; it is a step in the quality assurance process to ensure accurate conclusions through a review of the methods leading to the conclusion. If the methods are unjustified or not reproducible, there needs to be a review of why this is. Conflict requires further consensus for resolution, in the same way it would in scientific research.

    3) SWGFAST publishes guidelines, not SOPs. Agency SOPs may reflect all, part, or none of SWGFAST guidelines. SWGFAST is not authoritative over an agency’s SOPs. Referring back to SWGFAST over the agency’s SOP is an apples-to-oranges comparison.

    This having been said, it seems misguided to refer to conflict resolution as ‘shopping verifications,’ as that seems to imply a disingenuous motive behind doing so, when in fact the scientific method defers to consensus in the same way I referenced above.

    In the Birks case it is noted that “Mallory (the supervisor) also allowed Birks to violate a print-reading rule by having a trainee with just three weeks of experience verify another of Birks’ identifications, according to the memo.”
    http://thefloridamasochist.blogspot.com/2007/05/more-on-donna-birks-seminole-county.html

    So, this was a violation of their SOP, plain and simple. Instead of the more lofty ideas that you put forth in your webcast, perhaps all that is needed is ASCLD accreditation to ensure that standards exist and then an audit of those standards to ensure compliance.

    Lastly, your webcast states that an independent epistemic check is needed in fingerprint casework. However, the Mayfield case employed this very element (Ken Moses), and he failed to catch the error. How do you explain this?

  2. Amy Hart Says:

    Regarding the statement in your webcast when you said that someone who sees no difference between a burglar and a murderer is a bad person…You are applying a personal/moral standard to a professional situation. That is how the idea of contextual bias becomes so grossly overstated. People cannot imagine that any moral person with integrity can overlook the horror of a particular crime enough to be unbiased in their judgement. Therefore, they can only be unbiased if they do not know details. That’s not how it works when it’s your job. It is my professional duty to approach each case with no preconceptions about whodunit. I might speculate during my break time and be horrified then, but I cannot let that affect any decision that I make. Because then, I really would be lacking in integrity.

    Regarding the contextual bias studies: While I think that the idea of contextual bias certainly deserves research, basing any conclusions on the two limited studies that have been done is premature. The first study involved people who were not connected with forensics. They were not trained. They were not subjected to a lengthy background check. They did not get to enjoy the war stories of veteran forensic scientists and/or law enforcement agents (which is an excellent way to numb yourself to man’s inhumanity to man). They were not told that their careers would be destroyed if they made a mistake. These factors are an aspect of training that tries to impress upon you how weighty the decisions you make have the potential to be. On the one hand, who wants to have a murderer running the streets? But, on the other hand, who wants to be responsible for an innocent person being wrongly convicted?
    As far as the second study goes – a limited number of experienced analysts were told they were looking at one of the most infamous bad IDs to date. Looks like the desire to not be the one to accuse the innocent man is very strong.

    I’m not saying that contextual bias does not exist. I am saying that people project their own experiences onto others. You might be so horrified by the details of a crime that you want someone to be punished for it. If the police present you with a viable suspect, you might believe what they tell you. I, on the other hand, will wait for the prints that I develop to tell me what they have to say, regardless of their effect on the case.

    Finally, verification shopping is not OK. Consulting with a third party when there is disagreement is necessary. There has to be a final decision. That is not verification shopping. Verification shopping occurs when someone presents you with a valid argument for why they don’t agree with you, and you discount that information, and keep asking until you find someone who does agree. I guess the simple difference is this. When you consult a third party, you tell that party that someone else has disagreed with you. Basically, you ask them to be the arbiter. When you shop for a verification, you don’t tell anyone that you’ve already asked someone else.

  3. Roger Koppl Says:

    Hi Boyd,

    Thanks for the response. Reading it over, I’m not sure I see that you’ve said anything contradicting the claims of my post. I don’t think I should comment on everything you’ve said, but I will address a few issues, including the points that I think relate to the contents of my post. Please forgive me (and correct me!) if I’m missing a connection between my original post and your comment.

    You say SWGFAST guidelines are not agency SOPs. Sure. My claim, however, was that “the ‘ACE+V methodology’ of fingerprint examination lets you shop your verifications.” That is, the guidelines do not forbid SOPs that allow verification shopping. As far as I can tell, no statement in your reply contradicts that claim, except that you dislike the term “shopping” in this context. Perhaps I should add that I didn’t say SWGFAST or anybody else extols verification shopping, just that SWGFAST guidelines don’t quite rule it out.

    Citing a blog reproducing an article originally appearing in the Orlando Sentinel, you note that the discredited fingerprint examiner’s supervisor allowed her “to violate a print-reading rule by having a trainee with just three weeks of experience verify another of Birks’ identifications, according to the memo.” Sure, but that fact 1) is not the only one at issue and 2) does not contradict any claim I made. Again, the whistleblower’s memo tells a tale of verification shopping, but (quoting my post) “the problem was seeking verification from someone who was not competent. The problem was not shopping the verification!” Yes, the discredited examiner violated agency SOPs, but not by verification shopping.

    I guess I should offer two quick replies to your closing remarks. First, I favor accreditation, I just don’t think it’s enough. Second, it is true that the court allowed Mayfield and his attorney to pick an examiner, who then supported the identification. The individual in question was the founder of the Crime Scene Investigations Unit of a major city’s crime lab. He was something of a pioneer in the use of automated fingerprint systems and advised the FBI on the implementation of such systems. In “More than Zero,” Simon Cole reports that this examiner had all case information. Thus, it seems reasonable to guess that observer effects may be the key fact helping to explain the error.

    Getting back to my original post . . . Boyd, I don’t think you indicated any factual errors in my post. If I’m wrong about that, please tell me what factual error(s) I made.

  4. Roger Koppl Says:

    Thanks for your comment, Amy. I wonder if you misunderstood the point of the remark you cite. I was trying to be clear that “observer effects” in forensic science are not evidence of any sort of moral failing in forensic scientists. When I talk to forensic scientists they seem to think that I mean to impugn them by raising the issue of observer effects. No, no, no. It’s a human universal; it’s part of our cognitive architecture. You tend to see what you expect to see.

    Getting back to verification shopping, do you agree that if you don’t get a verification from an examiner, 1) SWGFAST guidelines provide no general or absolute prohibition on seeking verification from another examiner, and 2) SWGFAST guidelines do not require the failed verification to be entered into the case file? Are those two statements true, Amy?

  5. Eric Sahota Says:

    Roger-

    Throughout this discussion you miss two key points.

    One, observer bias and/or domain-irrelevant information does not always lead to negative outcomes. In fact, “observer bias” can enhance the quality and accuracy of latent print examinations. See “Motivated Thinking” by Molden and Higgins in The Cambridge Handbook of Thinking and Reasoning.

    Second, there is nothing but anecdotal evidence that supports the widespread use of blind verification. In fact, the data we have collected so far indicates that blind verification “may” be useful for a small subset of latent print comparisons involving low quality prints and/or complex image distortion. Blind verification is of little value in other cases. I submit that this also mitigates any negative effects of “verification shopping”.

    There’s a whole host of scholarly research and data on these issues, Roger. Some of this data contradicts your positions. It would be nice if you addressed these issues in your discussions.

  6. koppl Says:

    Thanks for contributing to the discussion, Eric.

    I’m not sure how either point would negate my claim that SWGFAST guidelines do not clearly prohibit verification shopping. Am I missing the connection?

    I don’t think anyone has ever said that observer effects necessarily or always take you away from the right answer. In the context of scientific tests, observer effects make it more likely the answer will depend on something besides the science. I don’t imagine you’re saying forensic science analyses should be influenced by non-scientific factors, are you?

    In citing Molden and Higgins, I assume you mean to invoke regulatory focus theory. I must admit, I don’t see how that would somehow contradict the claims of Risinger and others about observer effects.

    I don’t see how you can call the literature on observer effects “anecdotal,” Eric. Anyway, it is true that these issues are more important the more ambiguous the evidence. Dror and Charlton reach that result as well. Eric, I don’t know of any studies measuring the percent of crime-scene latents that involve “low quality prints and/or complex image distortion.” Could you give some cites?

    Finally, a word on scholarly research. The context here is a blog post on verification shopping. I don’t think such a post requires a cite to, e.g., your co-authored paper in the Tulsa Law Review. In general, I don’t think I neglect relevant literature in my work, but I’m always ready to be corrected on that score.

  7. David Says:

    Roger I think that you have a skewed view of this issue. “Verification shopping” is not an accepted practice within the discipline and thus there is not this great need to have it banned in bold letters in some SWG document.

    There is a difference between a case in which differing levels of training and experience cause an identification not to be verified and one in which someone goes searching for his bad ID to be called.

    The SWGFAST guidelines in the QA document spell out several procedures for actions to be taken in the event that the conclusions of examiners differ.

    The guidelines require the individual agencies to have documented procedures in place to deal with it. The guidelines are in no way permissive of “verification shopping,” and should conflict arise it is assumed that it would be documented and corrective actions taken.

    You seem to be taking good QA procedure out of context and trying to make it into a kind of back door that the unethical examiners can slip through.

    If you read the QA document in spirit, rather than searching for the words “verification shopping,” you will find that it does indeed seek to eliminate that practice by requiring documented case review in the event of a disagreement. Even in the case of a failed verification. So on that count I do believe you are factually incorrect.

    Quality Assurance Guidelines for Latent Print Examiners Section 4.4

  8. Roger Koppl Says:

    Hi David,

    Ah! Your comment was very helpful. Thanks for that. I did not mean to suggest that verification shopping is “accepted practice within the discipline.” I completely see how you would get that idea, however, from my statement that ACE+V “lets you” shop verifications. I meant only that verification shopping is consistent with the methodology as laid out in SWGFAST documents. But saying, “lets you” easily gives the wrong idea. Sorry about that.

    I chose my words more carefully when I developed the point. I said, “the official guidelines provide at best inadequate discouragement of the practice.” The point matters, I think, given my two examples.

    You cite a SWGFAST document to show I committed a factual error, but I don’t see it. I cited (with a link) the very document and section you refer me to. As I said in my original post, “Apparently . . . documentation [of disagreement] is not meant for the case file, where no ‘review of case documentation’ would be required.” You say the document “does indeed seek to eliminate” verification shopping. I certainly don’t think the SWGFAST guidelines were written with the specific intention of enabling verification shopping. Hardly! I didn’t mean to suggest that and I hope I didn’t seem to. Is this the “lets you” problem again? Anyway, good intentions notwithstanding, there is no requirement in the SWGFAST guidelines that the documentation on a disagreement be included in the case file.

    With the misunderstandings I tried to clear up behind us, David, is there anything false or misleading in what I’ve said?

  9. Forensic Insider Says:

    http://ronkayela.com/2008/10/fingerpointing-in-lapds-finger.html
    http://www.msnbc.msn.com/id/27233798/wid/%2011915773/

    In a caustic culture, “shopping” gets a whole new face. Factions develop and the real experts are marginalized as nay-sayers. You want someone for a crime, shop around until the right “expert” gets you what you want. Compassionate analysts say things like, “oh well, he would have committed a crime soon enough anyway.” As if that justifies institutional corruption.

  10. Roger Koppl Says:

    Insider:

    I think you’re right to link to the LA fingerprint scandal. Thanks for that. As your links reveal, one of the problems there seems to be that verifications were not blind and independent. That’s not really expert shopping, but I think the two issues are related: What is the right organizational design for redundancy?

    Do we have some sort of measure of how big these problems are? I think it’s worthwhile to reorganize things to make expert shopping harder. Nevertheless, I’d like to have a solid, empirically based estimate of just how big the problem is. Any suggestions on that one?

  11. Eric Sahota Says:

    Roger-

    I don’t believe I referenced any publication of mine. When I say “we” I refer to latent print examiners as a community and the data that has come from studying latent print examiners. For example, studies from Glenn Langenburg regarding verification and contextual bias. Studies which, I might add, support the robustness of latent print examination.

    With respect to verification shopping, rather than debate the semantics of SWGFAST guidelines or an examiner’s professional ethics, I’m suggesting that in most cases it simply doesn’t matter. You can base that on the expert experience of a latent print examiner. Or you can refer to more empirical sources such as the likelihood ratios generated by computer programs such as Pianos.

    If I identify a left slant loop to whorl, I can shop the verification all I want, but I doubt I will find someone to verify it. So back to my prior comment, this becomes an issue when dealing with latent prints of low quality/clarity where the risk of error is inherently higher. Sadly, we don’t have an accepted metric to document these types of identifications or how often they occur. But we certainly should not assume that this is common. Instead, we should be focusing on collecting this data in order to determine what if anything should be done.

    Roger you wrote: “In the context of scientific tests, observer effects make it more likely the answer will depend on something besides the science.”

    It seems to be your goal to prove that observer effects exist in forensic science and that they influence forensic analysis. Well, I don’t think anyone denies this. The question is what role do observer effects play and how do they affect the correctness of conclusions? Your general tone suggests that the mere presence of observer bias is cause for concern. But even the Dror studies remind us that latent print examinations are inherently trustworthy. Let me explain.

    The overwhelming concern is erroneous identifications. The frequency of references in publications and lectures to the Brandon Mayfield case illustrates this point. And unless I misunderstood your lecture, preventing erroneous identifications is the centerpiece of your “redundant analysis” design. I do not recall any emphasis or discussion of the costs or benefits of reducing false exclusions (where the examiner “misses” a correct identification).

    The data we have so far doesn’t link observer effects to erroneous identification. None of Dror’s subjects made an erroneous identification when presented with contextual bias. Out of all the trials, only one false ID was made by an examiner in the control group (no biasing information was present). This seems to suggest that contextual bias actually lowers the error rate!

    If you point to the Mayfield case, the OIG report does cite observer bias. However, you should not overlook another critical finding in the OIG report: the unusual similarities between the latent print and Mayfield’s print. Indeed, I submit that this was the proximate cause of the error. Again, this is an example of a latent print with reduced quality/clarity.

    Certainly, scientific conclusions should be the result of scientific analysis. But scientists aren’t machines. We can reduce but not eliminate observer effects. Hence, it is important for us to know how bias impacts our conclusions and whether it can lead to false incrimination.

    One other note on “verification shopping”: I suspect you’re asking too much of ACE+V and the SWGFAST guidelines when you write that “the official guidelines provide at best inadequate discouragement of the practice.” I’m not sure what type of discouragement you’re looking for. As you also mentioned, it would be nice to know how big the problem is before making procedural changes. We just don’t have that information. There is quite a bit of data that needs to be collected on error rates, quality metrics, etc. before we’re equipped to make discrete changes.


  12. […] science. Some philosophers, economists, and forensic scientists want to measure the error rate.   Eric Sahota’s comments on an earlier post exemplify such requests.  He says, “it would be nice to know how big the problem is before […]

