Written by Jennifer Miller, Data Forensics Coordinator
October 27, 2014
At the end of the movie Dumb and Dumber, Lloyd, the doofus-turned-hero who saved the day for the beautiful damsel in distress, Mary Swanson, asks her, in the hopes of sparking a romantic relationship, “What are my chances?” Mary quickly replies, “Not good.” Lloyd pauses and then says, “You mean ‘not good’ like, one out of a hundred?” “More like, one out of a million,” she says. Lloyd, with a gleam in his eye, exclaims, “So…you’re saying there’s a chance!”
Data forensics uses statistics to analyze test response data looking for anomalies (or testing irregularities) that may be indicative of cheating. When testing irregularities are found, testing program managers typically review and ultimately may invalidate scores based on evidence that the scores are not representative of the examinees’ knowledge.
Often the statistical results are presented in the form of probabilities. For example: The probability of two examinees having selected the same answers on the test form by chance alone is 1×10⁻¹², or one in one trillion.
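To see where a number like that can come from, here is a minimal sketch of one common simplification: treat each test item as matching by chance with some fixed probability, and compute the binomial tail probability of observing an unusually large number of matches. Everything in this example is an illustrative assumption (the item count, the per-item match probability, and the match threshold are made up); real answer-copying indices used in data forensics rest on more sophisticated response models.

```python
from math import comb

def match_probability(n_items: int, n_matches: int, p_match: float) -> float:
    """Probability of observing at least `n_matches` identical responses
    across `n_items` items, assuming each item matches independently
    with probability `p_match` (a binomial tail probability)."""
    return sum(
        comb(n_items, k) * p_match**k * (1 - p_match)**(n_items - k)
        for k in range(n_matches, n_items + 1)
    )

# Illustrative numbers only: a 60-item test with 4 options per item,
# with uniform guessing, gives a per-item chance-match probability of 1/4.
# Two examinees matching on 45 of 60 items under that model is then
# astronomically unlikely by chance alone.
p = match_probability(n_items=60, n_matches=45, p_match=0.25)
print(f"{p:.2e}")
```

The point of walking through even a toy model like this with stakeholders is that it makes visible exactly which assumptions the probability rests on, which is where most of the confusion (and most of the legitimate objections) tend to live.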
Such statements often confuse testing program stakeholders, as many of the statistics used in data forensics analysis are perplexing to the layman. Does the above statement represent “proof” that the examinees copied from each other? The results say there is a chance, albeit small, that this event occurred randomly. How small must a probability be before it should be taken seriously? How was this probability calculated? What does the probability even mean?
If your audience doesn’t understand the statistical results, including their limitations, presenting them can cause more harm than good when testing program managers seek to use them as evidence to support score invalidation. This can devalue your data forensics program, casting skepticism not only on these results, but also on future data forensics results. It is best to prepare before presenting results to stakeholders. Here are a few ideas to help with preparation:
- Write down what you plan to say and how you plan to say it. Remove unnecessary and confusing details.
- Practice first with a smaller audience. Present your results to an “internal” audience before going live. Get their feedback on what was helpful and what was confusing.
- Consult with your statisticians about the results. Is there something you need to fully understand before you can explain it to someone else?
Bottom line: Don’t try to explain anything until you can explain it in a way your audience can understand. If they can’t understand it, either they’re going to feel dumb, or more likely, they’re going to think you’re dumb(er).
[Blog writer’s note: I would like to acknowledge the work of Dennis Maynes, Aimee Hobby Rhodes, and John Fremer who have influenced my formulation of these tips.]