You can manage and you can measure!

The Association of Test Publishers (ATP) Conference of 2008 ended yesterday. As always, it was a good conference. In 2004 we stated, “You can’t manage what you don’t measure.” As a sponsor of the conference, we placed a bag of M&M’s (i.e., manage and measure) in each attendee’s conference packet, and we printed the message on the hotel room key cards.

I have just completed analyses for three testing programs, and I am so impressed with what they have done that I want to share their results with you. Good news concerning exam security is refreshing in the midst of so many cheating stories. We recognize dramatic acts of heroism, but we often ignore the good that comes from steady, persistent progress. I am so proud of these three programs. They are achieving their common goals: “Reduce cheating, strengthen exam security, and emphasize ethical test taking.” The data demonstrate this convincingly. Caveon’s message at ATP this year was, “The answer is in the data.” So let’s look at the data.

Figure 1: Percent of anomalous tests for three programs

Side-by-side comparison of cheating reduction
Let me describe the data in Figure 1. The percent of anomalous tests for each successive analysis is plotted for each program, and a trend line has been fitted to the data to help you visualize the pattern. An anomalous test is one that deviates from normal test taking and exhibits at least one of the following: aberrance (answering hard questions correctly while missing easy ones), a large number of erasures, an inexplicable score change from a previous test, or excessive similarity of selected answers with at least one other test. An anomalous test does not mean the test taker cheated. For example, when we observe excessively similar tests, it is very likely that one person cheated (the copier) and the other did not (the source). The percent of anomalous tests does not measure the precise number of people who cheated, but it is highly correlated with that number.
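To make those criteria concrete, here is a minimal sketch in Python of how such flags might be computed and rolled up into the statistic plotted in Figure 1. Every threshold, field name, and data value below is a hypothetical illustration for this post, not Caveon’s actual forensics model:

```python
# A minimal sketch of flagging anomalous tests under the four criteria
# described above. All cutoffs and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class TestRecord:
    aberrance: float              # person-fit statistic; higher = more aberrant
    erasure_count: int            # erasures observed on the answer sheet
    prior_score: Optional[float]  # score on a previous attempt, if any
    score: float                  # score on this attempt
    max_similarity: float         # highest answer-match rate with any other test


def is_anomalous(t: TestRecord,
                 aberrance_cut: float = 3.0,
                 erasure_cut: int = 12,
                 score_jump_cut: float = 25.0,
                 similarity_cut: float = 0.90) -> bool:
    """Flag a test if it meets at least one of the four criteria."""
    if t.aberrance > aberrance_cut:
        return True
    if t.erasure_count > erasure_cut:
        return True
    if t.prior_score is not None and (t.score - t.prior_score) > score_jump_cut:
        return True
    if t.max_similarity > similarity_cut:
        return True
    return False


def percent_anomalous(tests: list[TestRecord]) -> float:
    """The statistic plotted in Figure 1 for one analysis cycle."""
    return 100.0 * sum(is_anomalous(t) for t in tests) / len(tests)


# Fitting a trend line across successive analyses, as in Figure 1:
cycles = np.arange(4)                    # e.g., four successive analyses
rates = np.array([8.1, 6.9, 5.6, 4.4])   # illustrative percentages only
slope, intercept = np.polyfit(cycles, rates, 1)
print(f"trend: {slope:+.2f} points per analysis cycle")
```

The key design point is the “at least one criterion” rule: a single strong signal is enough to mark a test for review, while the quantity tracked over time is the overall percent-anomalous rate rather than any individual flag.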

These data are important because they demonstrate that all high-stakes testing programs, irrespective of industry or application, can effectively reduce cheating. They illustrate that reductions in cheating can occur with persistence and dedication. Let me briefly describe each program and some of the positive steps they have taken.

Program 1: This program provides a professional certification with high security requirements. We estimate a 45% reduction in cheating over three years. They have followed up on every case that appeared to be a security violation and every test site that appeared to have lax security. They have emphasized proctor training. They are now reviewing their test taker agreements, proctor training, identification procedures, and physical security with the intent of adopting the best-known security protocols.

Program 2: This program serves public education. We estimate a 72% reduction in cheating over two years. They have rewritten their test administration manuals and have begun monitoring test administrations. They assign a conditional status to extremely anomalous test results and require local review of those results. They are receiving reports that the flagged students are admitting to having cheated.

Program 3: This program administers tests in the service industry. We estimate a 78% reduction in cheating over one year. They have stressed ethical test taking. They have revised their test taker agreements and strengthened test administration policies so that scores can be invalidated, with an appeals process. They have refreshed test forms that appeared to be exposed. They are researching the next phase of security improvements: test site monitoring and appropriate disciplinary measures for test administration personnel who may be helping test takers inappropriately.

These very different programs were the same in one important way: they started where they were, they created a plan, and they were not discouraged. Each was taken aback by the first data forensics report (we always find something disconcerting), but they pressed forward and executed their plans. Best practices used by these programs include test site monitoring, an emphasis on ethical test taking, invalidating scores per policy, refreshing tests that appear to be overexposed, and updating security procedures.

Let’s give credit where credit is due. The numbers are impressive and the data do not lie. These programs have earned our respect and admiration.

Dennis Maynes

Chief Scientist, Caveon Test Security
