March 2, 2016
Written by Saundra Foderick, Test Development Manager
Who steals my purse steals trash; 'tis something, nothing;
'Twas mine, 'tis his, and has been slave to thousands;
But he that filches from me my good name
Robs me of that which not enriches him
And makes me poor indeed.
Wm. Shakespeare, Othello 3:3
The success or failure of an assessment program rests, in large part, on its reputation. When an assessment program can qualitatively and quantitatively demonstrate, over time, that its exams are fair, meaningful, reliable, and accurate, people in the field will value the resulting credential or grade: it has worth. That demonstrated worth is the heart of reputation for any assessment program, generating reputational capital. And, like any form of capital, reputational capital can offer support when things go wrong.
Still, reputational risk is present in every interaction with a customer, examinee, or interested constituency.
An exam with lax security or leaked content will not be accepted as a fair arbiter of competence in a given field. Biased or poorly written tests, or over-exposed items, send an unintended message that becomes part of the narrative about a program and the value of its assessments.
So, how can an assessment program mitigate reputational risk? The best plan is one that controls the narrative about your program, starting with test development and following through with expert oversight after your exam is published. The more you adhere to standards and the more often you produce valid exam scores over time, the more persuasive the positive narrative about your program. Best practices include:
- Identifying clear policies and procedures: ideally, someone who is certified in exam security and guided by the policies and procedures in your program’s test security handbook should lead your program. A security audit can provide an independent third-party evaluation, identifying a program’s strengths and weaknesses.
- Delineating the concrete steps that will be taken to deter test theft and cheating during development: ideally, your processes should limit item exposure at every step of development and delivery, and should include item clones, robust item banks, Trojan horse and DOMC (Discrete Option Multiple Choice) items, and SME (subject matter expert) credentialing.
- Requiring training and oversight in order to craft psychometrically sound exam items and content-aligned materials: ideally, your exam process should be an iterative loop, with subject matter experts, project managers, psychometric editors, bias and technical reviewers, and program leads working together to fully vet each item before it is released.
- Reviewing exam performance and potential exposure after publication: ideally, your processes after publication should include web patrolling to continually and systematically find and track threats to your intellectual property, item analysis to identify problematic content, and data forensics, using sophisticated statistical analyses of test-response data to identify patterns indicative of test fraud, including cheating and piracy.
How do these activities pay off in terms of reputation?
- Your program can document due diligence: if an exam, or portions of an exam, are challenged, the supporting information is readily available.
- Your program can swap out items quickly if an unintended exposure occurs.
- Your program will know when unintended exposures occur, so that a quick response is possible.
In sum, these best practices send a message: you believe in the value of your assessments and work hard to ensure that only those with the needed competence are credentialed or given a passing grade.
Think of the dollar in your pocket. It carries value only because users believe it is backed by a strong assurance of value. Without that belief, it is just another piece of paper, no more useful than a scratchpad. By mitigating your reputational risk, you can help keep your program’s reputation for value intact.