Testwiseness Can Put Your Testing Program in Jeopardy!
If you are a fan of the TV game show Jeopardy! (and even if you are not), you have probably heard of the recent contestant phenom, James Holzhauer. Over 32 consecutive wins, he amassed more than $2.4 million in winnings, second only to Jeopardy! champion Ken Jennings's $2.5 million. His winning streak came to a close earlier this week when he finally lost a match on an incorrect response in the Final Jeopardy! round.
Nobody questioned this contestant's intelligence or his depth of knowledge on a wide array of challenging topics. What was surprising was how Mr. Holzhauer changed the answering strategy of Jeopardy!. You might say he had “gamewiseness.” His strategy was to answer the hard questions first and build a sizable lead over the other contestants early. He then quickly sought out the elusive Daily Doubles. This drove his winnings even higher and allowed him to win each match effortlessly.
I got to thinking: what will the showrunners of Jeopardy! do now that James Holzhauer has ‘cracked the code’ with his gamewiseness strategy? Will the game makers and question creators change how they write and ask questions?
Now let’s turn our thoughts to testing, because the same problem of gamewiseness exists in test-taking. It’s called “testwiseness”: the ability of test-takers to use test-taking strategies to achieve a successful outcome. For example, a test-taker might use answer-option elimination to deduce the correct response.
For many years, test-takers have learned to obtain successful testing outcomes using testwiseness. It is also perpetuated by test prep companies, which teach test-taking strategies to help test-takers increase their likelihood of achieving a higher score. Just as Jeopardy! may need to rethink its questions, testing programs need to rethink their item development strategies.
Recent innovations in item development are proving successful; they not only reduce testwiseness, they also provide valid measures of performance. Studies (Willing et al., 2014; Papenburg et al., 2017) show that Discrete Option Multiple Choice (DOMC) items reduce testwiseness because answer options cannot be compared against one another to determine the correct response. Additionally, DOMC items disclose, on average, only 2.5 answer options each time an item is rendered. This reduces item exposure, a long-time nemesis of psychometricians and testing program managers alike. Lastly, because the position of the correct response in a DOMC item is randomized, sharing test content becomes irrelevant; one test-taker’s correct response will likely not be the same for the next test-taker.
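To make the mechanics concrete, here is a minimal, hypothetical sketch of a DOMC-style rendering loop. The function name, the yes/no scoring rule, and the early-stopping behavior are illustrative assumptions based on published descriptions of the item type, not Caveon's actual implementation. Note how, with four options and a knowledgeable test-taker, the randomized order means an average of about 2.5 options are ever disclosed:

```python
import random

def render_domc_item(options, correct_index, respond):
    """Simulate a DOMC-style item: options are shown one at a time in
    random order, and the test-taker answers yes/no to each. The item
    ends as soon as the score is decided, so later options in the
    shuffled order are never disclosed. (Hypothetical sketch.)"""
    order = list(range(len(options)))
    random.shuffle(order)  # randomized correct-response position
    disclosed = 0
    for idx in order:
        disclosed += 1
        said_yes = respond(options[idx])  # test-taker's yes/no decision
        if idx == correct_index:
            # "yes" to the correct option scores correct; "no" scores incorrect
            return said_yes, disclosed
        if said_yes:
            # "yes" to a wrong option scores incorrect; item ends
            return False, disclosed
        # "no" to a wrong option: continue to the next option
    return False, disclosed

# A test-taker who knows the answer says yes only to the correct option.
options = ["Paris", "Rome", "Madrid", "Berlin"]
score, shown = render_domc_item(options, 0, lambda opt: opt == "Paris")
```

For this knowledgeable respondent, `score` is always correct, but `shown` varies from 1 to 4 depending on where the shuffle placed the correct option, averaging 2.5 over many renderings. The same randomization is what makes a shared "the answer is B" tip useless to the next test-taker.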
According to Caveon CEO and founder David Foster, “It is time for us to embrace innovation and technology and bring the multiple-choice question into the 21st century, where it can evolve to address our current needs and proactively tackle the problems of the future.”
Taking tests certainly is not the same as playing a TV game show. However, testing programs must begin considering item designs that negate game-like strategies for gaining a positive outcome. Otherwise, they are putting their testing program in Jeopardy!