Ask An Expert: Nikki Eatchel

Written by Nikki Eatchel, Chief Assessment Officer, Scantron

Nikki Eatchel, Chief Assessment Officer for Scantron, has more than 23 years of experience in the assessment industry. She has spent the majority of her career focused on assessment development and psychometrics and has served in executive leadership positions in these areas at a number of global assessment organizations. Her positions have spanned various testing segments, including education, certification and licensure, and employment testing. She has also served as an assessment and business consultant for various national and international organizations. She has been personally responsible for large-scale assessment development for international programs as well as state-wide clients, working in some capacity with all 50 U.S. states. She has rounded out her approach to the assessment industry by serving in leadership positions in program management, client support, and product development. She has also worked in employment litigation support, preparing testimony for teams representing both the prosecution and the defense.

Nikki is currently serving on the Executive Board of the Association of Test Publishers (ATP) and was Chairman of the Board in 2017. She served as Chair of the ATP Security Committee in 2011-2012 and as Co-Chair of that committee in 2013-2014. Additionally, she has contributed to a number of industry committees, including ATP's Operational Best Practices Committee, and has presented numerous papers at conferences such as ATP, E-ATP, the Council of Chief State School Officers (CCSSO), the International Personnel Management Association (IPMA), and the Council on Licensure, Enforcement, and Regulation (CLEAR).

What are the most pressing issues that contribute to unfairness in testing today?

“As an assessment person, I see immense value in testing for all stakeholder groups involved. As with any industry, however, there is certainly the potential for unfairness. Though unfairness can stem from a number of factors, two particular issues are a focus for me. The first is the use of tests for purposes for which they were not intended. Assessment development is a complex process that requires a clear purpose and a specific evaluation target (e.g., an identified content domain or a particular set of knowledge, skills, or abilities). When the validity and psychometric soundness of an examination are evaluated, that evaluation is directly linked to the stated purpose of the test, including the knowledge, skills, and abilities it is intended to measure and the population for which it was developed. Unfortunately, assumptions are sometimes made about alternative uses for tests (and the resulting data) that are not supported by research, and such a situation can certainly lead to unfairness in testing. For example, using an assessment that was designed to measure student performance as a measure of teacher effectiveness can lead to unfair decisions about educators if the appropriate research has not been conducted to support that use. Likewise, requiring a general aptitude test for a specific, skill-based job can be detrimental to candidates if that assessment has not been shown to be directly related to successful job performance. Assessments can be fantastic tools that provide valuable information for decision making. When they are used or interpreted incorrectly, however, the result is an unfair testing environment.

A second issue that can create unfairness is the reliance on singular data points to make complex and impactful decisions about students and candidates. Assessments used for high-stakes decisions (clinical, educational, employment, certification) have to be valid, reliable, and legally defensible. Even assuming an assessment meets those requirements, it is still highly risky to assume a single data point can provide all of the information necessary to make a sound decision. Because individuals in assessment and measurement are driven by data, they understand the limitations of a single data point. But many of the stakeholders who purchase, administer, and use assessments may not have the education or training to understand the need for a holistic view of student and candidate performance. Though portfolio approaches to student evaluation have increased over the last decade, as has the use of different types of candidate assessments (e.g., written, practical, simulation, performance-based, and gaming), over-reliance on singular data points (potentially due to time and cost considerations) is still a threat to fair assessment practices.” …

To read the rest of this interview, please click HERE.

You will be automatically redirected to Caveon’s new electronic magazine, “The Lockbox”.

Guest Contributor