Written by John Fremer, President, Caveon Consulting Services
August 1, 2014
Many of you doubtless recognize the opening words of Charles Dickens' "A Tale of Two Cities." I see them applying in a stunning way to the current climate in state assessment. There is a striking amount of innovation underway, not just in the major move to technology-based delivery of the basic assessment instruments in many states, but in the way tests are designed and delivered and the way their results are used. So if you really welcome change, state assessment work these days is right up your alley. Still, much as we like to assert how flexible and open to new things we are, being able to operate on familiar ground and to predict reasonably well what is going to happen next makes our lives more comfortable and gives us the sense that we are in control of our own work life and of the testing programs that we manage or contribute to.
Staying with the positive side of my quotation, I see experimentation in testing companies, departments of education, and school districts. Basically, I see interesting projects and new developments in many of the places I look as I help clients prevent cheating on tests, or detect it when attempts are made to undermine the fairness and validity of state test results. For example, there are major efforts under the heading of "evidence-centered design" to promote in-depth understanding of the process of developing tests used in education.
In another domain, although some applications of technology to test delivery amount to little more than what some critics dismiss as "computer page turning," in other settings there are serious efforts to draw, from many fields, new or rarely employed approaches to assessing skills and knowledge, sometimes under the heading of "going deeper."
The issue of what different kinds of assessment would be most helpful to all users of results is also getting sustained attention. Part of the motivation is the widespread sense that we are doing too much testing and that there “must be a better way.” Can we find strategies to reduce our dependence on major tests at the end of a grade or sequence of schooling by accumulating student results over time? We might be able to synthesize a series of tests, project grades, and teacher evaluations to obtain richer information about individual student learning as well as program and teacher effectiveness.
Along with the enthusiasm that marks the work of many innovators in the assessment area, there appears to me to be a fairly pervasive sense of being nearly overwhelmed by the pace and extent of the changes in assessment. I have seen this close up in talking with my youngest daughter, who works at the classroom level helping teachers hone their skills. Simply keeping track of what changes are in place in assessment and how they are being implemented is proving to be a formidable task for her and her colleagues at the central office and school levels. She sees assessment programs changing in the course of a year and some developments stopped in mid-implementation. She asks how this could be happening. Don't the designers and developers of testing programs know how disruptive and confusing this is to teachers and other school personnel?
Usually, state assessment programs are designed over a multi-year period, with well-defined milestones as to when different stages of test program specification and planning will take place. Even under those circumstances, great care and skill are needed from many professionals to deliver a high-quality outcome in a timely manner. In some states today, decisions as to which standards will be employed and which tests will be used are being made much closer to the time when tests will be administered than is desirable. The pressures to create, implement, and deliver tests are heightened when timelines are measured in months rather than years.
I see some of these phenomena in my helping role with the challenge of preventing and detecting cheating. The measurement profession and industry have developed some very effective tools to minimize the chances that inappropriate behavior before, during, and after testing will undermine the fairness and validity of tests. Addressing these test security issues is just one of many, many challenges facing state assessment staff, however, and their efforts to adopt and implement best practices to prevent and detect cheating are occurring in a "fishbowl" atmosphere. Media representatives have learned that stories about possible cheating on state assessments, as well as other testing applications, are of major interest to many readers. Trying to carry out prudent and essential reviews of what may have happened in a particular school or class about which concern has been raised has become a much more difficult task.
I am very much an optimist by nature, and when I look at the number of conscientious professionals working in departments of education, testing companies, and other venues, I lean toward the "it is the best of times" interpretation of our situation in testing. We need to help each other through the hardest periods, sharing what we are learning and gradually building our skills and knowledge. I see this happening throughout the testing community and expect that testing will be much better even a few years from now than it is currently. Whether the pace of change will moderate is another story. My advice there is akin to what you hear on airplanes: "Keep your seat belt fastened; there may be unexpected turbulence ahead."