Plan B or Not Plan B: That is the Question
“Plan B or not Plan B” – this is not simply a play on the opening line of Hamlet’s soliloquy, but a major and pressing issue for many state assessment staff. “Plan B” (aka the contingency plan) is the label many states use for the alternatives they have crafted in the event that the consortium to which they belong is not ready with its assessments for the 2014-15 school year or, ready or not, the state opts not to use them.
Based on discussions last month at the National Conference on Student Assessment, it seems that virtually all of the approximately 40 states in either the Smarter Balanced or PARCC consortium have devoted some attention to their own Plan B, which most often involves continuing with a revised version of their current program (usually more closely aligned to the CCSS) for at least another year. The leaders of the two major consortia express confidence that they will deliver the core summative assessments as scheduled. State staff and testing vendors involved in the process have a good deal of experience rolling out computer-based testing, so I don’t view the assurances as “whistling in the dark.” Senior measurement professionals who have faced major delivery challenges before are on the case; they have conveyed to me that they understand the magnitude of the remaining challenges and that they have the will and the resources they see as needed.
There has been a good deal written and spoken about the technology challenges facing each consortium and about the wide range of technological readiness among school districts within and across states. At least this issue has surfaced and is being addressed by states and vendors. I want to comment briefly in this blog on two other issues:
- The security of the consortia’s test items
- The likely difficulty of the consortium tests at the high school level
Security of Test Items
In discussions of test security, you frequently encounter the phrase “a chain is only as strong as its weakest link.” Yes, a cliché, but also a profound observation. The same consortium assessments will be given over a multiple-week period across more than 20 states. The results will carry consequences for the very teachers we rely on to oversee the administration of the assessments, so the incentive to compromise the test items will be very strong.
There needs to be a full exploration of the ways in which the exams will be vulnerable, followed by careful piloting of how each identified vulnerability will be addressed in design and program implementation. This needs to happen before the upcoming field test in 2014. Trying to bolt security features onto a testing program after all other dimensions are built and close to deployment is a recipe for trouble, possibly even disaster. Losing a large part of an item pool is a major problem for any assessment program; a security compromise that affects 20 or more states could be a catastrophe. All the other fine work to build the item banks and delivery systems would have been for naught.
Difficulty of the High School Tests
Descriptions of the test items being created often use words such as “greater depth” or “more rigor.” As someone who spent many years developing tests, I translate that, in the simplest terms, as “harder.” This is especially worrisome at the high school level, where tests are used as part of a graduation requirement. I truly question whether the introduction of significantly harder tests for high school students is sustainable. My reading of the history of using tests as a graduation requirement is that a final failure rate below 10% is an essential feature of such programs; the acceptable rate may be as low as five percent. Announcing plans to use tests that will fail a large fraction of students who otherwise would have graduated would almost certainly lead to a retreat from those plans.
So as various other pilots are planned and executed, I strongly recommend attention to the consequences for students of the level of exam difficulty being introduced. One could argue that it is possible to adjust the passing standard on very hard exams to take these consequences into account, but if that results in standards not much higher than chance-level performance, no one will be satisfied.
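To make the chance-level worry concrete, here is a minimal sketch of the underlying arithmetic. All numbers are hypothetical illustrations, not drawn from either consortium's actual test design: it estimates the share of students answering entirely at random who would still clear a cut score set just above chance on a multiple-choice section.

```python
from math import comb

def guessing_pass_rate(n_items: int, n_choices: int, cut_score: int) -> float:
    """Probability that a student guessing at random on every item
    meets or exceeds the cut score (upper tail of a binomial)."""
    p = 1 / n_choices  # chance of guessing any one item correctly
    return sum(
        comb(n_items, k) * p**k * (1 - p) ** (n_items - k)
        for k in range(cut_score, n_items + 1)
    )

# Hypothetical 60-item test with 4-option items: chance level is 15 correct.
# A cut score of 18 sits only slightly above chance, so a sizable share of
# pure guessers would still pass.
print(guessing_pass_rate(60, 4, 18))
```

The point of the sketch is the one made above: a standard set near chance-level performance passes many students on luck alone, which is exactly why nobody would find such a standard satisfying.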
I view the consortium tests and related development as something like an Interstate Highway construction project for the educational testing field. It has the potential to transform the world of state testing in a number of very positive ways. All of us with relevant skills and expertise need to be ready to pitch in and do everything that is needed for success, regardless of whether a Plan B has been implemented in particular states. We owe that to the students who will take these tests and to the teachers and other educators who will be central to the program’s contribution to schools and society.