Written by John Fremer, President Caveon Consulting Services
[Blog writer’s note – I wrote this on my 74th birthday, based on 51 years (and still counting) in the testing profession and industry, reflecting on the world of state assessments.]
Once upon a time, in a land different from ours, the wondrous features recalled here were visible to all.
No Department of Education Directives
There were no directives from the Department of Education for states to adhere to. There were no Department of Education reports to write. There was no need to seek waivers, as exquisitely detailed specifications for state assessments did not exist. Some readers already know why this "directive-light" situation existed: there was no Department of Education until 1980. Prior to that date there was an Office of Education, which did pay some attention to state-level testing.
Choosing a State Test was Fairly Easy
The selection of a state assessment instrument was fairly easy: one picked the existing commercial test battery that was most attractive. The "Big Three" testing companies, The Psychological Corporation, California Test Bureau, and Riverside Publishing, were the providers. (NCS, forerunner of Pearson, primarily did test scoring.) At one point The Psychological Corporation had only two state assessment contracts, the state of Hawaii and one grade in Maine. PsychCorp used those two data points to say that the Stanford Achievement Test, its leading battery, was the most widely used test for state assessments. That remains my favorite example of "positive product positioning" in the world of state assessments.
Measurement Expertise was Found Mostly in Testing Companies and Universities
There were some state assessment staff with first-class measurement training, but they were the minority. Research and innovation were going on inside the testing companies and in a few university programs. The State Assessment Directors met at the ETS Invitational Conference, but this was as much a social event as a professional gathering.
There was Little Media Attention to State Assessments
State-level assessment staff fretted about the lack of interest on the part of the news media in the results of state assessments. Strategies such as mock administrations for reporters were employed, and model stories about the value of state assessments circulated among the states. There were virtually no stories about cheating on state tests, probably because the consequences of the tests were limited for students and educators alike. In general, the notion was that the state assessments were a kind of educational census, not to be taken too seriously.
Early National Assessment Explorations
There was some sentiment that a national educational skills census would be desirable, and an "Exploratory Committee" was formed to look into this possibility. I was a new assessment worker in the 1960s, and I had the assignment of going to schools to try out items that involved balance beams and other simple types of equipment. I also got to ask nine-year-olds some profound questions. I remember a nine-year-old's response to the question "What do you think are the three most important problems in the world today?" His first two answers were "War" and "Poverty." He puzzled a bit and added "and late school buses." After all these years, that ranks up there with my favorite testing experiences.
What is the same?
Some attributes of testing remain the same as fifty years ago, when I started working in testing. That "different land" is, of course, the US in the 1960s. For example, there were forceful and often articulate critics who thought too much attention was being given to testing and who decried the use of multiple-choice items. There was a "mission creep" issue, where tests made for one purpose were being used in ways not envisioned or recommended by the test makers. Testing was growing, and some thought that its growth was unsustainable. How much testing could we productively employ? Most teachers and other educators felt that they did not know very much about testing, even though they had been trained at a time when a testing course tended to be part of their program, a practice later pretty much abandoned. It was also the case that workers in testing felt that they carried a great deal of responsibility but most often were not at the table when policies were being set and major decisions made about the future of testing. So it was, so it is, and probably so it will continue to be.
Re-reading what I wrote, I guess my conclusion is that despite some major changes, key features of the world of state assessment, and the place of measurement-trained staff in it, remain pretty much the same over the past 50 years. For example, our work is still being criticized, and we continue to have little control over major decisions about what will be tested and how.