This is the third in a series from Education Reform Now tracking the
implementation of the Every Student Succeeds Act (ESSA). Click here for more.
One of the biggest changes in the recently reauthorized federal education law is the new flexibility it gives states and school districts in choosing student assessments. The biggest wrinkle under the Every Student Succeeds Act (ESSA) is a pilot program through which a limited number of states can try out innovative local assessment programs.
Related: ESSA Implementation: Protect Parents & Students
I’m in favor of assessment innovation – it’s sorely needed – but I’m also extremely skeptical about the feasibility of local districts developing assessments that are valid, reliable, rigorous, and comparable with each other and with those in use statewide. These local assessments, after all, must also adhere to the law’s statutory requirements for state assessment systems: 1) meet professional psychometric standards; 2) be challenging and aligned with what’s required for enrollment in credit-bearing college coursework; and 3) be comparable across all students in the state.
One of the reasons most states moved to statewide assessments back in the 1990s was that, traditionally, poor and minority students, English Language Learners, and students with disabilities had been held to lower standards. There is always enormous pressure at the local level to portray school performance in a positive light and to whitewash achievement gaps. That’s not something much talked about by those who continue to paint Norman Rockwell-like pictures of local control. If local assessments reflect lower expectations for students, the resources now allocated through accountability systems geared to a single, comparable set of state tests – resources for early childhood education, extended learning time, tutoring, teacher training, technology, new curricula, and access to advanced coursework, for example – could be misdirected away from the areas that need them most, because each district or school would be measured by a different yardstick.
Related: ESSA Implementation: What Would a Fair Neg-Reg Process Look Like?
One would be hard-pressed to conceive of a locally designed assessment system that could clear the political and technical hurdles posed by the law’s requirements: that such systems be valid, reliable, comparable, and at least as rigorous as those in use statewide. While some psychometricians claim it is possible, there is virtually unanimous consensus that no one has yet come anywhere close to doing it.
For example, the first time these issues were debated in the late 1990s, Congress asked the National Academy of Sciences to convene a panel to answer the question: “Can scores on one test be made interpretable in terms of scores on other tests? Can we have more uniform data about student performance from our healthy hodgepodge of state and local programs?”
And the result was, as chronicled by Michael Feuer:
“After deliberation that lasted nine months, involving intensive review of the technical literature and consideration of every possible methodological nuance, the committee’s answer was a blunt ‘no.’”
More recently, even a program director who is working with New Hampshire to pursue an innovative local assessment system under an ESEA waiver admitted that there are “a lot of technical hurdles to overcome.” As Jennifer Davis Poon, program director of the Innovation Lab Network at the Council of Chief State School Officers, said last December in an Education Week piece entitled “ESSA’s Flexibility on Assessment Elicits Qualms From Testing Experts”: “A particular challenge in New Hampshire is figuring out how to get comparable results across locally developed tasks that vary from one district to another.” This means that even the experts who are farthest down this road still haven’t resolved one of the most fundamental problems of such systems.
Because of the high stakes here, I was surprised that no one addressed the local pilot issue head-on in the recent Fordham Foundation competition for new accountability system design ideas.
Perhaps for the above reasons, only three of the 10 proposals in the chart above even mentioned local assessments or indicators as having any role whatsoever in new accountability systems. Even in those three, the role of local assessments was fairly limited. None proposed using local assessments as a replacement for statewide assessments, although Richard Wenning envisioned using them as a long-term goal.
So the general consensus for now is that local assessments that drive both excellence and equity are still a ways off, if they are even possible at all. We’ll take a closer look at the local assessment issue in future posts.