Annual Testing in ESEA Reauthorization: A Red Herring?

Blogs, Letters & Testimonials

December 18, 2014


By: Charles Barone

When it comes to the reauthorization of the Elementary and Secondary Education Act, Fordham’s Mike Petrilli is right about one thing: “Senator Lamar Alexander, Representative John Kline, and their respective staffs have successfully freaked out sizable portions of the education-reform crowd – especially those who spend our days inside the Beltway bubble – by threatening to eliminate No Child Left Behind’s annual testing requirement.”

Petrilli thinks Kline, Alexander et al. have not yet “come to their senses,” but he “expects they will.” I’m more inclined to believe that annual testing is a giant red herring.

Threatening “annual testing” could be a brilliant negotiating tactic. The more freaked out the “education-reform crowd” is about annual testing, and the more singularly they stay focused on it to the exclusion of equally important issues, the easier it is for Kline and Alexander to take everything else off the table.

Whether intended or not, the signals the Kline/Alexander team is sending are having that effect. They’re also doing a whole lot more. Some, primarily on the left, are dangling their support for something that, at the end of the day, could be called “annual testing” while defining it in a way that renders Petrilli’s implicit definition meaningless. Stranger things have happened.

When Petrilli says “annual testing,” he implies, I think, and most people hear, statewide testing given to every student in every grade, 3 through 8. The most important thing about annual statewide testing is that all kids in each grade take the same test and all results are comparable from year to year. But over the last few months, the definition of “annual testing” has started to morph.

There’s a way to get to something that can be called “annual testing” while dropping the “statewide” part. Some test (in fact, a number of them) would be given to every student every year, but it could be a mix of state and local tests. And that mix could differ from grade to grade: a state test one year, all local tests the next. All of that would be cobbled together into a state accountability system that determines school and student progress over time.

This – a system in which “annual testing” is actually a year-to-year kaleidoscopic hodgepodge of state, local, district and even classroom-level assessments – is something about which, as Mike Petrilli might say, people should be equally “freaked out.” If you think this sounds like hype, it’s not. Please read on.

The apples-to-apples comparisons facilitated by current statewide accountability systems do a lot of good. Parents can be assured that the test or tests used to rate their child’s school are aligned with state academic standards to the same degree as the tests kids take at every other school. They can be reasonably sure – absent cheating – that those ratings aren’t inflated, whether by local political pressures, biases, or a sheer lack of technical capacity and expertise, to make things look better than they are for their child and their school.

This, by extension, becomes an equity issue. Any effort to advance equity requires comparability of student circumstances across zip codes, incomes, race, disability, and so on. Any accountability system designed to improve the achievement of those students and target resources toward them is out the window if every school is held accountable based on a different set of numbers.

You don’t have to take my word for it. The Board on Testing and Assessment of the National Academy of Sciences unanimously concluded that putting a mix of different national, state, and local measures together in a way that allows valid and reliable comparisons is simply “not feasible.” Year-to-year comparisons are among those that would not be statistically reliable or valid if one cobbled all those things together.

Nonetheless, earlier this year, Linda Darling-Hammond and others authored a paper that called for measuring yearly progress through a hodgepodge of assessments. Some tests would be statewide, others would be specific to local districts or, the way I read it, maybe even schools or classrooms. Some would be given in one grade, but not in the next. They don’t quite call it “annual testing,” but they come pretty close. My hunch is that that was intentional. When I asked people this week whether an ESEA bill containing what Darling-Hammond et al. dubbed the “51st State” plan would qualify as “annual testing,” many thought it might.

I get why some of this is attractive. Most of the assessments in use now (aside from PARCC, Smarter Balanced, and others that are still being piloted and on which the jury is still out) are clunky. Embedding state test items in a student’s everyday schoolwork would reduce a lot of the pressure that surrounds “sitting for a test” and enable faster, more sophisticated reporting of results. The types of “formative” local tests that Darling-Hammond touts would allow more instructional customization according to individual student progress.

But that’s very different from asserting that a statistical mash-up of those elements – and potentially much, much more – could provide parents with intelligible information about whether their child is at, or moving toward, grade-level performance. It’s not a system I, or most statisticians, would trust as a cornerstone on which to build societal efforts to realize the dream of equal educational opportunity, something many of us consider to be a civil right.

The problem for ESEA reauthorization is that the “51st State” approach is both a logical extension of Kline and Alexander’s stated goal of providing maximal federal flexibility and something that, at the end of the day, could be called “annual testing.” Some people are already calling it that. Put into federal law, at a time when most governors are already embroiled in education debates where logic isn’t always carrying the day, it may not be quite the gift to states that Alexander et al. have in mind.