Trends in Innovative Assessment Pilots Reveal Opportunities for Actionable Data as well as Equity Concerns

December 9, 2020

Overview

The Innovative Assessment Demonstration Authority (IADA) is one of the many provisions in the Every Student Succeeds Act (ESSA) designed to give states more flexibility around K-12 accountability than they had under ESSA’s predecessor, the No Child Left Behind Act (NCLB). Thus far, four states—Georgia, Louisiana, New Hampshire, and North Carolina—have been approved to participate in IADA and have had their applications published by the US Department of Education. IADA frees states from many federal requirements, including that the same summative assessments be administered in math and English Language Arts (ELA) in grades 3-8 and that all students in the state, with some exceptions, participate in the same statewide assessment.

In the aggregate, the new assessment systems work to address some of the widespread criticisms of standardized tests that developed in the wake of NCLB and the Common Core and other state standards: that statewide, summative assessments take away from valuable instructional time through test prep and administration; that assessments capture only a snapshot of what students know and are able to do at one point in time; that data is not immediately available and actionable for informing instruction; and that the results don’t fully capture students’ knowledge and skills. IADA states are responding to these concerns by proposing various through-year assessments that limit testing time and provide educators with valuable and timely information they can use to inform and personalize instruction, all of which could reduce testing backlash and improve outcomes for students.

As a part of these efforts, IADA programs have also embraced local control—in varying degrees—through assessment choices and locally developed test items. This can yield real benefits: greater educator buy-in and assessments more directly aligned with local curriculum. But it also comes with risks, the most concerning of which is comparability: when assessments cannot be compared against one another, students in different local education agencies could be held to very different standards, even though they will ultimately be applying to the same colleges and competing for the same jobs.

While local innovation is to be valued and encouraged, we should be mindful of the reasons that statewide standards and assessment systems were implemented in the first place: ensuring equitable educational opportunity by holding all students to high standards and measuring their progress against those standards through comparable means that allow outcomes to be disaggregated for students from historically disadvantaged groups. As the Department of Education works to increase participation in the program, we thought it prudent to review the existing programs with a specific focus on equity, since the approved applications will likely serve as models for new states looking to join IADA.

Through-Year Assessments

All states participating in IADA have opted to include through-year assessments—effectively breaking up large year-end tests into smaller tests administered over the course of the school year. Through-year assessments have a number of potential benefits, including that they:

  • Allow for flexible administrations, which can align test timing with associated instructional units;
  • Produce results throughout the year that teachers can use to inform and personalize instruction for students;
  • Provide multiple data points on student achievement, reducing the concern that student performance is judged at a single moment in time; and
  • Reduce total testing time and largely eliminate large, disruptive, days-long test administrations.

The states vary significantly both in the number of assessments administered throughout the year and in how those tests affect students’ final reported achievement scores. In New Hampshire, students complete as many as 25 “performance tasks” per year for each tested subject, while Louisiana and North Carolina each administer three assessments per year. In Georgia, the number of assessments administered throughout the year will depend on which of two pilot assessments is ultimately selected by the state: an adaptation of NWEA’s MAP Growth test would be administered three times a year, while a competency-based system called Navvy would allow students to retake assessments over the course of the year.

These systems largely call for scores from each assessment throughout the year to be combined into a single score (though the formulas are still in development), with two notable exceptions. North Carolina state law prohibits this practice, so only the year-end test will “count”; the others will inform teacher instruction as well as the questions students see on the end-of-year test—a model that already exists in many districts as “NC Check-Ins.” And Georgia’s Navvy test score will be a numeric representation of the number of competencies/standards a student has mastered by the end of the year.

The biggest strength of through-year assessments—real-time data to inform instruction—also has the potential to be IADA’s largest missed opportunity. As states, districts, and schools focus on ensuring a smooth transition to a new assessment system, training teachers to understand and use the data these tests produce can easily fall through the cracks—particularly in under-resourced schools. If the ultimate goal of assessments is to improve student achievement, this training cannot be an afterthought for states.

Louisiana, in particular, has stated a commitment to teacher professional development—citing its work improving access to and implementation of high-quality curriculum around the state as evidence that it has the will and capacity to do so. New Hampshire has also put a great deal of emphasis on professional development, including offering it to teachers in school districts that are not participating in its assessment pilot. By contrast, teacher training appears to be less of a priority in North Carolina, whose IADA application makes only passing references to webinars available to educators as its new assessment, NCPAT, is rolled out statewide.

Increasing Local Control

In the spirit of ESSA’s decentralization of power away from the federal government, the IADA pilots—with the exception of North Carolina—pass a great deal of decision-making along to districts.

  • When scaled statewide, Louisiana districts will be able to choose between the traditional LEAP 2025 English Language Arts assessment and the new, innovative LEAP 2025 Humanities test, which combines ELA and social studies content and includes through-year assessments. And within LEAP 2025 Humanities, districts can select three assessment modules from a bank of five—a total of 10 possible combinations.
  • In New Hampshire, district-developed “performance tasks” make up the majority of students’ total achievement scores. And as in Louisiana, there is no state mandate to adopt the new system. Instead, the state is using what it calls a “social movement” approach to build buy-in, improve implementation, and avoid backlash.
  • Georgia is taking a different approach to local control by harnessing existing local assessment innovations and creating a competition between two different systems. Over the course of the pilot, the two systems are relatively free to develop as they see fit. But unlike New Hampshire and Louisiana, Georgia will ultimately select a single assessment system (with the existing, “non-innovative” test among the candidates) that all schools will be required to use.

Though local control may indeed increase buy-in and produce exciting innovations, accountability systems that allow differences between districts should be cause for alarm. Even while giving states flexibility, ESSA is unambiguous in stating that all students in a state, with a few exceptions, need to take the same assessments. Without this, assessments may not fulfill their primary function: holding all students to the same high standards and comparing the performance of students in different schools and districts so that resources and supports can be directed to those schools and students that truly need them most.

New Hampshire’s Performance Assessments for Competency Education (PACE) system has the highest hurdles to clear in proving that its district-developed assessments are comparable, particularly in terms of content and rigor—and that teachers are scoring these assessments consistently across the state. As of now, there appear to be few quality controls in place to ensure all students are being held to the same high standards.

Equity Concerns

Of the participating states, Louisiana is the most explicit in its attempt to use IADA as an instrument for improving educational equity. The new assessment, LEAP 2025 Humanities, acknowledges that assessments cannot be content-agnostic and that content knowledge can trump skill mastery in reading comprehension scores. As a result, the test draws on the state-developed curriculum, ELA Guidebooks 2.0, with each assessment module using texts that students will have already encountered in the classroom, avoiding giving more advantaged students a potential leg up.

However, as mentioned above, Louisiana’s decision to allow districts to opt out of the Humanities assessment may undermine this commitment to equity by limiting comparability. Not only does this open the door to holding traditionally disadvantaged students—poor and minority students, students with disabilities, and English Learners—to lower standards, but resources might also be misdirected away from the areas that need them most if districts and schools are effectively being measured with different yardsticks.

Another risk to equity exists in the subset of districts currently piloting IADA assessments. In both New Hampshire and North Carolina, the students participating in the pilot phase of the program are not representative of the racial/ethnic populations of the state as a whole. While both states are committed to addressing this as they expand the pilot, any studies of comparability may not be valid for underrepresented student groups, limiting their usefulness in informing test adjustments and scaling efforts.

Models for New IADA States?

As states consider joining IADA, it’s critical that they weigh the potential benefits and risks of the various innovations in assessment currently underway. It may not be worth the investment of time and resources to make changes that only tinker around the edges, like North Carolina’s NCPAT system, which effectively just unifies two existing tests under one name. At the same time, a complex system of local assessments similar to New Hampshire’s PACE may be difficult to translate to larger, more diverse states and may strain already limited local capacity. To be clear, innovation in assessment should be encouraged, but states need to approach IADA with a clear understanding of the risks and benefits associated with making such changes.