By Kathleen Nugent, DFER NJ State Director
One of the more commonly heard criticisms of education reforms enacted over the last several years is the perceived overuse of data. Collecting information about student achievement through interim assessments and standardized tests, for example, has become a centerpiece of initiatives aimed at increasing teacher accountability and improving student outcomes.
Opponents frame these reforms as intrusive or unnecessary. However, people tend to forget that before recent concerted efforts to collect, disaggregate, and analyze data, there was no clear way of tracking a school’s impact on student learning apart from graduation rates (which have their own set of issues and arrive too late to support remediation for the students in that cohort).
Moreover, the term “data” is often used as if it were synonymous with standardized testing, but data encompass far more than test scores. Student outcomes such as literacy and numeracy levels, progress toward graduation requirements, and attendance rates, for instance, are now tracked more rigorously than ever. Classroom observations and student surveys are increasingly integrated into teacher evaluations. Together, these indicators can provide pertinent information about our school systems’ impact on student learning.
While this increase in data accessibility is a tremendous asset, there are legitimate questions as to whether the pendulum has swung too far with data collection and how best to use what is measurable. At a recent meeting in New Jersey attended by a diverse array of statewide education leaders, it was noted that districts gather great amounts of data, but few people have committed the time and energy to determine how to make that data meaningful. Furthermore, requirements for data collection and inputs have changed over time. For example, proficiency cutoffs on standardized tests, as well as methods for calculating graduation rates, have been adjusted. This makes longitudinal comparisons (which tend to be the most enlightening) challenging, if not impossible. All of the meeting’s participants agreed the solution was not simply to eliminate data from important decisions, but rather to define multiple measures and their optimal use so the data can be used to identify and help solve problems.
The meeting’s discussion recalled several points from a commentary in Education Week by Brad C. Phillips and Jay J. Pfeiffer entitled “Dear Data, Please Make Yourself More Useful – Sincerely, teachers and students,” in which the authors note:
Factions are setting up camp at two extremes: one for those who believe data is the Holy Grail, and the other for those who shun it. Meanwhile, our students are counting on us to help them learn and be successful. Consequently, we believe there is a way to acknowledge that both sides have valid concerns, while applying a ‘usefulness’ standard to make sure we’re collecting information that actually can be drawn upon to change schools for the better.
Phillips and Pfeiffer outline five suggested guidelines to make data useful:
1. “Engage teachers and decision-makers in the design of the tools used to collect data.”
2. “Create regular opportunities to huddle around the data.”
3. “Tailor reports to your audience.”
4. “‘Useful’ means many things and has many audiences.”
5. “Continuously hone validity and accuracy.”
The authors go on to compare useful data collection in education, and the longitudinal student profiles it could eventually yield, to the electronic health records that help doctors better treat their patients by granting access to full medical histories, persistent problems, and previous remedies. Of course, Phillips and Pfeiffer note that timely access to data is key and that, too often, data points arrive too late:
Instead of an array of indicators that teachers can use to make midcourse corrections and revised lesson plans that acknowledge their students’ needs while learning is in full swing, the emphasis is on summative test-score results, which measure learning at the end of a course of study.