The Soft Bigotry of Low Expectations Has No Place in K-12 or Higher Education (Part 2)


February 27, 2015

By Mary Nguyen Barry and Michael Dannenberg

We’ve been firm in our opposition to recommendations by organizations calling on the Department of Education (ED) to adjust the results of its planned college ratings system based on immutable student characteristics. So-called “risk adjustment” proposals effectively suggest there be different expectations for different groups of students based on demographics alone. It’s what former President George W. Bush decried as the “soft bigotry of low expectations.”

But it’s not enough to be opposed to those who call for risk adjustment.  Instead, we offer an alternative for ED to consider: compare unadjusted outcomes of similar colleges serving students with similar levels of academic preparation.  That’s different from demographic characteristics per se.

From an accountability perspective, it makes very little sense to compare an outcome, like graduation rates, at a college like Hofstra University in New York with those at Harvard University. Those two schools enroll students with completely different levels of academic preparation, not to mention vast differences in institutional size and wealth.

Hofstra University, and all colleges, should be compared to similar colleges that serve similarly prepared students. When one does this “peer institution” comparison, you see that while Hofstra University graduates its first-time, full-time students at a rate similar to the national average (61 percent compared to 59 percent), Hofstra University underperforms almost all of its peers in educating its students. Hofstra does an even worse job educating its underrepresented minority students as compared to its peers: just over half (54 percent) graduate within six years.

A peer comparison analysis would ask why similar colleges serving students with similar levels of academic preparation, like Syracuse University and Fordham University, both also in New York, graduate their students at much higher rates.



The same peer comparison analysis can also identify extremely poor performers. We’ve found that 9 times out of 10, a college with a graduation rate below 15 percent falls in the bottom of its peer group. These are the colleges – and there are over 100 of them – that ED’s rating system should identify and warn students and families against.

In short, whereas a risk adjustment model embraces different and lower expected outcomes for some students, based on race for example, a peer institution comparison technique avoids the embrace of artificially deflated expectations.

The trick is how to identify peer groups of similar institutions. For the above analysis, we used the College Results Online (CRO) algorithm for identifying peer groups, which has been peer reviewed and in use for 10 years. We suggest ED use a similar algorithm, but with a slight modification to remove the consideration of student wealth. We submit that ED should consider only key institutional characteristics when constructing peer groups for accountability purposes: students’ academic preparation, as measured by entering freshmen’s high school GPAs and SAT/ACT scores, along with institutions’ size, sector, admissions selectivity, and funding levels. Once peer groups are created, ED should:

  • Identify high, middle, and low performers among the ultimate outcomes it chooses for access, affordability, and success;
  • Measure an institution’s improvement over time by examining changes in its position within its peer group. Consider San Diego State University, for example, which steadily rose from the bottom third of its CRO peer group in 2002, with a 38 percent six-year graduation rate, to the top third of its peer group since 2005, with a graduation rate now at 66 percent; and
  • Guard against perverse incentives by rewarding successful access, affordability, and success outcomes among disaggregated groups of underrepresented students, such as racial minorities, low-income students, adult students, and upward transfer students. That’s what many states’ performance-based funding systems, like those of Tennessee, Ohio, and Indiana, do.
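To make the mechanics concrete, the peer-grouping and within-group ranking described above can be sketched in code. This is a minimal illustration only, not the actual CRO algorithm: the institutions, characteristics, and figures below are hypothetical, and the similarity measure (nearest neighbors on normalized input characteristics) is an assumption chosen for simplicity. The key design point it reflects is that peers are matched on inputs alone (preparation, size, funding), never on outcomes like graduation rates.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class College:
    name: str
    avg_sat: float      # entering-freshman SAT score (academic preparation)
    enrollment: float   # institution size
    funding: float      # per-student funding, thousands of dollars
    grad_rate: float    # six-year graduation rate, percent (an outcome)

# Hypothetical data for illustration only; not real institutional figures.
COLLEGES = [
    College("A", 1150, 10000, 20, 61),
    College("B", 1180, 14000, 25, 80),
    College("C", 1160, 15000, 22, 79),
    College("D", 1100,  9000, 18, 55),
    College("E",  800,  5000, 10, 12),
    College("F",  820,  6000, 11, 30),
    College("G",  830,  4500, 12, 28),
    College("H", 1400, 20000, 60, 96),
]

def _normalized_features(colleges):
    # Scale each input characteristic to [0, 1] so no single one
    # dominates the distance calculation.
    feats = [(c.avg_sat, c.enrollment, c.funding) for c in colleges]
    ranges = [(min(col), max(col)) for col in zip(*feats)]
    return [
        tuple((v - lo) / (hi - lo) if hi > lo else 0.0
              for v, (lo, hi) in zip(f, ranges))
        for f in feats
    ]

def peer_group(target, colleges, k=3):
    """Return the k institutions most similar to `target` on input
    characteristics only -- outcomes play no role in matching."""
    feats = _normalized_features(colleges)
    idx = {c.name: i for i, c in enumerate(colleges)}
    t = feats[idx[target.name]]
    others = [c for c in colleges if c.name != target.name]
    return sorted(others, key=lambda c: dist(feats[idx[c.name]], t))[:k]

def peer_position(college, peers):
    """Fraction of peers the college outperforms on graduation rate."""
    return sum(p.grad_rate < college.grad_rate for p in peers) / len(peers)

# Rank a college within its peer group and bucket it into the
# high / middle / low performance tiers described above.
a = COLLEGES[0]
peers = peer_group(a, COLLEGES)
pos = peer_position(a, peers)
tier = "low" if pos < 1/3 else "high" if pos > 2/3 else "middle"
print(a.name, [p.name for p in peers], f"position={pos:.2f}", tier)
```

Under this sketch, a college flagged as an extremely poor performer would simply be one that both lands in the low tier of its peer group and falls below an absolute floor (such as the 15 percent graduation rate mentioned above).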

Finally, in order to encourage positive decision-making among students and families, we recommend that ED create a second, tailored peer group for presentation to consumers. This second peer group would compare schools to the other colleges to which a student is likely to apply. That’s because a student’s choice set – which may be driven by factors like geography or reputation – likely differs greatly from a national peer group of colleges that serve similarly academically prepared students. These informative consumer selection peer groups can be constructed based on groups of colleges that students list on their FAFSA applications and/or on their ACT and SAT score submissions.

By following this two-level, institution peer group approach, ED can ensure its rating system is both fair to institutions and helpful to students and families.