By Mary Nguyen Barry and Michael Dannenberg
This past Monday, Education Reform Now submitted recommendations to the Department of Education (ED) regarding the President’s proposed college ratings system. In case you don’t want to dive into our 12 pages of comments, we’ll give you some highlights over the next two blog posts. First up: A general overview.
The Good
Overall, we support the concept and need for a federal college ratings system.
Despite improvements over the past 50 years, the American higher education system still calcifies economic inequality rather than acting as an engine of socioeconomic opportunity. College access for students from low-income families has improved, but the gap in degree completion rates between those from low- and upper-income families has grown. Rising net prices, driven by state higher education funding cuts, have outstripped growth in wages for poor, working-class, and even middle-income families. The result is heavier debt burdens, especially among low-income families, that are exacerbated by low completion rates and a long time to degree even among those who do complete.
We support rating colleges as “high-performing,” “low-performing,” and somewhere in the middle.
Not all colleges contribute equally, nor solely, to overall postsecondary education underperformance. There are high-performers – colleges that buck the trend and enroll and serve students from low-income families well – and low-performers – colleges that act as “engines of inequality,” “college dropout factories,” or “diploma mills.” It’s much easier, in the initial rounds, for ED to identify the “best and worst” colleges and to leave more nuanced gradations for later iterations of the ratings system.
We support using the ratings system to drive improved accountability and information to consumers.
The three-tiered rating system lends itself to rapid accountability provisions. We’ve suggested previously that the federal government at least begin the accountability process by identifying the “worst of the worst” colleges on a variety of access, success, and post-enrollment success metrics. These colleges should lose access to certain federal grant, loan, and tax benefits. Or, at the very least, they should be subject to a loss in competitive standing when pursuing non-formula-based discretionary grant funding and, separately, to heightened scrutiny, including Department “program reviews” of regulatory compliance.
On the consumer front, students and families need clear indications of a college’s performance along a streamlined set of outcome measures. The identification of the “worst of the worst” colleges would also send a bright signal to consumers that these are institutions to avoid. In the next blog post, we’ll discuss in more detail how the Department’s ratings can serve both accountability and consumer purposes.
The Bad
We do not support ED’s consideration of proposals to adjust outcomes for student characteristics or institutional mission.
We cannot stress enough our philosophical opposition to proposals (like the Association of Public and Land-Grant Universities’) that call for adjusting institutional outcomes based on personal student characteristics. Such “risk adjustment” consecrates a different set of expectations for different groups of students based on immutable characteristics, such as race and gender. It could also allow colleges to escape responsibility for providing quality service to every student they voluntarily enroll. It’s what former President George W. Bush referred to as “the soft bigotry of low expectations.”
Never before has federal higher education policy included this kind of outcome adjustment. In fact, the Obama administration firmly rejected the approach during the gainful employment debates. ED insisted then that it was appropriate to hold all institutions to certain minimum standards irrespective of student demographics. ED should apply that same principle in the context of a ratings system applicable to all degree-granting institutions of higher education.
What would an alternative be? Unadjusted outcomes should be compared among similar colleges serving similarly academically prepared students. This can be accomplished by creating “institutional peer groups.” We’ll explain how that would work, and highlight individual performers, in our next post.