What is to be done about rankings?

January 20, 2015
President, Campus Compact

Join me in a thought experiment. Let’s start with the premise that colleges and universities—those that are publicly funded and those that receive a public subsidy through tax exemption—should serve the public. From there, we can conclude that rankings should reward colleges and universities that serve the public well (even if they also reward other things). How do our current ranking systems do?

Before answering, I want to note that rankings are on the agenda for people who care about the public purposes of higher education all over the world. I was recently at the Talloires Network Leaders Conference in South Africa, a global gathering of educators and students. Over and over, we heard first-hand accounts of university leaders being pressured to push democratic and community goals to the background in order to advance in the rankings. Why would a university organized by residents in a rural community in Rwanda focus on research publishable in U.S. and European journals rather than on research that serves community needs? What else would we expect when national education officials tell them their funding depends on it? And what else would we expect from national officials when tuition and research dollars follow the rankings?

Rankings, however absurd, matter because they drive behavior. Right now, they drive behavior that undermines rather than serves public ends. In the U.S. News and Times Higher Education World University rankings, institutions do well if they turn away most students and accumulate vast wealth. The Times Higher Education ranking also gives universities credit for producing research that meets the standards of a narrow range of journals.

These rankings don’t ask whether institutions serve students from underserved communities. They don’t ask whether institutions support research that addresses issues of importance to their communities. They don’t ask whether graduates of these universities contribute to the common good.

There are some alternative rankings that head in this direction. Washington Monthly has taken the lead in developing rankings that try to capture institutions’ contribution to the country. Their rankings are clunky and arbitrary—you get as much credit for a research dollar spent on creating sweeter soft drinks as for one spent on improving early childhood education—but they ask the right kinds of questions. The New York Times ranked institutions using a college access index, with the proportion of low-income students captured through Pell eligibility. That is not a perfect proxy for low-income students—high-wealth families with little income sometimes qualify—but it prompted a useful conversation.

So what can we do? One answer is to generate alternative systems of meaningful evaluation. The Carnegie Classification for Community Engagement is a great example. It’s hard to game because it is based on labor-intensive in-depth analysis. On the other hand, for that very reason it can’t be done every year and doesn’t result in a ranked list. So it’s never going to generate the media attention of rankings.

Because the rankings are unlikely to go away, what should those of us who care about the public purposes of higher education do? Many college presidents already express doubts about the rankings, with little evident impact. Perhaps the best answer is for institutional leaders to join together in collective action to (1) spread the word about the limits of rankings and (2) pressure rankers to design better ranking systems. College and university presidents could refuse to share data with ranking entities, refuse to use rankings in their marketing materials, and use their public voices to condemn the rankings. And they could collaborate on reports to public and private funders helping them see how existing ranking systems steer higher education away from serving the public.

The least likely institutions to join this fight, it might seem, are the winners in the current rankings game. On the other hand, many of those institutions have bolstered their reputations in part by establishing themselves as leaders in advancing higher education’s public goals. Nearly all are members of Campus Compact, which means they have committed themselves to re-shaping higher education to advance the common good (Presidents’ Declaration). Just as some of America’s wealthiest individuals have added their voices to calls for preserving inheritance taxes, perhaps—through organizations like Campus Compact—even the rankings winners might join a broad coalition of college and university leaders organized to take on the rankings. It’s worth finding out.

All posts are authored by Andrew J. Seligsohn, President of Campus Compact.

2 thoughts on “What is to be done about rankings?”

  1. Thank you for raising the issue of rankings, especially in the context of the community engagement movement. About 5 years ago, I designed a graduate course on ranking systems in higher education. I developed the course for several reasons. First, in interviews with engaged scholars and administrators on engaged campuses, I heard many complaints about the rankings regime having taken over reward systems, resource priorities, and national attention. In other words, colleagues I met positioned the rankings as “in the way” of making their campuses more engaged. Second, the dominant ranking systems that receive the greatest attention (e.g., USNWR, Times, Shanghai) do not reflect what I consider the most important parts of my undergraduate experience—either the quality of education I received, or the way I was cared for as a person, mentored, or inspired to make meaningful contributions in the world. Third, I designed the course to teach myself exactly how the rankings work, the organizational behavior they encourage, and the known outcomes of that behavior, so that I might better critique them.

    Five years later, my students and I have learned a great deal. We approached the issue using a typical input-process-outcome model and came to understand early on that the dominant rankings primarily measure inputs—such as student selectivity, faculty research productivity, institutional age, and endowment size. Such inputs are so highly correlated with outcomes that students might as well hang out in a waiting room for four years, and the outcomes would be the same.

    Of particular concern is the organizational behavior the rankings regime has encouraged, which we have sought to understand in detail and which continues to evolve. In order to move up in the rankings, institutions have shifted more of their admissions to early action or early decision, offered more merit aid, recruited more out-of-state and international students, added more honors programs and reduced remedial programs, created spring enrollment programs to avoid reporting the lower SAT scores of those admitted students, and spent more money on competitive amenities like climbing walls, hot tubs, and expensive athletic facilities. As they compete to move up in the rankings, institutions recruit faculty stars, increase research expectations for tenure and promotion, encourage more faculty grant writing, and expect less in- and out-of-class time with students. Institutions striving for prestige can become more competitive, greedy workplaces that reframe “real work” and can fragment communities as they push toward ever greater individualism. There have been many examples of institutions reporting data incorrectly and intentionally cheating because of the pressure put on administrators to move up in the rankings. Accompanying this “striving behavior” is a change in institutional narrative and vision: institutions talk about how they are becoming better, world-class, or moving up in the rankings at the exact time that they are enrolling fewer Pell Grant students and fewer local or regional students, and are building more disciplinary ties and fewer local and regional ones. Also, in many countries, governments are merging institutions in order to get onto the world university rankings lists, making richer institutions richer and starving others of needed resources. Overall, rankings enhance the system’s natural tendencies toward strategic imitation, standardization, and conformity, and they discourage distinctive institutional missions.

    Through studying the literature and speaking with representatives from USNWR, Washington Monthly, the VSA, and national experts on world rankings, we sharpened our ability to critique what is happening and how it is affecting our institutions in troubling ways. Striving to move up in the rankings replicates and reproduces inequalities in higher education in very concrete ways; the rankings legitimize status hierarchies.

    However, I have also come to see that we are all part of the rankings regime. Every time a speaker is introduced by the name of the prestigious institution they attended, every time our search committees use peer networks and reputation to choose candidates, and every time we advertise the rankings on our institutional websites, we participate. Some have argued persuasively that the for-profit rankings simply replicated the prestige system that already operated within higher education before 1983; they just legitimized it with the claim that they had objective data to back it up. Although the first rankings were started by academics themselves to rank graduate programs, it was not until USNWR came on the scene that rankings gained such “space” in our system. If we had had a more compelling way to communicate the value and contributions of higher education to the public, the rankings would not have found and occupied such a powerful place in the system.

    Also, I have come to see that the dominant rankings pushed many institutions to create more transparent, public information and institutional research than existed before. We now talk about the difference between graduation rate and predicted graduation rate and the relationship between increased costs and striving, and we understand which campuses are best serving veterans, diverse students, and the goals of social mobility, thanks to the alternative rankings that have emerged in reaction to the dominant ones (such as those you mentioned—Washington Monthly, NYT). Although the dominant rankings were created as a product for consumers, not to push higher education to make better data more transparent, this has been a positive by-product.

    What is to be done about the rankings? I think we need to recognize that there is also a psychology to rankings: people love to categorize, rate, and rank as part of the way they make sense of the world. Higher education institutions are knowledge-intensive institutions and like using data to make decisions and comparisons. There is also a natural tendency toward competition and loyalty (just as my husband wants the Patriots to win the Super Bowl, I quietly root for Loyola University of Baltimore, the Buckeyes, or the Terps (my alma mater) to win any contest, whether or not I know the criteria are real). What we need to do, and what has already started to happen, is to find more ways to use this instinct to measure things that really matter. We need to use the scaffolding of rankings, and American instincts for competition, to shine a spotlight on other goals and outcomes.

    The Carnegie Classification for Community Engagement is a fantastic example, as you said, as it has encouraged data collection and recognition/prestige around an engagement agenda. Campuses are now placing their new classification on their websites and touting it in much the way they tout other rankings.
    In a similar way, in my rankings class this winter, students joined teams to use publicly available data to rank groups of institutions based on how well they were serving undocumented students, women faculty, and veterans. They used what they knew from higher education research to avoid the mistakes of other rankings by creating appropriate “fields” of institutions by size, endowment, public/private status, and other inputs. They used social science research to back up their choice of each criterion and the weight they gave it. They considered the striving behavior each criterion would encourage. The projects were fantastic! In past years, students have created alternative rankings for the degree to which Jesuit institutions were living their social justice missions, public institutions were really helping students get jobs, and land grants were achieving equitable outcomes (majors, retention, etc.) for under-represented minorities, to name just a few examples.
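    As a rough illustration of the students' basic method—normalize each indicator within a peer field, then combine the indicators using justified weights—here is a minimal Python sketch. The institutions, indicators, and weights are entirely hypothetical; a real exercise would draw on public data sources and research-backed weighting.

```python
def normalize(values):
    """Min-max normalize raw indicator values to [0, 1] within a peer field."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def rank(institutions, criteria, weights):
    """Score and rank institutions on weighted, normalized criteria.

    institutions: dict of name -> dict of criterion -> raw value
    criteria:     list of criterion names
    weights:      dict of criterion -> weight (should sum to 1.0)
    """
    names = list(institutions)
    scores = {name: 0.0 for name in names}
    for c in criteria:
        # Normalize each criterion across the peer field before weighting,
        # so criteria on different scales contribute comparably.
        normed = normalize([institutions[n][c] for n in names])
        for name, v in zip(names, normed):
            scores[name] += weights[c] * v
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical peer field of public institutions with illustrative indicators:
# share of Pell-eligible students and graduation rate for veterans.
field = {
    "Public U A": {"pell_pct": 0.42, "vet_grad_rate": 0.61},
    "Public U B": {"pell_pct": 0.28, "vet_grad_rate": 0.74},
    "Public U C": {"pell_pct": 0.35, "vet_grad_rate": 0.58},
}
weights = {"pell_pct": 0.6, "vet_grad_rate": 0.4}
ranking = rank(field, ["pell_pct", "vet_grad_rate"], weights)
# ranking is a list of (institution, score) pairs, highest score first.
```

    Note how the weights encode a value judgment (here, access weighted above veteran outcomes), which is exactly the point the students had to defend with research—and why the striving behavior each criterion invites deserves scrutiny before publishing such a list.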

    I think we can take this kind of approach and use it in the engagement movement. For example, are community engaged institutions treating their contingent faculty well? How about their staff? Which institutions use endowments to support university-community partnerships that help transform local schools and economic development? Which institutions are truly stewards of place? Sure, it is challenging to find publicly available data, but that hurdle can be overcome. It is challenging to put the information in a form that is easily understood by its consumers, but again, we can do it.

    Higher education institutions are cultural, resource-driven organizations that will always trade, to some degree, in a currency of legitimacy, prestige, and reputation. But we can use that energy to better measure and legitimize the ways in which our institutions make the differences that matter most to us.
    Thank you to my good friend Saul Petersen from NJ Campus Compact for letting me know of this post and suggesting I respond.

    P.S. Here are a few pieces I wrote with colleagues on this subject.

    O’Meara, K. (2007). Striving for what? Exploring the pursuit of prestige. In J. Smart (Ed.), Higher education: Handbook of theory and research, 22 (pp. 121-179). New York, NY: Springer.

    Meekins, M., & O’Meara, K. (2011). Ranking contributions to place: Developing an alternative model for competition in higher education. Public Purpose. Washington, D.C.: AASCU.

    O’Meara, K., & Bloomgarden, A. (2011). The pursuit of prestige: The experience of institutional striving from a faculty perspective. Journal of the Professoriate, 4(1), 39-73.

    O’Meara, K., & Meekins, M. (2012). Inside rankings: Limitations and possibilities (Working Paper, 2012 Series, No. 1). Boston, MA: New England Resource Center for Higher Education.

