Metrics and Rankings

I think that’s one of the more boring subject headers I’ve used for this blog. Honestly, though, I love metrics. I love objective measurements and rankings and ratings.

Today I was part of a conversation at work about a company that's working on metrics for ranking Newman Centers (Catholic centers on secular campuses) across the country. Such a thing doesn't currently exist.

Isn't that odd? Something so simple that doesn't exist yet.

It got me thinking: what else out there hasn't been ranked by a metric system? This isn't just fun and games. This is business. What do you think of when you think of U.S. News & World Report? Their college rankings, right? They've cornered that market.

This is something that’s so easy–you don’t have to own anything. You just have to create a metric that makes sense, and then package it with something that makes money. The company that creates the Newman Center metric is going to be known as the company for evaluating Newman Centers. That’s a big deal when you consider the amount of fundraising those centers do, and they pay well for consulting.

So what else is out there that doesn't have a metric-based rating?


  1. USN+WR has cornered the market – for people who still care about those rankings.

    Newman Center rankings are off the Esoteric-o-meter. You're not going to pick up and switch colleges 'in medias res' due to said rankings. And no one chooses grad school based on the Newman Center. These rankings would be actionless; the end-user utility seems to be zero, and that's why these rankings don't exist.

    It's another game for insiders to measure themselves against each other, like USN+WR, but ultimately outsiders (namely: students) won't care.

    But here's another way to look at it: applying the wrong model to the data. The "model" of rankings is always a linear model (1-100, etc.). Rather, if you "had" to rank them, a more appropriate model might be binary or discrete categories: "thriving," "shrinking," "highly participatory." Grouping Newman Centers based on size and participation would reveal enough information for the end user to make a decision. Anything beyond that is either (a) too subjective or (b) objective but uncorrelated with the end-user experience.

    …whew…. sorry that took so long to write.

    • Ah, Joe, I’d expect more from you. You’re focusing on the wrong end-user. Sure, some students may be curious about Newman Center rankings. We get plenty of prospective freshmen in the Wash U Newman Center that are curious about the quality of the place. But they wouldn’t be the main users of the metric.

      The primary users would be the Newman Centers themselves. Organizations are always comparing themselves to similar organizations to see how they’re weighing in. Newman Centers are no different. They want to know (a) how their ministry stacks up against similar ministries and (b) how their fundraising and development stack up against similar ministries. Both are valuable tools when you’re raising funds.

      • Ok… so if the end user is the Newman organization, and the organization is paying attention to the metric, aren't they taking their eyes off the prize: the students? Not that fundraising isn't a priority, but how much of a priority should it be? One worthy enough of its own ranking system, consulting gig, etc.? Is a "Newman" that much different from any other church that the consulting need be Newman-specialized in order to be useful? I doubt it.

        • You’re right, the “prize” is a quality campus ministry. But as you said, the students–for the most part–aren’t the ones interested in that quantification. The actual employees of the Newman Centers are. The tertiary users would be potential donors.

        • Not to chime in here on something I don’t know ANYTHING about (doesn’t sound like me AT ALL), but a couple of thoughts on the debate raging here.
          1) Metrics can be infinite. (Number of attendees vs. the size of the target audience; qualitatively collect info on attendees' experience at the Center; analyze which centers are getting the most bang for their buck by relating funding to a calculus of number of attendees vs. quality of experience.) The question is: what do you want to know? What is important, and how can I get actionable data? It's all about asking a wide base of people the right questions.
          2) Applying business analyses to a religious initiative makes me a little icky inside. I did a tour of the St. Louis Archdiocese last Lent: 5 churches in 40 days. Culturally, St. Mary Magdalen (Brentwood) is very different from St. Cronan's, and the Basilica is different from St. Gerard Majella (I didn't make it to St. Stanislaus or St. Alphonsus of the Rock). But I don't like Dogma's "Buddy Christ" approach. I know churches need to be financially viable, but it's a slippery slope from magazine-subscription fundraisers to "This Holy Communion is brought to you by Panera."

          • I see what you’re saying about the “icky” side of applying business metrics to a religious institution. But, from an insider, I’ll admit that it’s tough to measure success. Sometimes a success is a single student who grows due to the CSC’s ministry. In fact, that’s a huge success. But how do you measure that on a larger scale? Is one student enough? The more people you’re talking about, the more quantitative your analysis becomes. It has to.

            • Agreed. There are qualitative research methods, some of which I am applying at work as we speak (determining what buckets various comments fall under). But as soon as you talk about "measuring success," you're looking for a way to quantify. If this is your goal, I have always been a fan of surveys. From 1-10 (1 being "my significant other drags me here when I'd rather be sleeping" and 10 being "the Newman Center has had a positive, life-altering impact on me, and I am paying it forward to others"), how has your attendance at the Newman Center affected you? The problem is that even then, how much is enough? If 10 people take the survey and return only 30 points, are you not successful? How much is enough? What is your goal? If you can never achieve enough success, then why measure it? And if you're going to change your tactics to achieve success, how is that not pandering to your audience?

              • The one thing I’d add here is that I think these types of metrics have to be created without surveys in mind. That way you can create the metrics across the board without requiring all the various institutions to disseminate the exact same survey to the same constituents.
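As an aside, the 1-10 survey scoring debated above is easy to make concrete. Here's a minimal sketch; the function name, the goal threshold, and the sample responses are all invented for illustration, not anyone's actual methodology:

```python
# Hypothetical sketch of the 1-10 survey scoring discussed above.
# The goal_average threshold is arbitrary -- which is exactly the
# "how much is enough?" problem the commenter raises.

def survey_summary(scores, goal_average=5.0):
    """Total and average a list of 1-10 survey responses, and flag
    whether the (arbitrary) goal average was met."""
    total = sum(scores)
    average = total / len(scores) if scores else 0.0
    return {"total": total, "average": average, "met_goal": average >= goal_average}

# The commenter's example: 10 respondents returning only 30 points.
result = survey_summary([3] * 10)
print(result)  # total 30, average 3.0, so the (made-up) goal is not met
```

Note that the interesting part isn't the arithmetic; it's choosing `goal_average`, which the math can't do for you.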

  2. Actually, Jamey, it's even easier than you think. Newsweek's highly successful (and idiotic) ranking of US high schools has proven that your metric need not even make sense. So I'm sure you can feel free to leave the "makes sense" part out of your equation as well. All you need is a ranking, however arbitrary it might be, and a way to make money.

      • Wow, I'm scared of what's going on up there, so I don't really want to get involved, but to answer your question: Newsweek ranks US high schools based on how many AP/IB exams are given divided by the size of the senior class. Whoever gives the most AP/IB tests per senior is the best school in America! You tell me if that makes sense.

        I think you will agree that, unlike what Red suggests, they are not asking the right question.
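The ratio described above (AP/IB exams administered divided by the size of the senior class) can be sketched in a few lines. The school names and numbers here are made up purely to show how the ranking behaves:

```python
# A minimal sketch of the Newsweek-style ratio described in the comment:
# AP/IB exams administered per graduating senior. School data is invented.

def challenge_index(exams_given, senior_class_size):
    """Tests administered per senior -- the entire ranking criterion."""
    return exams_given / senior_class_size

schools = {
    "School A": (450, 300),  # many exams, large senior class -> 1.5
    "School B": (120, 60),   # far fewer exams, tiny class    -> 2.0
}

ranked = sorted(schools, key=lambda s: challenge_index(*schools[s]), reverse=True)
print(ranked)  # School B ranks first despite giving far fewer exams
```

The sketch makes the commenters' objection visible: the metric rewards exams given per head, regardless of outcomes, scale, or anything the "end user" might care about.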

