Everyone knows that the annual US News & World Report “league tables” that purport to rank American law schools are deeply flawed. US News, of course, likes to keep up the pretense of statistical validity by periodically assuring us that it doesn’t just stick a pin in a sheet of paper, but actually uses an algorithm to put the rankings together.
Algorithms are all the rage these days. Google’s is probably the best known, and it seems to do the job pretty well, so US News clearly hopes that the fact that it uses one at least gives its tables some minimal degree of plausibility.
But algorithms don’t invent themselves. While they enable the processing of a huge number of variables, they still require humans to decide upon the weighting to be given to each variable. Google, for example, is constantly tweaking its search results algorithm. It no longer relies on sites’ use of tags and keywords; in fact, it will nowadays punish with very lowly rankings those sites that overuse keywords in an attempt to game the system. It would be interesting to know when, why, and how US News last made a meaningful change to its algorithm in response to abuse.
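To see why the human-chosen weightings matter more than the algorithm itself, consider a minimal sketch of a weighted-sum ranking. The schools, variables, and weights below are entirely hypothetical; the point is that the same data and the same trivial algorithm produce opposite rankings depending solely on the editorial choice of weights:

```python
def rank(schools, weights):
    """Score each school as a weighted sum of its metrics, sorted highest-first."""
    scores = {
        name: sum(weights[k] * metrics[k] for k in weights)
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical, already-normalized metrics for two made-up schools.
schools = {
    "Alpha": {"reputation": 0.90, "selectivity": 0.60},
    "Beta":  {"reputation": 0.60, "selectivity": 0.95},
}

# Emphasize reputation and Alpha comes first...
print(rank(schools, {"reputation": 0.7, "selectivity": 0.3}))
# ...emphasize selectivity and Beta comes first: same data, same algorithm.
print(rank(schools, {"reputation": 0.3, "selectivity": 0.7}))
```

Nothing in the arithmetic tells you which set of weights is right; that judgment is made by people, and it is exactly the judgment a publisher never has to defend so long as the word “algorithm” does the reassuring.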
The other fundamental problem with algorithms is that, even if they are a model of perfection, the degree of success in their application still depends on the quality of information being processed. I have yet to see anything from US News that suggests that the information it uses has been appropriately curated.
Nothing new here, you say — time to move along! Except that I have just received two letters from US News, each asking me to complete a survey form on American JD programs (one for full-time programs, one for part-time). It is worth quoting here one particular sentence. It is taken from the letter relating to full-time programs, with the slight differences in the text of the letter relating to part-time programs indicated by square brackets:
> Surveys are [This survey is] being sent to the law school dean, dean of academic affairs, chair of faculty appointments, and the most recent tenured faculty member at each law school accredited by the American Bar Association [which has a part-time program].
Fair enough, you might think. Except that I’m not — and never have been — the dean of Stetson’s law school. Nor have I ever been Stetson’s dean of academic affairs. I was last chair of our faculty appointments committee over six years ago, and was awarded tenure at Stetson over ten years ago.
If US News’s league tables represented a genuine attempt at a rigorous and defensible survey, and its authors had established some degree of credibility, the sensible response would be for me to contact one of them to explain the error.
I’d then expect there to be an investigation to find out whether the error involved simply a one-off mistake, or a problem likely to invalidate the whole research project, with the authors then acting accordingly. But US News has no such credibility, and it seems unlikely that, if I were to contact the report’s authors, they would do any such thing.
Several things nevertheless intrigue me about this:
- How many others have received these survey forms without falling within any of the listed categories of intended respondents?
- How and why does US News think that I (and any others in a similar position) fall within one of these categories?
- How many such mistakes are made each year?
- Does US News care?
But perhaps the most damning of all the questions worth asking is this one: is there something special about the designated categories of respondents that makes for a better-quality survey?
I rather doubt it.
Adopting these categories looks to me like an attempt to cloak the project with spurious rigor. I suspect that US News knows that full well. So, if some people receiving these survey forms do not fall within one of these categories, why should they care?
Or perhaps the real question is: Should anyone care?