Over the past year, I’ve noticed several respected blogs and websites (here, here, and perhaps most notably, Blog Emperor Caron, here) link to a site called “Law School Advocacy” (formerly “Best Moot Court Programs”), which purports to be a running ranking of law school moot court and trial advocacy programs. To be clear, I have no problem with the general premise of a rankings system. Nearly every law school in the country, mine included, trumpets its advocacy program as among the “best in the nation,” despite the fact that it’s mathematically impossible for all 196-plus schools to have “top ten” programs. From that standpoint alone, it would be nice to have a rankings system that acknowledges the programs that are truly a cut above the rest. My complaint about the “Law School Advocacy” site, which I suspect isn’t much different from the standard criticisms of the US News law school rankings, is that the methodology used by “The Ranker” is severely misguided.
The first thing that should be recognized is that the self-proclaimed “ranker” is not a professor, a judge, a well-practiced attorney, or a long-time observer of law school advocacy competitions. He’s Brian Koppen, a 2007 law school graduate who, like thousands of others each year, competed in moot court as a student. Of course, that doesn’t mean his rankings should be disregarded. Indeed, if they were grounded in solid methodology, his experience would be less relevant. And I do give him credit for attempting something that has never been done before. But I am a bit puzzled that nobody seems interested in asking whether the rankings are reliable to any degree, or whether the ranker himself has the knowledge and expertise to issue them. (To his credit, Professor Yates expressed mild hesitation on this issue before linking to the rankings.)
I recognize that my criticisms are likely to be construed as sour grapes coming from a professor at a school that fails to crack Koppen’s current “top 72.” But I arrive at Texas Tech as the just-departed Director of Advocacy at tiny Texas Wesleyan University School of Law, which Koppen ranked 10th in 2007 and which currently sits 8th in the 2008 rankings. As proud as one would be of a top-ten program (and I do believe Texas Wesleyan is a top-ten program regardless of the rankings used), I can’t ballyhoo the results of a methodology so riddled with problems. I’d have these objections whether my program ranked 1st, 6th, or 60th.
A few of my criticisms:
1) You can’t accurately rank an advocacy program solely on team results in national competitions. Those of us who have been doing this for any amount of time know that a top finish, while a nice affirmation of a school’s performance, can be as much a product of luck as of skill. Anyone who plays poker will understand: over the long term, you won’t win if you’re not good, but anything can happen at a single competition. A terrible team can skate through to a top finish on a brief that inexplicably scores well. A great team can lose a round and be sent packing because one judge thinks an advocate looked at him funny. My point? Finishes at competitions are extremely important, but they don’t even begin to paint an accurate picture of the merits of a law school’s moot court or mock trial program. Moreover, by focusing solely on results, the rankings are likely to show a great degree of variance from year to year. A school may be top ten this year and entirely unranked the next. Credible ranking systems don’t feature that degree of volatility.
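The poker analogy can be made concrete with a little arithmetic. A quick sketch (the per-round win probability and bracket size below are invented purely for illustration, not drawn from any real competition):

```python
import math

def title_probability(p_round: float, bracket_size: int) -> float:
    """Chance of winning every round of a single-elimination bracket,
    assuming an independent win probability of p_round in each round."""
    rounds = int(math.log2(bracket_size))
    return p_round ** rounds

# A hypothetical team that wins 70% of its individual rounds still
# takes the title less than 17% of the time in a 32-team bracket.
print(f"{title_probability(0.70, 32):.3f}")  # prints 0.168
```

In other words, even a genuinely strong team should expect to lose most single-elimination events it enters, which is why any one season’s worth of top finishes is a noisy signal of program quality.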
2) You can’t give equal weight to, for lack of a better term, “unequal” competitions. Obviously, not all moot court competitions carry the same degree of distinction. Some (such as the National Moot Court Competition and the ABA National Appellate Advocacy Competition) are far more prestigious than others, yet Koppen makes no adjustment for that. Instead, he blindly awards points based on how many teams entered a competition and excludes from his rankings any competition that falls below his arbitrary team-count threshold (thankfully, he recently reduced that qualifying number from 24 to 16).
The problem, of course, is that you can’t judge the prestige of a competition by merely eyeballing the number of teams that enter. Some competitions are (often by design) small and yet incredibly prestigious. Others are large merely because they’re held in a fun place (like the Tulane Sports Law competition, held in New Orleans during Mardi Gras). Koppen’s system is akin to ranking Division III football teams alongside Division I teams based solely on wins or the number of teams in their division, without accounting for the fact that the divisions are completely unequal in prestige and strength.
An example: this year’s 16th-ranked team is the University of Hawaii, whose only (albeit impressive) accomplishment was finishing in the top three of a single competition. But because that competition fielded 70 teams, the points Hawaii racked up put it on the verge of a “top-15” program. Perhaps Hawaii is a top-15 program. But are we willing to say so based on one semifinal finish at one competition? Really?
3) The rankings ignore regional champions and runners-up at the National Moot Court Competition. He’s of the opinion (likely because he never competed in the National) that the competition’s regional rounds are too small to be worthy of inclusion. What he doesn’t understand is that most law schools send their very best team (or teams) to the regional rounds of the National, so winning an eight-team competition among your region’s “A-list” teams is far more difficult than winning a run-of-the-mill, 30-team, open-entry competition. He’ll only award points if a school advances to a top finish at the national rounds of the National, when in reality, some recognition should be given to the 26 teams that even qualify to get there. It’s arguably THE premier event of the entire year, and yet under his system, it’s given less weight than any other competition that draws more than 26 teams.
4) He refuses to acknowledge success at invitational or state-level competitions. Why should a strong program be penalized because it has the reputation and opportunity to enter an invitational? Likewise, some state competitions are extraordinarily competitive. I recognize my bias, but Texas’s eight law schools (led by South Texas College of Law) are consistently among the top performers on the national stage. At last year’s ABA NAAC finals in Chicago, 5 of the nation’s 24 teams were from Texas, even though Texas, the nation’s second-most-populous state, has just 8 law schools. By contrast, California (#1 in population) has 61 law schools, New York (#3) has 15, Florida (#4) has 11, and Illinois (#5) has 9, six of them in the city of Chicago alone. Our state competition (held each summer) is a virtual bloodbath, and winning it carries national prestige. He won’t count that in his rankings, and that’s a mistake.
5) Why rank according to the calendar year rather than the academic year? Nothing in the academic world works on a January-to-December basis. It’s inaccurate to say “School X is the strongest in moot court this year” when the results you’re considering are split across two academic years (and two different rosters of students). Plus, a large number of competitions really begin in the fall, with the release of problems and brief deadlines, and conclude in the spring.
6) Koppen refuses to consider brief and advocate awards. Why? Aren’t those strong indicators of a quality program?
7) The rankings system rewards quantity, not necessarily quality. A school like South Texas will ALWAYS be in the top three of Koppen’s rankings because it sends a team to nearly every competition. While it’s true that a team must finish in the top four to earn any points, the simple laws of probability tell us that South Texas will garner its share of top finishes when it attends 30-plus competitions every year. Texas Tech, on the other hand, picks and chooses 10 or so competitions, some of which may not meet Koppen’s arbitrary cutoff of participating teams. This system virtually guarantees a top ranking for any school willing to spend the money to attend as many competitions as it can.
By the way, don’t misconstrue this point to be a slight on South Texas. Dean Treece and his coaches have built an enviable program and should be at the top of any rankings list regardless of methodology. The problem is that Koppen’s rankings reward and favor South Texas for the quantity of competitions it enters, as opposed to the actual strength of its advocates.
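The probability point can be illustrated with a quick simulation. Every number here, from the per-event odds to the entry counts, is invented for the sake of the sketch and reflects no actual school’s record:

```python
import random

random.seed(42)

def expected_top_finishes(p_top_finish: float, n_competitions: int,
                          trials: int = 100_000) -> float:
    """Average number of top finishes per season, simulated over many
    seasons, where each competition is an independent coin flip."""
    total = 0
    for _ in range(trials):
        total += sum(random.random() < p_top_finish
                     for _ in range(n_competitions))
    return total / trials

# "Selective" program: better odds per event, but enters only 10 competitions.
selective = expected_top_finishes(p_top_finish=0.35, n_competitions=10)

# "Volume" program: weaker odds per event, but enters 30 competitions.
volume = expected_top_finishes(p_top_finish=0.15, n_competitions=30)

print(f"Selective program: ~{selective:.2f} top finishes per season")
print(f"Volume program:    ~{volume:.2f} top finishes per season")
```

Under these assumed numbers, the weaker-per-event “volume” program expects roughly 4.5 top finishes a season to the stronger “selective” program’s 3.5, so a points system that counts only raw top finishes ranks the volume program higher despite its advocates being, by construction, worse.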
Those are my major complaints. Don’t take them as an attack on Koppen; again, I think his effort is admirable, and perhaps a good start. Given the effort many of us are expending on advocacy activities, a credible ranking system would be a positive thing. Over the next year, I’ll be working to develop a formula that I hope better represents strength in advocacy. The ranking system will certainly include competition results, but I’ll also seek out a reliable method of incorporating competition prestige, individual advocate awards (both oral and written), a school’s non-competition programs, and reputation among deans and advocacy directors.
I welcome your (and Mr. Koppen’s) assistance and input. Throughout the next year, I’ll periodically update the status of my efforts and the relevant thoughts anyone wishes to share.