September 26, 2008

"The Ranker" responds to my critique of his ranking system

Earlier this week, I entered a lengthy post on the thorny issue of ranking advocacy programs. My entry was aimed primarily at the methodology of attorney Brian Koppen and his site, "Law School Advocacy."

I thought it only fair to alert Mr. Koppen to my criticisms and allow him the opportunity to respond with a post of his own. I'm happy to report that he has taken me up on my offer.

Below is my original post (in small type, original links omitted) with Koppen's responses interspersed in larger, bold, italic type.

I'll likely reply to Koppen's responses some time next week.

-----------------------------

Over the past year, I’ve noticed several respected blogs and websites (here, here, and perhaps most notably, Blog Emperor Caron, here) link to a site called “Law School Advocacy” (formerly “Best Moot Court Programs”), which purports to be a running ranking of law school moot court and trial advocacy programs.

I take issue with the word “respected.” I started the ranking to reveal these types of comments for what they are: the author’s opinion. I wanted to get people asking, “By what measure do you proclaim yourself the best moot court program in the nation?” Here, however, since they’re linking to my site, I’ll give them the benefit of the doubt.

To be clear, I have no problem with the general premise of a rankings system. Nearly every law school in the country—mine included—trumpets its advocacy program as among the “best in the nation,” despite the fact that it’s mathematically impossible for all 196-plus schools to have “top ten” programs. From that standpoint alone, it would be nice to have a rankings system to acknowledge those programs that are truly a cut above the rest. My complaint regarding the “Law School Advocacy” site, which I suspect isn’t much different from the criticisms of the US News law school rankings, is that the methodology used by “The Ranker” is so severely misguided.

I think the first thing that should be recognized is that the self-proclaimed “ranker” is not a professor, nor a judge, nor a well-practiced attorney, nor a long-time observer of law school advocacy competitions. He’s Brian Koppen—a 2007 law school graduate who, like thousands of others each year, competed in moot court as a student. Of course, that doesn’t mean his rankings should be disregarded. Indeed, if they were grounded in solid methodology, his experience would be less relevant. And I do give him credit for attempting to do something that has never been done before. But I am a bit puzzled that nobody seems interested in asking whether the rankings have any degree of reliability, or whether the ranker himself has the knowledge and expertise to issue them. (To his credit, Professor Yates expressed mild hesitation on this issue before linking to the rankings.)


If we go to the “qualified to rank” argument, it seems the first disqualified from ranking law schools would be those currently paid by law schools (or by schools owning law schools). So it at least raises my eyebrows that The Bench Brief would take issue with the fact that the “ranker” is not a professor.

There are not many among the non-professors who can compete on qualifications if the measures are accomplishment as a participant and/or accomplishment as a coach. Two of the three or so historically prestigious competitions in moot court are the ABA NAAC and NYC Bar National MCC. At the 176-team ABA NAAC, Brian Koppen was a national semifinalist and a regional best-brief winner. See http://www.abanet.org/lsd/competitions/results/07naac.pdf.

Brian Koppen was also an alumni coach for the national winner of the 185-team NYC Bar National MCC, a team that included two of his ABA teammates. See http://www.kentlaw.edu/depts/alums/enews/Apr08/index.html.

I recognize that my criticisms are likely to be construed as sour grapes by a professor at a school that fails to crack Koppen’s current “top 72.” But I arrive at Texas Tech as the just-departed Director of Advocacy at tiny Texas Wesleyan University School of Law, which Koppen ranked 10th in 2007, and which currently sits at eighth in the 2008 rankings. As proud as one would be of a top-ten program (and I do believe that Texas Wesleyan is a top-ten program regardless of the rankings used), I can’t ballyhoo the results of a methodology so riddled with problems. I’d have these objections whether my program was 1st, 6th, or 60th.
A few of my criticisms:

1) You can’t accurately rank an advocacy program solely on team results in national competitions. Those of us who have been doing this for any amount of time know that a top finish—while a nice affirmation of a school’s performance—can be as much attributable to luck as it is to skill. Anyone who plays poker will understand: Over the long-term, you won’t win if you’re not good. But anything can happen at a single competition. A terrible team can skate through to a top finish on a brief that inexplicably scores well. A great team can lose a round and be sent packing when one judge thinks an advocate looked at him funny. My point? Finishes at competitions are extremely important, but they don’t even begin to paint an accurate picture of the merits of a law school’s moot court or mock trial program. Moreover, by solely focusing on results, there’s likely to be a great degree of variance from year to year. One school may be top ten today, and entirely unranked next year. Credible ranking systems don’t feature that degree of volatility.

Many take issue with US News’ ranking methodology for the very reason that it does not allow for volatility given its heavy weighting of reputation. Harvard could hire magicians to teach its students, and would still remain in the top tier of US News for several years. Though US News is credible, it may be credible for reasons other than its lack of volatility.

2) You can’t give equal weight to, for lack of a better term, “unequal” competitions. Obviously, not all moot court competitions carry the same degree of distinction. Some (such as the National Moot Court Competition and the ABA National Appellate Advocacy Competition) are far more prestigious than others, yet Koppen makes no adjustment for that. Instead, he blindly awards points based on how many teams entered a competition and excludes from his rankings competitions that don’t meet his arbitrary minimum number of teams (thankfully, he’s recently reduced that qualifying number from 24 to 16).

The problem, of course, is that you can’t judge the prestige of a competition by merely eyeballing the number of teams that enter. Some competitions are (often by design) small and yet incredibly prestigious. Others are large merely because they’re held in a fun place (like the Tulane Sports Law competition held in New Orleans during Mardi Gras). Koppen’s system is akin to ranking Division III football teams alongside Division I football teams based solely on wins or number of teams in their division, without taking into consideration the fact that the divisions are completely unequal in prestige or strength.

An example: This year’s 16th-ranked team is the University of Hawaii, whose only (albeit impressive) accomplishment was finishing in the top three of a single competition. But because that competition fielded 70 teams, the points Hawaii racked up put it on the verge of a “Top-15” program. Perhaps they are a top-15 program. But we’re willing to say this based on one semi-final finish at one competition? Really?

I take issue with the word “blindly.” Given my involvement with ABA NAAC and NYC Bar National MCC, and my extensive interaction with program directors early on in the process of formulating methodology, I’m aware of the historical prestige of the two. My position has always been that my ranking enables prestige to finally evolve. Those programs not stuck on historical prestige will give about five seconds’ thought before deciding to send their best teams to the 70-team Pace Environmental MCC, rather than National MCC (that is, until National MCC consolidates its regionals, which prestigious Jessup has just done). If a program happens to like historical prestige, well, then, winning these competitions will have to be its own reward, won’t it?

3) The rankings ignore regional champions and runners up at the National Moot Court Competition.

Yes, no points for top finishes at regionals of NYC Bar National MCC because, currently, the regionals are too small. Your sentence could go either way, so I’ll also state that “runners up” at the national round do get points, but you may have been commenting on “regional…runners up,” who do not receive points.

He’s of the opinion (likely because he never competed in the National) that the competition’s regional rounds are too small to be worthy of inclusion. What he doesn’t understand is that most law schools send their very best team (or teams) to the regional rounds of the National, so winning an eight-team competition among your region’s “A-list” teams is far more difficult than winning a run-of-the-mill, 30-team, open-entry competition. He’ll only award points if a school advances to a top finish at the national rounds of the National, when in reality, some sort of recognition should be given to the 26 teams that even qualify to get there. It’s arguably THE premier event of the entire year, and yet under his system, it’s given less significance than any other competition that has more than 26 teams entering.

Again, I’m well aware of the historical prestige of NYC Bar National MCC. Prestige can evolve, and I expect it to.

4) He refuses to acknowledge success at invitational or state-level competitions. Why should a strong program be penalized because it has the reputation and opportunity to enter an invitational? Likewise, there are some state competitions that are extraordinarily competitive. I recognize my bias, but Texas’s eight law schools (led by South Texas College of Law) are consistently among the top performers on the national stage. At last year’s ABA NAAC finals in Chicago, 5 of the nation’s 24 teams were from Texas, despite the fact that the state (the nation’s second-most-populous) has just 8 law schools (California (#1), by contrast, has 61 law schools, New York (#3) has 15, Florida (#4) has 11, and Illinois (#5) has 9, with 6 just in the city of Chicago). Our state competition (held each summer) is a virtual bloodbath, and winning it carries with it national prestige. He won’t count that in his rankings, and that’s a mistake.

The playing field would not be level. Schools outside Texas wouldn’t have the “opportunity” to earn these points.

5) Why rank according to the calendar year and not the academic year? Nothing in the academic world works on a January-to-December basis. It’s inaccurate to say “School X is the strongest in moot court this year” when you’re taking into consideration results that are broken up over two academic classes. Plus, a large number of competitions really start in the fall—with the release of problems and brief deadlines—and conclude in the spring.

Every program is different. I’ve seen programs in which a student’s first competition is in the spring and his/her second competition is in the fall. Other programs do the exact opposite. Commenting in general, I hear this argument the most from programs with “sour grapes.” I’m comforted by the fact that, even had these programs known that I would choose calendar over academic year, their behavior wouldn’t have been affected.

6) Koppen refuses to consider brief and advocate awards. Why? Aren’t those important indicators of a strong program?

Brief and advocacy scores are the dispositive factors for advancement to the semifinals and beyond, which I do count. (I’ve added additional recognition by starting up the Advocacy and Brief sub-rankings.)

7) The rankings system really favors quantity, and not necessarily quality. A school like South Texas will ALWAYS be in the top three of Koppen’s rankings because it sends a team to nearly every competition. While it’s true that it will have to finish in the top four to get any points, the simple law of probability tells us that South Texas will garner its share of top finishes when it attends 30-plus competitions every year. Texas Tech, on the other hand, picks and chooses 10 or so competitions, some of which may not meet Koppen’s arbitrary cutoff of participating teams. This system virtually ensures a top ranking to any school that decides to spend the money to attend as many competitions as it can.

Let’s say a small program enters four competitions a year. That’s 8-12 students being developed. By contrast, let’s say a program with more money enters 12 competitions a year, developing 24-36 students. If I were choosing where to attend law school based solely upon whether or not it will develop me into a skilled advocate, I would probably select the program developing more students, given the increased likelihood that I would be developed. More competitions, more students competing, and (hopefully) more students being developed.

By the way, don’t misconstrue this point to be a slight on South Texas. Dean Treece and his coaches have built an enviable program and should be at the top of any rankings list regardless of methodology. The problem is that Koppen’s rankings reward and favor South Texas for the quantity of competitions it enters, as opposed to the actual strength of its advocates.

Those are my major complaints. Don’t take them as an attack on Koppen—again, I think his effort is admirable, and perhaps a good start. Given the effort many of us are expending on advocacy activities, a credible ranking system would be a positive thing. Over the next year, I’ll be working to develop a formula that hopefully better represents strength in advocacy. The ranking system will certainly include competition results, but I’ll also seek out a reliable method of incorporating competition prestige, individual advocate (both oral and writing) awards, a school’s non-competition programs, and reputation among deans and advocacy directors.

I welcome your (and Mr. Koppen’s) assistance and input. Throughout the next year, I’ll periodically update the status of my efforts and the relevant thoughts anyone wishes to share.

The power of nonverbal persuasion in oral argument

Professor Michael Higdon at UNLV has posted his latest article, Oral Argument and Impression Management: Harnessing the Power of Nonverbal Persuasion for a Judicial Audience, on SSRN.

The abstract:

In essence, my article utilizes social science research on the topic of nonverbal communication in order to advance our understanding of what makes for effective oral advocacy. Currently, there are no articles that 1) give a comprehensive summary of the relevant social science research within the area of nonverbal persuasion and 2) apply that research specifically to the area of oral argument. My article attempts to fill both of these needs.

As you will see in the article, nonverbal communication goes well beyond simple hand gestures; it also encompasses how a person speaks, how a person dresses, a person's facial expressivity, and even such things as a person's posture and head position. Furthermore, social science research reveals that both these and other nonverbal cues can greatly impact the perceived credibility and persuasiveness of a speaker. Not only that, but in many instances, listeners tend to place even more reliance on what a speaker is saying nonverbally than on the actual substance of the speaker's presentation. Given that attorneys should seek to maximize their persuasive potential during oral argument, knowledge of this research and these various principles is essential. Section III of my article explores this research.

Of course, what makes nonverbal persuasion somewhat different for oral advocates comes from the fact that the attorney is directing his argument not to a jury, but to a judge. As my article details, one of the ways a speaker nonverbally increases his ability to persuade is by employing nonverbal cues that enhance the speaker's perceived dominance. When appearing before a judge, however, the attorney must keep in mind that 1) it is the judge who is most dominant and 2) the judge expects nonverbal cues from the attorney signaling that the attorney understands this hierarchy. Again using social science research, Section IV of my article explores this balancing act between dominance and submission and offers concrete advice on how oral advocates can navigate that somewhat thorny issue.

September 24, 2008

Greta Van Susteren thinks law school grading is "a fraud"

I wonder what she thinks about judging advocacy competitions.

September 23, 2008

In search of a reliable advocacy rankings system

Over the past year, I’ve noticed several respected blogs and websites (here, here, and perhaps most notably, Blog Emperor Caron, here) link to a site called “Law School Advocacy” (formerly “Best Moot Court Programs”), which purports to be a running ranking of law school moot court and trial advocacy programs. To be clear, I have no problem with the general premise of a rankings system. Nearly every law school in the country—mine included—trumpets its advocacy program as among the “best in the nation,” despite the fact that it’s mathematically impossible for all 196-plus schools to have “top ten” programs. From that standpoint alone, it would be nice to have a rankings system to acknowledge those programs that are truly a cut above the rest. My complaint regarding the “Law School Advocacy” site, which I suspect isn’t much different from the criticisms of the US News law school rankings, is that the methodology used by “The Ranker” is so severely misguided.

I think the first thing that should be recognized is that the self-proclaimed “ranker” is not a professor, nor a judge, nor a well-practiced attorney, nor a long-time observer of law school advocacy competitions. He’s Brian Koppen—a 2007 law school graduate who, like thousands of others each year, competed in moot court as a student. Of course, that doesn’t mean his rankings should be disregarded. Indeed, if they were grounded in solid methodology, his experience would be less relevant. And I do give him credit for attempting to do something that has never been done before. But I am a bit puzzled that nobody seems interested in asking whether the rankings have any degree of reliability, or whether the ranker himself has the knowledge and expertise to issue them. (To his credit, Professor Yates expressed mild hesitation on this issue before linking to the rankings.)

I recognize that my criticisms are likely to be construed as sour grapes by a professor at a school that fails to crack Koppen’s current “top 72.” But I arrive at Texas Tech as the just-departed Director of Advocacy at tiny Texas Wesleyan University School of Law, which Koppen ranked 10th in 2007, and which currently sits at eighth in the 2008 rankings. As proud as one would be of a top-ten program (and I do believe that Texas Wesleyan is a top-ten program regardless of the rankings used), I can’t ballyhoo the results of a methodology so riddled with problems. I’d have these objections whether my program was 1st, 6th, or 60th.
A few of my criticisms:

1) You can’t accurately rank an advocacy program solely on team results in national competitions. Those of us who have been doing this for any amount of time know that a top finish—while a nice affirmation of a school’s performance—can be as much attributable to luck as it is to skill. Anyone who plays poker will understand: Over the long-term, you won’t win if you’re not good. But anything can happen at a single competition. A terrible team can skate through to a top finish on a brief that inexplicably scores well. A great team can lose a round and be sent packing when one judge thinks an advocate looked at him funny. My point? Finishes at competitions are extremely important, but they don’t even begin to paint an accurate picture of the merits of a law school’s moot court or mock trial program. Moreover, by solely focusing on results, there’s likely to be a great degree of variance from year to year. One school may be top ten today, and entirely unranked next year. Credible ranking systems don’t feature that degree of volatility.

2) You can’t give equal weight to, for lack of a better term, “unequal” competitions. Obviously, not all moot court competitions carry the same degree of distinction. Some (such as the National Moot Court Competition and the ABA National Appellate Advocacy Competition) are far more prestigious than others, yet Koppen makes no adjustment for that. Instead, he blindly awards points based on how many teams entered a competition and excludes from his rankings competitions that don’t meet his arbitrary minimum number of teams (thankfully, he’s recently reduced that qualifying number from 24 to 16).

The problem, of course, is that you can’t judge the prestige of a competition by merely eyeballing the number of teams that enter. Some competitions are (often by design) small and yet incredibly prestigious. Others are large merely because they’re held in a fun place (like the Tulane Sports Law competition held in New Orleans during Mardi Gras). Koppen’s system is akin to ranking Division III football teams alongside Division I football teams based solely on wins or number of teams in their division, without taking into consideration the fact that the divisions are completely unequal in prestige or strength.

An example: This year’s 16th-ranked team is the University of Hawaii, whose only (albeit impressive) accomplishment was finishing in the top three of a single competition. But because that competition fielded 70 teams, the points Hawaii racked up put it on the verge of a “Top-15” program. Perhaps they are a top-15 program. But we’re willing to say this based on one semi-final finish at one competition? Really?
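Under a rule of the kind described (points scaled by field size, subject to the 16-team cutoff), the Hawaii result follows mechanically. Here is a minimal sketch; the 16-team minimum comes from the discussion above, but the place multipliers are entirely my own invention, since the site's actual point values aren't reproduced in this post.

```python
# Hypothetical sketch of a field-size-driven scoring rule like the one
# criticized above. MIN_FIELD comes from the post; the multipliers for
# each finishing place are invented for illustration only.

MIN_FIELD = 16  # competitions with smaller fields earn no points at all

PLACE_MULTIPLIER = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.5}  # top-four finishes only

def points(field_size: int, place: int) -> float:
    """Points for finishing in `place` at a competition of `field_size` teams."""
    if field_size < MIN_FIELD or place not in PLACE_MULTIPLIER:
        return 0.0
    return field_size * PLACE_MULTIPLIER[place]

# One top-three finish in a 70-team field outscores winning a 30-team
# event outright, which is how a single result can vault a school toward
# the top 15.
print(points(70, 3))   # semifinal showing at a 70-team competition
print(points(30, 1))   # champion of a 30-team competition
print(points(15, 1))   # below the cutoff: nothing
```

The exact numbers are beside the point; under any formula in this family, the size of the field dominates the quality of the finish.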

3) The rankings ignore regional champions and runners up at the National Moot Court Competition. He’s of the opinion (likely because he never competed in the National) that the competition’s regional rounds are too small to be worthy of inclusion. What he doesn’t understand is that most law schools send their very best team (or teams) to the regional rounds of the National, so winning an eight-team competition among your region’s “A-list” teams is far more difficult than winning a run-of-the-mill, 30-team, open-entry competition. He’ll only award points if a school advances to a top finish at the national rounds of the National, when in reality, some sort of recognition should be given to the 26 teams that even qualify to get there. It’s arguably THE premier event of the entire year, and yet under his system, it’s given less significance than any other competition that has more than 26 teams entering.

4) He refuses to acknowledge success at invitational or state-level competitions. Why should a strong program be penalized because it has the reputation and opportunity to enter an invitational? Likewise, there are some state competitions that are extraordinarily competitive. I recognize my bias, but Texas’s eight law schools (led by South Texas College of Law) are consistently among the top performers on the national stage. At last year’s ABA NAAC finals in Chicago, 5 of the nation’s 24 teams were from Texas, despite the fact that the state (the nation’s second-most-populous) has just 8 law schools (California (#1), by contrast, has 61 law schools, New York (#3) has 15, Florida (#4) has 11, and Illinois (#5) has 9, with 6 just in the city of Chicago). Our state competition (held each summer) is a virtual bloodbath, and winning it carries with it national prestige. He won’t count that in his rankings, and that’s a mistake.

5) Why rank according to the calendar year and not the academic year? Nothing in the academic world works on a January-to-December basis. It’s inaccurate to say “School X is the strongest in moot court this year” when you’re taking into consideration results that are broken up over two academic classes. Plus, a large number of competitions really start in the fall—with the release of problems and brief deadlines—and conclude in the spring.

6) Koppen refuses to consider brief and advocate awards. Why? Aren’t those important indicators of a strong program?

7) The rankings system really favors quantity, and not necessarily quality. A school like South Texas will ALWAYS be in the top three of Koppen’s rankings because it sends a team to nearly every competition. While it’s true that it will have to finish in the top four to get any points, the simple law of probability tells us that South Texas will garner its share of top finishes when it attends 30-plus competitions every year. Texas Tech, on the other hand, picks and chooses 10 or so competitions, some of which may not meet Koppen’s arbitrary cutoff of participating teams. This system virtually ensures a top ranking to any school that decides to spend the money to attend as many competitions as it can.
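The quantity effect is easy to see with a back-of-the-envelope simulation. Assume, purely for illustration, that skill plays no role at all: every team at a 30-team competition has the same 4-in-30 chance of a top-four finish.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def top_four_finishes(entries: int, field: int = 30, trials: int = 10_000) -> float:
    """Average number of top-four finishes per year for a school entering
    `entries` competitions, assuming pure luck: each entry independently
    has a 4-in-`field` chance of a top-four finish."""
    p = 4 / field
    total = sum(
        sum(1 for _ in range(entries) if random.random() < p)
        for _ in range(trials)
    )
    return total / trials

# With no skill difference whatsoever, the heavy entrant still piles up
# roughly three times as many scoring finishes as the selective program.
print(round(top_four_finishes(30), 2))  # school entering 30 competitions
print(round(top_four_finishes(10), 2))  # school entering 10 competitions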

By the way, don’t misconstrue this point to be a slight on South Texas. Dean Treece and his coaches have built an enviable program and should be at the top of any rankings list regardless of methodology. The problem is that Koppen’s rankings reward and favor South Texas for the quantity of competitions it enters, as opposed to the actual strength of its advocates.

Those are my major complaints. Don’t take them as an attack on Koppen—again, I think his effort is admirable, and perhaps a good start. Given the effort many of us are expending on advocacy activities, a credible ranking system would be a positive thing. Over the next year, I’ll be working to develop a formula that hopefully better represents strength in advocacy. The ranking system will certainly include competition results, but I’ll also seek out a reliable method of incorporating competition prestige, individual advocate (both oral and writing) awards, a school’s non-competition programs, and reputation among deans and advocacy directors.

I welcome your (and Mr. Koppen’s) assistance and input. Throughout the next year, I’ll periodically update the status of my efforts and the relevant thoughts anyone wishes to share.

September 22, 2008

New moot court advisors' listserv

The Legal Writing Institute's Moot Court Committee has started a new listserv for moot court coaches, advisors, and professors. It's just a week old, and we've already had several great discussions.

To subscribe, e-mail Jim Dimitri at jddimitr@iupui.edu.

The LWI’s Moot Court Committee is made up of co-chairs Melissa Greipp (Marquette University Law School) and Jim Dimitri (Indiana University School of Law – Indianapolis), and members Coleen Barger (UALR - William H. Bowen School of Law), Jason Cohen (Rutgers School of Law – Camden), Jessica Price (Marquette University Law School), and Allison Martin (Indiana University School of Law – Indianapolis). Thanks again to them for their work in launching what I think will be an outstanding resource for those of us involved in moot court.

Helpful list of national moot court, mock trial, and ADR competitions

This past summer, Professor Todd Bruno at LSU sent out an e-mail announcing that his Moot Court and Trial Advocacy Boards had compiled a list of the national moot court, mock trial, and ADR competitions that are offered across the country. Click here for the impressive list...

Thanks to Professor Bruno for sharing his students' hard work with the law school advocacy community.

September 17, 2008

Hi there...

Welcome! My name is Rob Sherwin, and I am the Director of Advocacy Programs at Texas Tech University School of Law. Given the seemingly exponential increase in law school moot court, mock trial, and alternative dispute resolution activities we’ve seen in the past few years, I thought the timing was right for a blog directed at those of us who teach, coach, and serve students in the world of advocacy and skills instruction. Of course, students are welcome too.

I envision that my primary blogging activities will consist of postings and commentary on competition news, results and events, conferences and programs for skills-based law professors, and the various issues that we face as coaches, advisers, and legal educators. But certainly a blog is nothing without readership, so I welcome suggestions as to how to make this a place that can better benefit us all. Let me know if you have ideas or news that need spreading, and don’t be afraid to feed me competition results so I can share your successes.