Yesterday, the New York Times published what I consider a mediocre criticism of law reviews. Not that some criticism isn't valid. It is. I just think this one was poorly executed. Consider, for example, these thoughtful responses from Orin Kerr and Will Baude.

As I have thought about it, one thing that struck me about the Times article was the opening:

“Would you want The New England Journal of Medicine to be edited by medical students?” asked Richard A. Wise, who teaches psychology at the University of North Dakota.

Of course not. Then why are law reviews, the primary repositories of legal scholarship, edited by law students?

I don't disagree with the premise, but note how limiting it is. First, it talks about one journal, and a highly regarded one at that. I know some people hate all law reviews, but I humbly suggest that most people view elite journals like the Yale Law Journal a little differently than they do the average law review. (It's also true that some journals, the Yale Law Journal among them, happen to use some forms of peer review in their process.)

Second, the implication is that medical journals have it all figured out. That's apparently not true, either. An article from the Journal of the Royal Society of Medicine, “What errors do peer reviewers detect, and does training improve their ability to detect them?”, had the following goal:

Objective: To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection.

The study concluded:

Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of study. Short training packages have only a slight impact on improving error detection.  

Consider also this article from National Geographic, “Fake Cancer Study Spotlights Bogus Science Journals”:

A cancer drug discovered in a humble lichen, and ready for testing in patients, might sound too good to be true. That's because it is. But more than a hundred lower-tier scientific journals accepted a fake, error-ridden cancer study for publication in a spoof organized by Science magazine.

Finally, the problem for all kinds of journals is hardly new. This 1982 study, “Peer-review practices of psychological journals: The fate of published articles, submitted again,” determined that the problem can also run in the other direction. From the Abstract:

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

In the interest of full disclosure, I admit I have a fondness for law reviews. I am a former editor in chief of one, have served as an advisor to another, serve as the current president of our law review alumni association, and serve on the review's Advisory Board of Editors. The things I learned, from (usually) patient and careful authors, were exceedingly valuable and still guide what I do now. I have also worked with several journals and reviews from the author side, and I have usually been impressed, and sometimes very frustrated, which is also true of almost every job experience I have ever had. And I am confident every editor in chief of a law review has worked with an author or two who drove them nuts.

I understand the frustrations, and the criticisms are often valid, at least to a point. But let's not undercut the efforts of committed and careful, if not experienced, student editors, who usually work their tails off. And let's not assume that every other discipline has it all figured out. I think it's clear they don't. There may be a better system (and I suspect there is), but let's not keep dumping on a system (and on students who work hard) without proposing some alternative that we have reason to believe will actually be better.