Steve Bradford yesterday posted a thoughtful critique of law reviews (as is usual for his posts). I had drafted a comment, but Steve suggested that I post links to my prior posts separately, so here goes, along with what has turned out to be a lot of additional commentary.

I think Steve has some valid (and compelling) points. As I have written before, though, I can't go as far as he does. I won't rehash everything I have written on this subject, but one of my earlier posts, Some Thoughts for Law Review Editors and Law Review Authors, covers a lot of that ground.

Last week, I posted a response to the New York Times article criticizing law reviews. A friend pointed me to a cover story from The Economist, How science goes wrong: Scientific research has changed the world. Now it needs to change itself. It's an interesting read. This paragraph jumped out at me:

In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.

The article also calls for more acceptance