Last week, I posted a response to the New York Times article criticizing law reviews. A friend pointed me to a cover story from the Economist, How science goes wrong: Scientific research has changed the world. Now it needs to change itself. It's an interesting read. This paragraph jumped out at me:
In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
The article also calls for greater acceptance of what it calls "humdrum" or "uninteresting" work that confirms or replicates other trials, a long-standing practice underappreciated by both journals and those who award grants.
Not all is lost. One interesting suggestion: "Peer review should be tightened—or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments." The article notes that the fields of physics and mathematics have made progress using the latter method.
We do have some versions of post-publication evaluation in the law review world, often published as responses to the work of others, or as articles that build upon such work. Over at The Conglomerate, the post Bebchuk v. Lipton on Corporate Activism provides a good example of two papers taking opposite views, with David Zaring's post itself serving the role of post-publication evaluator (on a small, but I think important, scale):
Some of the studies cited are quite old, and not all of the journals are top-drawer. But others seem quite on point. Perhaps the disputants will next be able to identify some empirical propositions with which they agree, and others with which they do not (other than, you know, sample selection).
Many blogs do this (including, sometimes, the Business Law Prof Blog), and I think it is an important role. Perhaps it is one that should be more formalized so that the value of such commentary can be more clearly recognized as part of the scholarly realm. For example, perhaps law reviews and other journals should consider publishing updates, major citations, or critiques from various sources made about articles the review/journal has previously published.
There are many ideas out there, and we should keep looking for ways to develop useful scholarship. And by useful, I mean complete, thoughtful, and careful work, including what some people might consider "not novel," if not "humdrum" or "uninteresting." We don't always need the legal equivalent of studies about drinking wine and letting kids play video games, not that there's anything wrong with either of those things.