Thursday, December 15, 2011

reply to "Peer review without peers?"

I just came upon this post by Aaron Shaw about a somewhat unusual idea for the scientific peer review process. Since I did not want to leave a lengthy text in the comments section of his post, I decided to put it here. Aaron, I would be happy to hear any comments you might have.

So here is the thing: we (here at ETH) have been thinking quite a bit lately about issues of scientific evaluation and peer review. Two questions arise in particular: 1) How can one judge the value of research performed in an interdisciplinary research environment? And 2) How can we get *good* research by *unknown* people into high-impact journals and *bad* research by *established* people out of them, preventing a few scientists from de facto deciding what is "hype" at the moment and what is not? But I will try to post about this another time. So let's talk about Aaron's post.

Aaron is basically talking about the idea of using wisdom-of-crowds effects for scientific peer review:
...what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?
And he continues:
I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make the precision of existing peer review systems any worse and could potentially increase the speed. If it worked at all along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.
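Before I respond, here is a minimal toy sketch of the statistical intuition behind such an aggregation scheme. This is purely my own illustration, not anything from Aaron's post; all numbers (the quality scale, the noise levels, the crowd size) are made-up assumptions:

```python
import random

# Toy simulation of the wisdom-of-crowds effect: many noisy,
# independent non-expert scores, once averaged, can approach the
# accuracy of a single expert score. All parameters below are
# invented for illustration only.

random.seed(42)

TRUE_QUALITY = 7.0   # hypothetical "true" quality of a paper (0-10 scale)
EXPERT_NOISE = 1.0   # assumed std. dev. of an expert's error
AMATEUR_NOISE = 3.0  # assumed std. dev. of a non-expert's error
N_AMATEURS = 50      # assumed size of the reviewing crowd

def score(noise):
    """One reviewer's noisy estimate of the paper's quality."""
    return random.gauss(TRUE_QUALITY, noise)

expert_score = score(EXPERT_NOISE)
crowd_scores = [score(AMATEUR_NOISE) for _ in range(N_AMATEURS)]
crowd_mean = sum(crowd_scores) / len(crowd_scores)

print(f"true quality:  {TRUE_QUALITY:.2f}")
print(f"single expert: {expert_score:.2f}")
print(f"crowd average: {crowd_mean:.2f}")

# Averaging shrinks the crowd's error roughly by sqrt(N_AMATEURS),
# but only if the individual errors are independent -- which is
# exactly what breaks down when non-experts share the same blind
# spots (e.g. not knowing the existing literature).
```

The catch, as the comment at the end notes, is the independence assumption, and that is where my objection below comes in.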
First of all, I agree, it would be an interesting thing to test whether non-expert crowds might perform as well as "experts" in a peer review process. Here is my predicted outcome: for the social sciences and qualitative economics papers, this might be the case most of the time. It will *not* work for the vast majority of papers in the quantitative sciences. But this is actually not the point I want to make here.

The point is the following: what Aaron and others were thinking of is how to "speed up" the peer review process and "...reduce the burden on reviewers." Humbly, I think those are *completely wrong* incentives from an academic point of view. Reviewing is a mutual service scientists provide among their peers. If our goal is to reduce "the burden" of reviewing so many papers, we all should write less. (This might be a good idea anyway.)

There is also a fundamental problem with peer review without peers: non-experts will not know the existing literature, so redundancy will be increased (even more), and this is something you cannot get rid of without peers. If we went in this direction, the "reviewing crowd" would basically be a detector of "spam papers" and nothing more. But those are not the papers which take a lot of time to review; they are often very easily identified.

What really makes peer review so time-consuming is a) the complexity of papers and b) their quantity. We should not aim at reducing a), because this is just the way scientific evolution goes: once the easy work has been done, the complicated details remain. (Einstein famously (supposedly) said that he no longer understood his own general theory of relativity once mathematicians started working on it.) So I assume that, in order to get rid of all the papers to review while maintaining scientific excellence, option b) is the only choice. And, as I said earlier, this might not be a bad idea at all. It might also have a positive effect on the content and excellence of the published papers. But decreasing the number of published papers is complicated and would require us to *rethink* how science is done today. But this is material for another post.