Earlier this year I was diagnosed with cancer of the salivary gland – something I’d never heard of and didn’t know anything about. My doctors, thankfully, know a lot about it, but they speak a whole different language that’s largely incomprehensible to non-scientists like me.
So, like most people in this sort of situation, I headed straight to Google. As you can imagine, there’s an awful lot of information out there – even for a rare cancer like this (just one in 200,000 people in the US is diagnosed annually). And some of it was really scary – think blogs and chatrooms filled with horror stories about people who have undergone major, often disfiguring, surgery; or who haven’t been able to eat real food for years. But luckily there is also some very good information around. And by good, what I really mean is peer-reviewed.
For me as a patient, knowing that the research I was reading had been peer reviewed (or was based on peer-reviewed articles) gave me much more confidence in it. To quote Peer Review: The Nuts and Bolts, a new booklet from Sense About Science, the British-based charitable trust that equips people to make sense of scientific and medical claims in public discussion:
Just as a washing machine has a quality kite mark, peer review is a kind of quality mark for science. It tells you that the research has been conducted and presented to a standard that other scientists accept.
So for non-scientists like me, it provides a level of reassurance that would not otherwise be there.
The peer review process, however, is increasingly being challenged by some within the scholarly community. The Sense About Science booklet goes on to say:
At the same time [peer review] is not saying that the research is perfect (nor that a washing machine will never break down).
It’s certainly not without its flaws; to quote Nigel Hawkes, author of “Straight Statistics,” in the same booklet:
It’s a good thing most scientists are honest, because peer review offers the greatest possible temptation to steal ideas, to show favor to former students, to boost favored theories, or to do down rivals.
Others simply ask, why bother with the traditional process of single or double-blind peer review when you can just make your research available immediately online for people to comment on? In some subjects, such as physics (where research is typically posted on arXiv before being submitted for publication), that’s already the norm – mainly because physicists often work in large, diverse, and global teams where it makes sense to share results ahead of publication.
However, rather than dying out, traditional peer review still seems to be thriving in most disciplines. Sense About Science also carried out a major survey on peer review a couple of years ago, which both demonstrates the commitment of most in the scholarly community and confirms Hawkes’ view of most scientists as honest. The survey found that 90% of respondents review articles because they like playing their part as a member of the academic community; 85% enjoy seeing papers and being able to improve them; and 91% believe their own last paper was improved through the peer review process.
It helps, of course, that we’ve seen a number of improvements to the process in recent years, including initiatives such as CrossMark, which helps identify and track changes and retractions, and better anti-plagiarism tools, such as CrossCheck. While it may still not be a perfect system from a scholarly perspective, given that science is, by its nature, an iterative process, peer review provides an initial level of validation for original research that enables other scholars to challenge, critique, and improve on it.
In today’s “just Google it” world, where more people have more access to more information than ever before, it must make sense to ensure that everyone understands the difference between what they read in a peer-reviewed article and, say, a blog; or between a site like cancer.org (the American Cancer Society’s website), which includes full references for the mainly peer-reviewed information provided, and what is essentially a sales and marketing site for an individual physician or practice. Many publishers (including, in the interest of full disclosure, Wiley) already provide support for Sense About Science’s efforts to promote peer review, as well as for CrossMark, CrossCheck, and other cross-publishing initiatives that improve the quality of published research. But could we do more to ensure that peer review is understood and valued outside of our community as well as within it? For example, how about educating high school students – the next generation of scientists and scholars – about the importance of peer review? Encouraging high school science teachers to make their students aware of Peer Review: The Nuts and Bolts (freely available on the Sense About Science website) would be a great place to start.
28 Thoughts on "In Praise of Peer Review – A Personal Perspective"
There seem to be two different issues here. One is the value of peer review to the scholarly literature. This is considerable and I think it lies largely in sorting, filtering and ranking, as well as improving communication. But high school students do not, indeed cannot, read the literature, so this issue is above them. I would hesitate to add it to the already crowded curriculum.
The second issue is the relative trustworthiness of web content that claims to be derived from peer reviewed literature, versus that which does not make this claim. It is not clear that this distinction is valid enough, or clear enough, to be taught in high school. There is a lot of bad stuff that claims to be based on peer review, and a lot of good stuff that does not.
Thanks David. Although you’re right that high school students mostly wouldn’t read the high-level research, many of them do read articles in the more mainstream scholarly journals. So I think that a basic introduction to the concept of peer review would be valuable – maybe selectively, in AP or equivalent classes. And that introduction could include a component about how best to evaluate the quality of content they find online.
Alice, can you give me an example of a journal that high school students can read? I ask because I recently finished a large project cataloging the scientific concepts typically taught in K-12 science, plus many of those taught in basic and advanced college courses. We used it to develop a search algorithm that sorts content by grade level, as well as being a writing guide for education content. (See http://www.stemed.info/)
The point is that I have a hard time imagining a journal article that only uses K-12 concepts, and no college level concepts. One should not introduce higher level concepts that are not being taught, as they cannot be understood and they disrupt the teaching sequence. This is also true of advocacy materials. If one wants to target AP only then the same rules apply, but with a slightly higher grade level cutoff.
In addition, we found that there is tremendous time pressure to teach what the state standards call for, and little else. There is also pressure to teach more engineering and computer science, plus critical thinking and inquiry methods. So I think it would be very hard to justify spending time on peer review. Science education in the US has become a regulatory regime. See my little essay here: http://scholarlykitchen.sspnet.org/2011/11/10/education-regulation-new-challenges-and-new-opportunities/
Beyond that, and as I suggested, I find it hard to see how peer review plays into evaluating the quality of online content, given that the online content is not itself peer reviewed. There is no correlation, not that I can see.
I am thinking of journals like Science and Nature, David – more accessible, but still peer-reviewed journals that all my children were encouraged (and sometimes required) to read by their high school teachers.
Thanks for the link to your post on science education, which I hadn’t read before and found very interesting. Even though I’ve lived in the US for many years, as a Brit I find the lack of national standards or testing in all subjects, not just science, quite disconcerting – both as a parent and as a publisher!
So far as I know the only part of Science that is peer reviewed is the back half — the reviews, brevia, research articles and reports, all of which are just as technical as in any other journal. Most use graduate school level concepts. The front half, from News of the Week to Perspectives, is far less technical, actually a popular magazine, and not peer reviewed.
As for national standards we have rigorous state standards because education is not a federal province. Proposed national standards are in the works, but who knows. It is a policy issue.
Alice (or anyone), if you want to push to include the concept of peer review (or anything else) in the proposed new national science standards for K-12, here are the people to push: http://www.nextgenscience.org/. Don’t mention my name as I have been rather critical of them.
Proposed national science standards have been around since at least 1988, but never went anywhere because the States guard their education prerogative. This time may be different, for a strange reason. Many years ago Congress passed the No Child Left Behind law. As I understand it, NCLB was designed to focus attention on the least achieving students and schools, and it did so. But the mechanism was an impossible requirement that every child be proficient in every subject by 2014, with annual progress along the way. Powerful limiting factors like student ability and interest were simply ignored. Needless to say it did not happen and the deadline is looming.
So the system is starting to gyrate. One gyration is that the Feds are granting temporary extensions if the States adopt “common core standards.” Math and English are already out and many states have adopted them. The Nextgen project is writing the common core Science standards.
Happy to help. The publishers seem to have been absent from this process. But I imagine they will be happy with any national standard, since the proliferation of detailed state standards has created a publishing nightmare. The K-12 science is largely the same overall, but the combination of topics that is taught in any given grade varies from state to state, due to something called spiraling (among the topics). So you cannot have a common textbook for a given grade.
I would love to do a 3D visualization of K-12 spiraling. When you combine spiraling with the intrinsic sequencing of concepts in each topic the structural topology of K-12 science is amazingly complex. No wonder students get confused.
This is really just one facet of a much broader problem of people not understanding, or not being willing to evaluate, the sources of their information in any area, not just in medicine/science. I think a better public understanding of peer review would probably be beneficial, but I doubt a public information campaign such as this will do much good unless it is built on a more basic media literacy campaign that makes the general public comfortable questioning truth claims in the media, which I don’t feel they currently are.
Also, I think that even if the general public had a good understanding of how peer review worked, it’s still really hard to judge claims of authority from outside of a community of research. An oncologist knows which cancer journals are top tier, which are second or third tier, and which may be suspect even if they claim to be reviewed, but I do not.
So, even though I have a firm understanding of peer review (having administered peer review for several journals over several years), I still don’t feel comfortable evaluating the reliability or authority of journals in fields I don’t know.
Alice, David, do you know of any peer-reviewed article that shows that peer review improves, or guarantees, the quality of the publication?
Hi Aaliaksandr, I didn’t know of any such article so I took a quick look at the Learned Publishing archives, which seemed like the obvious place to start. The one article on this topic I was able to find from a (very quick!) scan was a 2003 piece by Yanping Lu, which you can find here. She concluded that there is currently no objective way to measure improvement in manuscript quality. If anyone knows of other/more recent research on this I’d be very interested. Surveys of academics (such as the SAS one quoted above) typically find that most authors value peer review and believe it improves their papers.
As someone who does the science of science I agree with Yanping Lu. I know of no empirical measure of quality. On the other hand, if three experts in my field review my work and suggest what they consider to be improvements, and I agree, and make them, there is a strong prima facie case that these are in fact improvements. The concept of improvement is subjective, but still quite useful, as with many human concepts.
Alice, David, thank you for your comments and the link. Still, we cannot trust our intuition – there are too many controversies. On the other hand, I agree with you – it seems that overall peer review helps to filter out junk and supposedly gets better papers published. An analysis we did in the LiquidPub project (10,000 reviews from various conferences, unpublished) suggested that peer review is good at selecting the top 15-20% of conference papers and the bottom 15-20%, but acts very randomly in the middle. I remember Eamonn Keogh once drew similar conclusions in a SIGKDD’09 keynote.
As for your question about more recent references – in our recent paper there was a short overview of peer review controversies – http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2011.00056/full – see Section 2.
Thanks Joel. In case you’re interested, Sense About Science is running a great campaign to encourage the public to question scientific claims (in the media or elsewhere). It’s called Ask for Evidence and you can find more information here: http://www.senseaboutscience.org/pages/a4e.html
And how about journals that claim to conduct peer review but don’t, such as those OA journals exposed as scam operations? How does a nonexpert user deal with fraud?
Hi Sandy, unfortunately you will always get a few charlatans! But educating people about the peer review system – warts and all – should help, I believe. And fraud does usually get exposed, even if not as quickly as we’d like. Hopefully CrossMark will also help by clearly identifying changes, retractions, etc.
There’s a lot of information out there on the Web; some of it’s even true. I agree learning more about the peer-review process should be a critical element of education. One of the most critical skills of the future will be judging what to trust and what not to trust. Peer review isn’t perfect, but it’s a lot better than no review.
Ken, how will learning about peer review help people judge what to trust and not to trust on the Web, given that the Web content in question is not peer reviewed? I just cannot see how this works. People interpret the peer reviewed literature in many strange ways, so claiming that one’s Web content is based on the peer reviewed lit is no measure of its quality.
David, I am not certain what it is you are looking for or the purpose. But wouldn’t the disease descriptions and treatment options offered on the Mayo Clinic website suit your purpose? These descriptions are scientifically based on the peer reviewed literature. I am not sure whether Mayo puts their descriptions through a peer review, but I am certain that there is some sort of vetting mechanism to ensure that the content corresponds to the evidence reported in the peer reviewed literature. If your objective is to find a scientifically based resource that is written for people without a scientific background then this is where I would go (in the field of medicine). If I had a medical condition with which I was unfamiliar, I would never do a Google search on it. I would go directly to Mayo. I would go to Mayo because I trust that what is written is based on scientifically proven evidence. Or perhaps you are looking for something beyond just medicine. If that is the case, then I agree with you. There are few places you can turn to find reliable content written for lay people.
Mark, I am not looking for anything. I made my point in my first comment — There is a lot of bad stuff (on the web) that claims to be based on peer review, and a lot of good stuff that does not. The fact that there is good stuff that claims to be peer reviewed, like Mayo, is not a counterargument. The point is that the mere fact that a website claims to be based on the peer reviewed lit is no measure of its quality. Many cranks, quacks and advocates make this claim. So knowing about peer review, per se, will not help people judge the quality of web content. It is very simple.
And many cranks, quacks, and advocates publish in journals that are peer reviewed by like-minded cranks, quacks, and advocates. And it’s really hard for the rest of us to know which journals these are (and there likely isn’t even consensus on which journals are bogus).
All peer review really does is determine whether an article meets the (sub)discipline/field’s standards for methodologies and doesn’t transgress the hegemony of the discipline/field’s boundaries of the “thinkable” vs. the “unthinkable.”
I do not agree with your claims about the literature Joel. I was talking about the Web. Plus peer review does a lot more than you suggest. The journal system provides, among other things, ranking and sorting via rejection, which is based on peer review. Most articles get published somewhere and it is the where that counts. This is a complex and fascinating evaluation system.
Which paragraph don’t you agree with?
There are many examples of journals that claim peer review but are not accepted by mainstream science: Scientific Exploration (paranormal phenomena), BIO-Complexity (intelligent design), etc. There are also fields where different factions make mutually exclusive claims on truth (various schools of psychology, Chomskyan vs. non-Chomskyan linguists), and each faction has its own journals and doesn’t accept the other factions’ journals as legit.
This second example really does point out how peer review only evaluates research relative to the underlying assumptions of the community of practice/research that the journal represents and serves.
If you’re reacting to my comment “all peer review really does,” I will admit that that one sentence cannot really fully explicate the complexity of the journal system. I do stand by my assertion that peer review serves to discipline hegemonic boundaries rather than judge absolute truth claims. In fact, I think the sorting of “better” research into higher tier journals and “lesser” research into lower tier journals is part of this disciplining.
I don’t know if Kuhn discusses the peer review process specifically (I have to admit that I have not read the entirety of The Structure of Scientific Revolutions), but he clearly demonstrates how the practice of science is constrained by the conventional wisdoms of a field during times between paradigmatic shifts.
Joel, I disagree with your apparent claim that a significant fraction of the peer reviewed scientific journals are bogus. It is a strong statistical claim so a few examples do not prove it. I study the literature as a medium and I see little evidence for your claim.
As for the rest, the fact that journals defend positions is good, not bad. The scientific frontier is a battleground of ideas, including at the journal level. I believe it was Planck who said that your ideas will succeed when your students become journal editors. This is human reasoning writ large, a vast social process involving millions of people. The journals are not above the fray, they are part of it.
As for Kuhn, I did my doctoral thesis on this phenomenon, specifically his point that people defending different theories talk past one another. Paradigms actually scale over many orders of magnitude, so stability at one level supports battles within it. The constraints, as you call them, are necessary for progress, but none is beyond being overthrown. Peer review does enforce these constraints, but that is just the nature of the beast we call science. It is not a defect. New ideas have thresholds to cross, because many new ideas are wrong.
I disagree with your apparent claim that a significant fraction of the peer reviewed scientific journals are bogus.
I did not say that these journals are a significant fraction; I just said that they exist. The original point of that was that the general public doesn’t have the tools to tell which journals are generally accepted mainstream science and which are fringe science or pseudoscience. And, to get back to the original article, educating the public about peer review doesn’t help them judge which “peer reviewed” sources should be trusted and which they should perhaps be more skeptical of.
Peer review does enforce these constraints, but that is just the nature of the beast we call science. It is not a defect.
I think you are inferring a value judgement that I did not intend regarding my comments about disciplining hegemonic boundaries. I do not think this is inherently a defect. One assumption that I saw in the original article was that the characteristic of “peer reviewed” relates directly to trustworthiness, but I don’t think this relationship is direct at all because peer review has to be judged by the context of the community that reviewed the research.