July 20, 2017

Bringing a ‘trust but verify’ model to journal peer review

UW News

Academic journals are increasingly asking authors to use transparent reporting practices to “trust, but verify” that outcomes are not being reported in a biased way and to enable other researchers to reproduce the results. To implement these reporting practices, most journals rely on the process of peer review — in which other scholars review research findings before publication — but relatively few journals measure the quality and effectiveness of the process.

In a commentary published July 20 in the journal Science, lead author Carole Lee and co-author David Moher identify incentives that could encourage journals to “open the black box of peer review” for the sake of improving transparency, reproducibility, and trust in published research. Lee is an associate professor of philosophy at the University of Washington; Moher is a senior scientist at The Ottawa Hospital and associate professor of epidemiology at the University of Ottawa.

Lee and Moher see this as a collective action problem requiring leadership and investment by publishers.

“Science would be better off if journals allowed for and participated in the empirical study and quality assurance of their peer review processes,” they write. “However, doing so is resource-intensive and comes at considerable risk for individual journals in the form of unfavorable evidence and bad press.”

To help journals manage the reputational risk associated with auditing their own peer review processes, Lee and Moher suggest revising the Transparency and Openness Promotion (TOP) Guidelines, a set of voluntary reporting standards to which 2,900 journals and organizations are now signatories. The guidelines were published in Science in 2015 by a committee of researchers and representatives from nonprofit scientific organizations, grant agencies, philanthropic organizations and elite journals.

Lee and Moher suggest adding a new category to the TOP guidelines “indicating a journal’s willingness to facilitate meta-research on the effectiveness of its own peer review practices.” Journals could choose which tier of the new category to adopt; higher tiers would bring greater transparency, and with it greater risk.

  • At the lowest tier, journals would publicly disclose whether they are conducting internal evaluations of peer review, while retaining the study results for internal use.
  • At the middle tier, journals would disclose the results of their internal evaluations of peer review, but could maintain flexibility in how they report their results for external use. For example, results could be aggregated across several journals to reduce risk to any single journal.
  • At the upper tier, journals could agree to relinquish data and analyses to researchers outside their institution for third-party verification. This is an option, Lee and Moher write, “that might appeal especially to publishers with fewer resources, as it places the financial burden on those conducting the meta-research.” Journals conducting their own analyses could preregister their study designs and then deposit their data publicly online.

By agreeing to these more stringent guidelines, the authors write, publishers and journals would have the chance to legitimize and advertise the relative quality of their peer review process in an age when predatory journals, which falsely claim to use peer review, continue to proliferate.

“Illegitimate journals are becoming a big problem for science,” said Moher. “True scientific journals can distinguish themselves with transparency about their peer review processes.”

Investing in research on journal peer review will be costly, the authors acknowledge. Large experimental studies are needed, they suggest, to judge the effectiveness of web-based peer review templates for enforcing reporting standards, and of different ways to train authors, reviewers and editors to use such tools and evaluate research.

Also needed, they say, are ways to detect shortcomings in the statistical and methodological reporting of a research paper, and to understand how the number and relative expertise of peer reviewers can improve assessment.

The largest publishers, whose profit margins rival those of pharmaceutical and tech giants, can afford to invest in the technology and resources needed to carry out these audits, the researchers say.

“Publishers should invest in their own brands and reputations by investing in the quality of their peer review processes,” said Lee. “Ultimately, this would improve the quality of the published scientific literature.”

###

For more information, contact Lee at 206-543-9888 or c3@uw.edu, or Moher at dmoher@ohri.ca, or through Jennifer Ganton, The Ottawa Hospital’s public relations officer, at 613-614-5253 or jganton@ohri.ca.
