
The Future of Peer Review

The issue of peer review’s future has nonetheless been taken up in various forms by a number of recent publishing experiments. One such experiment is arXiv, an open-access “e-print” (or pre-print) repository, founded at Los Alamos National Laboratory and now housed at Cornell University, through which scientists increasingly disseminate and obtain working papers in physics, mathematics, computer science, and quantitative biology [see screenshot 1.1].
[screenshot 1.1]

Such papers are very often submitted to arXiv before they are submitted to journals – sometimes because the authors want feedback, and sometimes simply to get an idea into circulation as quickly as possible. A growing number of influential papers, however, have been published only on the arXiv server, and some have suggested that arXiv has in effect replaced journal publication as the primary mode of scholarly communication within certain specialties in physics. As Paul Ginsparg indicates, arXiv has had great success as a scholarly resource despite employing only a modicum of review:

From the outset, a variety of heuristic screening mechanisms have been in place to ensure insofar as possible that submissions are at least of refereeable quality. That means they satisfy the minimal criterion that they would not be peremptorily rejected by any competent journal editor as nutty, offensive, or otherwise manifestly inappropriate, and would instead at least in principle be suitable for review (i.e., without the risk of alienating or wasting the time of a referee, that essential unaccounted resource). These mechanisms are an important – if not essential – component of why readers find the site so useful: though the most recently submitted articles have not yet necessarily undergone formal review, the vast majority of the articles can, would, or do eventually satisfy editorial requirements somewhere. (Ginsparg 12, emphasis in original)

In 2004, however, arXiv added a layer of author verification to its system by implementing an endorsement process that requires new authors to be vouched for by established authors before submitting their first paper to any subject area on the site. The site is at great pains to indicate that the endorsement process “is not peer review,” but it is a process for the review of peers, and as such it bears directly on the site administrators’ desire to maintain the consistently high quality of submissions: it serves as a means of verifying that “arXiv contributors belong [to] the scientific community” (“The arXiv endorsement system”).[1.19] The site administrators do note, however, that “Endorsement is a necessary but not sufficient condition to have papers accepted in arXiv; arXiv reserves the right to reject or reclassify any submission,” suggesting that the open server is nonetheless subject to a degree of editorial control, if not in the form of traditional peer review.

Another peer review experiment in scientific publishing, one that received significant attention, was undertaken in 2006 by Nature, which accompanied the trial with a debate, published on the journal’s website, about the future of peer review [see screenshot 1.2].
[screenshot 1.2]

The experiment was fairly simple: the editors of Nature created an online open review system that ran parallel to its traditional anonymous review process. “From 5 June 2006,” the editors wrote, “authors may opt to have their submitted manuscripts posted publicly for comment. Any scientist may then post comments, provided they identify themselves. Once the usual confidential peer review process is complete, the public ‘open peer review’ process will be closed. Editors will then read all comments on the manuscript and invite authors to respond. At the end of the process, as part of the trial, editors will assess the value of the public comments” (Campbell). The experiment was closed in early December 2006, after which the editors analyzed the resulting data and, later that month, declared the experiment to have failed, announcing that “for now at least, we will not implement open peer review.” The statistics cited by the editors are indeed indicative of serious problems in the open system they implemented: only 5% of authors who submitted work during the trial agreed to have their papers opened to public comment, and of those papers, only 54% (38 of 71) received substantive comments. And as Linda Miller, the executive editor of Nature, told a reporter for Science News, the comments that the articles received weren’t as thorough as the official reviews: “They’re generally not the kind of comments that editors can make a decision on” (Brownlee 393).

Certain aspects of the experiment, however, raise the question of whether the test was flawed from the beginning, destined for a predictable failure because of the trial’s constraints. First, no real impetus was created for authors to open their papers to public review; in fact, the open portion of the peer review process was wholly optional and had no bearing whatsoever on the editors’ decision to publish any given paper. This points to the second problem: no incentive was created for commenters to participate in the process. Why go to the effort of reading and commenting on a paper if your comments serve no identifiable purpose?

As several entries in the web debate held alongside Nature’s peer review trial made clear, though, the editors had not chosen a groundbreaking model; the editors of several other scientific journals that already use open review systems to varying extents posted brief comments about their processes. Electronic Transactions on Artificial Intelligence, for instance, has a two-stage process: a three-month open review stage followed by a speedy up-or-down refereeing stage, with some time for revisions, if desired, in between. This process, the editors acknowledge, has complicated the notion of “publication,” as the texts in the open review stage are already freely available online; in some sense, the journal itself has become a vehicle for re-publishing selected articles.

ETAI’s dual-stage process highlights a bifurcation in the purpose of peer review: first, fostering discussion and feedback amongst scholars, with the aim of strengthening the work that they produce; second, providing a mechanism through which that work may be filtered for quality, such that only the best is selected for final “publication.” Moreover, by foregrounding the open stage of peer review — by considering an article “published” during the three months of its open review, but then only “refereed” once anonymous scientists have held their up-or-down vote, a vote that comes only after the article has been read, discussed, and revised — such a dual-stage process promises to return the center of gravity in peer review to communication amongst peers.

ETAI’s process thus highlights the relatively conservative move that Nature made with its open peer review trial. First, the journal was at great pains to reassure authors and readers that traditional, anonymous peer review would still take place alongside open discussion. There was, moreover, a relative lack of communication between the two forms of review: open review took place at the same time as anonymous review, rather than as a preliminary phase, preventing authors from putting the public comments they received to use in revision. And though the open review was on some level expected to serve as a parallel to the closed review process — thus Miller’s disappointment that the comments weren’t as thorough as traditional peer reviews — they weren’t really allowed to serve a parallel function: while the editors “read” all such public comments, it was decided from the beginning that only the anonymous reviews would be considered in determining whether any given article was published.

[1.19] It’s worth noting the challenge posed to this already quite open system by a new pre-print server named viXra; according to a recent story on Physicsworld.com, viXra removes any restrictions on the kinds of papers that can be uploaded. Scholars associated with viXra allege that some researchers have been blocked from uploading papers based on the moderators’ sense that their work is too speculative, or that their papers have been “dumped” in the generic “physics” category, where they’re unlikely to be found and read. See Cartwright; see also “Why viXra?”

Source: https://mcpress.media-commons.org/plannedobsolescence/one/the-future-of-peer-review/