What Experiments Have Been Conducted in Open Review?
Publishers, scholars, and academic collectives have conducted a number of recent experiments with open review practices, with a range of results. Perhaps the best-known of these experiments was that conducted by Nature in 2006, an open review trial that has become famous for its public declaration of failure. However, as Fitzpatrick has argued, it is likely that the experiment’s design made its failure inevitable;[8] the journal proposed a system in which there were no perceived benefits to be derived from participation. Other scientific journal editors, however, described more successful open review processes in the forum published alongside the open review trial.[9]
Several experiments with open review in the humanities have received both scholarly and journalistic attention, including those conducted at MediaCommons Press, such as the open review of Kathleen Fitzpatrick’s Planned Obsolescence and the two open review experiments conducted in collaboration with Shakespeare Quarterly.[10] These experiments used CommentPress, a WordPress plugin developed by the Institute for the Future of the Book,[11] to present texts-in-process for open discussion. These texts were at the stage at which they would ordinarily be submitted for traditional peer review, but in these experiments they were opened instead to community discussion. The discussion of Planned Obsolescence took place alongside traditional peer reviews, while the Shakespeare Quarterly reviews took place as the central part of a multi-stage process. In each case, the texts were read and commented upon by many of the same scholars who would have been called upon to conduct traditional reviews, but also by readers whose expertise might have been overlooked in such a process (librarians, in the case of Planned Obsolescence; performers and directors, in the case of Shakespeare Quarterly). Moreover, the CommentPress format allowed reviewers and authors not simply to respond to the text but to respond to one another as well. The locally targeted, threaded commenting facilitated by CommentPress, along with the underlying social features of WordPress, resulted in robust discussions aimed at helping the authors involved revise their work before final print publication. The open review process thus served a developmental editing role, but in the case of Shakespeare Quarterly, the discussions also helped the editorial board make final decisions about whether to accept the articles for inclusion in the print journal.
Jack Dougherty and Kristen Nawrotzki similarly used CommentPress to facilitate the open review of the essays contained in their forthcoming volume, Writing History in the Digital Age, using the platform to help make “the normally behind-the-scenes development of the book more transparent.”[12] Matt Gold likewise used CommentPress in the review process for the essays in Debates in the Digital Humanities, as did Louisa Stein and Kristina Busse for the forthcoming Sherlock and Transmedia Fandom, though in these two cases the essay drafts and commenting process were open only to the authors who submitted work for the volume; the authors worked together as a community to improve the volume as a whole.
Beyond CommentPress-based projects, however, a number of other humanities publications have put various kinds of open review processes into practice. The journal postmedieval conducted a crowd review for its special issue entitled “Becoming Media,” using WordPress to present a more blog-like structure for its discussions, with the articles appearing at the top of each page, and the comments below each article.[13] The journal Kairos uses an extensive multi-tier editorial review process, which includes several phases of open communication amongst editorial board members and between editors and authors.[14] The site Digital Humanities Now uses PressForward’s combination of crowd- and editorial-filtering methods to highlight some of the best work being done in digital humanities across the open web; those highlights are then reviewed for republication in the Journal of Digital Humanities.[15]
From these examples we can see that successful processes may differ from one another in a number of ways: some were open to anyone, while others were open only within a defined community; some were single-stage processes, while others used multiple stages of review; some entirely eschewed anonymity, while others permitted it. What all such successful experiments bear in common, however, is a self-conscious consideration of the values of the community that the review process is meant to serve, and a flexible but nonetheless rigorous attempt to reflect those values in the mechanics of the process itself.
Are there any open journals/publications where there is an obvious benefit from open participation?
Copernicus Publications has an interactive discussion as part of the review process for 14 journals. All manuscripts first have to pass an access review, which determines whether the manuscript is suited for publication at all; approx. 15% of manuscripts are rejected at this stage. After the access review, a discussion paper is published, and for a period of 6-8 weeks the referees, the scientific community, and the authors can post comments on the paper. After the discussion, the rejection rate for the final revised papers is only approx. 5%.
I wonder if there is any precedent for publishing the comments as part of the actual text? I guess that would be a blog, but as much as using comments for review is awesome, it’s also a little disheartening that the comments are either incorporated or deleted when the book is “published.”
Interesting idea. Maybe the essay could be ‘published’ in two versions?
One interesting model is the book First Person, edited by Pat Harrigan & Noah Wardrip-Fruin. Pre-publication, contributors read each other’s work & offered responses that were published at Electronic Book Review – some were excerpted into the print book as well, and follow-up conversations emerged online. Not exactly open to participants nor free-flowing conversation, but still an interesting set of experiments.
One example of comments being published, and an “essay” existing in two different states, might be the conversation about academic reviewing that was part of the SQ special issue on performance. The original piece and all of its attendant comments are <a href="http://mcpress.media-commons.org/shakespearequarterlyperformance/dobson/">archived at our open review site on MediaCommons</a>. An excerpted version of that conversation–with one paragraph of the original essay and a portion of the comments on it–was published in the print issue of SQ (and I’ve put <a href="http://sarahwerner.net/blog/index.php/rethinking-academic-reviewing/">that version up on my website</a>). The print version tries hard to signal that it’s only a substitute for the “real” version that happened online.
This would be an ideal opportunity to tell us more about the lesser-known science journal experiments that did not gain as much attention as Nature 2006. To most readers, open peer review is still an unknown beast, so feed us more evidence to calm our fears.
I am not familiar with the open review trial from 2006. Does lack of perceived benefits constitute failure? It seems that if a topic made it to the review process that several people must have deemed it worthy of critique. Perhaps there was lack of publicity or a lack of familiarity with a new approach to reviewing?