Source: https://mcpress.media-commons.org/open-review/all-comments/
Comments on the Pages
Recommendations for Communities of Practice (41 comments)
In the case of a monograph like mine or Kathleen’s, should this editor or leader be the same as the author? What about an edited collection – should the open review editor be the same as the volume editor? I see plusses and minuses of both – curious what recommendations the authors might make, or how the issue might be addressed.
One role I don’t see mentioned here is “improvement” – getting feedback from peers should make our work better!
Should the question of persistence be addressed here, insofar as a reviewed manuscript with comments might or might not remain online after the review process? This speaks to openness of publication (will a press allow a draft to live online once the book/journal is out?) and openness of the draft/feedback (does an author want an incomplete or imperfect version to be the one most accessible via Google, or allow comments to remain publicly linked to their draft?).
I’m not wild about “and possibly delimit” as it sounds too much like censorship (which does happen but shouldn’t be assumed) – what about “and possibly shape”?
I’m not sure if this is the right place to mention it, but it seems worth saying that closed review lacks context that might help explain a particular response – for instance, knowing who wrote something might help explain that the reviewer comes from a different subfield or tradition, or has their own competing scholarship on the topic, or is just known as a hard-ass. Such problems won’t go away in open review, but they’ll at least be brought to light & given some potential context.
Do you think this instance of open review of this document follows these guidelines adequately? I’m not sure that the type of feedback I’m giving is what is being sought, so perhaps a bit more meta-clarity would be useful?
I love the phrase “socialize participants into community etiquette”! (And I should note that it does feel odd to write a comment that’s just “good phrase!” style of praise. Perhaps that itself might be mentioned, as to how to solicit affirmative commentary?)
This paragraph is very important, but unwieldy in structure & hard to follow.
It might be worth mentioning that the publish-then-filter model does greatly speed up the ability for people to read your work, especially when it’s timely, even if it does create an additional level of labor during the review process.
I can see what you mean by “established hierarchies,” but I also wonder if that’s too strong a term. My sense is that open review establishes new communities and challenges older hierarchies. And you do need to work through what’s already there, but I shudder a bit at the term hierarchy.
I second Jim’s response, and I take it that you are interested in a top-down approach. I think also developing a system of rewards that are tangible might be important.
It seems that the gatekeeping function can’t always be open, unless you are talking about a blog-like post-publication form of review. Even w/ the example of MediaCommons, certain works are elevated to the status where they are reviewed by the community and (I assume) others aren’t.
I think Andy Famiglietti’s discussion of “moral economy” on Wikipedia might be instructive here. Wikipedia has formal rules for review, yet there are also informal norms that are reinforced whenever reviewers come across something that violates commonly held assumptions that aren’t written down.
This relates to my earlier question: We have the ability to make peer review more visible and transparent. Why don’t we try to create communities of practice that are interested in visibility?
How different is this result of the open peer review process from book reviews or edited collections done by junior scholars? In both of the latter instances, there could be a similar amount of fear. Maybe this paragraph could also incorporate these examples…
Are there models of open review that discuss the limitation of the review process? It might be worth noting examples here.
I’m very enthusiastic about the intellectual rewards of open review. When I post something on a blog, I tend to get better feedback, more rapidly, than I could ever hope to get in another medium. I reach exactly the people who can offer useful suggestions and constructive critique.

However, I’m not altogether convinced that we need to build a new infrastructure (social or technical) to foster this. I’m pretty happy with the way comment threads already work. I don’t have to round people up and request formal review. I just say something, and they say “yeah, but have you considered X?” And I say “Huh. Good point.” It’s not as elegant as the process you’re using here, but it seems to suffice.

I’m also not altogether convinced that the open review process is — or needs to become — an alternative to more formal kinds of “gatekeeping” review. It seems to me that open review on the web is, actually, better understood as an alternative to the kind of thing we used to do at conferences, or by circulating drafts among colleagues. If you want a “reading,” blogs do the job much better, quicker, and with less outlay of $$ for plane tickets.

I think that kind of open feedback can usefully replace *part*, but only part, of the traditional review process. E.g., it seems to me that JDH is requesting much lighter kinds of revision than other journals do — and this is appropriate, because they’re selecting articles at a point when they have in practice already gone through a baptism of fire. So that part of editorial labor can be abridged.

But I’m not yet convinced that the open review process can or should try to do the work of selection itself — whether we want to view selection positively as aggregation or negatively as gatekeeping. I think there’s still going to be a role for editorial judgment, and I think we may still want those judgments to be made by a relatively small group of people. But that’s a complicated topic, and I’ve said enough for one comment.

Thanks btw for this paper, and for this incredibly fluid and easy-to-use architecture for open comments.
Different processes have handled this differently. In SQ’s case, both of our issues did some behind-the-scenes gatekeeping (as the white paper discusses elsewhere). But for Writing History in the Digital Age, everything submitted was evaluated publicly, but not everything moved on to the next phase of production.
I like this list of possibilities–they’re important questions to consider and suggest the range of ways the process can work.
Ah, persistence. That’s a great point. I’m not sure if it’s about openness, but I agree that questions about persistence should be addressed. When I’ve talked about open review, I’ve found that some folks are uneasy about having multiple states of their work preserved; others welcome it as being both about transparency of the scholarly process of thinking and about open-access, if the review remains online but the final product is published in a closed system. Either way, persistence touches on important questions of preservation and discoverability, as well as potentially touching on what sort of incentive reviewers have to participate.
I would add that there are challenges in determining how appropriate it is for authors to solicit comments on their submissions. For instance, I have encountered some resistance from authors when I’ve advised them to circulate the call for participation, since they felt that if their circle responded to the paper it wouldn’t be neutral vetting by their peers. On the other hand, I know that the editors of Writing History in the Digital Age actively encouraged their authors to contact and encourage commenters, and Jack and Kristen attribute part of their success to the authors having done that. In other words, part of what is being put under pressure here is not “peer” or “community” but ideas of neutrality.
Ah, here is where persistence of sites comes up. But, as I comment above, I think it might be worth pulling it out a bit more.
You might reconsider the line about how the “visible contributions” of open peer review “can potentially muddy the waters when it comes to attribution and ownership of ideas.” Under the closed review system, the water was already muddy when anonymous reviewers made suggestions that influenced an author’s work. The key difference is that open peer review makes the mud more clearly visible.
I agree with Jason here, and in my own writing about open peer review, I sometimes forget to emphasize the importance of “speed to publication,” especially for audiences who don’t realize how SLOW it can be in the humanities. In the publish-then-filter model, our writing sees the light of day on the web in a few weeks or months. Compare this to traditional publishing models, where my average speed is two to five YEARS after initial submission. I’m all in favor of a “thoughtful” review process, but those additional years are not making my scholarship more meaningful. Increasing speed toward publication deserves equal attention as one of our desired outcomes.
Overall, this page holds out great faith that “clearly establishing roles and expectations” is the answer to our prayers for open peer review. To some extent I agree, and spent considerable time with Writing History co-editor Kristen Nawrotzki drafting our “editorial and intellectual property policy” (which we required contributors to read before submitting their work) and tweaking our online layout to encourage readers to write not only paragraph-level remarks, but broader General Comments on the Book (using the Press’ evaluation criteria). Yet despite our best efforts to be crystal clear, I was surprised how often these guidelines were overlooked, ignored, or forgotten by some of our fine colleagues. Higher education is still a cat-herding business, and we don’t necessarily follow the roles and expectations laid out for us.
Agreed. In my experience with open peer review, it is essential for editors and authors to sustain engagement. However, scholars are constantly inundated with work and requests to review this or that. At some point, there needs to be an incentive that goes beyond academic interest and the desire to contribute to the community. We need ways to reward scholars — if nothing else, by creating systems that help them report their labor to promotion and tenure committees. It might be worthwhile to mention the possibility of incentives which might improve the open peer review model.
Yes, this is very important. As Sarah suggests, there are pros and cons to this, which might be useful to mention in this section.
There are ways of addressing some of these problems. For example, Amazon has created a system of review that rewards quality feedback and “punishes” poor feedback. Users themselves rate each other, and a similar mechanism might work in an academic forum.
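One way to picture such a mechanism: each comment accumulates “helpful”/“unhelpful” votes from other users, and a reviewer’s standing aggregates those votes. A minimal sketch in Python – the names, scoring rule, and smoothing constant are illustrative assumptions, not a description of Amazon’s actual system:

```python
from collections import defaultdict

class ReputationLedger:
    """Toy 'was this review helpful?' ledger (illustrative only,
    not Amazon's actual algorithm)."""

    def __init__(self):
        self.votes = defaultdict(lambda: {"helpful": 0, "unhelpful": 0})

    def rate(self, reviewer, helpful=True):
        key = "helpful" if helpful else "unhelpful"
        self.votes[reviewer][key] += 1

    def score(self, reviewer):
        v = self.votes[reviewer]
        total = v["helpful"] + v["unhelpful"]
        # Laplace smoothing: one early vote can't define a reputation.
        return (v["helpful"] + 1) / (total + 2)

ledger = ReputationLedger()
ledger.rate("reviewer_a", helpful=True)
ledger.rate("reviewer_a", helpful=False)
print(ledger.score("reviewer_a"))  # 0.5
```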
I agree with Jason on this. I do like the word “shape” much better.
I see that you have addressed this below, which leads to another aspect of open peer review. To what extent do we want authors to have the ability to move, edit, and/or delete comments?
I think the “groupthink” issue you mention here is very important. On the plus side, open review allows author X’s piece to be read by the small community of people who are interested in her unconventional approach. In that sense it could foster a diversity of viewpoints. On the minus side, it encourages scholarly discourse to fragment into small, self-reaffirming communities. E.g., in open review, I can and do get readings from other people who are already interested in big data / text analysis. In a way these are the best readers. But it may also mean that I never have to confront the larger community of literary scholars who think we’re crazy. This isn’t just a problem with open review; it’s a problem already with edited collections and special issues of journals, or frankly in the case of certain theoretical subfields where everyone starts from the same assumptions — which therefore never get seriously challenged. If we really want to validate scholarship and give it broad significance, we’re going to need to consciously stage confrontation across community boundaries. This could happen through an open, or blind, review process.
Particularly worth examining given the redefinition of the term “peer” earlier on in the paper. If the notion of peers changes, doesn’t that affect the established hierarchy as well?
The parenthetical comment about closed peer review processes strikes me as quite important, and perhaps worth developing in a bit more detail.
Roger, unless I’m misunderstanding something, doesn’t transparency also result in increased visibility for the reviewers? In an open process, thoughtful, substantive feedback is visible to all and builds the credibility and reputation of the reviewer. If you had something else in mind here, could you clarify?
I think that’s a great point about scholarship and community in general, one that, as you say, isn’t specific to blind or open review. We all do a bad job of reaching across communities. (Although, actually, whenever I complain about big data, I remember the conversations we’ve had on twitter about it, so maybe the combination of social/scholarly that is the twitter network makes these conversations possible in ways that pure scholarly exchange doesn’t.)
I agree. I do think scholarly communities on the web tend to be messier and overlap w/ each other a bit more than our usual discipline/subfield boundaries. That could be a real strength of the “open” process.
I also want to refer to the issue of persistence: In our public peer review model, we regard this point as very important. Therefore, we introduced two stages. In the first stage, manuscripts that pass a rapid access-review are immediately typeset and published in the discussion forum in an onscreen format. They are then subject to Interactive Public Discussion, during which the referees’ comments (anonymous or attributed), additional short comments by other members of the scientific community (attributed) and the authors’ replies are published. In the second stage, the peer-review process is completed and, if accepted, the final revised papers are published in the journal. To ensure publication precedence for authors, and to provide a lasting record of scientific discussions, the discussion forum and the journal are both ISSN-registered, permanently archived and fully citable.
With regard to the final question in this paragraph: I worry both about the presentation of only two options (“truly democratic” vs. editors’ prerogative) and about the characterization of the first of these. Assuming that “truly democratic” is meant to suggest that there is no intervention in the process, then how are antidemocratic abuses such as in-group cronyism and out-group antagonism to be avoided?

One can imagine a system on the “truly democratic” end of the spectrum in which there is some minor editorial (perhaps even purely mechanical) intervention, whereby e.g. each “like” or “dislike” between two nodes in the network is given slightly less weight than the last.
Eric, I think you touch a vital point here. The term “truly democratic” is extremely problematic because the authority of a peer reviewer cannot be seen completely separately from her/his identity. In other words, differentiation between more and less authoritative reviewers is a prerequisite for the review process. If reviewers function as equally weighted nodes in a network, the main question is one of inclusion. What about academics from a different field? What about dissenting voices? etc. If a less binary logic is followed and different reviewers/reviews are weighted differently, the method or rules for this differentiation are in fact part of the scientific method of the piece/conversation. Either way, their explicit mention is of the utmost importance.
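Eric’s “slightly less weight than the last” mechanism can be made concrete. A minimal sketch – the decay constant and data shapes are assumptions for illustration, not anything the white paper specifies:

```python
from collections import defaultdict

DECAY = 0.5  # each repeat vote from the same rater about the same target counts half as much

def weighted_tally(votes):
    """votes: (rater, target, +1 or -1) tuples in chronological order.
    The nth vote between the same pair is scaled by DECAY**(n-1),
    damping in-group cronyism and out-group pile-ons."""
    seen = defaultdict(int)      # (rater, target) -> prior vote count
    totals = defaultdict(float)  # target -> weighted score
    for rater, target, value in votes:
        totals[target] += value * DECAY ** seen[(rater, target)]
        seen[(rater, target)] += 1
    return dict(totals)

print(weighted_tally([("a", "x", +1), ("a", "x", +1), ("b", "x", -1)]))
# {'x': 0.5}  ->  1.0 + 0.5 - 1.0
```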
Items (d) and (f) bring up the important issue of quantity over quality. How can we strike the balance between benefiting from a large array of voices and being overwhelmed with a mountain of comments–some useful, and some not? Whose responsibility is it to wade through all the comments and find the useful ones–the editor, the author? On the flip side, do commenters have the responsibility to self-police and refrain from side conversations or unrelated tangents? Clearly there must be a mutual agreement on behavior so that open review can be rigorous and multi-voiced, but also judicious and on-point.
I agree that there is considerable benefit to speeding up the publication process; not only does the scholar benefit from a faster turnaround time, but the community benefits from access to more current information.
What Experiments Have Been Conducted in Open Review? (24 comments)
To this list of experiments I want to add Jason Mittell’s Complex TV, whose proposal was traditionally peer reviewed by NYU Press while it was posted in CommentPress. That manuscript is now being posted serially by chapter and, like Planned Obsolescence, the full ms will be sent out for traditional review and eventually made available for sale as a print and e-book. Another is Writing History in the Digital Age, a born-digital edited volume under contract with the University of Michigan Press through its digitalculturebooks imprint. And the earliest collaboration with a university press that I am aware of: Grand Text Auto / Expressive Processing.
You can read more about Debates in the Digital Humanities at this Inside Higher Ed piece.
“These texts were at the stage at which they would be submitted for traditional peer review, but were in these experiments opened to community discussion.” Very confusing sentence.
The Sherlock book is now published.
And also the Sherlock book did invite two non-anonymous outside reviewers (I was one) to comment on the manuscript via CommentPress.
You call all of these “successful experiments” – based on what measures? I’m not arguing that they were unsuccessful, but I think this question of how we measure success is crucial to address in some way here.
Absolutely agree with Jason here: it would be really useful and instructive to give readers who are interested in experimenting with open review some sense of how to gauge success or failure. In the case of the examples offered here (Planned Obsolescence, Shakespeare Quarterly, Debates in Digital Humanities, etc.), were there any surveys or questionnaires undertaken that tried to assess the experience of different stakeholders (e.g., authors, reviewers, publishers)? Just as useful–perhaps more so–would be post-open peer review reflections, assessments, and summaries by those same stakeholders. I’m sure those are out there, and it would be great to identify and reference some of them. Another idea: PressForward recently conducted a six-month review of Digital Humanities Now and reported on some of its findings. Consider conferring with the editorial team about how, exactly, they conducted that review and include a description of that process here.
I would clarify that the postmedieval crowd review was explicitly not about vetting contributions to the journal but about discussing them–their essays were solicited and already accepted for publication when they were put up for comment. Actually, I would clarify that all the examples in this paragraph used open commenting but not open peer review, as opposed to the examples you give in the previous paragraph.
I agree with Jason and Kari that the definition of “successful” needs to be clarified. I haven’t always been inclined to describe my own open peer reviewed issue of SQ as “successful” (although I also am never inclined to describe it as having failed!). I suppose one definition of success is that all these examples resulted in works that were commented on and evaluated and published. That’s certainly a measure of success.

I don’t think the place for it is here, but I like Kari’s idea of providing some sort of guidance on how success might be evaluated. That would certainly be a useful service for anyone considering doing something along these lines.
Actually, there were at least two earlier collaborations with U presses (and, in many ways, the harbingers of CommentPress): Ken Wark’s GAM3R TH3ORY, which had already been accepted by Harvard before the folks at if:book customized a site for open review (circa 2005-06); and Mitchell Stephens’s Without Gods in 2006 or so. Both are important touchstones and I’m surprised they are not mentioned here, particularly given the roots of MediaCommons…
Are there any open journals/publications where there is an obvious benefit from open participation?
I wonder if there is any precedent for publishing the comments as part of the actual text? I guess that would be a blog, but as much as using comments for review is awesome, it’s also a little disheartening that the comments are either incorporated or deleted when the book is “published.”
Interesting idea. Maybe the essay could be ‘published’ in two versions?
This would be an ideal opportunity to tell us more about the lesser-known science journal experiments that did not gain as much attention as Nature 2006. To most readers, open peer review is still an unknown beast, so feed us more evidence to calm our fears.
From my experience, readers want to know more about these rich examples of open peer review in practice. In fact, that’s the title of a THATCamp CHNM 2012 session that Sarah Werner organized on this topic. Furthermore, our “Conclusions: What We Learned” for Writing History in the Digital Age was roughly framed around the types of questions we frequently heard when speaking publicly about our experience. The concept of “open peer review” is so unknown that readers need more examples, from different varieties, to help make up their minds about its strengths and weaknesses.
Sarah’s point is worth emphasizing, and I didn’t realize it until she pointed me to this cluster of essays, “The State(s) of Review” on postmedieval Forum (which I had not seen until reading her THATCamp CHNM 2012 proposal).
One interesting model is the book First Person, edited by Pat Harrigan & Noah Wardrip-Fruin. Pre-publication, contributors read each other’s works & offered responses that were published at Electronic Book Review – some were excerpted into the print book as well, and follow-up conversations emerged online. Not exactly open to participants nor free-flowing conversation, but still an interesting set of experiments.
Just to add one more version to the mix, History Working Papers (http://www.historyworkingpapers.org/) is currently using CommentPress to provide a space for scholars to exchange and comment on works-in-progress. Here, the emphasis is on the earliest stages of writing, especially on developing conference and symposium papers. If we are to think of peer review as a process rather than something that one does right before publication, it might be worth mentioning. The North American Conference on British Studies has adopted it and encouraged participants to use it since 2011.
Agreed; also, developing more detail about the outcomes of the open review process–how the books were received once published, etc.–may be helpful. The second part of the last sentence (from “though in these two cases”) could also be reframed in a more positive light, emphasizing that these are examples where openness was deliberately limited to a specific group. It’s a good thing to have that kind of possibility.
One example of comments being published, and an “essay” existing in two different states, might be the conversation about academic reviewing that was part of the SQ special issue on performance. The original piece and all of its attendant comments are archived at our open review site on MediaCommons (http://mcpress.media-commons.org/shakespearequarterlyperformance/dobson/). An excerpted version of that conversation–with one paragraph of the original essay and a portion of the comments on it–was published in the print issue of SQ (and I’ve put that version up on my website: http://sarahwerner.net/blog/index.php/rethinking-academic-reviewing/). The print version tries hard to signal that it’s only a substitute for the “real” version that happened online.
Copernicus Publications has an interactive discussion as part of the review process for 14 journals. All manuscripts have to pass an access review (whether the manuscript is suited at all for publication). In this stage approx. 15% of the manuscripts are rejected. After the access review, a discussion paper is published. For a period of 6-8 weeks the referees, the scientific community, and the authors can post comments on the paper. After the discussion, the rejection rate for the final revised papers is only approx. 5%.
One indicator of success in certain cases, such as “Planned Obsolescence,” where both traditional and open peer review were conducted, is to offer some assessment of how much revision was done in response to the open review beyond what the traditional peer review prompted.
I agree with Sandy on the question of revisions by the authors. Additionally, did the different review processes make a difference with regards to how likely the authors were to make revisions?
I am not familiar with the open review trial from 2006. Does lack of perceived benefits constitute failure? It seems that if a topic made it to the review process that several people must have deemed it worthy of critique. Perhaps there was lack of publicity or a lack of familiarity with a new approach to reviewing?
Why Open Review? (19 comments)
I would add an addendum like this to the first sentence: “—of course, traditional closed review processes can also succeed and fail in a wide range of often unacknowledged ways.” And maybe cite Planned Obsolescence on this? It’s important to remind people that the status quo isn’t a perfect machine, it’s just the one whose faults we’ve come to accept as unavoidable or palatable.
Another variable you might add here is “publicness” – to what degree might a review process be open to participation but not publicly accessible (like an anonymous survey)? Or reviews posted publicly but not open to participation outside of invitations or a closed community?
We have spent some time in Harley et al. 2010 (Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines), in Harley and Acord 2011 (http://escholarship.org/uc/item/1xv148c8), and in two relatively new publications discussing issues around sharing pre-publication work in a variety of disciplines, and note the importance of disciplinary culture and also of credit, time, and personality in dictating who shares what, with whom, and when. As they say, it’s complicated. I think the passage below from a recent New Media and Society (NMS) article (Acord and Harley, “Credit, time, and personality,” in press; it was posted in November for open peer review, has undergone final revisions, and attracted almost no open comments) sums up our thoughts on open peer review. The whole paper gives that passage better context, of course. With regard to sharing pre-publication work generally, we note:

“As with everyday communication practices more generally (cf. Goodwin, 2000), individuals design scholarly communication practices to maximize impact with a select ‘target’ audience. As scholars formulate, develop, edit, and fact-check their work-in-progress, they gradually share their work with wider and wider circles of trusted, targeted individuals…. (W)hile the web has extended the reach of these types of sharing, the functions they serve developmentally have not changed radically. As one historian noted, ‘It’s really not substantially different than what’s been in practice for several hundred years…[except] it’s faster and it’s more global’…. Taken together, guarded prepublication sharing practices function as a safety net for scholars to not only improve the work and avoid ‘making fools of themselves’, but also to stake a claim on ideas and maintain a visibility in their research areas. Our work suggests that scholars seek out informal peer review in a highly strategic manner based on social variables and disciplinary values. That is, in deciding when and where to share their work-in-progress, scholars make decisions based upon their discipline’s culture and degree of trust (of the interlocutor), comfort (how well they feel their work is developed), and audience (who needs to be aware of their work and how are they best reached).”

If open commentary experiments proliferate, it will be important to assess who is offering comments (i.e., what portion of such open comments come from ‘friends’) in these venues, and whether the overall impact and costs–to authors, editors, and readers–exceed the normal levels of traditional informal and formal peer review. This report is a contribution toward answering those questions.
Wholeheartedly seconded. It’s hard to move beyond the status quo until it’s clear that it’s broken in important ways.
This paragraph is facile and doesn’t address the actual objections it alludes to.
I’m intrigued by Jason’s suggestion of degrees of publicness. I’d add the possibility of comments that might be public to the editor and author, but not public to viewers (one of the factors that I’ve encountered running an open peer review is the fear that reviewers have of seeming stupid in front of their peers and the ways this inhibits their willingness to comment publicly).
Another element of “openness” is clarity of writing. I know, for example, Alice Bell argues that openness can also be “simply a new way to rub scientists’ cleverness in people’s faces, letting more of them feel lost and stupid in the face of such impressive expertise.” (http://www.timeshighereducation.co.uk/story.asp?c=1&sectioncode=26&storycode=419684)
I like Roger’s comment here–this is an important dimension to open scholarship.
What bothers me about this paragraph is that it challenges “openness” without raising any questions about its opposite: “secrecy.” Why aren’t we interrogating secrecy within the academy to the same degree? When Kristen Nawrotzki and I designed our open peer review process for Writing History in the Digital Age, we eventually found ourselves taking a stance of “transparency by default” in the writing and editorial process, except in cases where there was a compelling case for privacy (such as our decision to inform contributors by private email when their submissions were not selected to advance to the final manuscript). We write about this in “Conclusions: What We Learned,” under the subheading titled, “Did the benefits of publishing on the web, with open peer review, outweigh its risks?”
I second this point – framing “open” as a default that can be overridden when needed goes a long way toward decentering the status quo model where “blindness” is often accepted as inherently valued simply because it is the default.
In the open peer review system that we’re designing for Drupal, we have set the default to “open.” We plan, however, to give editors the ability to keep the identities of some reviewers anonymous if they wish. But keeping reviewers’ comments hidden, it seems to me, would defeat much of the purpose of open peer review. Open peer review is not simply about product; it’s about process. Discussions and disagreements “in the margins” are every bit as important to scholarship as the text. Perhaps this is mentioned later in the document, but it might be worth mentioning here.
A clarification: I meant there could be an option of comments appearing anonymously to the public but revealed to author and editor(s). I do agree that even that seems a bit odd, in that it works against the purpose of openness, but it would counter the anxiety I’ve encountered from potential reviewers about how idiotic they might appear in front of others. Random attributions that are consistent could be assigned: rather than appearing as “Sarah Werner” my comments could be attributed to “ABC” consistently, which would at least give a sense of the individual biases of my perspective, even if my name isn’t attached. Not that I’m biased, of course…
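Sarah’s consistent-but-anonymous attribution is cheap to implement: derive each label deterministically from the reviewer’s name plus a secret held by the editors, so the same person always appears under the same tag. A minimal sketch – the secret, label format, and function name are illustrative assumptions:

```python
import hashlib
import hmac

SECRET = b"editorial-secret"  # held by the editors, never published

def pseudonym(reviewer_name, length=6):
    """Deterministic pseudonym: the same reviewer always gets the same
    label, so readers can track one voice (and its biases) across
    comments without learning the reviewer's identity."""
    digest = hmac.new(SECRET, reviewer_name.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "Reviewer-" + digest[:length].upper()

print(pseudonym("Sarah Werner"))  # prints a stable label, e.g. Reviewer-3F9A1C
```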
I work for Copernicus Publications, an Open Access publisher based in Germany. We already have 11 years of experience with Public Peer Review (http://publications.copernicus.org/services/public_peer_review.html). We have found that people often mix up the terms “public peer review” (which we regard as a more transparent form of the peer review process) and “open peer review” (which is mostly taken to mean referees identified by name). In our approach, we allow each referee to choose whether he or she wants to be named or to appear simply as “referee” in a public discussion.
As a practical matter, it would likely be very difficult to identify which commenters were “friends” (as distinguished, say, from fellow members of the author’s department or former dissertation advisers).
It sounds like there is already something of a framework for supporting open review. I wonder if there is a way to collect a sample of the feedback mentioned in this paragraph to analyze how it can be just as useful as traditional peer review.
This raises an interesting point. Since there are more diverse ways of publishing, do we need to have a review process for both the text and the means that people use to find that text? There is a ton of material available online. Some is valuable and a lot of it is not. What if a valuable work does not show up in popular searches? I would say this is a problem, and people should also be reviewing this process along with the content of the digital publication itself.
I love the notion of open peer review. Open review seems to encourage a reinvigorated collegial, scholarly communication, as opposed to blind reviews, which are often done by professors for payment if the journal is large enough. I do see areas for concern in this process, however. I am not sure that all professors/reviewers in this current capitalistic society, who reside in an arguably competitive milieu, would operate perfectly within a system based upon something akin to generalized reciprocity. Are there safety nets, legal or otherwise, for such situations?
I disagree that open review is already being practiced because, typically, conferences and workshops tend to attract–and be attended by–similar-field academic peers. The beauty of open review is that people from all disciplines can offer their thoughts. Great improvements can result when scientists offer their unique insight to humanities scholars (and vice versa). This sort of academic intermingling has been stymied by the requirement of physical attendance at a conference or workshop. But in a digital environment, the process is easier.
What Is Peer Review? (13 comments)
It might be worth putting something at the end of this paragraph about humanities peer review striving to ensure that new work is up-to-date on debates in the field – this is more than just citing “relevant existing literatures”; the goal is that a new scholar’s work reflect what ground has already been traveled, ensuring that any claims of “originality” are warranted.
I think the word “expertise” should appear in this paragraph, as one of the key shifts here is that expertise is no longer necessarily embedded in somebody’s institutional credentials, but in their public demonstration of knowledge, understanding, careful reading, etc.
Thank you to the authors for providing a preview copy of this important report before the AAUP meetings. We would like to correct the citation for our 2011 piece, which spends a fair amount of time defining the multiple functions of peer review: Harley D and Acord SK (2011) Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future. University of California, Berkeley: Center for Studies in Higher Education. Available at: http://escholarship.org/uc/item/1xv148c8
I’m more than a little hesitant to assert that blind peer review has served as an instrument of meritocracy. It was meant to, certainly, but research indicates that many works are easy to un-blind, and reviewers can be swayed by author affiliations (sometimes left in otherwise “blind” manuscripts) and other characteristics of author or manuscript uncorrelated with quality.

It may also be worth pointing out (here or somewhere) that the inter-rater consistency of peer review is pathetically poor when tested, suggesting that its use as a quality filter is rather suspect. I don’t have citations at hand, but contact me out-of-band and I can dig them up.

The hope, of course, is that open review could control for some of these variables by broadening the discussion and making it more transparent.
I’m not sure this is the place to go into a long critique of how peer review operates, but it could be useful to differentiate between the intent of peer review and the effect of it.
Right, these shifts are also connected to the rise of digitally mediated networks. While that may be obvious, it might be useful to mention that briefly.
I agree with Sarah here: this could be further nuanced.
Part of the problem is that this very brief overview assumes the reader is already familiar with the arguments and evidence about peer review and its discontents that Kathleen Fitzpatrick raised in the first chapter of Planned Obsolescence. I don’t think readers need more nuance here, but rather some direct evidence (such as Peters & Ceci 1982/2004; Godlee 2000).
I would tend to agree with Sarah as well. Intent and effect are key distinctions when it comes to arguing for the significance of open peer review. That distinction is also useful in assessing various open peer review platforms and processes as well.
Besides “depth” of argument, I would add “cogency.”
But, harking back to the earlier comment about open peer review having long existed (with which I disagreed in part), I would urge here that this more expansive notion of “peer” harks back to earlier times when gentlemen-scholars (along with a relatively few women who ran salons in 18th-c. France) would freely exchange their views about each other’s writings while having no formal, academic credentials.
Thank you for clarifying the notion of “peer” and its ever-expanding conceptualization in a digital world. I wonder though, if there should still be limitations imposed on this idea. Yes, a closed, fairly isolated peer group seems to inhibit the growth of scholarship and understanding while limiting it to a specific worldview, or as this paragraph suggests, perpetuating singular opinions, but where do we draw the line in terms of who is “qualified” to be considered a peer reviewer?
This is a concern for me as well. It would be hard to draw that line, especially in a digital space. Though I do think it is a good thing to have the world of “peer review” expanded beyond a select group of people, it could pose a challenge to maintain quality control on the reviews that are going on.
Recommendations for Technological Systems (13 comments)
I think the idea of a “single sign-in” option would be excellent, especially for systems like CP, allowing a user’s contributions to be all linked to a single ID, ideally across hosts.
It seems that paragraphs 5 & 6 have some indenting issues causing CP troubles.
Printing: one of the most common complaints I’ve gotten about CP for my project is that people still like to read manuscripts on paper. I think some people would be more amenable to participating in online open review if they could print the manuscript cleanly, and then add comments on the screen. (I know, we shouldn’t enable such behavior, but change is hard!)
Should you mention Anvil Academic as a potential collaborator here?
It might also be useful to have some kind of track-changes feature that allows people to make minor corrections easily, rather than talking about them in a comment.
I had thought about the reviewer selection recommendation. How do you allow the editor to select a specific pool of reviewers without reinscribing the same kind of walls (academic/non-academic) that open peer review is trying to contest? It seems that a feature like this would make it all-too-easy to ignore specific comments simply due to who made them.
Wouldn’t there need to be some centralization or standardization here? i.e. like MLA providing the institutional structure for such a consortium?
I’m not sure what the authors have in mind for (d) Reputation and designing a system that assesses “the quality of contribution of individual reviewers.” My preference is to judge quality the old-fashioned way: read the online comments and decide whether the reviewer adds valuable insight and constructive criticism, or the opposite. Please avoid creating anything resembling a “Reviewer-O-Metric.”
Jason, the ability for authors to “see” their full essay with ALL comments in a single printout (or to preserve it in a PDF) was one of the most frequent requests that we heard from contributors to Writing History in the Digital Age. And I was stunned that CommentPress (as of Fall 2011) couldn’t do it. We eventually created some Print Style Sheet code to deliver this output, which Christian Wach refined as a new feature in CommentPress v3.3.2 in June 2012.
Here’s a much more basic workflow issue confronting open peer review & publication. Most humanities authors compose their work in Microsoft Word, which has to be converted (sometimes with Chicago-style footnotes) to WordPress or a similar format for public commentary. But when it’s time to revise, several authors prefer to work in their more-familiar Word format. Also, several publishers insist that submissions be delivered to them in Word, too. So the biggest workflow obstacle we had in Writing History in the Digital Age was Word-to-WordPress-to-Word. But we figured out some tricks to make it work.
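Jack doesn’t say which tricks they used, but a converter such as pandoc can automate much of the round trip. A minimal sketch, assuming pandoc is installed – file names are placeholders, and this is not the actual Writing History workflow:

```python
import subprocess

def docx_to_html(docx_path, html_path):
    """Word manuscript -> HTML that can be pasted into WordPress."""
    subprocess.run(["pandoc", docx_path, "-f", "docx", "-t", "html",
                    "-o", html_path], check=True)

def html_to_docx(html_path, docx_path):
    """Reverse direction, for authors (and publishers) who want Word."""
    subprocess.run(["pandoc", html_path, "-f", "html", "-t", "docx",
                    "-o", docx_path], check=True)

docx_to_html("chapter.docx", "chapter.html")
```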
Although, Jack, a number of times when reading your comments on this document, I’ve wanted to say “great point!” – a +1 button or the like is a simple way to affirm contributions, letting commenters know that people are reading & appreciating their points.
Point taken, Jason. +1
Who Else Is Exploring These Issues? (8 comments)
If one reads beyond the executive summary, the Harley and Acord 2011 publication (http://escholarship.org/uc/item/1xv148c8) makes clear how peer review is a multi-faceted activity and every scholar is bound to its many guises. We also note the great expense peer review, in all of its forms, places on the academy and publishers. We did in fact do a review of open peer review on pages 45-48 in the 2011 publication cited above. This section was embedded in a larger chapter on reviewing new models (in a variety of disciplines) that attempt to modify or upend traditional peer review processes, and provide trusted filters to an ever expanding literature in all fields. We noted there and elsewhere a number of reasons we think open peer review (either pre-publication or “open commentary post-publication,” such as PLoS ONE) will not be embraced by the majority of scholars. Most importantly, we do not think open peer review has even a small chance of “lightening the burden of peer review under which scholars labor.” Just the opposite. We think it will add an immense burden, and will be eschewed by most scholars who are already overwhelmed with requests for reviewing (be it for publication, advancement dossiers, grad student work, grant proposals, etc.) and are looking to more filters, not fewer, for the avalanche of literature they must keep up with. Peer reviewed publications with prestigious imprimaturs offer those filters for most.

We also emphasize the potential for increased editorial costs to publishers who must manage and sift the open commentary. We suspect this might be particularly acute in the humanities. We would argue that issues of time and money cannot be ignored given the economic complexities of the current scholarly communication landscape (an additional theme in the 2011 paper). That being said, open peer review of the sort demonstrated at MediaCommons may succeed in tightly-knit communities with affinities to new media, and where authors can invest time into raising the visibility of their work.
Clarification: the postmedieval Forum pieces discuss not the postmedieval experiment but a range of responses from others involved in such work, including contributions from Katherine Rowe and from myself on our two SQ open peer reviews.

One thing missing nearly entirely from all the reflections listed here is the perspective of authors and commenters. The recent conclusion to Writing History in the Digital Age does offer some correction to that, but the voices of contributors, rather than editors, are still largely unheard.
Can I leave a little bit of self-promotion? Brian Croxall and I delivered a keynote at the University of Florida last April titled “Theses on the Open Humanities,” which also discusses these issues. We archived the talk, our slides, and links to videos here: http://www.rogerwhitson.net/?p=1693
Because credentialing is such an important aspect of peer review, I wonder if it might be useful to mention Jason Priem et al.’s work on http://altmetrics.org/ alongside some of these other projects.
Frankly, your “open” process does not seem very open!
I have written extensively about open peer review, as has my colleague, Richard Smith, former editor of BMJ. See:
http://www.jopm.org/opinion/commentary/2009/10/21/reputation-systems-a-new-vision-for-publishing-and-peer-review/
And
http://e-patients.net/archives/2010/08/a-troubled-trifecta-peer-review-academia-tenure.html
I would like to fortify the link between open access publishing and public peer review. I think this combination offers great opportunities as it makes results of research accessible for everyone and enhances the transparency of the process of peer review.
And that is the reason that incentives need to be built into the system, such as Kathleen Fitzpatrick discusses in “Planned Obsolescence.” If authors cannot submit an article to a journal unless they have built up some credit through reviewing, that is one very strong incentive, for example. If there is a way of measuring the utility of reviews and then making this a part of a scholar’s portfolio of academic achievements, that is another kind of incentive. Without such incentives, you are absolutely right that constraints of time and opportunity costs will keep participation rates low.
General Comments (6 comments)
Thank you to the authors for providing a preview copy before the AAUP meetings. We would like to correct the citation for our 2011 piece, which spends a fair amount of time defining the many functions of peer review: Harley D and Acord SK (2011) Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future. University of California, Berkeley: Center for Studies in Higher Education. Available at: http://escholarship.org/uc/item/1xv148c8
Overall comment: great work! This is an excellent document that will provide great resources for those of us doing open review, and hopefully inspire those who are not to rethink their assumptions and practices. Thanks for all of the time that went into this, and for inviting our feedback to make it stronger!
One other issue with CommentPress: once you submit a comment, you can’t edit it.
First, let me congratulate the authors for creating such a resourceful document on open peer review and practicing what they preach by placing it online for public commentary. If you’ve never done this (e.g., the vast majority of academics), it’s harder than it looks. After re-reading key portions of the text and reviewing other readers’ remarks, my general comments here respond to five broad questions posed by the authors in their Request for Feedback.

1) Clarity of purpose (Are our intentions for the document clear? Does it fulfill those promises?) Perhaps this is a characteristic of committee-driven documents, particularly those that seek to satisfy a wide range of members’ opinions on a given topic, but I had difficulty determining the primary purpose of this report. The top half of the executive summary tells us that “The overall objective of these meetings was to help develop a set of community protocols and technical specifications that would help systematize open peer review and ensure that it met both academic expectations for rigor and the digital humanities’ embrace of the openness made possible by social networks and other digital platforms.” At first, it appears as if the goal of the report is to improve our model for open peer review, but that did not fit well in my mind with the bottom half of that page, which emphasized that “no single set of tools or rules can be imposed on open peer review,” which demands a decentralized “structured flexibility.” My confusion on this particular point led me to wonder about other purposes: Is the report intended to inform audiences about the “merits and pitfalls” of open peer review? Or to advocate for its broader adoption by scholars and publishers? Or to evaluate claims and evidence on whether open peer review produces better-quality scholarship than does traditional practice? In my reading, the report was very informational, but not strong in advocacy or evaluation. If I didn’t already believe in open peer review, this document may have intrigued me about the concept, but probably would not have persuaded me that the merits outweigh the pitfalls. Perhaps the lack of advocacy was the intent of the authors or the result of a committee-driven document. In any case, what is clearer to me now is the need for a careful evaluation of our growing examples of open peer review, and whether or not the evidence shows that “the crowd” produces better developmental editing than traditional practices alone.

2) Organizational concerns (Have we structured the document in a coherent and logical manner? Do sections flow and does the information within them seem to be in the right place?) Yes, the organization of the report makes sense to me, but its integration with this CommentPress website could be improved. For example, the “Request for Feedback” lists 5 general questions for readers, but this would be more effective if they were prominently featured and embedded directly into the “General Comments” section. See one way we tried to do this in Writing History in the Digital Age.

3) Nuance of argument/perspective (Are we missing key connections between open review and the humanities tradition, key human dynamics, or existing tools that might strengthen our recommendations?) Make a stronger connection between open peer review and speed toward publication.
4) Examples (Are there additional experiments in or explorations of open review that we should include in our consideration?)The text briefly mentions several examples of open peer review, but richer descriptions (or side-bars or vignettes) would clearly strengthen this report for readers who want to know more about what actually happens in practice. If advocacy is a goal of this report, then give it more consideration. 5) Applicability (Are there ways in which the ideas we’ve discussed here might affect your own work that we should consider?)If an author (or group of authors) read this document and wished to experiment with open peer review, do you want any “how-to” advice or resources to appear about practical next steps? Normally I would not raise this as a criteria for a report, but since “applicability” is on your list, I wonder whether this current draft fulfills it.
Great material from what I’ve seen so far. Would you consider making the draft report available for reading in a greater variety of open formats, such as a single straight HTML page, a PDF for printing, ePub, etc.? The web-based, section-by-section format is quite limiting in how I can access and read the doc, particularly on mobile (currently I’m viewing on an iPhone 3GS, iOS 5, Safari), where it is very difficult to use, particularly because of the floating right column covering most of the main text area. On mobile, I usually read articles only via Instapaper, Readability, Pocket, etc.
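To illustrate how little tooling this would take, here is a minimal Python sketch of one approach – stitching the section pages into a single HTML file, which a tool such as pandoc could then convert to PDF or ePub. The base URL, section slugs, and CSS selector below are hypothetical placeholders, not the site’s actual structure:

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical placeholders -- the real site's URL scheme and markup differ.
    BASE = "https://example.org/open-review-report"
    SECTION_SLUGS = ["executive-summary", "preface", "conclusion"]

    parts = []
    for slug in SECTION_SLUGS:
        resp = requests.get(f"{BASE}/{slug}/", timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Keep just the main text, dropping the comment sidebar that
        # crowds the screen on mobile (the selector is a guess).
        body = soup.find("div", class_="entry-content")
        if body is not None:
            parts.append(str(body))

    # Write one standalone page; pandoc could convert it to PDF or ePub.
    with open("open-review-draft.html", "w", encoding="utf-8") as f:
        f.write("<html><body>\n" + "\n".join(parts) + "\n</body></html>")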
Thanks, Tim.
Preface (5 comments)
Not a fan of the “sweetness and light” comment, as it seems to make light of disagreements. Perhaps highlight instead how, in contrast to any perception of DH as a community of homogeneous & like-minded “true believers,” differences in perspective and belief emerged – and that would strengthen this paragraph’s important point that open review thrives as a site of diversity & discussion, not head-nodding consensus.
You could simply argue that the process of making this document reflected the “diversity of opinion” that you are discussing here.
I can see how the term ‘sweetness and light’ is useful if you’re making an ironic pun on Matthew Arnold’s concept of culture. But maybe make the reference concrete to help reach a broader audience?
In addition to the comments above, perhaps add that one would not expect to reach unanimous agreement among a disparate group if they attempted to design a one-size-fits-all version of traditional blind review for all types of scholarly publications.
Might it be useful to mention why these particular policies and guidelines were selected?
Weathering the Current Climate for Open Review (4 comments)
While this is a larger question, acknowledging the different constraints outside of the U.S. tenure/promotion system might be useful. In my own conversations this year in Europe, I’ve heard from nearly every European scholar that they could never do the type of open review I’m doing with my book, given the strict metrics by which their universities & government overseers assess impact & publication benchmarks. Of course, the irony is that by publishing openly, non-American scholars could actually get their work more broadly read & increase their actual impact, but the systems are not set up for such ironies.
The other difficult reality here is that scholars interested in OA publication will probably have to do double the work of traditional scholars – since they will have to prove they have the chops for traditional publications AND experiment with OA.
How can champions of open peer review argue, specifically, that open peer review ‘adopts and enhances the best aspects of humanities-based scholarly practices’? Some examples would be helpful.
Yes, we definitely need more evidence. In the “Conclusions: What We Learned” section of Writing History in the Digital Age, we offered some descriptive data and illustrative excerpts from the 1,000+ comments posted on the site to make our case for how and why open peer review improved the quality of our writing and the intellectual coherence of the manuscript. By making drafts and commentary visible, we can trace how different author-reader exchanges influenced the final manuscript. Try doing that with traditional closed review.
Conclusion (4 comments)
In addition to “validate the open review process,” I think this document lays out ways to “strengthen open review practices,” which is just as important as validation, and arguably more so.
In my mind, there’s also the ability of academic discourse to be shared with professionals and members of the public in a more transparent way.
Continue to emphasize this key line throughout the report: “Moreover, we firmly endorse the notion that open review can and should facilitate the best kinds of humanities scholarship by virtue of its focus on the process of scholarly review as much as its end product.”
Moving from a “knowledge purveyor” to a “conversational steward” doesn’t sound like much of a promotion to me. Got a better phrase?
Executive Summary (3 comments)
So is the commentary that we’re writing now (summer 2012) a third level of open review?
But the kinds of open peer review mentioned here are still restricted, e.g., to those who attend the conference and the session at which a paper is presented and to those with whom an author chooses to share a working paper. The type of completely unrestricted, worldwide open peer review made possible by the Internet has no real precedent in the pre-Internet age.
Hi, Jack. Months later: Actually, this is that second (hypothetical as we wrote; actual, now) layer. The mix of past and future tense betrayed the in-betweenness of this document, I think. We’re in the process of revising now, back in Google Docs, and will release the final version once we’re done.
How to Comment (1 comment)
I have written extensively about open peer review, as has my colleague, Richard Smith, former editor of BMJ. See:
http://www.jopm.org/opinion/commentary/2009/10/21/reputation-systems-a-new-vision-for-publishing-and-peer-review/
and
http://e-patients.net/archives/2010/08/a-troubled-trifecta-peer-review-academia-tenure.html
Request for Feedback (2 comments)
[…] therefore welcome the broadest possible feedback, both on the white paper’s details as well as on the larger questions that it raises. Please join […]
Additional example to consider: Hewett and Robidoux’s Virtual Collaborative Writing in the Workplace (2010; IGI Global). Chapter review was open to all authors who contributed, and Hewett and Robidoux developed a chapter in which they discuss this dynamic and experience at length (Chapter 22).
Appendix 1: Open Review Software (1 comment)
This is the most viable strategy that I can imagine, unless someone has an extra $100k lying around.
Notes (1 comment)
Thanks for reading. The correct spelling of my last name is Dougherty.