General Comments
I am not familiar with the open review trial from 2006. Does a lack of perceived benefit constitute failure? It seems that if a topic made it to the review process, several people must have deemed it worthy of critique. Perhaps there was a lack of publicity, or a lack of familiarity with a new approach to reviewing?
I disagree that open review is already being practiced, because conferences and workshops typically tend to attract, and be attended by, academic peers from similar fields. The beauty of open review is that people from all disciplines can offer their thoughts. Great improvements can result when scientists offer their unique insights to humanities scholars (and vice versa). This sort of academic intermingling has been stymied by the requirement of physical attendance at a conference or workshop. But in a digital environment, the process is easier.
I love the notion of open peer review. Open review seems to encourage a reinvigorated collegial, scholarly communication, as opposed to blind reviews, which are often done by professors for payment if the journal is large enough. I do see areas for concern in this process, however. I am not sure that all professors and reviewers in our current capitalistic, arguably competitive milieu would operate fairly within a system based on something akin to generalized reciprocity. Are there safety nets, legal or otherwise, for such situations?
I agree that there is considerable benefit to speeding up the publication process; not only does the scholar benefit from a faster turnaround time, but the community benefits from access to more current information.
This seems to be a concern for me as well. It would be hard to draw that line, especially in a digital space. Though I do think it is a good thing to have the world of “peer review” expanded beyond a select group of people, it could be a challenge to maintain quality control over the reviews taking place.
I have read Planned Obsolescence and plan on actively peer reviewing in the future. This was helpful to read over and I am testing to see if my temporary password will work before I get my feet wet.
Thank you for clarifying the notion of “peer” and its ever-expanding conceptualization in a digital world. I wonder though, if there should still be limitations imposed on this idea. Yes, a closed, fairly isolated peer group seems to inhibit the growth of scholarship and understanding while limiting it to a specific worldview, or as this paragraph suggests, perpetuating singular opinions, but where do we draw the line in terms of who is “qualified” to be considered a peer reviewer?
This raises an interesting point. Since there are more diverse ways of publishing, do we need a review process for both the text and the means that people use to find that text? There is a ton of material available online; some is valuable and a lot of it is not. What if a valuable work does not show up in popular searches? I would say this is a problem, and people should also be reviewing the discovery process along with the content of the digital publication itself.
I agree with Sandy on the question of revisions by the authors. Additionally, did the different review processes make a difference with regards to how likely the authors were to make revisions?
It sounds like there is already something of a framework for supporting open review. I wonder if there is a way to collect a sample of the feedback mentioned in this paragraph to analyze how it can be just as useful as traditional peer review.
Source: https://mcpress.media-commons.org/open-review/general-comments/
Thank you to the authors for providing a preview copy before the AAUP meetings. We would like to correct the citation for our 2011 piece, which spends a fair amount of time defining the many functions of peer review: Harley D and Acord SK (2011) Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future. University of California, Berkeley: Center for Studies in Higher Education. Available at: http://escholarship.org/uc/item/1xv148c8
Overall comment: great work! This is an excellent document that will provide great resources for those of us doing open review, and hopefully inspire those who are not to rethink their assumptions and practices. Thanks for all of the time that went into this, and for inviting our feedback to make it stronger!
First, let me congratulate the authors for creating such a resourceful document on open peer review and practicing what they preach by placing it online for public commentary. If you’ve never done this (as is true of the vast majority of academics), it’s harder than it looks. After re-reading key portions of the text and reviewing other readers’ remarks, my general comments here respond to five broad questions posed by the authors in their <a href="http://mcpress.media-commons.org/open-review/request-for-feedback/">Request for Feedback</a>.
1) Clarity of purpose (Are our intentions for the document clear? Does it fulfill those promises?) Perhaps this is a characteristic of committee-driven documents, particularly those that seek to satisfy a wide range of members’ opinions on a given topic, but I had difficulty determining the primary purpose of this report. The top half of the <a href="http://mcpress.media-commons.org/open-review/executive-summary/">executive summary</a> tells us that “The overall objective of these meetings was to help develop a set of community protocols and technical specifications that would help systematize open peer review and ensure that it met both academic expectations for rigor and the digital humanities’ embrace of the openness made possible by social networks and other digital platforms.” At first, it appears as if the goal of the report is to improve our model for open peer review, but that did not fit well in my mind with the bottom half of that page, which emphasized that “no single set of tools or rules can be imposed on open peer review,” which demands a decentralized “structured flexibility.” My confusion on this particular point led me to wonder about other purposes: Is the report intended to <em>inform</em> audiences about the “merits and pitfalls” of open peer review? Or to <em>advocate</em> for its broader adoption by scholars and publishers? Or to <em>evaluate</em> claims and evidence on whether open peer review produces better-quality scholarship than traditional practice does? In my reading, the report was very informational, but not strong in advocacy or evaluation. If I didn’t already believe in open peer review, this document might have intrigued me about the concept, but probably would not have persuaded me that the merits outweigh the pitfalls. Perhaps the lack of advocacy was the intent of the authors, or the result of a committee-driven document. In any case, what is clearer to me now is the need for a careful evaluation of our growing examples of open peer review, and whether or not the evidence shows that “the crowd” produces better developmental editing than traditional practices alone.
2) Organizational concerns (Have we structured the document in a coherent and logical manner? Do sections flow and does the information within them seem to be in the right place?) Yes, the organization of the report makes sense to me, but its integration with this CommentPress website could be improved. For example, the “Request for Feedback” lists five general questions for readers, but this would be more effective if they were prominently featured and embedded directly into the “General Comments” section. See one way we tried to do this in <a href="http://writinghistory.trincoll.edu">Writing History in the Digital Age</a>.
3) Nuance of argument/perspective (Are we missing key connections between open review and the humanities tradition, key human dynamics, or existing tools that might strengthen our recommendations?) Make a stronger connection between open peer review and speed toward publication.
4) Examples (Are there additional experiments in or explorations of open review that we should include in our consideration?) The text briefly mentions several examples of open peer review, but richer descriptions (or side-bars or vignettes) would clearly strengthen this report for readers who want to know more about what actually happens in practice. If advocacy is a goal of this report, then give it more consideration.
5) Applicability (Are there ways in which the ideas we’ve discussed here might affect your own work that we should consider?) If an author (or group of authors) read this document and wished to experiment with open peer review, is there any “how-to” advice here about practical next steps? Normally I would not raise this as a criterion for a report, but since “applicability” is on your list, I wonder whether this current draft fulfills it.
Great material from what I’ve seen so far. Would you consider making the draft report available in a wider variety of open formats, such as a single straight HTML page, a PDF for printing, EPUB, etc.? The web-based, section-by-section format quite limits how I can access and read the document, particularly on mobile (currently I’m viewing on an iPhone 3GS, iOS 5, Safari), where it is very difficult to use, largely because the floating right column covers most of the main text area. On mobile, I usually read articles only via Instapaper, Readability, Pocket, etc.
thanks, Tim.