
community-based filtering

One might see a relatively simple example of such a system in Philica, which bills itself as “the journal of everything.” Philica is an open publishing network, co-founded by British psychologists Ian Walker and Nigel Holt, which invites scholars from any field to post papers, which are then made freely available for reading and review by any interested user [see screenshot 1.4].
[Screenshot 1.4]

Philica describes itself as operating “like eBay for academics. When somebody reviews your article, the impact of that review depends on the reviewer’s own reviews. This means that the opinion of somebody whose work is highly regarded carries more weight than the opinion of somebody whose work is rated poorly” (“An Introduction to Using Philica”).[1.38] Account registration is open, though members are asked to declare their institutional affiliations if they have them, and encouraged to obtain “confirmation” of their status within the academy by sending the site administrators a letter on institutional letterhead, or a letter detailing appropriate credentials as an independent researcher. The site’s FAQ indicates that membership is in theory restricted to “fully-qualified academics,” though without confirmation, one could simply claim such a status, and thus the system makes an unconfirmed membership “much less useful than a confirmed membership, since (a) unconfirmed members’ reviews carry less weight than confirmed members’ reviews and (b) readers are less likely to trust research from unconfirmed authors. In other words, there’s not really much point joining if you do not go on to prove your status” (“Philica FAQs”). Reviewing articles published on Philica is open to registered, logged-in members, whether “confirmed” or not, though confirmed members’ reviews are noted with a check mark. Articles are evaluated by reviewers both quantitatively (rating “originality,” “importance,” and “overall quality” on a 1-to-7 scale) and qualitatively, via comments. Article authors each have a page that details their work on the site, including the number of articles and notes that they have published, the mean peer-review ratings their work has received, and the number of reviews and comments that the author has contributed to other work. The site notes that the author’s ratings “will change whenever a new review of this author’s work appears, as well as whenever somebody reviews the work of anybody who has reviewed” the work of the author in question.[1.39]
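
To make the mechanism concrete: the weighting Philica describes, in which a review counts for more when the reviewer’s own work has been highly rated, and for less when the reviewer is unconfirmed or unrated, can be sketched in a few lines of code. The Python below is a hypothetical illustration only, not Philica’s actual algorithm (which the site does not publish); the class names, the baseline weight for unrated reviewers, and the halved weight for unconfirmed members are assumptions made for the sake of the example.

    from dataclasses import dataclass, field
    from typing import Dict, List

    # Hypothetical sketch of a reputation-weighted review system in the spirit
    # of the one Philica describes. Names, weights, and formulas are
    # illustrative assumptions, not the site's published algorithm.

    SCALE_MIDPOINT = 4.0  # neutral point on the 1-to-7 rating scale


    @dataclass
    class Review:
        reviewer: str    # member id of the reviewer
        rating: float    # mean of "originality", "importance", "overall quality" (1-7)
        confirmed: bool  # whether the reviewer's academic status has been confirmed


    @dataclass
    class Author:
        reviews_received: List[Review] = field(default_factory=list)


    def reviewer_weight(reviewer_id: str, authors: Dict[str, Author]) -> float:
        """Weight a review by the mean rating the reviewer's own work has received."""
        own = authors.get(reviewer_id)
        ratings = [r.rating for r in own.reviews_received] if own else []
        if not ratings:
            return 0.25  # baseline weight for reviewers with no rated work (assumption)
        return (sum(ratings) / len(ratings)) / 7.0  # normalize the 1-7 scale to roughly 0-1


    def weighted_author_rating(author_id: str, authors: Dict[str, Author]) -> float:
        """Recompute an author's rating as a reputation-weighted mean.

        Because each weight depends on the reviewers' own ratings, this value
        shifts whenever somebody reviews the work of anyone who has reviewed
        this author, mirroring the recalculation Philica describes.
        """
        reviews = authors[author_id].reviews_received
        if not reviews:
            return SCALE_MIDPOINT
        total = weight_sum = 0.0
        for review in reviews:
            w = reviewer_weight(review.reviewer, authors)
            if not review.confirmed:
                w *= 0.5  # unconfirmed members' reviews carry less weight (assumption)
            total += w * review.rating
            weight_sum += w
        return total / weight_sum if weight_sum else SCALE_MIDPOINT


    if __name__ == "__main__":
        authors = {
            "walker": Author([Review("someone", 6.5, True)]),  # highly rated reviewer
            "newcomer": Author(),                               # no rated work yet
            "smith": Author([
                Review("walker", 6.0, True),     # counts heavily
                Review("newcomer", 2.0, False),  # counts lightly
            ]),
        }
        # Prints a rating much closer to 6.0 than to the unweighted mean of 4.0.
        print(round(weighted_author_rating("smith", authors), 2))

On this toy model, an article that receives a 6 from a highly rated, confirmed reviewer and a 2 from an unrated, unconfirmed one ends up scored well above the unweighted mean of 4, which is the behavior the “eBay for academics” analogy implies; and because the weights themselves depend on ratings that continue to accrue, every author’s score is subject to revision whenever somebody reviews the work of one of that author’s reviewers.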

While Philica’s system presents some compelling possibilities for the future of scholarly publishing, it nonetheless has a number of apparent shortcomings: though the articles uploaded to the site are reviewed, and reviews are weighted based on the assessed quality of the work of the reviewers, the quality of the reviews themselves isn’t assessed, and thus these reviews don’t count among the “work” used in determining the value of a reviewer’s comments. In part this is due to the fact that while the comments made by a particular reviewer are associated with one another, they are not associated with their authors by name, but are rather submitted anonymously. Each review entry page contains the following notice: “Unless you sign your review, which you are welcome to do if you wish, it will be anonymous to the author and to other Philica readers. Nevertheless, the administrators can see who you are if necessary so please be sure your review is not abusive.” Thus, Philica only opens the comments produced by peer review to public scrutiny; though reviewers are accountable to the site’s administrators, they are not directly accountable to the article’s authors, or to the network’s community as a whole. And while the reviewers’ own peer-review ratings affect the way the system weights the ratings they assign to others, the working of this algorithm remains partially hidden behind the veil of anonymity.

Further, as a “journal of everything,” Philica runs the risk of precisely the kind of overflow that makes Internet skeptics worry; if “everything” is published there, how will researchers find what they need — and will they, as Shatz suggested, be required to “trek through enormous amounts of junk before finding articles” that are at all “rewarding” (16)? Such concerns are well-founded, in this case, as work published on Philica is organized by discipline, but as of August 2009, only 27 such disciplinary categories exist on the site, with no further subdivisions, tags, or other metadata allowing the reader to find relevant material. The site thus suffers from a too-general mode of organization; the “humanities” as a whole, for instance, represents a single field on Philica. The result, however, has not been overflow but, if anything, underflow; only 164 articles or notes were published on Philica between March 2006 and August 2009, a mere 4 of which were in the humanities. Such a minuscule rate of participation, like that experienced in the Nature open review trial, could be taken to indicate a general resistance among academics to new publishing models — and yet, it’s hard to imagine that a traditional, closed review, print-based “journal of everything” would fare much better. The purpose of scholarly publishing, after all, is not merely making the results of research public, but making those results public to the appropriate community. Because Philica has no particular disciplinary focus, it seems to have been unable to build a community.

The development and maintenance of such a community is key to the scholarly publishing network of the future, and in particular to its implementation of peer-to-peer review, because while the post-publication filtering mechanisms that such a system will require may in part be computational, they cannot be wholly automated; the individual intelligences and interests of the members of this social network are the bedrock of community-based filtering. One might, for instance, look at Chris Anderson’s explanation for the success of MySpace as a promoter of exclusively “Long Tail” music, where other such networks like MP3.com had failed: “The answer at this point appears to be that it is a very effective combination of community and content. The strong social ties between the tens of millions of fans there help guide them to obscure music that they wouldn’t otherwise find, while the content gives them a reason to keep visiting” (Long Tail 149). The absence of the kind of community that MySpace fosters — a user base committed to the site as a means of self-expression, whose relationships with one another are built precisely around that self-expression — prevented MP3.com from becoming a flourishing site for the exploration of new and obscure music, precisely because the absence of social ties among users left them no way of assessing the recommendations others were making. And the more niche-based the mode of cultural communication becomes — the further down the “tail” that communication moves — the more important such community-based knowledge becomes.

Given the case of Philica, in fact, one might begin to speculate that, in electronic scholarly publishing, the community is necessary not just to the post-publication review and filtering process but to the production of content itself. Scholarly communication, generally speaking, is all tail, aimed at a comparatively small niche group of similarly focused readers; for that reason, the technologies of the internet seem particularly well-positioned to enable those readers both to find and communicate with one another, as well as to set community-based standards for the evaluation of their work. Only once it is clear to scholars that the standards of this community are their standards — that this is a community to which they belong — will many of them venture to contribute their work to it. In order for such a community to be established, however, its individual members must know one another, at least by reputation, and thus the process of review — the setting of standards by the community — must itself be open to continual review.

It seems self-evident: the more open such systems are, the more debate they foster, and the more communal value is placed on participating in them, the better the material they produce can be. However, all of these aspects of the community must be carefully nurtured in order for it to avoid turning into what Cass Sunstein describes, in Infotopia, as a deliberative cocoon, in which small groups of the like-minded reinforce one another’s biases and produce unspoken social pressures toward conformity with what appears to be majority opinion, resulting in a mode of “group-think” that propagates errors rather than correcting them. Sunstein points out that new internet-based knowledge aggregation systems such as wikis, open source software, and blogs “offer distinct models for how groups, large or small, might gather information and interact on the Internet. They provide important supplements to, or substitutes for, ordinary deliberation” (Sunstein 148), enabling correctives for the errors that small groups of decision-makers can produce. Using such new technologies for purposes of deliberation, however, requires that all members of the network be equally empowered — and in fact, equally compelled — to contribute their ideas and voice their dissent, lest the network fall prey to a new mode of self-reinforcing group-think.

The significance of dissent in Sunstein’s assessment of networked discussion might usefully remind us of Bill Readings’s model of the University of Thought, the mode of rethinking the contemporary academic institution that in his assessment stands the only possible chance of escaping the corporatizing effects of the University of Excellence. As he points out, despite the equal emptiness of the two signifiers, “Thought does not function as an answer but as a question. Excellence works because no one has to ask what it means. Thought demands that we ask what it means, because its status as mere name — radically detached from truth — enforces that question” (159-60). Moreover, Thought provides a means of ethical engagement with our community, one that crucially functions not by creating and enforcing consensus, but by encouraging and dwelling within dissensus. I want to argue that while the current model of closed, pre-publication review enacts the most oppressive aspects of the consensus model of community — a forced agreement about standards, an assumption that we’re all speaking the same language, an ability to hide behind the notion of excellence — open peer review provides space for Readings’s dissensus. Such an open system of discussion and dissent has the potential to allow many more ideas into circulation, no doubt many of which we won’t agree with, and some of which we’ll find downright appalling. But only in allowing those ideas to be aired and argued against can we really obtain the openness in scholarly thought we claim to value.

Readings’s understanding of the ethical obligation we bear toward one another, which primarily manifests as an obligation to listen, must be extended not just to the scene of teaching or to the faculty meeting, but also to the scene of publishing; we need to think about our work as reviewers as part of an ongoing process of “thinking together,” one necessary for our full participation in the scholarly community. In this sense, the key to avoiding the group-think Sunstein fears is not heightened intellectual individualism — separating oneself from the network — but paradoxically placing the advancement of the community’s knowledge ahead of one’s own personal advancement. Sunstein presents evidence that the propagation of errors is “far less likely when each individual knows that she has nothing to gain from a correct individual decision and everything to gain from a correct group decision” (205). Such a turn toward a communally distributed mode of knowledge production, however, will not come easily in a culture in which credentialing processes focus precisely on individual achievement. I’ll turn my attention more fully to the issue of collaboration and community in chapter 2, but for now will simply suggest that the success of a community-based review system will hinge on the evaluation of one’s contributions to reviewing being considered as important as, if not even more important than, one’s own individual projects. Genuine peer-to-peer review will require prioritizing members’ work on behalf of the community within the community’s reward structures.

  • [1.38] The comparison to eBay is perhaps a bit unfortunate, resulting in faintly crass images of intellectual commerce, but there’s something apt in the relationship as well, suggesting that electronic scholarly publishing might function as a locus for the exchange of ideas in which producers and consumers can find one another without the need for an intermediary. Lindsay Waters, however, argues that the marketplace “is not a concept that should be considered the ultimate framework for the free play of ideas” (9). See also Shatz for a more elaborated argument against the marketplace metaphor.
  • [1.39] See, for instance, “Dr. Ian Walker’s Philica Details.”
