
credentialing, revisited

But the idea of texts and authors being “ranked” and “rated” within the system raises several important concerns, most notably about the quantification of assessment across the academy. Faculty in the humanities in particular are justifiably anxious about the degree to which accrediting bodies and the U.S. Department of Education are demanding empirical, often numerical, accounting of things like “student learning outcomes,” even in fields in which the learning itself isn’t empirically driven, but rather focused on interpretation and argument. Certainly we don’t want our own work to be subject to the same kinds of “bean-counting” principles, in which statistics overtake more nuanced understandings of significance; as Lindsay Waters suggests, the danger in assuming that all knowledge can be quantified is that “[e]mpiricism makes people slaves to what they can see and count” (9), and the values of the humanities are largely non-countable. Moreover, our colleagues in the sciences provide a cautionary tale: even in fields whose methods and evidence are largely empirically produced, concerns are growing about the reliance on citation indexes and impact factors as metrics of faculty achievement.[1.44] We certainly don’t want to suggest to tenure and promotion review committees that the data produced through a process of online peer-to-peer review is a more accurate evaluation of faculty performance simply because it contains numbers.

On the other hand, we’re already relying upon a system that is even more reductive than the kinds of metrics the web can provide; the results of the current system of peer review are a simple binary: either the article or monograph was published in a peer-reviewed venue or it was not. There is precious little nuance in such a mode of evaluation, little room for considering whether a text published in a non-traditional format has been important in its field, and little means of assessing the value of a scholar’s contributions outside of standardized modes of publishing. Network-based peer-to-peer review can provide certain kinds of information that help complicate this practice: quantitative measures, such as numbers of inbound links, comments, and citations, along with statistical analysis of community-based review practices, but also a wide range of qualitative, evaluative, interpretative commentary from the other authors and readers interacting with the texts we produce. No single measure can provide demonstrative proof of scholarly significance, but a range of such information, including both the numerical and the narrative, the empirical and the ephemeral, can help illuminate the wide variety of ways that texts interact with the community of scholars.
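As a purely illustrative aside, the contrast between a binary verdict and a layered record can be sketched in code. The short Python sketch below is hypothetical, not a description of any actual review platform; every name in it is invented. It simply shows a record that keeps the countable signals named above (inbound links, comments, citations) alongside the narrative commentary that numbers cannot capture:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical sketch only: invented names, no real review platform.
    @dataclass
    class ReviewRecord:
        title: str
        inbound_links: int = 0    # quantitative signals
        comment_count: int = 0
        citation_count: int = 0
        # Narrative peer assessments: the non-countable part of the record.
        commentary: List[str] = field(default_factory=list)

        def summary(self) -> str:
            # Report both kinds of evidence side by side, rather than
            # collapsing them into a single score or a yes/no verdict.
            numbers = (f"{self.inbound_links} links, "
                       f"{self.comment_count} comments, "
                       f"{self.citation_count} citations")
            notes = "; ".join(self.commentary) or "no narrative commentary yet"
            return f"{self.title}: {numbers} | {notes}"

    record = ReviewRecord(
        title="Sample article",
        inbound_links=14,
        comment_count=23,
        citation_count=5,
        commentary=["Reframes the field's central debate",
                    "Method adopted by two later studies"],
    )
    print(record.summary())

The only design choice doing any work here is that the commentary travels with the counts rather than being reduced to them.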

The question remains, of course, whether the various credentialing bodies that currently rely on peer review’s gatekeeping function will be satisfied with the kinds of information such a system can provide. This is the point at which I must fall back on polemic, and simply insist that they must: we must say to hiring committees, to tenure and promotion review bodies, and, most importantly, to ourselves, that the fact that ostensibly anonymous reviewers didn’t determine whether an article or monograph was worthy of publication shouldn’t matter. A system of peer-to-peer review won’t give us an easy binary criterion for determining “value,” but then, if we’re honest, it never has. It will, however, give us invaluable information about how a scholar is situated within her field, how her work has been received and used by her peers, and what kind of effect she is having on her field’s future. Moreover, we need to remind ourselves, as Cathy Davidson has pointed out, that the materials used in a tenure review are meant in some sense to be metonymic, standing in for the “promise” of all the future work that a scholar will do (“Research”). We currently reduce such “promise” to the existence of a certain quantity of texts; we need instead to shift our focus to active scholarly engagement, of the sort peer-to-peer review might help us produce. Requiring an up-or-down measurement of impact, promise, or engagement, or relying on computationally produced metrics, can never provide an adequate substitute for the real work that such credentialing bodies must do: reading and assessing the scholarship, and engaging with expert analysis of the relationship between the scholarship and the field.[1.45] It is in part our desire for shortcuts, for a clear and quantifiable set of benchmarks by which we can judge “quality” without having to do the labor ourselves, that has gotten the academy into its current predicament, in which the very systems of production on which it relies are crumbling. Until institutional assumptions about how scholarly work should be assessed change, and, more importantly, until we come to understand peer review as part of an ongoing conversation among scholars rather than a convenient means of determining “value” without all that inconvenient reading and discussion, the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early-twentieth-century model of scholarly production that simply no longer works.

  • [1.44] See, as only two among many possible citations, Seglen and Richard Smith. Don Brenneis has likewise drawn my attention to the grave concern in the UK about Chancellor Gordon Brown’s decision to replace the Research Assessment Exercise, which previously determined funding for British universities, with a very narrow set of metrics including citation indexes; see Alexandra Smith.
  • [1.45] Lindsay Waters: “Reading the papers themselves! How quaint! How medieval!” (20).
