Modern scholarly production runs on the idea that the output of scientists and other researchers should contribute directly to the rewards those people receive. Put simply, academic rewards should be distributed according to the merits of academic work. All in all, fair enough.
Yet current methods of assessing the output of academics – based overwhelmingly on the citation rates of standard journal publications – have been widely criticised as manifestly inequitable and inadequate. As Kent Anderson has asked,
Does scientific attention — as expressed through citations, media coverage, or practitioner knowledge — accrue to quality or reward the real contributors of breakthroughs? Or does attention in scientific publishing create a closed loop? …
One reality of the attention economy in science is the Matthew Effect, named after a Biblical passage and popularized in 1968 by Robert K. Merton. Basically, it’s the “rich get richer” premise that once you start winning, you keep accruing benefits.
This is a well-studied phenomenon for citations. Once an article gets cited, it keeps getting cited. Once an article gets overlooked, it can disappear forever.
Though many have argued that the flaw in this system lies in the method of measurement, I think the problem runs deeper: current measurements of academic output rest on a flawed metaphor. An alternative metaphor might run something like this:
The underlying goal of scholarly work – the goal probably acceptable both to scholarly workers and to those who pay the bills – is the creation and dissemination of tools better able to represent, explain, understand or shape our world. That is, the production of tools that are more useful for solving theoretical or applied problems. Following this analogy, we can say that over the centuries we have created an enormous diversity of such intellectual tools, each better or worse at solving those problems.
With this metaphor, it can be argued that scholarly work should be rewarded for its utility in contributing to solutions to theoretical and applied problems: we should reward the creation and dissemination of tools that are more useful.
Yet the current system for measuring academic work – and indeed many of its suggested replacements – concentrates only on the degree to which a tool is talked about, and even then only within specific environments. That is, Thomson Reuters’ Web of Knowledge, Elsevier’s Scopus and Google Scholar remain focused on the extent to which the representations of standard scholarly work (papers and books) are cited.
Remember, these representations of scholarly work – the journal articles we value so highly – are not the scholarly work itself; they are only a particular (and imperfect) means of communicating it.
This means that rather than developing tools suited to solving theoretical and applied problems, the scholarly world is oriented towards developing tools that are talked about. To continue the metaphor: though the hammer may be the most talked-about tool in the shed, it is utterly useless for cutting wood. If we are to reorient the rewards for scholarly production towards solving the important theoretical and applied problems of the world today, then we must reorient our measurement towards the utility of academic production.
The question is, of course, how?
Here’s the thing: does Web 2.0, linking science producers, stakeholders, policy makers and consumers of science knowledge, offer a way to solve this problem? Does it offer a way to assess the utility of academic production?
Image by Flickr user mlhradio, used under a Creative Commons licence.