Utility: Measuring the Quality of Science

Modern scholarly production runs on the idea that the output of scientists and other researchers should contribute directly to the rewards those people receive. Put simply, academic rewards should be distributed according to the merits of academic work. All in all, fair enough.

Yet current methods of assessing the output of academics – based overwhelmingly on the citation rates of standard journal publications – have been widely criticised as manifestly inequitable and inadequate. As Kent Anderson has asked,

Does scientific attention — as expressed through citations, media coverage, or practitioner knowledge — accrue to quality or reward the real contributors of breakthroughs? Or does attention in scientific publishing create a closed loop? …

One reality of the attention economy in science is the Matthew Effect, named after a Biblical passage and popularized in 1968 by Robert K. Merton. Basically, it’s the “rich get richer” premise that once you start winning, you keep accruing benefits.

This is a well-studied phenomenon for citations. Once an article gets cited, it keeps getting cited. Once an article gets overlooked, it can disappear forever.
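To make that dynamic concrete, here is a minimal, purely illustrative sketch of a “rich get richer” process (a toy preferential-attachment model of my own construction, not data from any real citation index): each new citation goes to a paper with probability proportional to the citations it has already received.

```python
import random

def simulate_citations(n_papers=100, n_citations=5000, seed=1):
    """Toy Matthew Effect: each new citation is awarded to a paper with
    probability proportional to (its current citation count + 1)."""
    random.seed(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [c + 1 for c in counts]  # +1 so uncited papers can still be picked
        winner = random.choices(range(n_papers), weights=weights)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
top_decile_share = sum(counts[:10]) / sum(counts)
print(f"Top 10% of papers collect {top_decile_share:.0%} of all citations")
```

Even though the papers in this toy model are identical in quality, early random luck compounds: a handful of papers end up dominating the citation counts while most languish near zero.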

Though many have argued that the flaw in this system lies in the method of measurement, I think the deeper problem is that current measurements of academic output rest on a flawed metaphor. A better metaphor can be presented something like this:

The underlying goal of scholarly work – the goal probably acceptable to both scholarly workers and those who pay the bills – can be said to be the creation and dissemination of tools better able to represent, explain, understand or shape our world. That is, the production of tools that are more useful for solving theoretical or applied problems. Following this analogy, we can say that over the centuries we have created an enormous diversity of such intellectual tools, each better or worse at solving our theoretical and applied problems.

With this metaphor, it can be argued that scholarly work should be rewarded for its utility in contributing to solutions to theoretical and applied problems: we should reward the creation and dissemination of tools that are more useful.

Yet the current system for measuring academic work – and indeed many of its suggested replacements – can capture only the degree to which a tool is talked about, and even then only in specific environments. That is, Thomson Reuters’ Web of Knowledge, Elsevier’s Scopus and Google Scholar remain focused on the extent to which the representations of standard scholarly work (papers and books) are cited.

Remember, these representations of scholarly work – the journal articles we value so highly – are not the scholarly work itself; they are only a particular (and imperfect) means of communicating the actual scholarly work.

This means that rather than developing tools appropriate to solving theoretical and applied problems, the scholarly world is oriented towards developing tools that are talked about. To continue the metaphor described above, though the hammer may be the most talked about tool in the shed, it is a device utterly useless for cutting wood. If we are to reorient the rewards for scholarly production towards solving the important theoretical and applied problems of the world today, then we must reorient our measurement towards the utility of academic production.

The question is, of course, how?

Here’s the thing: does Web 2.0, linking science producers, stakeholders, policy makers and consumers of science knowledge, offer a way to solve this problem? Does it offer a way to assess the utility of academic production?

Image by Flickr user mlhradio, used under a Creative Commons licence.

1 thought on “Utility: Measuring the Quality of Science”

  1. Having worked in both the physical sciences and, more recently, the social sciences, one of the things I have noticed is that journal articles in the social sciences seem to have many more citations than those in the physical sciences. This has led me to think that journal articles in the two fields have somewhat different roles.
    In the physical sciences, most of the intellectual effort and the hard graft take place in the laboratory, the field, the theoretician’s study or the computer laboratory. The journal article is then a straightforward report of what happened, the conclusions arising from the investigation, and recommendations for future work.
    From my limited reading in the social sciences, it seems that far more of the intellectual effort, the development of ideas, and the presentation of arguments appear in the journal article. The article itself is thus more than a simple report of external activities; it is an important part of the research itself. (Much the same seems to apply to theses in the two areas; simplifying wildly, reports versus research argumentation.)
    If one accepts this discipline-related distinction, then I would contend that journal articles in the physical sciences are indeed reports “oriented towards developing tools that are talked about”. We read and cite the articles in order to: provide a context; use, adapt and adopt new methods and experimental procedures; mine data and results relevant to one’s own work for comparison or extension; present new theoretical insights; use theoretical developments as the basis of an experimental investigation; or pick up on recommendations for future work. The amount of citation churn that the main posting seems to imply is comparatively small.
