The bigger picture: where is the all-disciplines conference for Australia?

What is the role of big interdisciplinary conferences? What do events like the recently held annual meeting of the American Association for the Advancement of Science (AAAS) and the Euroscience Open Forum (ESOF) provide?

If they are of value, should we reinvigorate something like this in Australia?

A recent Nature editorial discussed general science conferences, arguing that they allow researchers from various backgrounds – as well as policy-makers, stakeholders and citizens – to come together to discuss issues of broad public import that cut across disciplinary boundaries. Indeed, the editorial suggests that “for researchers wishing to enhance their awareness of the bigger issues and of other disciplines,” these meetings are a gift.

Australia doesn’t presently have such a forum. Yet it used to, and it should have one again.


Utility: Measuring the Quality of Science

Modern scholarly production runs on the idea that the output of scientists and other researchers should contribute directly to the rewards those people receive. Put simply, academic rewards should be distributed according to the merits of academic work. All in all, fair enough.

Yet current methods of assessing the output of academics – based overwhelmingly on the citation rates of standard journal publications – have been widely criticised as manifestly inequitable and inadequate. As Kent Anderson has asked,

Does scientific attention — as expressed through citations, media coverage, or practitioner knowledge — accrue to quality or reward the real contributors of breakthroughs? Or does attention in scientific publishing create a closed loop? …

One reality of the attention economy in science is the Matthew Effect, named after a Biblical passage and popularized in 1968 by Robert K. Merton. Basically, it’s the “rich get richer” premise that once you start winning, you keep accruing benefits.

This is a well-studied phenomenon for citations. Once an article gets cited, it keeps getting cited. Once an article gets overlooked, it can disappear forever.
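This “rich get richer” dynamic can be illustrated with a toy preferential-attachment simulation (a sketch for illustration only, not a model from the editorial or from Merton; the function name and parameters here are invented):

```python
import random

def simulate_citations(n_papers=100, n_citations=1000, seed=42):
    """Toy simulation of the Matthew Effect in citations.

    Each new citation goes to a paper chosen with probability
    proportional to (1 + its current citation count), so papers
    that get cited early keep attracting citations.
    """
    random.seed(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        # Weight each paper by 1 + its citation count: early winners
        # claim a growing share of all later citations.
        weights = [1 + c for c in counts]
        paper = random.choices(range(n_papers), weights=weights)[0]
        counts[paper] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
top10_share = sum(counts[:10]) / sum(counts)
print(f"Top 10 papers hold {top10_share:.0%} of all citations")
```

Even with every paper starting equal, the top handful ends up with far more than its proportional share – while papers that miss out early drift toward zero.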

Though many have argued that the flaw in this system lies in the method of measurement, I think the problem runs deeper: current measurements of academic output rest on a flawed metaphor.

Considering the social network for science innovation

There is now a lot of evidence that social networks (the connections you have with your family, friends and colleagues, and more broadly with society at large) and social capital (the extent to which trust is embedded in those connections) have an enormous impact on individual, organisational and societal outcomes.

It has become increasingly common to apply knowledge gained from the study of social networks and social capital to the science innovation landscape. This allows us to study the flows of science knowledge within scientific circles, between different disciplines, between scientists and policy-makers, and between science and the public.
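One simple way to look at such flows is to count an actor’s ties that cross sphere boundaries – a crude proxy for brokerage between science, policy and the public. A minimal sketch (all actor names, spheres and ties below are hypothetical, invented purely for illustration):

```python
from collections import defaultdict

# Hypothetical actors in a science-innovation network, each tagged
# with the sphere they belong to.
spheres = {
    "lab_A": "science", "lab_B": "science",
    "dept_policy": "policy", "minister": "policy",
    "newspaper": "public", "museum": "public",
}

# Knowledge-sharing ties between actors (undirected).
ties = [
    ("lab_A", "lab_B"), ("lab_A", "dept_policy"),
    ("dept_policy", "minister"), ("lab_B", "newspaper"),
    ("newspaper", "museum"), ("lab_A", "museum"),
]

# Count each actor's "bridging" ties: connections that cross a
# sphere boundary, a rough indicator of brokerage social capital.
bridging = defaultdict(int)
for a, b in ties:
    if spheres[a] != spheres[b]:
        bridging[a] += 1
        bridging[b] += 1

broker = max(bridging, key=bridging.get)
print(f"{broker} bridges the most spheres ({bridging[broker]} ties)")
```

In this toy network, the actor with the most cross-sphere ties is the one best placed to move knowledge between scientific circles, policy-makers and the public – the kind of structural position that social network analysis is designed to reveal.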


Science and Engineering Indicators 2010

The US National Science Board has just released Science and Engineering Indicators 2010, reporting broad trends in the global science and technology landscape to 2008 (pre-Global Financial Crisis).

In general, the report shows a continuation of the pattern of the last two decades: an increase in the role of research in developed and developing economies, and the emergence of Asia (and particularly China) within this landscape. See over the fold for a few selected graphs.
