Open Access News

News from the open access movement

Tuesday, June 03, 2008

More on OA and metrics

Here are two more OA-related articles from the special March issue of Ethics in Science and Environmental Politics devoted to "The use and misuse of bibliometric indices in evaluating scholarly performance":

  • Michael Taylor, Pandelis Perakakis, and Varvara Trachana, The siege of science.  (Thanks to Kevin Zelnio.)  Abstract: Science is in a state of siege. The traditional stage for scientific ideas through peer-reviewed academic journals has been hijacked by an overpriced journal monopoly. After a wave of mergers and take-overs, big business publishing houses now exercise economic control over access to knowledge and free scientific discourse. Their ‘all is number’ rationale, made possible and perpetuated by single-parameter bibliometric indices like the Impact Factor and the h-index, has led to a measurement of scientists, science and science communication with quality being reduced to quantity and with careers hanging in the balance of column totals. Other multi-parameter indices like the subscription-based Index Copernicus have not helped to resolve the situation. The patented and undisclosed black box algorithm of the Index Copernicus has just replaced one yardstick by another even less accessible one. Moreover, the academic as author, editor and/or reviewer, under intense competitive pressure, is forced to play the publishing game where such numbers rule, leading to frequent abuses of power. However, there are also deep paradoxes at the heart of this siege. Electronic software for producing camera-ready copy, LaTeX style files, the internet and technology mean that it has never been easier or cheaper to publish than it is today. Despite this, top journals are charging exorbitant prices for authors to publish and for readers to access their articles. Academic libraries are feeling the pinch the most and are being forced to cut journal subscriptions. Not surprisingly, scholars in droves are declaring their independence from commercial publishers and are moving to open access journals or are self-archiving their articles in public domain pre-print servers. That this movement is starting to hurt the big publishing houses is evidenced by their use of counter-tactics such as proprietary pre-print servers and pure propaganda in their attempts to guard against profit loss. Whether or not bibliometry will be an artefact in the future depends on the outcome of this battle. Here, we review the current status of this siege, how it arose and how it is likely to evolve.
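    (An aside, not part of the abstract: the h-index criticized above is simple to state — it is the largest number h such that an author has h papers with at least h citations each. A minimal sketch of that arithmetic, with made-up citation counts:)

    ```python
    def h_index(citations):
        """Largest h such that at least h of the papers have >= h citations each."""
        ranked = sorted(citations, reverse=True)  # most-cited paper first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the paper at this rank still clears the bar
            else:
                break
        return h

    # Hypothetical author with five papers cited 10, 8, 5, 4 and 3 times:
    print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with >= 4 citations)
    ```

    The single number illustrates the authors' complaint: it compresses an entire career into one column total, discarding field, venue, and the reasons the citations were made.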

  • Stevan Harnad, Validating research performance metrics against peer rankings.  Abstract: A rich and diverse set of potential bibliometric and scientometric predictors of research performance quality and importance is emerging today—from the classic metrics (publication counts, journal impact factors and individual article/author citation counts) to promising new online metrics such as download counts, hub/authority scores and growth/decay chronometrics. In and of themselves, however, metrics are circular: They need to be jointly tested and validated against what it is that they purport to measure and predict, with each metric weighted according to its contribution to their joint predictive power. The natural criterion against which to validate metrics is expert evaluation by peers; a unique opportunity to do this is offered by the 2008 UK Research Assessment Exercise, in which a full spectrum of metrics can be jointly tested, field by field, against peer rankings.

    From the body of the paper:

    ...It has now been demonstrated in over a dozen disciplines, systematically comparing articles published in the same journal and year, that the citation counts of articles that are made freely accessible to all users on the web (Open Access, OA) are on average twice as high as the citation counts of those that are not....

    Just as peer rankings and metrics can be used to mutually validate one another, so metrics can be used as incentives for providing OA, while OA itself (as it grows) enhances the predictive and directive power of metrics....
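    (Another aside: Harnad's joint-validation idea — fit a battery of metrics against peer rankings and weight each metric by its contribution to their joint predictive power — amounts to a multivariate regression. A minimal sketch with entirely fabricated data, where peer scores are constructed to depend mostly on citations; nothing here comes from the paper or the RAE:)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50  # hypothetical sample of 50 research units

    # Two candidate metrics (made-up counts):
    citations = rng.poisson(20, n).astype(float)
    downloads = rng.poisson(200, n).astype(float)

    # Assumed "ground truth": peers weight citations heavily, downloads lightly.
    peer_score = 0.8 * citations + 0.1 * downloads + rng.normal(0, 1, n)

    # Jointly fit both metrics against the peer ranking by least squares;
    # the fitted coefficients are the validated per-metric weights.
    X = np.column_stack([citations, downloads])
    weights, *_ = np.linalg.lstsq(X, peer_score, rcond=None)
    print(weights)  # recovers roughly (0.8, 0.1) for this toy data
    ```

    On real RAE data the battery would be far larger (downloads, hub/authority scores, chronometrics), and the fit would be run field by field, as the abstract proposes.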