Open Access News

News from the open access movement


Tuesday, January 20, 2009

PLoS ONE will offer more impact-related data on articles

Elie Dolgin, New impact metric, The Scientist, January 19, 2009.  Excerpt:

In an attempt to provide alternative metrics to the traditional journal impact factor, the open-access journal Public Library of Science ONE announced that it will release a slew of alternative impact data about individual articles in the coming months.

The new "articles-level metrics project" -- which will post usage data, page views, citations from Scopus and CrossRef, social networking links, press coverage, comments, and user ratings for each of PLoS ONE's thousands of articles -- was announced yesterday (Jan. 18) by Peter Binfield, the journal's managing editor, at the ScienceOnline'09 conference in Research Triangle Park, North Carolina.

"No one has any data other than [ISI] impact factors," Binfield told The Scientist. "Our idea is to throw up a bunch of metrics and see what people use."

From its inception at the end of 2006, PLoS ONE has eschewed the notion of impact factors. (It is not currently listed in the ISI Web of Science rankings.) Binfield argued that the traditional impact factor judges a journal's overall performance rather than assessing impact at the article level. The new scheme, however, is aimed at evaluating each article on its own merits, regardless of the other papers in the same journal, he said.

PLoS ONE doesn't plan to crunch the data itself, though. "We're not being arrogant enough to make our own metric," said Binfield. Rather, he hopes that the journal's readers will use the information to come to their own conclusions. "We're putting the data out there and letting the world figure it out."

Eventually, Binfield hopes that readers will be able to personalize how they view the data, and sort articles according to the metric of their choice....
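
To make the announced data concrete, here is a minimal sketch in Python of what a per-article metrics record might look like. The field names are my own illustrative guesses drawn from the list in the announcement, not an actual PLoS ONE schema.

    # A hypothetical per-article metrics record; field names are illustrative,
    # not an actual PLoS ONE schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ArticleMetrics:
        doi: str                      # identifies the article
        page_views: int = 0           # usage data
        scopus_citations: int = 0     # citations counted by Scopus
        crossref_citations: int = 0   # citations counted by CrossRef
        social_links: int = 0         # social networking links
        press_mentions: int = 0       # press coverage
        comments: int = 0             # reader comments on the article
        user_ratings: List[int] = field(default_factory=list)  # e.g., 1-5 stars

        def average_rating(self) -> float:
            """Mean user rating, or 0.0 when no ratings have been left."""
            if not self.user_ratings:
                return 0.0
            return sum(self.user_ratings) / len(self.user_ratings)

Sorting or filtering articles on any one of these fields is exactly the kind of reader-side personalization Binfield describes above.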

Comment.  Kudos to PLoS ONE.  This is an important decision and I hope that other journals (OA and toll-access alike) will follow suit. All academics have an interest in breaking the stranglehold of impact factors, undoing their pernicious effects on hiring, promotion, and funding, and working toward more nuanced impact measurements.  Because most OA journals are new, friends of OA have a special reason for undoing the perverse incentives created by impact factors to shun new journals as such, regardless of their quality.  Here's an excerpt from an article I wrote last September:

...If you've ever had to consider a candidate for hiring, promotion, or tenure, you know that it's much easier to tell whether she has published in high-impact or high-prestige journals than to tell whether her articles are actually good....

Impact factors (IFs) rose to prominence in part because they fulfilled the need for easy quantitative judgments and allowed non-experts to evaluate experts....

IFs measure journal citation impact, not article impact, not author impact, not journal quality, not article quality, and not author quality, but they seemed to provide a reasonable surrogate for a quality measurement in a world desperate for a reasonable surrogate.  Or they did until we realized that they can be distorted by self-citation and reciprocal citation, that some editors pressure authors to cite the journal, that review articles can boost IF without boosting research impact, that articles can be cited for their weaknesses as well as their strengths, that a given article is as likely to bring a journal's IF down as up, that IFs are only computed for a minority of journals, favoring those from North America and Europe, and that they are only computed for journals at least two years old, discriminating against new journals....

When we want to assess the quality of articles or people, and not the citation impact of journals, then we need measurements that are more nuanced, more focused on the salient variables, more fair to the variety of scholarly resources, more comprehensive, more timely, and with luck more automated and fully OA....

I'm never surprised when OA journals report high IFs, often higher than older and better-known journals in their fields.  This reflects the well-documented OA impact advantage.  I'm glad of the evidence that OA journals can play at this game and win.  I'm not saying that journals shouldn't care about their citation impact, or that IFs measure nothing.  I'm only saying that IFs don't measure quality and that universities should care more about quality, especially article quality and candidate quality, than journal citation impact....

I do want to increase submissions to OA journals, but the present argument has the importantly different goal of removing disincentives to submit to OA journals....
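
For readers who haven't seen the arithmetic spelled out: the standard ISI impact factor for a journal in year Y is the number of citations in Y to its items from the two previous years, divided by the number of citable items it published in those years. The formula itself shows why journals less than two years old are excluded, as the excerpt notes. Here's a small sketch (hypothetical numbers, not ISI's actual code):

    # Illustrative sketch of the standard two-year impact factor calculation;
    # the example figures are hypothetical, not real journal data.
    def two_year_impact_factor(citations_in_year: int,
                               items_prev_year: int,
                               items_two_years_ago: int) -> float:
        """IF for year Y = citations in Y to items from Y-1 and Y-2,
        divided by citable items published in Y-1 and Y-2."""
        citable = items_prev_year + items_two_years_ago
        if citable == 0:
            # A journal launched within the last two years has nothing in
            # the window, which is why new journals have no IF at all.
            raise ValueError("impact factor undefined: no items in window")
        return citations_in_year / citable

    # Example: 600 citations in 2008 to articles from 2006-2007, with
    # 150 + 150 citable articles published in those years -> IF of 2.0.
    print(two_year_impact_factor(600, 150, 150))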