Open Access News

News from the open access movement

Saturday, November 22, 2008

More on the quality of OA journals

Richard Poynder, Open Access: The question of quality, Open and Shut?  November 22, 2008.  Excerpt:

Does Open Access (OA) publishing mean having to accept lower-quality peer-reviewed journals, as some claim, or can we expect OA to improve quality? How good are the current tools used to measure the quality of research papers in any case, and could OA help develop new ones?

I started puzzling over the question of quality after a professor of chemistry at the University of Houston, Eric Bittner, posted a comment on Open & Shut in October....[H]is main point seemed to be that OA journals are inevitably of lower quality than traditional subscription journals.

With OA advocates a little concerned about the activities of some of the new publishers, and about the quality of their journals, perhaps we need to ask: could Bittner be right? ...

Like most researchers, Bittner appears to believe that the best tool for measuring the quality of published research is the so-called journal impact factor (IF, or JIF). So apparently does his department. Explained Bittner:

"[O]ur department scales the number of articles I publish by the impact factor of the journal. So, there is little incentive for me to publish in the latest 'Open Access' journal announced by some small publishing house."

What Bittner didn't add, of course, is that some OA journals have an IF equal to, or better than, that of many prestigious subscription journals....

Another point to bear in mind is that many OA journals are relatively new, so they may not yet have had sufficient time to acquire the prestige they deserve, or an IF ranking that accurately reflects their quality. There is, after all, an inevitable time lag between the launch of a new journal and the point at which it can expect to acquire an impact factor score, and the prestige that goes with it....

In order to properly assess Bittner's claim we also need to ask how accurate impact factors are, and what they tell us about the quality of a journal....

[W]hen Bittner's department scales his articles against the IF of the journals in which he has published, it is conflating his personal contribution to science with the aggregate contribution that he and all the authors published alongside him have made.

In reality, therefore, Bittner is being rewarded for having his papers published in prestigious journals, not for convincing fellow researchers that his work is sufficiently important that they should cite it. Of course, it is possible that his papers have attracted more citations than those of the authors published alongside him. It is equally possible, however, that he has received fewer citations, or even no citations at all....
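A small numerical illustration of this point, with invented citation counts: the IF is an average over all of a journal's articles, so an individual author's credit can bear no relation to how his own paper was actually cited.

```python
# Invented per-article citation counts for one journal's two-year JIF window.
journal_articles = [0, 0, 1, 2, 4, 5, 40, 68]

jif = sum(journal_articles) / len(journal_articles)
print(f"journal IF: {jif:.1f}")   # 15.0 -- the average is driven by two outliers

# An author rewarded by the IF is credited with this average even if his own
# article is one of the uncited ones.
my_citations = 0
print(f"my article: {my_citations} citations, credited as {jif:.1f}")
```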

Suber sums it up in this way: "IFs measure journal citation impact, not article impact, not author impact, not journal quality, not article quality, and not author quality, but they seemed to provide a reasonable surrogate for a quality measurement in a world desperate for a reasonable surrogate."

Or at least they did, he adds, "until we realised that they can be distorted by self-citation and reciprocal citation, that some editors pressure authors to cite the journal, that review articles can boost IF without boosting research impact, that articles can be cited for their weaknesses as well as their strengths, that a given article is as likely to bring a journal's IF down as up, that IFs are only computed for a minority of journals, favouring those from North America and Europe, and that they are only computed for journals at least two years old, discriminating against new journals." ...

For OA journals this is bad news, since it leaves them vulnerable to the kind of criticism levelled at them by Bittner.

However, the good news is that, in the age of the Web, new tools for measuring research quality can be developed. These are mainly article-based rather than journal-based, and they will provide a far more accurate assessment of the contribution an individual researcher is making to his subject, and to his institution.

The Web, says OA advocate Stevan Harnad, will allow a whole new science of "Open Access Scientometrics" to develop. "In the Open Access era," he explains, "metrics are becoming far richer, more diverse, more transparent and more answerable than just the ISI JIF: author/article citations, author/article downloads, book citations, growth/decay metrics, co-citation metrics, hub/authority metrics, endogamy/exogamy metrics, semiometrics and much more. The days of the univariate JIF are already over." ...

For those still in doubt there are two other factors to consider. First, it is not necessary to wait until suitable OA journals emerge in your area before embracing OA. It is possible to publish a paper in a toll-access (TA) journal and then self-archive it in a subject-based or institutional repository (a practice referred to as "Green OA")....

Second, whether they choose to self-archive or to publish in an OA journal ("Gold OA"), researchers can expect to benefit from the so-called "citation advantage". This refers to the phenomenon in which papers made OA are cited more frequently than those hidden behind a subscription paywall....

But there is many a slip twixt the cup and the lip....[There is a] danger that the actions of a few OA publishers might yet demonstrate that OA journals do indeed publish lower quality research than TA journals. And unless the OA movement addresses this issue quickly it could find that the sceptical voices begin to grow in both volume and number. That is a topic I hope to examine at a later date....

PS:  In addition to the September 2008 article of mine which Richard quotes here (Thinking about prestige, quality, and open access), also see my article from October 2006 (Open access and quality).

Update.  Also see Stevan Harnad's comments.  Excerpt:

Peer review selectivity determines quality, not open access vs. toll access....

It should also be pointed out that the top journals differ from the rest of the journals not just in their impact factor (which, as Richard points out, is a blunt instrument, being based on journal averages rather than individual-article citation counts) but in their degree of selectivity (peer review standards): if I am selecting members for a basketball team and I only accept the tallest 5%, I am likely to have a taller team than a team that is less selective on height.

Selectivity is correlated with impact factor, but it is also correlated with quality itself. The Seglen "skewness" effect (that about 80% of citations go to the top 20% of articles) is not just a within-journal effect: it is true across all articles across all journals....
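The skewness claim is easy to check on any set of per-article citation counts: rank the articles, take the top 20%, and see what share of the total citations they capture. A minimal sketch, with invented counts chosen to show the pattern Seglen describes:

```python
# Sketch of a Seglen-style skewness check; the citation counts are invented.

def top_share(citations: list[int], top_fraction: float = 0.2) -> float:
    """Share of all citations captured by the top `top_fraction` of articles."""
    ranked = sorted(citations, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

sample = [200, 120, 30, 20, 10, 8, 6, 4, 1, 1]   # ten articles, heavily skewed
print(f"top 20% of articles take {top_share(sample):.0%} of citations")  # 80%
```

The same computation run across articles pooled from many journals, rather than within one, is what shows the effect holding across the literature as a whole.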

OA will give richer and more diverse metrics; it will help the cream (quality) to rise to the top (citations) unconstrained by whether the journal happens to be TA or OA. But it is still the rigor and selectivity of peer review that does the quality triage in the quality hierarchy among the c. 25,000 peer-reviewed journals, not OA.

(And performance evaluation committees are probably right to place higher weight on more selective journals -- and on journals with established, longstanding track-records.)