Open Access News

News from the open access movement

Thursday, July 31, 2008

More on OA, downloads, and citations

Philip M. Davis and four co-authors, Open access publishing, article downloads, and citations: randomised controlled trial, BMJ, July 31, 2008.  Abstract:

Objective. To measure the effect of free access to the scientific literature on article downloads and citations.

Design. Randomised controlled trial.

Setting. 11 journals published by the American Physiological Society.

Participants. 1619 research articles and reviews.

Main outcome measures. Article readership (measured as downloads of full text, PDFs, and abstracts) and number of unique visitors (internet protocol addresses). Citations to articles were gathered from the Institute for Scientific Information after one year.

Interventions. Random assignment, on online publication, of articles published in 11 scientific journals to open access (treatment) or subscription access (control).

Results. Articles assigned to open access were associated with 89% more full text downloads (95% confidence interval 76% to 103%), 42% more PDF downloads (32% to 52%), and 23% more unique visitors (16% to 30%), but 24% fewer abstract downloads (–29% to –19%) than subscription access articles in the first six months after publication. Open access articles were no more likely to be cited than subscription access articles in the first year after publication. Fifty nine per cent of open access articles (146 of 247) were cited nine to 12 months after publication compared with 63% (859 of 1372) of subscription access articles. Logistic and negative binomial regression analysis of article citation counts confirmed no citation advantage for open access articles.

Conclusions. Open access publishing may reach more readers than subscription access publishing. No evidence was found of a citation advantage for open access articles in the first year after publication. The citation advantage from open access reported widely in the literature may be an artefact of other causes.

The same issue of BMJ contains an editorial on the study by Fiona Godlee, but only the first paragraph and a half is free online for non-subscribers.

Update.  Also see Stevan Harnad's comment:

...To show that the OA advantage is an artefact of self-selection bias (or any other factor), you first have to produce the OA advantage and then show that it is eliminated by eliminating self-selection bias (or any other artefact).

This is not what Davis et al did. They simply showed that they could detect no OA advantage one year after publication in their sample. This is not surprising, since most other studies don't detect an OA advantage one year after publication either. It is too early.

To draw any conclusions at all from such a 1-year study, the authors would have had to do a control condition, in which they managed to find a sufficient number of self-selected self-archived OA articles (from the same journals, for the same year) that do show the OA advantage, whereas their randomized OA articles do not. In the absence of that control condition, the finding that no OA advantage is detected in the first year for this particular sample of journals and articles is completely uninformative.

The authors did find a download advantage within the first year, as other studies have found. This early download advantage for OA articles has also been found to be correlated with a citation advantage 18 months or more later. The authors try to argue that this correlation would not hold in their case, but they give no evidence (because they hurried to publish their study, originally intended to run four years, three years too early)....[PS:  Omitting 18 specific bullet points.]

Update.  Also see Gunther Eysenbach's comment:

Today, Davis et al. have published a paper containing preliminary results from their Open Access RCT....[The paper shows] a significant increase in access and use of Open Access articles compared to non-OA articles....Davis et al. failed to show a citation advantage after 9-12 months, from which they conclude that “the citation advantage from open access reported widely in the literature may be an artifact of other causes.”  Jumping to these conclusions after only 9-12 months is actually quite outrageous, and the fact that the BMJ publishes “negative” results of an ongoing trial before it is even “completed” is deeply disturbing....

[T]o conclude or even imply that any citation advantage is an “artifact” after looking only at citations that occur within the same year of the cited article (9-12 months after publication) is as interesting and valid as doing a RCT on the effectiveness of a painkiller and comparing the pain between control and intervention patients after only one minute, concluding that the painkiller doesn’t work if there is no statistically significant difference between the groups after 60 seconds....

Davis says there were only 20 self-archived articles in his total sample, which is a suspiciously low self-archiving rate of only 1.2% (with an unreported contamination rate in his control group), while my PNAS sample had 10.6% of all articles in the control group self-archived. What Davis et al. unfortunately fail to report is when the searches for self-archiving were done. The low self-archiving rate suggests to me that this was perhaps only tested once right after publication, rather than continuously after publication? ...

Update (9/3/08). For more comments pro and con, also see Tracey Caldwell's article in Information World Review, September 3, 2008.