Open Access News

News from the open access movement

Saturday, January 02, 2010

January SOAN: Review of OA in 2009

I just mailed the January 2010 issue of the SPARC Open Access Newsletter.  This issue takes a close look at the progress of OA in 2009.  The roundup section briefly notes 118 OA developments from December.


Thursday, December 31, 2009


I'll be out tomorrow. Happy New Year!


Open access roundup

arXiv will ask libraries for funding

Flip Tanedo, Who will pay for the arXiv?, US LHC Blog, December 29, 2009.

... [W]ith increasing costs and the state of university budgets, the Cornell University Library (which operates the arXiv) is looking to find more cost-effective ways to support the arXiv and the much-needed overhauls in the software architecture (“arXiteXture”?). ... Currently the Cornell library pays the $400,000/year operating cost to make the arXiv available free-of-charge to the rest of the world. Here’s the official statement so far:

Cornell University Library is beginning an effort to expand funding sources for arXiv to ensure its stability and continued development. We intend to establish a collaborative business model that will engage the institutions that benefit most from arXiv — academic institutions, research centers and government labs — by asking them for voluntary contributions. We are working with library and research center directors at the institutions that are the heaviest users of arXiv to refine our plan and to enlist support. We expect to release the plan, with a call for broader engagement and contribution, in early 2010.

There’s also a very handy FAQ on the funding changes, which are still a work-in-progress. ...

Currently the plan is to ask the “heaviest user institutions” (other university library systems) to voluntarily contribute to support arXiv operational costs. The FAQ states that the library has already secured commitments from 11 of the 20 institutions that make the most use of the arXiv. (I’ve seen an unofficial list; these include many of the ‘big name research institutes’ around the world.) In return, besides academic karma, these institutions will be recognized for their support with arXiv banners and would possibly be privy to more detailed arXiv usage statistics. ... There is no plan to charge individuals for uploading or downloading papers from the arXiv. This business model is meant to be a temporary plan for the next three years while a longer-term solution can be figured out in collaboration with the wider community. It seems like the arXiv managers envision this long term plan being some kind of mixture of Cornell and user-institution support, but they are open to external support, e.g. from the National Science Foundation (which many physicists have suggested). ...

Mr. Malamud goes to Washington

Carl Malamud, A National Scan Center: A Public Works Project, O'Reilly Radar, December 30, 2009.

... A huge backlog of paper, microfiche, audio, video, and other materials is located throughout the federal government. Little money has gone from Congress for digitization, and bureaucracies have resorted to a series of questionable private-public partnerships as a way of digitizing their materials. For example, the Government Accountability Office shipped 60 million pages of our Federal Legislative Histories (the record of each law from the initial bill through the hearings and conference reports) off to Thomson West, but didn't even get digital copies back. Another example is the recent failed effort by the Government Printing Office to digitize 60 million pages of the Federal Depository Library Program, an effort they tried to get through as a "zero dollar cost to the government" effort with the private sector.

There are no free lunches and there are no "no cost to the government" deals. The costs involve the government effort to supervise the contract, prepare the materials, and ship them, and in both the GAO and GPO cases, the government wasn't getting much back for its effort. What the government and the people usually get is a lien on the public domain, preventing the public from accessing these vital materials. Similar efforts are sprinkled throughout the government. I testified to Congress that I had learned that the National Archives was contemplating a scan of congressional hearings with LexisNexis under similar circumstances, and many may be aware of the questionable deal the Archives cut with Amazon where my favorite online superstore got de facto exclusive rights to 1,899 wonderful pieces of video. ...

After my testimony, I went and visited senior officials at the Library of Congress and the Smithsonian. They all said that while they had tried to get more congressional interest in digitization, and had tried to go after stimulus money, so far nobody had much success. I asked if they had gone hand-in-hand with their sister institutions to ask for this money, and it was pretty clear that they had not. Each institution went in one at a time pleading their own special case to congressional staffers and to officials at the Office of Management and Budget. ...

If the government invested a mere $100 million of our stimulus package (we've already spent over $72.6 billion), that means 2 billion pages of paper or microfiche would get scanned. For $500 million, we're talking a huge chunk of our national backlog being digitized, a task that would result in an enduring digital public work for our modern era, something that would prove of immense use to future generations ...
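Malamud's figures imply a per-page scanning cost; a quick back-of-the-envelope check (the linear scaling to $500 million is my extrapolation, not a figure from his post):

```python
# Implied per-page cost from the figures cited: $100 million scans 2 billion pages.
cost_usd = 100_000_000
pages = 2_000_000_000
per_page = cost_usd / pages  # = $0.05 per page

# Assuming the rate scales linearly, $500 million would cover:
pages_at_500m = 500_000_000 / per_page
print(f"${per_page:.2f} per page; $500M would scan about {pages_at_500m:,.0f} pages")
```

At five cents a page, the $500 million figure works out to roughly 10 billion pages, which is what makes "a huge chunk of our national backlog" plausible.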

Tuesday, December 29, 2009


I'll be out tomorrow; OATP remains active, as always. Happy holidays!


Open access roundup

Data sharing in the life sciences: reality vs. policy

Patterns of information use and exchange: case studies of researchers in the life sciences, report by the Research Information Network and the British Library, November 2009. From the executive summary:

... Researchers communicate their findings – new knowledge, new methodologies and tools – primarily through conference proceedings and journal articles. These public activities have strong institutional and professional incentives in building reputations, securing promotion and so on. Incentives for other kinds of communication and sharing are weaker and indirect.

Most research councils have policies requiring researchers to set up formal mechanisms to manage created data, including provision for access and re-use. Moreover, the experience of sharing data such as gene sequences in high-profile research programmes in fields such as genomics or proteomics has come to be seen as something of a paradigm or model around which policies and practice will converge.

But our study suggests that such a model remains exceptional. Indeed, researchers highlight a number of barriers to sharing their research data, including concerns about potential misuse, ethical constraints, and intellectual property. Above all, they see data as a critical part of their ‘intellectual capital’, generated through a considerable investment of time, effort and skill. In a competitive environment, their willingness to share is therefore subject to reservations, in particular as to the control they have over the manner and timing of sharing.

Discussion of these issues has been hampered by confusions and inconsistent usage of the terms ‘data’ and ‘information’. The current preoccupation with sharing research data has diverted attention from the diverse range of formal and informal information exchanges that take place in the life sciences. Given the limited current understanding of which forms of sharing and exchange are most effective and beneficial, and under what circumstances, we suggest that policy-makers need to engage in further discussions with researchers to identify and address the constraints, as well as to preserve the exercise of informed choice that is fundamental to science.

Narrowly prescriptive approaches are unlikely to be effective. We recommend rather that funders should adopt a more pragmatic and experimental policy that recognises the multiplicity of contexts, and the different approaches to information sharing; and which builds upon the informal sharing that is already taking place, based on the recognition of mutual needs. Such a bottom-up view is needed in order:

  • to attend to the practicalities of data sharing: what makes information from other sources intelligible? Under what circumstances is such sharing useful and sufficiently beneficial to warrant the labour necessary to achieve it? and
  • to address existing barriers and drivers for change, including the perceived self-interests and goals of researchers, and their need to sustain their intellectual capital and advance their careers.

A key message from our work, therefore, is that policy intervention and support systems for researchers need to be built around the many different and successful tools and practices emerging within life science research communities themselves. ...

Monday, December 28, 2009

60,000 OA books from Library of Congress

Sarah Rouse, Library of Congress Puts Thousands of Historic Books Online, December 24, 2009.

Nearly 60,000 books prized by historians, writers and genealogists, many too old and fragile to be safely handled, have been digitally scanned as part of the first-ever mass book-digitization project of the U.S. Library of Congress (LOC), the world’s largest library. Anyone who wants to learn about the early history of the United States, or track the history of their own families, can read and download these books for free.

“The Library chose books that people wanted, but that were too old and fragile to serve to readers. They won’t stand up to handling,” said Michael Handy, who co-managed the project, which is called Digitizing American Imprints. ...

[The] digitized books can be accessed through the Library’s catalog Web site and the Internet Archive (IA), a nonprofit organization dedicated to building and maintaining a free online digital library. ...

The Library of Congress has digitized many of its other collections — more than 7 million photographs, maps, audio and video recordings, newspapers, letters and diaries can be found at the Library’s Digital Collections site, such as the popular American Memory and the multilingual Global Gateways collections — but “this is the first sustained book-digitization project on a high-volume basis,” Handy said. ...

A $2 million grant from the Alfred P. Sloan Foundation inaugurated the LOC book digitization project. One of the grant’s objectives was “to address some of the issues that other book digitization projects had mainly avoided dealing with — for instance, the brittle book issue,” Handy said. “We established some procedures and preservation treatments to be able to scan books that otherwise couldn’t be scanned.” ...

Handy said, “More funding will be sought to keep this going after this year. This is just the beginning.”

See also our past posts on the program.

OA mandate at Dublin Tech

Dublin Institute of Technology has adopted an OA mandate:

Academic staff, research assistants, research students and other members of the Institute are entitled and required to deposit digital copies of refereed and other research publications and documents. ...

Exceptionally, material that is to be commercialised, or which can be regarded as confidential, or the publication of which would infringe a legal commitment of the Institute and/or the author, is exempt from inclusion in the repository.

Uploading of items into [the IR] is the responsibility of authors and researchers. It is desirable that items be self-archived. However, this task may be delegated to others or to Library Services.

All deposits of journal articles must comply with Publishers’ policies. ...


OA mandate at U. Abertay Dundee

The University of Abertay Dundee has adopted an OA mandate:

It is the University’s policy to establish a comprehensive database of research outputs, recording bibliographic information and, where permissible under publishers' copyright policies, providing access to the full text of published research produced by University staff and research students.

The University therefore requires that all staff and research students submit the following to the repository:

  • Full text electronic copies and bibliographic details of peer-reviewed research published from 1 January 2010.
  • Bibliographic details (including abstracts, where available) of peer-reviewed research published between 1 January 2001 and 31 December 2009.

and that:

  • The electronic version of theses accepted for research degrees after 10th July 2009 will be deposited in the repository on behalf of the students. ...


Is changing copyright law the best way to OA?

Steven Shavell, Should Copyright of Academic Works Be Abolished?, working paper, December 18, 2009. Abstract:
The conventional rationale for copyright of written works, that copyright is needed to foster their creation, is seemingly of limited applicability to the academic domain. For in a world without copyright of academic writing, academics would still benefit from publishing in the major way that they do now, namely, from gaining scholarly esteem. Yet publishers would presumably have to impose fees on authors, because publishers would no longer be able to profit from reader charges. If these author publication fees would actually be borne by academics, their incentives to publish would be reduced. But if the publication fees would usually be paid by universities or grantors, the motive of academics to publish would be unlikely to decrease (and could actually increase) – suggesting that ending academic copyright would be socially desirable in view of the broad benefits of a copyright-free world. If so, the demise of academic copyright should probably be achieved by a change in law, for the “open access” movement that effectively seeks this objective without modification of the law faces fundamental difficulties.