Open Access News

News from the open access movement


Saturday, November 12, 2005

More on the OA impact advantage for newspapers

In mid-September the New York Times removed some of its popular columnists from the OA edition. The purpose was to increase revenue, and it worked in the sense that many readers were willing to pay for online access to the columnists. However, many readers were not, and the impact of the columnists started to decline. The decline was already evident in October (blogged here 10/14/05). Now Independent Sources cites November data to show that their impact continues to decline.

Mickey Kaus calculates that the increase in NYT revenue is about $6.1 million and wonders whether it was worth it. He asks a good question: if a rich conservative offered the paper $6.1 million to reduce the impact of its liberal columnists, would it have taken the offer? (Thanks to Joho via Darius Cuplinskas.)

Universal repository at U of Tampere

The OA repository at the University of Tampere in Finland is open to deposits by scholars anywhere. The repository instruction page puts it this way: "You can also offer your scientific material for publication on the internet to the University of Tampere, whether you belong to the personnel of the University or not." (Thanks to Kimmo Kuusela.)

(PS: This is an excellent idea. It makes the Tampere repository the first universal repository --or the first that I know of. The current draft of the RCUK OA policy mandates the deposit of all RCUK-funded research in an OA repository, but makes an exception for grantees who don't have deposit rights at any OA repository. The Tampere repository should plug this loophole even if the universal repository I'm building with the Internet Archive is not ready in time.)

OA accelerates the innovation cycle

Pertti Saariluoma, Open Access Publishing As An Incorporator Of Research And Innovation Cycle, Human Technology, October 2005. (Thanks to Kimmo Kuusela.) An editorial. Excerpt:
Even though basic research doesn’t often result in immediately usable products, it plays an essential role in technological innovations, as it has formed the basis for many groundbreaking advances in product development over the decades....On the other hand, it is also common that product design processes provoke new directions and developments in basic research....This reciprocal connection between new ideas in basic research and product development can be called an innovation cycle....New ideas in one area generate more or different ideas not just on that topic or in that area but also more widely....The faster new ideas find their routes into everyday life, the broader and deeper their impact on, for instance, social development can be. In our knowledge society, the advancement of learning and research no longer takes place strictly at universities and research institutions. Today, numerous types of knowledge agents --business enterprises, foundations, individual inventors, and other social actors-- play essential roles in knowledge creation. An impact, however, is that the scope of expanding research is perhaps changing the nature of basic research....And, because of the “give-and-take” of the innovation cycle, strong communication is needed between the various stakeholders within the knowledge production sectors of modern society. Open access publishing provides one important tool for the communication of research results and innovative applications within our modern society. This free access to research findings can mean that anyone --everyone-- who seeks the information on contemporary thinking and testing can tap into the knowledge generated, no matter by whom or where. Independent of the size or resources of the individual, the organization, or the enterprise, a designer, a researcher, a manager, etc. may tap into some knowledge and implement it in a variety of ways. Open access publishing possesses a crucial element of the broad dissemination of knowledge, and thereby speeds up the innovation cycle. Open access publishing of research builds essential connections among --and benefits to-- universities, business enterprises, and society at large.

Growth at European OA repositories encouraging

A blogger named Marin is encouraged by the 55,000 articles on deposit he found at about a dozen OAI-compliant repositories in Europe.

Indian knowledge institutions should focus on access

Sam Pitroda, head of India's Knowledge Commission, says that India's growth depends on restructuring the country's "knowledge institutions." According to a story in yesterday's The Hindu, Pitroda wants to see India's knowledge institutions "focus on five key elements: the concept of knowledge, creation of knowledge, access to knowledge, application of knowledge and governance of knowledge."

Another OA overview

Heather Morrison and Andrew Waller, Open access : basics and benefits, in Letter of the LAA (Library Association of Alberta), pp. 17-18, 2004 (but self-archived November 10, 2005).
Abstract: “Open Access” has emerged in recent years as a major development in the world of scholarly communication. It may have the potential to greatly alter the university publishing environment and change the ways in which everyone accesses research material, particularly scholarly journals. This article will take a look at the basics of Open Access (or OA) as well as some direct and indirect benefits of OA inside and outside of academe.

Interoperable OA publishing, esp. for masters theses

Matthias Meiert, Modellierung des elektronischen Publikationsprozesses am Beispiel von Magisterarbeiten im Studiengang Internationales Informationsmanagement an der Universität Hildesheim, Hildesheim, January, 2005. (Thanks to Klaus Graf.) In German but with this English-language abstract:
The internet is drastically changing the framework of scholarly publishing. Scientific publications are increasingly available to the public over the internet, often replacing those found in classical scientific media such as specialist books and journals. The implementation of an electronic publishing process is a prerequisite for the successful distribution of scientific information via the internet. This, however, requires the observance of certain rules and regulations which, if not followed, would threaten to create unwanted island solutions through system incompatibility. The following work elucidates the technical and legal framework of online publications in the context of copyright amendment and the open access movement, and introduces projects and initiatives in the area of scholarly publishing. Further, interviews serve to explore problem areas in the current publishing process for master's theses. Finally, based on a solution analysis as well as on the previous expositions, an electronic publishing process is modeled.

Proposed citation service for OA literature

Rachel Hardy, Charles Oppenheim, Tim Brody, and Steve Hitchcock, Open Access Citation Information, a preprint, September 2005. Self-archived November 11, 2005.
A primary objective of this research is to identify a framework for universal citation services for open access (OA) materials, an ideal structure for the collection and distribution of citation information and the main requirements of such services. The work led to a recommended proposal that focuses on: [1] OA contents in IRs rather than on wider OA sources. [2] Capture and validation of well-structured reference metadata at the point of deposit in the IR. [3] Presentation of this data to harvesting services for citation indexes. The aim of the proposal is to increase the exposure of open access materials and their references to indexing services, and to motivate new services by reducing setup costs. A combination of distributed and automated tools, with some additional effort by authors, can be used to provide more accurate, more comprehensive (and potentially free) citation indices than currently exist.
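(PS: The proposal's key technical move is to capture well-structured reference metadata at the point of deposit and expose it to harvesters over OAI-PMH. For the curious, here is a rough, hypothetical Python sketch of the harvesting side; the repository endpoint is a placeholder, and a real citation service would want richer reference metadata than unqualified Dublin Core provides.)

# Illustrative OAI-PMH harvesting sketch (hypothetical endpoint, Python 3).
# Any OAI-PMH-compliant repository exposes the same verbs and namespaces;
# a citation service would harvest records like this and index the references.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai"  # placeholder, not a real IR

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest_records(base_url=BASE_URL):
    """Yield (identifier, title, relation fields) for each record in the repository."""
    url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    for record in root.findall(".//oai:record", NS):
        identifier = record.findtext(".//oai:identifier", default="", namespaces=NS)
        title = record.findtext(".//dc:title", default="", namespaces=NS)
        # Unqualified Dublin Core has no dedicated citation field; references, if
        # present at all, are usually crammed into dc:relation -- hence the paper's
        # call for better-structured reference metadata at deposit time.
        relations = [r.text for r in record.findall(".//dc:relation", NS) if r.text]
        yield identifier, title, relations

if __name__ == "__main__":
    for identifier, title, relations in harvest_records():
        print(identifier, "|", title, "|", len(relations), "relation fields")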

Google lawyer on Google Library

Alistair Coleman reports in BBC News on a speech by Alexander Macgillivray at the Oxford Internet Institute. Macgillivray is Google's senior product counsel. Two nuggets:
"We totally believe we have the right to index absolutely everything on the internet, but we will respect any webmaster's decision not to be included," said the California-based legal counsel....

Rather than antagonise the publishing community further, he told the Oxford seminar, Google was actively searching for a compromise that all parties would find acceptable.

Licensing research to maximize humanitarian use

Amanda L. Brewster, Audrey R. Chapman, and Stephen A. Hansen, Facilitating Humanitarian Access to Pharmaceutical and Agricultural Innovation, Innovation Strategy Today, 1, 3 (2005). Excerpt:
This paper seeks to raise awareness about the importance of managing IP to facilitate humanitarian use and applications. Our goal is to identify intellectual property approaches that can promote access to and use of health and agricultural product innovations by poor and disadvantaged groups, particularly in low-income countries. The paper encourages more public-sector IP managers to understand and employ strategies that will accomplish these goals. Humanitarian use approaches should become the norm, and we seek to help private-sector licensees understand the rationale and potential benefits behind such strategies. This paper focuses on the pharmaceutical and agricultural sectors, but the principles noted could potentially be applied to other areas as well. There are key moments when technology managers can improve the likelihood that their IP will benefit people in need: when they decide 1) who will receive a license, 2) whether the license will be exclusive, 3) what types of applications will be covered, and 4) how long the duration of the license will be....The license agreement can further increase access through specific terms that govern the use of the technology for research, the licensee’s freedom to grant sublicenses, and the treatment of follow-on innovations developed by the licensee. We acknowledge that improved IP management cannot by itself solve the access crisis....This paper deals with voluntary strategies, but this is not meant to exclude other approaches, such as incorporating some of the suggested changes in IP management into public policies, laws, or treaty reform. The advantage of voluntary strategies is that they can be implemented immediately, without the complexities involved in changing regulations and legal requirements.

Forthcoming report on access to genomics research

From a press release by the U.S. National Academies:
On November 17, 2005, the National Academies' Science, Technology and Economic Policy (STEP) Board and the Science, Technology, and Law Committee will release a joint report commissioned by the National Institutes of Health. Committee co-chairs Shirley Tilghman, President of Princeton University, and Roderick McKelvie, Covington and Burling (and retired Judge, Federal District Court of Delaware), will present the report at a public briefing at 2:00 PM in Room 100 of the Keck building at 500 Fifth Street, N.W., Washington, D.C. The meeting is open to the public without advance reservation. The briefing will be webcast and the report and press release can be accessed at [the National Academies site] on the 17th. The report documents trends in patenting and licensing in several categories of genetic and protein research, differences between US practices and those in Europe and Japan, and the impact of these practices on academic and corporate research directed at identifying disease mechanisms and therapeutic and diagnostic advances. Recommendations are addressed to the NIH and other government and private research funding agencies, the courts and the US Patent and Trademark Office, Congress, and the research community at large. The report includes the results of a new survey of bench scientists on their experience with intellectual property acquisition and access and sharing of data and research materials.

Results of the ODF summit

Martin LaMonica, OpenDocument format gathers steam, ZDNet, November 10, 2005. Excerpt:
IBM and Sun Microsystems convened a meeting in Armonk, N.Y., on Friday to discuss how to boost adoption of the standardized document format for office applications. The ODF Summit brought together representatives from a handful of industry groups and from at least 13 technology companies, including Oracle, Google and Novell. That stepped-up commitment from major companies comes amid signs that states are showing interest in OpenDocument. Massachusetts in September decided to standardize on OpenDocument for some state agencies. James Gallt, the associate director for the National Association of State Chief Information Officers, said Wednesday that a number of other state agencies are exploring the use of the document format standard....The OpenDocument standard, which uses XML data-tagging to format and store documents, was only ratified in May of this year. The format, known in full as the OASIS Open Document Format for Office Applications, covers applications such as word processors, spreadsheets and charts. As a standard, OpenDocument is an "open" format that can be used in any software, whether closed source or open source. Although few products incorporate support for OpenDocument right now, O'Grady expects that more manufacturers will adopt it. That could have a significant impact on Microsoft's multibillion-dollar Office franchise, he noted.

NCBI search toolbar

The NCBI has released an NCBI Search Toolbar for Internet Explorer and Firefox. The toolbar lets users run searches of PubMed, Gene, Nucleotide, and other NCBI databases without having to visit any of the NCBI sites.
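(PS: If you would rather script your searches than install a toolbar, NCBI also exposes the same databases through its Entrez E-utilities web service. Here is a small illustrative Python sketch, independent of the toolbar itself; the query string is just an example.)

# Searching PubMed via NCBI's Entrez E-utilities (ESearch), without the toolbar.
# The same pattern works for other Entrez databases by changing the "db" value.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def entrez_search(term, db="pubmed", retmax=5):
    """Return a list of record IDs for a query against an Entrez database."""
    params = urllib.parse.urlencode({"db": db, "term": term, "retmax": retmax})
    with urllib.request.urlopen(ESEARCH + "?" + params) as response:
        root = ET.parse(response).getroot()
    return [e.text for e in root.findall(".//IdList/Id")]

if __name__ == "__main__":
    # Example query only; any PubMed search syntax is accepted here.
    print(entrez_search("open access publishing"))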

OA portal of audio-visual material

JISC is funding a new, OA Visual and Sound Material (VSM) Portal. For details, see yesterday's press release or Balviar Notay and Catherine Grout's article in the October Ariadne.

Friday, November 11, 2005

Studying the impact of OA in LIS

Cheryl Knott Malone and Anita Coleman, The Impact of Open Access on Library and Information Science, text of a funding proposal, self-archived November 10, 2005.
Abstract: This is the text of a proposal (unfunded) submitted by Cheryl Knott Malone and Anita Coleman, School of Information Resources and Library Science, University of Arizona, Tucson to the IMLS National Leadership Grants 2005. To what extent does open access improve the impact of an article? This is the deceptively simple question that we will investigate. Our question is an important one if a clear understanding about the open access archive (OAA) phenomenon and what it means for our discipline, Library and Information Science (LIS), is ever to be achieved. We will use DLIST as the testbed for answering our key research question. DLIST is the Digital Library for Information Science and Technology, an OAA, where scholars can self-register and deposit research, education, and practice publications that center on cultural heritage institutions such as libraries, archives, and museums. DLIST was established in the summer of 2002 as a disciplinary repository for LIS. DLIST runs on open source software, Eprints, and is compliant with Open Archives Initiative-Protocol for Metadata Harvesting (OAI-PMH). Thus DLIST is an interoperable data provider in the global chain of OAI repository services. Currently DLIST has about 500 users and 400 documents. Usage of DLIST has grown from 41,156 hits in February 2004 to 112,728 hits in January 2005. To answer the research question we will undertake the following activities over a period of three years. In the first year we will 1) digitize articles from the back issues of the Journal of Education for Library and Information Science (JELIS), the premier journal for all matters related to library education; 2) conduct a citation study of JELIS articles to benchmark their research impact prior to deposit in DLIST; 3) deposit and create the metadata for digitized JELIS articles in DLIST; and 4) complete the writing of a DLIST User Guide and Self-Archiving Workshops manual. In the second year of the project, we will 1) survey LIS faculty to determine a baseline of copyright awareness and scholarly communication behaviors related to self-archiving in the LIS education community, and 2) offer DLIST self-archiving workshops at four selected conferences. The workshops will introduce scholars to OAA and how to self-archive using DLIST. In the third year of the project, 1) participants who completed the DLIST workshops and surveys will be surveyed again, 2) a follow-up citation study to document citation rates and patterns of the digitized and deposited JELIS articles will be conducted, and 3) the citation data will be analyzed together with usage data for the JELIS articles in DLIST to understand the impact of open access. The goal of the second survey is to determine how behaviors may have changed and find out how the JELIS articles in DLIST were used in ways that may not be revealed through mere citation data. This will contribute a richer understanding of impact than if we had only quantitative data from DLIST usage logs and citation rates and patterns (traditional research impact factors only) for JELIS. Current experience with DLIST has given us tantalizing evidence that open access to the JELIS articles will have an impact and that the nature of the impact will be diverse and rich, not just limited to research citations.
For example, informally gathered DLIST usage ‘nuggets’ are often about the usefulness of DLIST materials for classroom teaching (sometimes in a global context, as when we learned that it is used in a LIS school in Czechoslovakia) and networking among LIS teachers, researchers and practitioners.

Thursday, November 10, 2005

More free online Filipino journals

Von Totanes has added four more journals to his list of free online Filipino journals that are not included in the DOAJ.

Access issues in the humanities and social sciences

The American Council of Learned Societies (ACLS) has released the Draft Report of the American Council of Learned Societies’ Commission on Cyberinfrastructure for Humanities and Social Sciences. If you have comments, send them to cybercomments@listserv.acls.org by December 31, 2005. Excerpt:
But what is also clear is that achieving this potential [for a superior online environment] requires overcoming some daunting barriers --insufficient training, outdated policies, unsatisfactory tools, incomplete resources, inadequate access. The barriers to this possibility are not primarily technological, but economic, legal, and institutional. The effort required to realize this potential is not insignificant, but these limitations are small compared to the potential benefits. This report calls for an investment not just of money but also of leadership --from commerce, education, government, and foundations-- in order to realize the promise of cyberinfrastructure for the cultural record....We need public and institutional policies that foster openness and access. Because humanists and social scientists study society and culture, their use of the cyberinfrastructure inevitably has social, economic, and political implications and limitations. Laws, policies, and conventions surrounding copyright and privacy are, thus, also an implicit part of the cyberinfrastructure in the social sciences and humanities. We must align current intellectual property laws and privacy policies with the new realities of digital knowledge environments. Policies and the laws that support them must take account of the characteristics of digital content and the practices that make that content productive. The recent effort of the Copyright Office to address the problem of “orphan works” --works whose copyright status is uncertain and, hence, cannot be used by scholars and others-- is a welcome sign of a key agency in this debate taking an appropriate leadership role....We think it particularly important to explore more nuanced notions of intellectual property rights, supported by more sophisticated tools, so that the increasing capacity of digital technologies to mine, process, and analyze massive collections of texts not be nullified by laws intended to restrict republication. We support the work of groups like “Creative Commons” that are exploring innovative and nuanced ways to ensure the widest dissemination of works of the human imagination....And while scholars advocate public and legal policies of openness and access, they must similarly urge these policies within their own communities: universities need to consider the impact of their technology transfer and intellectual property policies; university presses and scholarly societies need to envision dissemination models that reflect academic values and lobby for the resources they need to live up to those values; museums need to make their digitized surrogates freely available. All parties should work energetically to ensure that the fruits of scholarly research and analysis are accessible to all those who might use them....Scholars, academic leaders, librarians should work with policy-makers toward [these access] goals and they should work within their own communities to ensure the widest possible access to scholarship, research, and creativity.

More on patents chilling research

There's a new Slashdot thread on how patents are chilling scientific research.

Story of the OA repository at India's National Informatics Centre

Sukhdev Singh and Naina Pandita, Building the open access self-archiving repository for the bio-medical sciences at National Informatics Centre, in: National Convention of Medical Library Association of India, November 7-9, 2005, Bangalore, India.
Abstract: Self-archiving is an important model of the Open Access movement. The National Informatics Centre has been providing various services and products to the biomedical community, and building a self-archiving repository for bio-medical and allied sciences was a natural extension of these activities. To make this repository interoperable with other such repositories, the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) was adopted. Suitable software for the archive was selected from among OAI-PMH-compatible packages; GNU EPrints was finally chosen. A prototype was built for planning of activities, demonstration, and checking of security aspects. To provide a subject-wise browse view of the archive, a MeSH-based categorization was adopted. A dedicated server was procured and installed in the NIC network domain under RedHat Advanced Server Version 3.0, and the EPrints software was then installed and customized. Making scientists and authors aware of Open Access and its benefits remains a major challenge for any such attempt. However, these efforts are bearing fruit in the form of an open self-archiving repository for bio-medical and allied sciences: OpenMED@NIC.
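(PS: Since the abstract stresses OAI-PMH compliance and MeSH-based browsing, here is a rough, hypothetical Python sketch of how a harvester might check a repository like OpenMED@NIC: an Identify request for basic repository information and a ListSets request for its subject sets. The base URL below is a placeholder; substitute the repository's real OAI-PMH endpoint, and note that its actual set structure may differ.)

# Probing an EPrints repository's OAI-PMH interface: Identify and ListSets.
# The endpoint is a placeholder; the real set structure of OpenMED@NIC
# (e.g., MeSH-based categories) may differ from what this prints.
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}
BASE_URL = "https://openmed.example.in/perl/oai2"  # hypothetical endpoint

def oai_get(verb, base_url=BASE_URL):
    """Issue a single-verb OAI-PMH request and return the parsed XML root."""
    with urllib.request.urlopen(base_url + "?verb=" + verb) as response:
        return ET.parse(response).getroot()

def describe_repository(base_url=BASE_URL):
    identify = oai_get("Identify", base_url)
    name = identify.findtext(".//oai:repositoryName", default="unknown", namespaces=OAI_NS)
    version = identify.findtext(".//oai:protocolVersion", default="?", namespaces=OAI_NS)
    print("Repository:", name, "(OAI-PMH", version + ")")
    sets = oai_get("ListSets", base_url)
    for s in sets.findall(".//oai:set", OAI_NS):
        spec = s.findtext("oai:setSpec", default="", namespaces=OAI_NS)
        label = s.findtext("oai:setName", default="", namespaces=OAI_NS)
        print("  set", spec, "-", label)

if __name__ == "__main__":
    describe_repository()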

OCA book-scanning at the U of Toronto

David Kesmodel and Vauhini Vara, Building an Online Library, One Volume at a Time, Wall Street Journal, November 9, 2005 (free online this week only). (Thanks to ResourceShelf.) A close-up look at OCA book-digitizing at the University of Toronto. Excerpt:
Ms. Ridolfo is part of a massive undertaking to digitize the world's books. She is one of about a dozen scanners employed by the Internet Archive, a San Francisco nonprofit group that is spearheading the Open Content Alliance, a consortium of business and educational groups that includes Microsoft Corp., Yahoo Inc., Hewlett-Packard Co., Adobe Systems Inc. and several university libraries. The group wants to build an online library of millions of old books and hopes to make a big batch accessible through Web searches as early as next year. For all its technical sophistication, the group needs the manual work of people like Ms. Ridolfo to make digitization a reality....The Internet Archive's effort to get books online is still in its early stages. In the little more than a year since the group started scanning books, it has digitized just 2,800 books, at a cost of about $108,250. Funding has come largely from libraries that have paid to have their texts digitized. Work will likely speed up now that Microsoft and Yahoo are on board; both companies joined the effort in October. Microsoft has pledged to pay for the scanning of about 150,000 books from collections at the U.K.'s British Library and elsewhere, and Yahoo will fund the scanning of 18,000 American classics at the University of California. Mr. Kahle estimates it costs about 10 cents a page to get a book online, taking into account equipment, labor and the cost of hosting the pages on the Internet Archive's Web servers. The funding from Microsoft and Yahoo will be used to expand the scanning operation. So far, books are being scanned at just two locations, in San Francisco and Toronto. Participating libraries ship their books to those scanning centers, where a total of eight scanning machines are in use. The group hopes to use new funding to buy more machines, which cost $20,000 to $40,000 each (the more expensive machines can work faster, and can accommodate larger books)....The Internet Archive closely tracks each book that has been scanned, and a computer alerts employees if they try to scan a book that has already been digitized.

Stevan Harnad comment for the SSHRC on OA

Stevan Harnad has posted his comments for the SSHRC consultation on OA in a new blog posting to Archivangelism. Excerpt:
Let journal reform take care of itself: SSHRC's concern should only be with research access (and impact). Mixing that up with forced journal-reform will again just elicit needless opposition and delay for the primary target: 100% OA for Canadian SSHRC-funded research article output (authors' final drafts) so that all users worldwide can access, use, apply and build upon it, and not just those users who (or whose institutions) happen to be able to afford to access the journal in which the publisher's official version happens to be published. Publishing in OA journals should be encouraged where possible, but not mandated. If SSHRC wishes, it can offer to help support authors' OA journal publishing costs. In sum: Mandating OA self-archiving and encouraging (and supporting) OA publishing is all that's needed from SSHRC. The rest will come with the territory. But if SSHRC instead needlessly over-reaches, needlessly trying to strong-arm publishing reform directly, the whole thing will just get needlessly bogged down for years more....Institutional repositories present no "operation challenges," either for SSHRC or for institutions. Institutional Repositories (IRs) can and will take care of themselves. SSHRC should mandate self-archiving, and the IRs will be created and filled. SSHRC needs no central SSHRC or Canadian archive. Distributed institutional self-archiving is the most natural and efficient route to 100% OA: the IRs are all interoperable because they are OAI-compliant. If SSHRC wishes, it can harvest from them the articles it has funded. See the Swan/Brown JISC study on central vs. distributed institutional self-archiving and central harvesting....

New blog on OA

OA Librarian is a new blog devoted to OA for librarians. It's a group blog currently staffed by Marcus Banks, Lesley Perkins, Andrew Waller, and Heather Morrison. From Heather Morrison's announcement yesterday:
Postings are on topics relating to open access that are particularly relevant for libraries and librarians: comments on open access activities from our perspectives, thoughts about what librarians will be doing in an open access world, celebrations of OA library accomplishments and stories about OA advocate librarians....Comments are welcome on OA Librarian, and there is room for more on the blog team, so if you would like to join, let one of us know! Or, if you have a single news item or resource to add, send it to one of the team members to blog.

Welcome to the OA blogosphere!

Intro to Science Commons

John Wilbanks, What is Science Commons, Creative Commons blog, November 9, 2005. Excerpt:
Science Commons (SC) was launched in early 2005. SC is a part of Creative Commons - think of us as a wholly owned subsidiary - drawing on the amazing success of CC licenses, especially the CC community and iCommons. But we're also a little different. Whereas CC focuses on the individual creators and their copyrights, SC by necessity has a broader focus. That necessity is caused by, for example, the fact that most scientists sign employee agreements that assign intellectual property rights to a host institution. Another example is that scientific journals regularly request that scientific authors sign over their copyrights, and scientists eagerly do so in return for citations in what are called "high impact" journals. There's a very real collective action problem here: no one individual or institution has strong incentives to change the system. But the system is causing problems in the scientific and academic communities. Scientific articles are locked behind firewalls, long after their publishers have realized economic returns. This means that the hot new article about AIDS research can't be redistributed much less translated into other languages (where it might inspire a local researcher to solve a local problem). The difficulties faced in relation to the "open access" of publications are easy compared to those presented when we consider access to tools and data....So Science Commons works on these problems: inaccessible journal articles, tools locked up behind complex contracts, socially irresponsible patent licensing, and data obscured by technology or end-user licensing agreements. We translate this into projects, with work in three distinctly different project spaces: publishing (covered by copyright), licensing (covered by patent and contract) and data (in the US, covered only by contract). We work on agreements between funders and grant recipients, between universities and researchers and between funders and universities --all in the service of opening up scientific knowledge, tools and data for reuse. We also promote the use of CC licensing in scientific publishing, on the belief that scientific papers need to be available to everyone in the world, not simply available to those with enough resources to afford subscription fees....Science Commons is devoted to using its legal and technical expertise to help scientific researchers make the best use possible of these new communication technologies. For example, some science publishers experimenting with a new business model for scholarly communication require authors of peer-reviewed articles to grant a Creative Commons license in their articles. These publishers include the Public Library of Science, BioMed Central, and Springer OpenChoice.... Using Huntington's Disease research as a case study, Science Commons is exploring a "technology trust," which will combine an intellectual property rights conservancy, patent pool and other related rights-bundling methods. We are assessing the types of problems of rights-fragmentation, a range of possible legal solutions to this problem (including compulsory terms in funder agreements), the institutional design of the trust or conservancy, and the question of what institution would be best suited to administer such a trust or conservancy....The Science Commons Data project has two aspects. First, we assert that data should not be covered by intellectual property law. As part of this project we provide a resource for database providers struggling with licensing.
Second, we are looking to improve on the data economy by aiding in the construction of an integrated web of data, papers, tools, and policy with the explicit goal of facilitating research into brain disease - the NeuroCommons.

(PS: Disclosure: I serve on SC's Publishing Working Group.)

Critical flu database no longer OA

The Influenza Sequence Database (ISD) used to be free online until its funding ran out. Now it charges subscriptions. Scientists say it's a critical resource in averting an avian flu pandemic. ISD is run by the U.S. Los Alamos National Laboratory.

See James Njoroge, Major database on flu virus genetics runs out of cash, SciDev.Net, November 10, 2005. Excerpt:

US government funding for one of the world's main stores of information on the genetics of flu viruses has ended, potentially jeopardising research on the bird flu virus that experts fear could spark a human flu pandemic....Since November 2004, the database's administrator Catherine Macken has reported on the funding crisis — she estimates it has cost US$2.5 million to develop and maintain the database since it was set up in 1998. Over the past year, the database has tried to move to subscription-based access to support its running costs. Aware that access to the database is vital for flu researchers in poor countries, the database administrators are hoping to collaborate with the private sector to set up 'needs-based' scholarships for non-commercial institutions. Researchers who cannot afford a subscription are being given temporary access while their scholarship applications are processed. The database is one of the most comprehensive collections of genetic information about influenza viruses. By comparing the genetic sequences of flu viruses that are collected regularly across the world with older viral species and strains, scientists can learn about how the virus evolves and moves between countries. The shift from the database being free to charging for access is "improper," says Girish Kotwal, professor and chair of medical virology at the University of Cape Town, South Africa. Terry Besselaar of the Vaccine Preventable Virus Infections Unit at South Africa's National Institute for Communicable Diseases says, however, that he does not think that restricting access to the database will affect vaccine research. He points out that vaccine researchers can obtain virus samples from the World Health Organization. The database's administrator Catherine Macken would not comment on "sensitive questions" about the situation, and referred SciDev.Net to Allen Morris, the Los Alamos National Laboratory's technology transfer division. "We are trying to scramble whatever resources are available," Morris told SciDev.Net before adding that he could not provide any more information. Morris told US-based Time magazine last week that the database "is running on vapour".

(PS: This could shape up into the most dramatic case yet of harm caused by the lack of OA.)

Manjoo on Authors v. Google Library

Farhad Manjoo, Throwing Google at the Book, Salon, November 9, 2005. Excerpt:
Take this hypothetical scenario: Let's say that somewhere in the stacks at the University of Michigan there is an essay by a writer you've never heard of, on a subject you didn't know about, in a volume no longer in print, by a publishing house no longer in business; let's say, moreover, that even though you don't really know it, this essay is exactly what you're looking for, the answer to all your searching needs, in much the same way you find Web pages every day by people you don't know that turn out to be just the thing. Ideally, as Google envisions it, you could one day go to its search engine, type in a certain bon mot, and find this book, your book. Because it's still under copyright, Google would only show you a few sentences around your search term as it appeared in the text, not the whole volume; but you'd know it was there in the library, and if you wanted it, you'd be free to check it out, or find some way to buy it. Without Google's system, you'll never hear of this book. In such a scenario, proponents of Google's plan see nothing but good -- good for the company, for Internet users, and especially for authors. In most copyright disputes between content companies and tech firms, there is often a legitimate question over which party might benefit more from a new technology, notes Fred von Lohmann, an attorney at the Electronic Frontier Foundation, which sides with Google in this battle. "Take the Napster case," von Lohmann says. In that situation, Napster claimed that its file-swapping tool could increase CD sales by letting people preview music before they purchased it; the CD industry, meanwhile, said the system had caused a significant drop in sales. Both sides cited numbers to support their arguments, and each theory sounded at least plausible. "But with the Google Print situation, it's a completely one-sided debate," von Lohmann says. "Google is right, and the publishers have no argument. What's their argument that this harms the value of their books? They don't have one. Google helps you find books, and if you want to read it, you have to buy the book. How can that hurt them?" ...

It's Google's profit motive that raises the suspicion of authors and publishers. As they see it, digital technology provides authors and publishers a new way to make a great deal of money on their back catalogs of books -- a huge source of revenue that is currently untapped. Google is creating a system that exploits that back catalog, so why shouldn't Google pay content owners for the right to use that catalog? "The author is creating the value here," says Paul Aiken, executive director of the Authors Guild, "and the author should get some of the money. If there's a new value for books created on the Internet, the authors should be given new incentives to create works for it." ...Many authors feel differently. One is Julian Dibbell, author of "My Tiny Life," a memoir of the author's life in the virtual computer world called LambdaMOO. When told of Aiken's theory that Google's database would use authors like him in the same way that Hollywood might use them, and authors should get paid for allowing their books to go to Google, Dibbell said, "My blood is boiling just as you relay this to me." As Dibbell sees it, "Google is not piggybacking on my creative effort in the same parasitic way that a movie based on a novel might be doing." To Dibbell, Google is acting not like the Hollywood producer who steals an author's ideas, but instead like a book reviewer who popularizes an author's work. After all, Dibbell notes, book reviewers routinely use snippets from books in their reviews, and magazines and newspapers make loads of money from advertisements they run alongside book reviews. Authors don't feel entitled to any of that money, he says, so why should they get a slice of the money Google will make from its service? "Given what's at stake here, which is the creation of a resource that nobody is denying is a good thing, their stance seems wrong to me," Dibbell says of his fellow authors.

(PS: This is the most detailed and careful article I've seen on the controversy over Google Library.)


Wednesday, November 09, 2005

Review of ProQuest's DigitalCommons@

Majied Robinson, On The Digital Commons: A Commercial Self Archiving Solution, EPS, November 9, 2005 (accessible only to subscribers). A review of DigitalCommons@ from ProQuest and Bepress. Excerpt:
Though free to download, open source repository software requires maintenance as well as server space for storage. This can cost USD300,000 a year (on top of set-up costs) making it prohibitively expensive for many institutions. By contrast, ProQuest's Digital Commons@ is available only on a subscription basis. In the package, institutions get server space for content and the majority of maintenance is done by ProQuest. At the institution-end, ProQuest claims that library staff need only two or three days training and to devote a few hours a week to operating the program. In addition to this, ProQuest is offering a discounted trial subscription rate to clients. One significant feature of Digital Commons@ is its support for peer reviewing. This can be integrated into the content submission process, though it is up to the institutions themselves to find reviewers....[W]hile ProQuest have done well to include a peer review element to their product, none of the institutions questioned were using this, nor did they have any plans to....At the University of Surrey, authors are allowed by publishers to upload work onto repositories as long as it is in a pre-formatted, post-reviewed version. Many authors do not keep this version however, and Price claims that the library knows of over 1,000 papers within copyright legislation that could be uploaded if the authors had kept the pre-formatted versions. On the evidence of the Digital Commons@ repositories currently in use and the views expressed in the surveys, ProQuest's product's future may not be guaranteed. Many of the repositories are under-populated or contain non-premium content and until this changes, benefits in the technology will be hard to appreciate....[W]ithout pressure from university authorities from above, and motivation of academics from below, ProQuest's Digital Commons@, like other digital repository products, seems a product too far ahead of its time.

Spanish introduction to self-archiving

Alice Keefer, Los autores y el self-archiving, Thinkepi, October 25, 2005. A Spanish introduction to self-archiving.

Another book author defends Google Library

Kevin Maney, Critics should grasp Google projects before blasting them, USA Today, November 8, 2005. Excerpt:
May Google give the book publishing industry a collective ulcer. Mostly because the publishers are being so stinkin' dense. Google has provoked an orgy of angst over its two book-related projects. One is Google Print, Google's program to scan books — with the publishers' permission — and make a limited number of those copyright-protected pages available online. The other is the Google Print Library project, which went live last week. That's aimed at making all books in the world searchable online. Books in the public domain can be seen in their entirety online. But for copyrighted works, Google's Library project gives you only card catalog-like information and a couple of sentences of text — nothing more. The misinformation and misguided attempts to stop these projects are mind-blowing. The Authors Guild and the Association of American Publishers (AAP) have sued to shut them down. Writers have been pounding out angry op-ed pieces. Writing in USA TODAY on Monday, AAP President and former congresswoman Pat Schroeder got the Google Library project wrong. She wrote that Google's Library project should ask permission "before they scan and make available our work to the world for free." But Google Library isn't doing that. The publishing industry seems to understand the Google projects as well as Homer Simpson understands how a nuclear reactor works. That hit its nadir on Monday, when a weepy story surfaced that the Great Ormond Street children's hospital in the U.K. might be endangered by Google Print because the hospital is funded by royalties from sales of Peter Pan, which author J.M. Barrie bequeathed to the institution in the 1920s. I'm waiting for the New York Post headline: "Google sinks sick kids!"

The idea that Google Print or Google Library threatens book sales doesn't ring true — especially because Google Print is almost an exact duplicate of Amazon.com's "Search Inside the Book" feature, which has been a way to help Amazon sell books. Google Print works like this: You type in words you're looking for and get back a list of matches from books. Click on one you want to see, and you get a full-screen image of that page in the book. From there, you can click to see two pages before or after the original page, but then you're cut off. If you want to read more, you have to search again and go through the same process. You can't even print the pages or copy the text and paste it elsewhere. There's no easy way to read a whole copyrighted book for free on either Google or Amazon. This isn't like music on Kazaa, with perfect free copies flying around the Internet. This is a clever technological middle ground between the utopia of free access to the world's best libraries and the hard fact that authors must be paid or they'll stop writing books. Yes, Google Library is scanning books from libraries without first seeking publishers' or authors' permission. But Google is only making the text searchable, so you can find whether what you're researching is in a book somewhere. At that point, you'd have to buy a physical book or go to a library to see the information. By exposing books this way, Google will most likely help publishers and authors make more money from their work....I'm the author of two books, and I want mine online. I'm all for having someone who's doing a research project land on a page from one of my books and decide it looks interesting enough to want to buy the whole thing....If programs such as Google Print and Google Library can keep long-form written communication relevant, authors and book publishers will be better off. The pages and card catalog info can only make people aware of books, not steal sales from them.

Journal rankings and career advancement

There's a discussion thread at the Chronicle of Higher Education on how journal rankings are used in promotion and tenure. (Accessible only to subscribers.) The discussion hasn't touched on OA issues yet, but it could if you wanted it to.

ACRL journal now OA with six-month embargo

The ACRL has announced that new issues of College & Research Libraries will be available free online after a six month embargo. From yesterday's press release:
The Association of College and Research Libraries (ACRL) is pleased to announce that its scholarly research journal College & Research Libraries is available free of charge in PDF format on the ACRL website six months after publication. Retrospective issues now are available through volume 58 (1997). "The editorial board is delighted that the journal is freely available in its electronic form,” said William Gray Potter, editor of College & Research Libraries. “This new arrangement will benefit both the authors and the readers by making the articles more widely available. Exposing the content to Internet search engines will also make it easier to find and read the articles. At the same time, we are committed to preserving the rigorous review process that has served ACRL and the profession well over the past sixty-six years." ACRL supports open access to scholarship as a principle for reform in the system of scholarly communication. In addition to this new access to College & Research Libraries, ACRL encourages author self-archiving of its published articles in both institutional and disciplinary repositories. “College & Research Libraries is a premier journal for research related to academic libraries,” said ACRL Scholarly Communication Committee Chair Ray English. “Making the contents of the journal openly accessible, following a six-month delay, will be of great benefit to practicing academic librarians and researchers, and it will also help authors increase the impact of their work. This move underscores ACRL's strong commitment to access to scholarly research.”

Eprints presentations

The presentations from the First [E-LIS] Workshop on Eprints in Library and Information Science (Geneva, October 22, 2005) are now online.

Tuesday, November 08, 2005

Clinical Biochemist Reviews -- reaching out to readers

Clinical Biochemist Reviews, a publication of the Australasian Association of Clinical Biochemists, is now freely available with a six-month delay. A notice on the journal's primary website states:
In accordance with the Public Library of Science and PubMed Central initiative toward free information exchange, we will be making CBR articles available for access six months after publication.
In addition to providing access through the society's web presence, the journal is also being deposited in PubMed Central.

Clinical Biochemist Reviews - Fulltext v24+ (2003+), six-month moving wall, at AACB | Fulltext v26+ (2005+) at PubMed Central; ISSN: 0159-8090.

[Thanks to PMC-News for the PubMed Central portion.]

USA Today defends Google Library

Danny Sullivan battles misunderstanding, again

Danny Sullivan can't believe the media distortions of Google Print and sets out, once again, to clarify the difference between Google Library and Google Publisher. If you're reading OAN, you probably don't need it. But I strongly recommend it for the confused, especially for authors and publishers willing to do some fact-checking before they publish critiques of Google Library that miss the target, embarrass themselves, and create further confusion.

More on the Macmillan alternative to Google Print

Mark Chillingworth, Macmillan takes on Google Print, Information World Review, November 7, 2005. Excerpt:
Publishers wary of putting their titles online with Google can now side with publishing giant Macmillan, which, with its parent company Holtzbrinck Group, is developing a book digitisation platform to be called BookStore. Richard Charkin, chief executive at Macmillan, has called for the publishing industry to collaborate on digitisation and search. Like Google Print, BookStore will be a searchable repository of digital book content, with e-commerce technology for purchasing titles. Charkin said BookStore will appeal to publishers that want to take advantage of releasing their content online, but don’t want to surrender control of their copyright or invest in the technology required. Publishers are unhappy with Google’s strategy of asking publishers to send it an opt-out list of titles not to be scanned. BookStore will give publishers the option of making their content available to search engines such as Google, Yahoo and MSN....Charkin admits that Google has increased pressure on publishers to make their content available online. But he denied that BookStore was a knee-jerk response to Google Print – he argued that it is an additional choice for the information sector. “I don’t think we or Google will get a monopoly.”

Comment. Free online full-text searching without free online full-text reading. Plus links for buying the books. That sounds like Google Library's plan for copyrighted books. It's not clear what the advantages of the Macmillan project are for publishers except the satisfaction of spiting Google. But that will come at a price, since Google will always get more eyeballs than Macmillan. If publishers join the Macmillan program because they think it will help sell books, they should admit that participating in Google Print too --either by not opting out of Google Library or by opting in to Google Publisher-- will sell many more books.

More on academic blogging

C.H.L. George, Blogging History Helpful in Research, OhmyNews, November 6, 2005. A useful survey of blogs devoted to historical research. Excerpt:
Blogging has already enabled academics to start reaching out to the wider world beyond the university gates. With so many translation tools available online, language barriers are not likely to be a problem for much longer. Blogging democratizes the practice of history. Before its invention scholars could not publicize their work without the approval of academic publishing houses and journals. This was relatively straightforward for university academics but made life difficult for independent scholars with no financial backing and less formalized training. As Jonathan Dresner points out, blogging is a cheap and quick form of publishing. It enables everyone to promote their research. The best-case scenario for the future of history blogging is that it will develop into a meritocratic online world, where good research and writing are recognized regardless of the author's qualifications and background.

Ejournals preferred but access barriers deter use

Patricia Serotkin, Patricia Fitzgerald, and Sandra Balough, If We Build It, Will They Come? Electronic Journals Acceptance and Usage Patterns, Portal, October 2005. (Thanks to Charles W. Bailey, Jr.) Only this abstract is free online for non-subscribers:
A focus group study conducted with health sciences students enrolled in graduate-level research classes at a small private comprehensive university revealed that these students preferred e-journals to print but that accessibility issues deterred their use. These findings provided valuable insights for local collection development decision-making and information literacy program development.
(PS: Presumably the access barriers that put off users were for toll-access ejournals.)

A copyright holder who should know better

Karen Gomm, Google Print upsets children's hospital, ZDNet UK, November 4, 2005. (Thanks to LIS News.) Excerpt:
Great Ormond Street children's hospital is worried that Google's online publishing scheme could cost it much-needed income. The hospital, which receives all the royalties from sales and performances of Peter Pan in the UK, fears that it could suffer a drop in revenue if Google includes the children's classic in its plan to scan, digitise and make searchable the world's books. Great Ormond Street Hospital Children's Charity has received royalties from Peter Pan since 1929. An Act of Parliament, passed in 1988, extended the book's copyright indefinitely. If people stopped buying the book, and accessed it through Google's service, the hospital — which cares for seriously ill children — fears it could lose millions of pounds. A spokesman for Great Ormond Street said he hadn't had a chance to view the site yet but hoped Google would think twice before publishing the book. "I wouldn't be surprised if Google do this, but it will rob the hospital of a major core of its charity revenue," he said. Great Ormond Street's fears may be unfounded though. Peter Pan is already freely available online through Project Gutenberg, which catalogues books whose copyright has expired in the US — where the Hospital's copyright does not apply.

Comment. Hello? If Google treats Peter Pan as a book under copyright, then it will not "publish" the book in any sense. It will offer searching and short snippets of text. It will provide free advertising for the book. The hospital should be celebrating. If Google treats it as a book in the public domain --which it is everywhere but England-- then the Google Library program may offer free online full-text reading. But in that case, the "fault" is not with Google but with the copyright laws of the world, which actually let copyrights expire. The fact that the Ormond Street Hospital is an excellent charity should not obscure the fact that the special legislation to extend its copyright in perpetuity was an act of piracy from the public domain. It is perverse to criticize users who take advantage of their rights to use public-domain literature rather than to criticize privateers who want to re-privatize that literature and enclose the commons.

McAfee's OA economics textbook

Preston McAfee has written an OA textbook on economics, covered by a Creative Commons license. McAfee is behind the Journal Cost-Effectiveness Calculator (with Ted Bergstrom) and is one of the associate editors of the OA journal Theoretical Economics. (Thanks to George Porter.) From McAfee's blurb:
Why open source? Academics do an enormous amount of work editing journals and writing articles and now publishers have broken an implicit contract with academics, in which we gave our time and they weren't too greedy. Sometimes articles cost $20 to download, and principles books regularly sell for over $100. They issue new editions frequently to kill off the used book market, and the rapidity of new editions contributes to errors and bloat....The publishers are vulnerable to an open source project: rather than criticize the text, we will be better off picking and choosing from a free set of materials. Many of us write our own notes for the course anyway and just assign a book to give the students an alternate approach. How much nicer if in addition that "for further reading" book is free. Moreover, by doing it this way, we avoid the problem of assigning a $100 book that has all sorts of errors and problems, an assignment which justifiably annoys students....You are free to use any subset of this work provided you don't charge for it, and you make any additions or improvements to it available under the same terms....There are some technical issues associated with open sourcing a book, and few books have been open sourced, although many are given away free. Some of the challenges are discussed [here]. My model is slightly different than the model espoused there. I hope some people will maintain their own version of this work, using mine as a springboard, adding to it over the years until it is personalized and superior for their course. I expect some others to just use the most appropriate "flavor" around. The best outcome would be a friendly competition between different flavors, with the result of several specialized versions for distinct purposes, all embedding best-of-breed insights. I'll maintain a list of recommended flavors, although I won't attempt to be comprehensive. If I deem a version unsuitable, I won't draw attention to it by linking to it. However, the nature of the license is that I have ceded control so that others can create versions I don't personally like. I also hope that others will create complementary websites with homework exercises and additional applications, although I will of course include some of those in my version.

OA is one reason to celebrate the unlikely web

James Boyle, Web’s never-to-be-repeated revolution, Financial Times, November 2, 2005. Excerpt:
The web is having a birthday. This month, we will have the 15th anniversary of the creation of the first web page....[T]here are three things that we need to understand about the web. First, it is more amazing than we think. Second, the conjunction of technologies that made the web successful was extremely unlikely. Third, we probably would not create it, or any technology like it, today. In fact, we would be more likely to cripple it, or declare it illegal.

Why is the web amazing?...When is the last time you looked in an encyclopedia? When is the last time that your curiosity...remained unsatisfied for more than a moment?...Much of that information is provided by volunteers who delight in sharing their knowledge. Consider the range of culture, science and literature – from the Public Library of Science and Wikipedia, to Project Gutenberg and the National Map. The web does not bring us to the point where all can have access to, and can add to, the culture and knowledge of the world....

Why is the web unlikely? ...For most of us, the web is reached by general-purpose computers that use open protocols – standards and languages that are owned by no one – to communicate with a network (there is no central point from which all data comes) whose mechanisms for transferring data are also open. Imagine a network with the opposite design. Imagine that your terminal came hardwired from the manufacturer with a particular set of programs and functions. No experimenting with new technologies developed by third parties – instant messaging, Google Earth, flash animations... Imagine also that the network was closed and flowed from a central source. More like pay-television than web. No one can decide on a whim to create a new site. The New York Times might secure a foothold on such a network. Your blog, or Wikipedia, or Jib Jab need not apply. Imagine that the software and protocols were proprietary. You could not design a new service to run on this system, because you do not know what the system is and, anyway, it might be illegal. Imagine something with all the excitement and creativity of a train timetable....

Why might we not create the web today? The web became hugely popular too quickly to control. The lawyers and policymakers and copyright holders were not there at the time of its conception. What would they have said, had they been? What would a web designed by the World Intellectual Property Organisation or the Disney Corporation have looked like? It would have looked more like pay-television, or Minitel, the French computer network. Beforehand, the logic of control always makes sense. “Allow anyone to connect to the network? Anyone to decide what content to put up? That is a recipe for piracy and pornography.” And of course it is. But it is also much, much more.

Fez 1.0 now available

The University of Queensland Library has released version 1.0 of Fez, an open-source web interface to the Fedora digital repository software. (Thanks to Belinda Weaver.)

Monday, November 07, 2005

Michael Geist calls for Canadian commitment to OA

Michael Geist, What good are ideas if we lock them up? Toronto Star, November 7, 2005. Free registration required. Here's an OA copy requiring no registration. Excerpt:
Prime Minister Paul Martin's decision to appoint Dr. Arthur Carty, the former head of the National Research Council, as Canada's first national science adviser, clearly signaled the importance of research and development to the nation's future economic prosperity. This month Dr. Carty sent a clear message of his own — scientific success increasingly depends upon fostering a "culture of sharing" based on open-access models of communication that leverage the Internet to disseminate research quickly and freely to all. The move toward an open-access model represents a dramatic shift for the scientific and research communities. For decades, politicians and policy makers have emphasized greater intellectual property protections as the key to research success. As a result, Ottawa has regularly increased the level of copyright protection and is, in fact, facing renewed pressure to consider extending protection with new database rights and an extension in the term of copyright protection. It is no surprise that the primary winners under this approach have been the major scientific journal publishers. While researchers rarely receive compensation for their contributions, the publishers have enjoyed a financial windfall by charging thousands of dollars for journals filled with the free content generated with the financial support of the public purse through millions of dollars in research grants. To add to the frustration, the researchers are themselves the publishers' best customers — universities, supported by taxpayer dollars, spend millions on research only to buy back the results of that research with millions more for scientific journals. As Dr. Carty notes, the future success of scientific research depends upon changing this debilitating cycle. He argues "an open-access philosophy is critical to the system's success: if research findings and knowledge are to be built upon and used by other scientists, then this knowledge must be widely available on the Web, not just stored in published journals that are often expensive and not universally available." The movement toward open access is taking hold in many countries around the globe. The National Institutes of Health in the United States and the Wellcome Trust in the United Kingdom have both announced that all of their funded research must be housed in archives that are available to other scientists and the general public. Moreover, last year a UK parliamentary committee recommended open access in a report titled "Scientific Publications: Free For All?." Open access has also attracted increasing attention from individual scientists. Several years ago more than 34,000 scientists in 180 countries called on publishers to make primary research articles available through online libraries, while the 2003 Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities encouraged scientists to publish in open-access journals.... In Canada, the Social Sciences and Humanities Research Council, a leading federal granting agency, has become the first such agency to commit to open access for federally funded research. The SSHRC, which just concluded a public consultation on the issue, plans to ensure that all funded research will be freely available online in full text. As Dr. Carty acknowledges, a culture of sharing will require "a new mindset among researchers, administrators, governments and in some cases companies — everyone involved in the creation and dissemination of knowledge."
This is certainly true of politicians....[L]ast week Canadian Heritage Minister Liza Frulla assured the House of Commons that Canada's current copyright reform proposal "does not touch education." Elsewhere Industry Minister David Emerson used an address to the Canadian Club to rightly emphasize the need for national broadband connectivity and greater scholarship funding, yet he neglected to reference adoption of open access models to disseminate Canadian research. The failure to include policy reforms to facilitate the unlocking of knowledge is an embarrassment....If Canada is to maintain [recent] growth, we should follow the advice of our new national science adviser. Science and research success depends on tearing down barriers, not erecting them. A national commitment to open access is the right place to start.

OA not the leading reason to fill institutional repositories

T. Scott Plutchak has blogged some notes on Susan Gibbons' November 5 presentation at the AAHSL conference.
An hour after Susan Gibbons started her presentation yesterday, I felt I knew twice as much about institutional repositories as I had beforehand. She's a very engaging speaker, very effective at taking the complicated issues and making them very clear. I particularly appreciated her anthropological approach to understanding the ways in which faculty might find IRs to be of some value. The technology issues are not a big deal -- it's the human factors that are daunting. The most surprising thing about the day overall was the decided shift in emphasis concerning the role of IRs. Many librarians initially got excited about them because of the potential for finding an open access mechanism to published research. But there was relatively little discussion of that, either in Susan's talk, or in the various discussions and presentations that occupied the rest of the day. It is clear that, in some institutions at least, the notion of depositing postprints of published articles is still met with indifference, if not outright hostility. The most promising approaches seem to be to look at a university's gray literature, or unique collections, and promote the IR as a way to give that stuff much more visibility and permanence. It may be that while we initially glommed on to IRs for one purpose, we will actually find that we use them (at least in the near term) for other worthwhile purposes altogether. As IRs become a more familiar part of the landscape, we will undoubtedly find other unexpected ways in which they can contribute to the work of the university.

CC corrects excesses of copyright law

Oliver Burkeman, Is it time for copyright law to change? The Guardian, November 7, 2005. Excerpt:
Creative Commons coordinates a new kind of licensing system, designed to supplement the "all rights reserved" of traditional copyright with an alternative: "some rights reserved". Works thus licensed can be borrowed from, developed and modified - creating, proponents argue, a more fertile ground for new ideas in the long run. The original term of US copyright was 14 years. Now it's the lifetime of the author, plus 70 years. But the fact that the term is limited at all shows that the law recognises what we all, in some nebulous way, understand: intellectual products aren't quite like other kinds of property. That family heirloom you've passed down through generations won't ever, under normal circumstances, revert to the public domain. So why should an out-of-print novel? Because the production of ideas relies on two opposing ingredients: not just a system of ownership, which allows people to profit from their creations, but also a healthy public domain, which provides the raw material that ideas spring from, and where free collaboration can bring new ideas into being. One way to guarantee a flourishing public domain is to limit the length of time for which works are protected. Another is to limit the degree to which they are protected. Which is where Creative Commons - started by the Stanford law professor Lawrence Lessig - comes in.

OA news service joins the OCA

AXcess News has joined the Open Content Alliance. From yesterday's press release:
AXcess News petitioned the Open Content Alliance to join the nonprofit organization and offer its vast archive of news articles to help build a permanent archive of multilingual digitized text and multimedia content from around the world. The Open Content Alliance (OCA) represents the collaborative efforts of a group of cultural, technology, nonprofit, and governmental organizations from around the world that will help build a permanent archive of multilingual digitized text and multimedia content. The OCA was conceived by the Internet Archive and Yahoo! in early 2005 as a way to offer broad, public access to a rich panorama of world culture. AXcess News is the largest closely held electronic publisher of broad news online in the nation with 92 cities in its network. "We believe the Open Content Alliance is one of the best cooperative efforts to truly adhere to what the World Wide Web is all about - free access to information globally," said Eric Stevenson, AXcess News editor. Stevenson noted that many news organizations use their archives as a profit center, charging users for access to that information, which he believes is too "old school". "If you are going to put your news online then you shouldn't be forcing readers to sign up for unwanted email just to read that content and worse, charge them to access your archives," said Stevenson. "The Open Content Alliance is the future of the web at its purest and AXcess News in its offer to join and contribute is trying to do its part to expand information and knowledge to everyone. We challenge old-school print news organizations to do the same and support the development of the OCA."

Using OA-based library savings to support OA

Heather Morrison, Open Access Economics: Funding Agencies and Leverage, Imaginary Journal of Poetic Economics, November 2, 2005. Excerpt:
Here is a thought: if publishers are beginning to receive revenue from a new stream (publishing charges for open access), subscription fees should decrease by a corresponding amount, should they not? Indeed, has not Springer promised this with their Open Choice program? Something to keep in mind when that renewal comes up....Why not use these savings to create a fund to support further open access publishing? This could take a number of approaches, such as paying for membership fees to open access publishers, or fully or partially paying publication fees on a per-article basis. From my point of view, one of the keys to success of a production-based economics model is ensuring that it is cost effective. There are likely many ways to encourage cost effectiveness. Here is one idea: Libraries could fund publication fees based on a sliding scale. For example, modest publication costs (e.g. $500 per article) could be paid in full, with a decreasing percentage of the fee paid based on the amount - e.g. 80% of costs up to $1,000, 75% up to $1,500, and so forth. This ensures that faculty are aware of the costs of publication, at least whenever the costs are high enough that it is important to be aware. Here is a thought for libraries wondering where the monies for staffing to administer such a system might come from: why not interlibrary loans staff? Who knows more about how to manage payment for information on an item-by-item basis, or how to make a system based on this as efficient as humanly possible? One reason this makes sense is that every article that becomes open access no longer needs to be obtained, by anyone, through interlibrary loan (staff may help patrons to discover the item, but there will not be a need to request from another library, track and pay for the service, etc.). It seems logical that there would be some correspondence between the percentage of material becoming open access, and a decrease in interlibrary loans - perhaps slow at first, then gradually growing.
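(PS: To make the sliding-scale idea concrete, here is a minimal sketch in Python. The brackets and percentages are only the illustrative figures from Morrison's post; I am assuming the rate applies to the whole fee within each bracket rather than marginally, and the 70% tier for fees above $1,500 is my own placeholder.)

def library_contribution(fee):
    """Return the amount a library would pay toward a publication fee
    under a sliding-scale schedule like the one Morrison sketches."""
    # Illustrative brackets from the post: fees up to $500 paid in full,
    # up to $1,000 at 80%, up to $1,500 at 75%. The exact schedule and
    # the 70% rate for larger fees are assumptions for the example.
    brackets = [(500, 1.00), (1000, 0.80), (1500, 0.75)]
    for cap, rate in brackets:
        if fee <= cap:
            return round(fee * rate, 2)
    return round(fee * 0.70, 2)

# Examples: a $500 fee is covered in full; the author (or a grant)
# would cover the remainder of larger fees.
print(library_contribution(500))    # 500.0
print(library_contribution(900))    # 720.0
print(library_contribution(1400))   # 1050.0
print(library_contribution(3000))   # 2100.0 (assumed 70% tier)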

Grants to cover PLoS institutional memberships

The Open Society Institute and the Public Library of Science are offering grants to universities in developing countries to cover one-year PLoS institutional memberships. PLoS waives processing fees on accepted articles for faculty, staff, and students at member institutions. Applications are due by December 15, 2005.

Sunday, November 06, 2005

BCLA comment for the SSHRC consultation on OA

The British Columbia Library Association has publicly released its October 27 comment for the SSHRC consultation on OA. (Thanks to Heather Morrison.) Excerpt:
Overall, our belief is that the results of SSHRC funded research should be immediately openly accessible, with perhaps some exceptions, such as monographs. Results of research should be placed in an institutional repository. SSHRC funding for journals should cover all peer review, administration and manuscript preparation costs, but not costs associated with distribution. SSHRC funding for journals should be limited to fully open access journals....Whether funded by SSHRC or not, librarians are very actively engaged in open access. Many are depositing their articles in institutional repositories, or one of the two disciplinary repositories for Library and Information Sciences: E-LIS and D-LIST. Many are also publishing in open access journals. The British Columbia Library Association, in partnership with other library associations across the province, is in the process of developing a new, open access journal. Libraries and librarians are also leading the development of institutional repositories, as exemplified by the Canadian Association of Research Libraries’ Institutional Repository program.

Update to OA timeline

I just added about two dozen new entries to my Timeline of the Open Access Movement and made a lesser but long-overdue update to my page of OA-related lists.

OA mandate for drug info from the FDA

The US Food and Drug Administration (FDA) is requiring drug manufacturers to provide structured, detailed information on their FDA-approved drugs, which the agency will then make OA on its web site. The new info will use standard terminology for human readability and embedded tags for machine readability. For details, see the FDA press release (November 2):
[T]he Food and Drug Administration (FDA) today began requiring drug manufacturers to submit prescription drug label information to FDA in a new electronic format. This electronic format will allow healthcare providers and the general public to more easily access the product information found in the FDA-approved package inserts ("labels") for all approved medicines in the United States. "Providing health care providers and patients with clear, concise information about their prescriptions will help ensure safe use of drugs and better health outcomes," said Health and Human Services Secretary Mike Leavitt. "Now medication information will be easy to access on a publicly available web site, and this will lead to future innovations with health information technology." These new electronic product labels will be the key element and primary source of medication information for DailyMed -- a new interagency online health information clearinghouse that will provide the most up-to-date medication information free to consumers, healthcare providers and healthcare information providers....In the future, this new product information will also be provided through facts@fda.gov [PS: not yet online], a comprehensive internet resource designed to give one-stop access for information about all FDA-regulated products. Under regulations that became effective today, drug manufacturers are now required to submit to FDA prescribing and product information (i.e., the package insert or label) in a structured product labeling (SPL) format that provides accurate, up-to-date drug information using standardized medical terminology in a readable, accessible format. Using embedded computer tags, the prescribing and product information in the SPL format can be electronically managed, allowing a user to search for specific information. These tags can instruct computers to read specific sections of a drug label including product names, indications, dosage and administration, warnings, description of drug product, active and inactive ingredients, and how the drug is supplied. With this information, physicians will be able to quickly search and access specific information they need before prescribing a treatment, resulting in fewer prescribing errors and better informed decision making. In addition, having the labels submitted to FDA in SPL will improve the FDA drug labeling review process, so that FDA can provide immediate access to the most recent information about medications to doctors and patients....The SPL project, led by FDA's Center for Drug Evaluation and Research, is the first in an agency-wide initiative regarding the public provision of electronic information.
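(PS: The technical point is that each section of a label carries a machine-readable tag, so software can pull out, say, the dosage section directly instead of scraping prose. The sketch below illustrates the idea in Python on a simplified, hypothetical XML label; real SPL documents follow the much richer HL7 Structured Product Labeling schema, and the element names here are my own invention.)

import xml.etree.ElementTree as ET

# A simplified, hypothetical label. Real SPL documents use the HL7
# Structured Product Labeling schema; these element names are invented
# purely to illustrate section-level tagging.
label_xml = """
<label>
  <productName>ExampleDrug</productName>
  <section type="indications">Treatment of an example condition.</section>
  <section type="dosage">One tablet daily with food.</section>
  <section type="warnings">Do not combine with example inhibitors.</section>
</label>
"""

def get_section(xml_text, section_type):
    # Parse the label and return the text of the requested section,
    # or None if the label has no section with that tag.
    root = ET.fromstring(xml_text)
    for section in root.findall("section"):
        if section.get("type") == section_type:
            return section.text
    return None

print(get_section(label_xml, "dosage"))    # One tablet daily with food.
print(get_section(label_xml, "warnings"))  # Do not combine with example inhibitors.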

Fiction debut of OA

Chris Howard is writing a novel in which an open-access journal plays a critical part.