News from the open access movement
Michel Prévot, La publication scientifique à accès libre: de l'idéal aux modalités concrètes. Application aux sciences de la terre (Open-access scientific publishing: from the ideal to concrete arrangements, with application to the earth sciences), Bulletin de liaison de la Société Française de Minéralogie et Cristallographie, 17, 2 (August 1, 2005) pp. 23-30. (Thanks to Bruno Granier.)
On February 7, the Berkeley Electronic Press (bepress) logged its 5 millionth download from ResearchNow, its OA repository. Excerpt from its press release:
ResearchNow...is a collection of academic materials drawing from several primary sources: the roster of peer-reviewed, Berkeley Electronic Press journals (27 and counting), bepress-hosted subject matter repositories such as the bepress Legal Repository and COBRA: The Collection of Biostatistics Research Archive, and all working papers, preprints and other "grey literature" content from institutional repositories hosted by bepress that have opted for inclusion. More than 50 schools - including the University of California system, the University of Pennsylvania, Boston College, Cornell, and the University of Nebraska, as well as major universities in Europe and Australia - use the bepress platform for their institutional repositories, co-marketed with ProQuest Information & Learning since 2004 as Digital Commons.
The Ad Astra Association has launched the Ad Astra Open Access Archive, which I believe is the first OA archive in Romania. It's not limited to Romanian research, however, but open to scientists from all countries. From the front page:
Part of the infrastructure needed for doing science is a community of like-minded scientists. For science to take hold in a country, especially if the usual mechanisms for promoting personal contacts are few, it is essential to develop a community that sustains itself through mutual support and interactions. Because of the relative isolation that still persists in developing countries, scientists are often unaware of the extent and nature of science that is being done in their own countries, and have inadequate personal knowledge of fellow scientists. They also have some difficulty in publicizing their work quickly.
Tom Wilson, Open access and Weblogs - working together, Information Research Weblog, February 10, 2006.
We've had occasional instances of the value of Weblogs in spreading news about papers in Information Research and we have another at the moment. Nahyun Kwon's paper on virtual reference service has been noted in a number of Weblogs and, as a direct result, the hits have soared to more than 2,400 in less than one month. By comparison, the other papers in the issue have an average hit rate of about 400. There's a lesson here for authors - if you want your paper to be noticed, make sure it's noticed in the 'blogosphere' - and you are the ones who will know which Weblog authors are likely to be interested, so get to it!
Comment. Good point. I'd only add that the best way to harness this power is to make the article OA. This works two ways. First, OA helps bloggers and other meme-spreaders (who might prefer to use listservs or private email) discover the work in the first place and learn that it's interesting, important, and worth spreading. Second, OA helps them spread the word to other potential readers. Readers are much more likely to read the article --and spread the word further-- if they receive a link to free online full-text than if they receive a link to a stop sign or pay-per-view page.
John Willinsky's book, The Access Principle, has won the 2006 Blackwell's Scholarship Award from the Association for Library Collections and Technical Services (ALCTS), a division of the American Library Association. Congratulations, John!
Update. Also see the 3/14/06 press release from the ALA/ALCTS.
Alberto Pepe, Jean-Yves Le Meur, Tibor Šimko, Dissemination of scientific results in High Energy Physics: the CERN Document Server vision. A presentation to be given next week at the conference, Computing in High Energy and Nuclear Physics (Mumbai, February 13-17, 2006).
Abstract: The traditional dissemination channels of research results, via article publishing in scientific journals, are facing a profound metamorphosis driven by the advent of the Internet and broader access to electronic resources. This change is naturally leading away from the traditional publishing paradigm towards an archive-based approach in which institutional libraries organize, manage and disseminate the research output. Within this context, CERN has been committed since its early beginnings to the open divulgation of scientific results. The dissemination started by free paper distribution of preprints by CERN Library and continued electronically via FTP bulletin boards and the World Wide Web to the current OAI-compliant institutional repository, the CERN Document Server (CDS). By enforcing interoperability with peer repositories, like arXiv and KEK, CDS manages over 500 collections of data, consisting of over 800,000 bibliographic records in the field of particle physics and related areas, covering preprints, articles, books, journals, photographs and more. In this paper we discuss how the CERN Document Server is becoming a solid base for the collection and propagation of research results in high-energy physics by implementing a range of innovative library management services. In particular, we focus on metadata extraction to create information-rich library objects and groupware and collaborative features that allow users to comment and review records in the repository. Moreover, we explain how the existing document ranking techniques, based on usage and citation statistics, may provide original insights on the impact of selected scholarly output.
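(PS: For readers unfamiliar with what "OAI-compliant" means in practice: an OAI-PMH repository exposes its records through simple HTTP requests whose parameters name a verb and a metadata format. A minimal sketch of constructing such a request follows; the CDS endpoint path shown is an assumption for illustration, so consult the repository's own documentation for its actual OAI-PMH base URL.)

```python
from urllib.parse import urlencode

def oai_listrecords_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL for a repository base URL.

    OAI-PMH requests are plain HTTP GETs: a 'verb' parameter names the
    operation and 'metadataPrefix' names the record format (oai_dc is the
    Dublin Core format every compliant repository must support).
    """
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        # Optional: restrict the harvest to one of the repository's sets.
        params["set"] = set_spec
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint path, used here only to show the request shape.
print(oai_listrecords_url("https://cds.cern.ch/oai2d"))
```

A harvester like arXiv or KEK interoperating with CDS would issue requests of exactly this shape and page through the XML responses via resumption tokens.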
Heather Morrison, Toward a vision for scholarship...and communications, Imaginary Journal of Poetic Economics, February 9, 2006. Excerpt:
The combination of the electronic medium and the world wide web opens up a world of possibilities - open access to the scholarly literature being only one. Scholars around the world and across disciplines can work together on their research using electronic means....It is through open sharing of information and working collaboratively rather than competitively that the human genome was mapped in an amazingly short time. We could take this approach to other important research questions - like how to find economical, renewable, environmentally friendly sources of energy; or, a renewed focus on the humanities and social sciences research that could lead to the answers to the question of how we can live together in peace in this global village of a modern world. Until we grasp this full potential, open access to the peer-reviewed postprint might look to some like a little piece of a puzzle. The moment you shift to this bigger picture, the puzzle piece falls into place. The beauty and necessity of open access - and the sheer folly of pursuing any other model - is immediately obvious.
In this comment, blogged today at On the Commons, David Bollier is not talking about scholarly journal publishing. But how far does his point carry over?
It's not widely appreciated that "Centralized Media" - broadcasting, cable television, films, recorded music - have a serious Achilles' Heel. They have huge overhead costs. A small number of large companies are able to dominate their respective markets primarily because they control critical "choke points" of product development and distribution. But it costs A LOT to control these choke points -- and those costs are only going up even as the costs of online alternatives go down.
UniProt (Universal Protein Resource), the "world's most comprehensive catalog of information on proteins", now uses a Creative Commons license. Excerpt from John Wilbanks' announcement on the Science Commons blog:
We spent a lot of time talking to the Uniprot folks over the last year. I'd encourage everyone to check out the FAQ we wrote on database licensing and Creative Commons licenses to understand exactly which elements of the DB are copyrighted and which are not.
Digital repositories programme launches wiki and mailing list, a JISC press release, February 9, 2006. JISC has previously announced (and OAN has blogged) both the wiki and mailing list. Excerpt:
The University of North Carolina Library and School of Information and Library Science have joined the Open Content Alliance. See the UNC press release (February 9):
Two members of the University of North Carolina at Chapel Hill family have joined efforts to build a permanent archive of digitized text and multimedia materials on the World Wide Web. The University Library and the School of Information and Library Science recently joined the Open Content Alliance, a group of organizations from around the world that are constructing the archive. The school is the first from a university to join the alliance; the library is the first library to contribute manuscript materials. Collections included in the archive are freely available for access and re-use by all, provided they respect the rights of content owners and contributors. The library initially will focus on a potential project to digitize manuscripts from its Southern Historical Collection....Besides documents, the library will contribute expertise acquired through its "Documenting the American South" Web site and related projects, [Sarah] Michalak [university librarian and associate provost for University Libraries] said. "Since we launched DocSouth in 1996, we have committed ourselves to free and open access," she said. "The e-mails of thanks we receive from all over the world make it clear why libraries need to share their treasures this way and make it easy for people everywhere to use them."
Shuichi Iwata, Message from the President, CODATA Online Handbook, February 10, 2006. Iwata is the President of CODATA. Excerpt:
If someone were to ask me to identify three major scientific developments I envisage taking place over the next five to ten years within the scientific data community that could have a major impact on the future development of science and which could serve the needs of society, I would list the following:
1. The entire store of available scientific data and information - the results of several centuries of work - will become available electronically and accessible virtually everywhere.
2. Science will be carried out more and more through long-distance collaborations enabled by the internet. These collaborations will rely on access to large data collections, large-scale computing resources and high-performance visualization. The powerful infrastructure needed to support e-science will be the Grid. These mammoth and comprehensive collections will be a major source of scientific discovery in the future, with e-science gradually gaining precedence.
3. Systems will be developed that allow the general public to access, understand and take advantage of the data and information collections mentioned above.
SwetsWise Online Content has added journals from six new publishers to its collection, including OA journals from e-Med and TheScientificWorld. See its press release (February 7).
Comment. I believe that Swets first added OA journals to its collection in February 2004. I argued e.g. in SOAN for July 2005 that OA journals joining priced aggregations are not selling out. "If you publish your work in an OA journal, then it's already visible to users who look in the places where OA work can be found. But if your OA journal is also distributed in a priced aggregation, then without losing the first audience you'll gain the audience of researchers who look first or look only in that aggregation. Among the priced aggregations that include some OA journals are EBSCO A to Z, SwetsWise Online Content, and WilsonWeb. The real advantage here may be small...[b]ut the advantage is still real, and authors of articles in OA journals should not complain, or suspect anything sinister, when those OA journals are picked up by priced aggregators."
Brian Robinson, EU focuses on health education, Government Health IT, February 7, 2006. Excerpt:
The European Union has launched the European Health Information Platform, a $1.7 million multimedia initiative to improve the quality of information its citizens receive about health issues. Also known as Health in Europe, the program will create a rights-free bank of health-related reports, documentaries, radio broadcasts and print and online articles for distribution throughout the EU.
Also see the EC press release (February 2).
Today we have naming of parts, The Economist, February 9, 2006. An unsigned news story. Excerpt:
[N]early 250 years after Carl von Linné, a Swedish naturalist, invented the modern system of naming living creatures, taxonomists still have no official list of all the animals discovered so far. This makes the work of biologists, ecologists and conservationists --who rely on species names to know just what it is they are studying and conserving-- more difficult than it need be. Linnaeus, to give his familiar, Latinised name, introduced the system of binomial nomenclature in 1758 by classifying more than 10,000 species of animals and plants with two-part names, also Latinised, such as Homo sapiens. But so many species since then have been named in such a haphazard way that animal nomenclature is in trouble. Although Linnaeus's big idea was that each species would have one scientific name, so that scientists could know immediately what they were discussing, the lack of a single official “telephone directory” has frustrated the entire enterprise. Around 1.5m species are thought to have been described so far, but more than 6m names have been used....The result is that taxonomists must struggle long and hard to figure out whether a name has been used before and also what other, similar, animals look like. In entomology alone, relevant data may be found in any one of more than 1,000 specialised journals. No wonder such a large proportion of the world's museum specimens are labelled incorrectly. The solution, proposed by a group called the International Commission on Zoological Nomenclature (ICZN), based at the Natural History Museum, is called ZooBank....Exactly how ZooBank would work is still under discussion. When Andrew Polaszek, the executive secretary of the ICZN, proposed the idea, it was as much a call for proposals as a blueprint. But the non-negotiable core is for an open-access, web-based system that would set out to be definitive. Anything that does not appear would be an un-species as far as taxonomy is concerned.
Google to digitize Hindi Literature, Silicon India, February 9, 2006. Excerpt:
In its 'mission' to "organize all the world's information and make it universally accessible," search behemoth Google is now eyeing the Hindi book segment for digitization as part of its Book Search initiative...."We don't currently have any Hindi language books in our search programme as we have just begun discussions with Hindi language publishers. We will have the service available once we have a critical mass of Hindi language books," says Gautam Anand, strategic partner development manager, Google Inc....What will the publishers gain through this programme? "Publishers can avail of the benefits of this free marketing programme as the details of a publisher will be published. Anybody from anywhere in the world can contact the publisher," says Anand. "This will definitely help Hindi publishers as they will be able to reach out to more people interested in Hindi books," says Shakti Malik, President, Federation of Indian publishers. In a recent interaction with publishers from various parts of the country on the sidelines of the World Book Fair in New Delhi, Google was flooded with enquiries about the programme in other Indian languages. Google says that it is thinking about it. "We hope to include other languages as well. It is not incumbent upon the response we get from Hindi language publishers, but on when we feel comfortable that the OCR technology is in a state where we can accurately index books in other regional languages," says Anand....According to Google, the response from the Indian publishers has been encouraging. "We started discussions with Indian publishers this year at the Frankfurt Book Fair and the response has been extremely positive," says Anand.
(PS: Note that this is the opt-in Google Publisher program, not the opt-out Google Library program.)
The NIH will partner with Pfizer and other private corporations to launch an OA database of genetic data. See yesterday's press release from Pfizer:
The Foundation for the National Institutes of Health (FNIH), the National Institutes of Health (NIH), and Pfizer Global Research & Development, New London, Conn., today announced the launch of a unique public-private medical research partnership -- the Genetic Association Information Network (GAIN) -- to unravel the genetic causes of common diseases over the next three years. The information derived from GAIN will be publicly available to researchers world-wide. GAIN brings new scientific and financial resources to the NIH's existing whole genome association programs, encouraging all partners -- across and beyond NIH -- to work together toward the common goal of understanding the genetic contributions to common diseases. Organizers of the GAIN partnership believe the model holds promise of achieving rapid, scientifically sound results that any single researcher or institution working alone would be hard-pressed to equal. GAIN is designed to help medical researchers quickly identify the many genetic contributions to common illnesses such as heart disease, Alzheimer's disease, diabetes, osteoarthritis and stroke by comparing the genetic makeup of people with the disease to people who are healthy. Identifying genetic differences between these two groups will speed up the development of new methods to prevent, diagnose, treat and even cure common illnesses....The GAIN initiative proposes to raise $60 million in private funding for genetic studies of common diseases. The initiative does not require new expenditures of public funds nor will it be implemented at the expense of any existing or pending publicly funded biomedical research programs...."Virtually all diseases have a hereditary component, which is transmitted from parent to child through the three billion DNA letters that make up the human genome," said Francis S. Collins, M.D., Ph.D., Director of the National Human Genome Research Institute (NHGRI) at NIH. 
"But progress in identifying the genetic factors that influence health or disease, or even the response to treatment, has been difficult. This initiative promises to identify rapidly the many genes in an individual that, taken together, contribute to an increased risk of illness -- or that increase the chances of a healthy life. As these genetic underpinnings become clear, researchers will be empowered to develop targeted treatments that either prevent illness from occurring or treat it effectively once it does."
Also see Jeffrey Brainard, NIH Proposes New Project and Database to Study Genetic Causes of Disease, Chronicle of Higher Education, February 9, 2006 (accessible only to subscribers). Excerpt:
The National Institutes of Health announced on Wednesday a new research effort and free, public database about the genetic causes of common illnesses. The project, to be financed jointly by the NIH and biotechnology companies, will be designed to protect the publishing priority of researchers who put data in the repository. The database, to be managed by the agency's National Library of Medicine, would contain genetic data from thousands of patients with particular diseases, with details that could identify the patients removed. The NIH will require users of the database to wait nine months before publishing papers based on data that they did not deposit themselves, Elias A. Zerhouni, the agency's director, said at a news conference. That policy is meant to prevent scientists who donate data from being scooped by competitors. The database will allow researchers to comb through the DNA of ill and healthy people to explore whether the sick individuals have genes in common. The NIH will also pay to study how genes and environmental causes, like pollutants, combine to cause disease....The new work will begin this year, financed with at least $20-million from Pfizer Inc., the pharmaceutical company, and additional contributions from Affymetrix Inc., a biotechnology company. The research will initially focus on seven illnesses and health conditions, to be determined based on a peer review of research proposals. However, the NIH cited as possible candidates arthritis, asthma, cancer, heart disease, and Alzheimer's disease. In his 2007 budget, released on Monday, President Bush proposed that the NIH spend an additional $68-million for work on other diseases. 
The project announced on Wednesday would be one of the NIH's few new efforts for 2007, when, under the president's proposal, the agency's overall budget would get no increase....The overall partnership between the NIH and the companies will be managed by the Foundation for the National Institutes of Health, an independent group that promotes similar alliances, and is to be called the Genetic Association Information Network [GAIN]....Pfizer, Affymetrix, and other contributors to the database will receive no special intellectual-property rights, according to Dr. Zerhouni. The data are considered "precompetitive," meaning that companies must perform further research on the publicly available data before they can reap commercially valuable discoveries. A similar model was followed by companies that contributed data to the HapMap project.
Comment. One of the largest impediments I've seen to OA data is the fear of being scooped. Many researchers are only willing to share their data after they've published all they have to say about it themselves. I applaud the GAIN proposal as one way to solve this problem and make more data more sharable more quickly. It's likely that the benefits of sharing the data will outweigh the costs of the nine-month embargo on publishing new results based on the data --but it's not inevitable. I hope the NIH monitors the balance of these costs and benefits over time and shortens the embargo as much as it can without deterring submissions.
Does CIRM Have a "Secret" Proposal on Openness? California Stem Cell Report, February 8, 2006. Excerpt:
We are sad to report, once again, the failure of the California stem cell agency to comply with its own promise of the highest standards of openness and transparency. We have written repeatedly about the failure of CIRM [California Institute for Regenerative Medicine, the agency disbursing state funds for stem-cell research] to provide adequate background material in a timely fashion on the important matters on its agenda. Even some of its own directors have complained publicly. The most recent example is Friday's meeting of the Oversight Committee. For example if you care about open access to CIRM-funded findings, you would be hard pressed to determine whether that is a subject to be considered at the session -- aside from the IP draft rules. But apparently it is. There is a brief mention on the agenda of a proposed venture with the Public Library of Science. If you dig into that enterprise, it is all about making scientific findings widely available. Why isn't there additional information available from CIRM about the venture? There is a bit of irony in all this – an apparent openness proposal that is basically being advanced in secret.
Derek Law, Delivering Open Access: From Promise to Practice, Ariadne, February 2006. Predicting "how the open access agenda will develop over the next ten years." Excerpt:
Although Swan's work has demonstrated the willingness of researchers to deposit articles in repositories, this has tended to be a passive rather than an active agreement, judging by the thin population of most institutional repositories. Open Access journals have also grown in numbers. In November 2005, the Directory of Open Access Journals listed almost 1,900 open access journals. But open access is a long way from being at the heart of scholarly communication and is ranged against large commercial forces in the STM (Scientific, Technical and Medical) publishing area; and although optimists will feel that the tide has turned on Open Access and that moves such as the much heralded but still awaited Research Councils' mandating of deposit will tip the balance, it has to be acknowledged that the UK scientific community looks more like donkeys led by lions (to paraphrase Max Hoffmann) than the reverse. The community looks remarkably unmoved by considerations of the future of scholarly communication. And yet it is common ground between at least some publishers and some proponents of open access that the present model is disintegrating and cannot survive....In sum then Open Access has made good progress (although as the mailing lists show there remains substantial confusion between the green and gold routes, between Open Access and Open Archives), but commercial STM publishing remains in rude and profitable health....Another key driver is national ambition of small countries. A number of programmes have begun in Europe in countries as disparate as the Netherlands, Portugal and Scotland, where Open Access is seen as a key element of national strategy to cover everything from the dissemination of publicly funded research to encouraging inward investment. 
The DARE Project in the Netherlands is the most developed of such programmes but pragmatism rather than optimism encourages one to believe that other countries will see advantage in co-ordinating and optimising the dissemination of their research....Open Access is a battle where a ragamuffin band of academics and librarians are challenging the imperial pomp of billion dollar global companies. In those terms the contest is both unequal and unwinnable, since too much inertia is built into the system. However, as this article has tried to show there are powerful drivers and change agents in place - technology; the nature of research; Google; national interest - which coupled with the sheer bloody-mindedness and persistence of the proponents of open access will lead to its growth as the dominant form of scholarly discourse.
Comment. I have only two quibbles. (1) "[T]here remains substantial confusion...between Open Access and Open Archives." This is itself an example of the confusion. OA archives are themselves OA. OA is not limited to OA journals. (2) "Open Access has made good progress...but commercial STM publishing remains in rude and profitable health." This assumes that the purpose of the OA movement is the destructive one of harming commercial publishers. It's not. The purpose of the OA movement is the constructive one of providing OA to more and more research literature. It's possible for the commercial publishers to join in this endeavor, and many of them are doing so, if not by converting non-OA journals to OA then by permitting their authors to deposit their postprints in OA repositories.
James Watson, UK to launch online life sciences archive, VNUnet, February 9, 2006. Excerpt:
UK scientists have started work on the creation of a free online digital archive of peer-reviewed medical research in the UK. The site, due to be launched at the beginning of next year, will hold more than 500,000 research articles. An estimated one million unique users are expected to visit the site every month, accessing about three million full-text medical articles. Led by the Wellcome Trust, with support from a group of major UK biomedical research funding bodies, the UK PubMed Central (UKPMC) project aims to mirror a similar repository in the US, while also accepting UK research material. ‘The implementation of UKPMC represents the creation of a significant resource for the life sciences community,’ said Robert Terry, senior policy adviser at the Wellcome Trust. ‘It is about a subject-based repository for life sciences that is freely available via the internet. It is part of a movement in research to improve access to peer-reviewed published literature.’ The process of finding a supplier to develop, host and manage the service started this month. The appointed contractor will host and manage three separate components, comprising a UK-hosted mirror of the USPMC, a local author manuscript submission and tracking system, and a system to provide authenticated login services for manuscript submission. ‘We are making a standalone UK portal of the US version. It mirrors US material and takes UK material,’ said Terry....A contract is expected to be awarded in July.
Klaus Graf, Ist nur unmittelbarer Open Access sinnvoll? Archivalia, February 5, 2006. On whether the definition of OA should include items whose free online access is delayed (yes), items that remove price barriers but not permission barriers (no), items other than journal articles (yes), and a commitment to long-term preservation (yes).
Joe Miller has rediscovered an early glimpse of OA by Benjamin Kaplan, An Unhurried View of Copyright (Columbia University Press, 1967), pp. 120-121:
Copyright is likely to recede, to lose relevance, in respect to most kinds of uses of a great amount of scholarly production which now sees light in a melange of learned journals and in the output of university presses. In the future little of this will ever be published in conventional book or journal form. Authors will offer their manuscripts for editorial screening; upon acceptance the material will enter directly into the electronic system [described on p. 119], where it will be open to quick retrieval for consultation and study. (One energetic mind has conceived that the cost of introducing works into a system may finally run so low as to justify inclusion, in earmarked ‘compartments,’ of works rejected by the editors: an authors’ paradise!) For many of the uses available through the machine, exaction of copyright payments will be felt unnecessary to provide incentive or headstart–especially so, when the works owe their origin, as so many will, to one or another kind of public support.
(PS: Matthew Bender has reprinted Kaplan's book with commentary by a dozen contemporary law professors.)
Hillel Italie, Publisher to Offer Book Content Online, Washington Post, February 6, 2006. (Thanks to William Walsh.) Excerpt:
At a time when publishers are suing to prevent Google from putting excerpts of copyrighted books online, HarperCollins has started an advertiser-supported program that will offer a free look at the full text of selected works. The Harper program, announced Monday, is being launched with Bruce Judson's "Go It Alone! The Secret to Building a Successful Business on Your Own." The book was published in hardcover at the end of 2004, and recently came out as a paperback. Anyone who wants to read the whole text can visit the author's Web site. "We hope this pilot will demonstrate a win-win for publishers, authors and search engines. The new era does not need to be a zero sum game," HarperCollins CEO Jane Friedman said Monday in a statement....[On the lawsuits against Google:] "This has always been an issue about control, who gets to decide what gets put online," Brian Murray, group president of HarperCollins, said of the legal battles. With control of its books in mind, HarperCollins announced late last year that it was digitizing its vast catalog....Murray told The Associated Press that a measure of the Harper program's success will be whether "the new revenue stream" of advertising money compensates for any lost sales....But several writers, including marketer Seth Godin and science fiction author Cory Doctorow, have made a point of offering free content online, believing that it helps sales. M.J. Rose, a marketing expert and author of "Lip Service" among other novels, praised HarperCollins for its "smart" initiative. "We all know that readers don't want to read the whole book online," Rose said. "But as Seth Godin proved with `Unleashing the Idea Virus' - people will start a book on line and if they get hooked - click over and purchase it."
Mary Sue Coleman, Google, the Khmer Rouge and the Public Good, text of a talk delivered February 6, 2006. The President of the University of Michigan defends the Google Library program, and Michigan's participation in it, to the Association of American Publishers, which is suing to shut it down. (Thanks to John Battelle.) Excerpt:
Digitizing the entire Michigan library was a project our librarians predicted would take more than one thousand years. Larry [Page, Google co-founder and Michigan alumnus] told us Google could make it happen in six....It is [the AAP's] criticism of the project that prompted me to accept your invitation to speak — and explain why we believe this is a legal, ethical, and noble endeavor that will transform our society. Legal because we believe copyright law allows us the fair use of millions of books that are being digitized. Ethical because the preservation and protection of knowledge is critically important to the betterment of humankind. And noble because this enterprise is right for the time, right for the future, right for the world of publishing, right for all of us....[T]he Google project is a remarkable opportunity – and a natural evolution – for a university whose mission is to create, to communicate, to preserve and to apply knowledge....The University of Michigan’s partnership with Google offers three overarching qualities that help fulfill our mission: the preservation of books; worldwide access to information; and, most importantly, the public good of the diffusion of knowledge....We were digitizing books long before Google knocked on our door, and we will continue our preservation efforts long after our contract with Google ends. As one of our librarians says, “We believed in this forever.” Google Book Search complements our work. It amplifies our efforts, and reduces our costs. It does not replace books, but instead expands their presence in the marketplace. We are allowing Google to scan all of our books – those in the public domain and those still in copyright – and they provide our library with a digital copy. We insisted on this for one very important reason: Our library must be able to do what great research libraries do – make it possible to discover knowledge. The archive copy achieves that.
This copy is entirely, and only, for preservation and research. As for the public domain works, we will use them in every way possible. For in-copyright works, we will make certain that they remain dark until falling into the public domain. Let me assure you, we have a deep respect for intellectual property – it is our number one product. That respect extends to the dark archive and protecting your copyrights....Let me repeat that: I guarantee we will protect all copyrighted materials. I assure you we understand that providing public access to materials in copyright, particularly those still in print, would be unlawful. Merely because our library possesses a digital copy of a work does not mean we are entitled to, nor will we, ignore the law and distribute it to people to use in ways not authorized by copyright....At the same time, we absolutely must think beyond today. We know that these digital copies may be the only versions of work that survive into the future. We also know that every book in our library, regardless of its copyright status today, will eventually fall into the public domain and be owned by society. As a public university, we have the unique task to preserve them all, and we will....Most recently, Hurricane Katrina dealt a blow to the libraries of the Gulf Coast. At Tulane University, the main library sat in nine feet of water – water that soaked the valuable Government Documents collection: more than 750,000 items … one of the largest holdings of government materials in Louisiana … 90 percent of it now lost. In the 1970s, the Khmer Rouge regime in Cambodia decimated cultural institutions throughout the country. Khmer Rouge fighters took over the National Library, throwing books into the street and burning them, while using the empty stacks as a pigsty. Less than 20 percent of the library – home for Cambodia’s rich cultural heritage – survived....Remember, we believed in this forever. 
We have been a leader in preservation and will continue to do so – I expect nothing less of Michigan. By digitizing today’s books, through our own efforts and in partnership with others, we are protecting the written word for all time. Just as powerful as the preservation aspect of Google Book Search is the fact our venture will result in a magnitude of discovery that seems almost incomprehensible. I could not have imagined that in my lifetime so much diffuse information literally would be at my fingertips. It is an educator’s dream, knowing that the vast body of information held in the libraries of Michigan, Stanford, Harvard, Oxford and the New York Public Library will be universally searchable and, in the case of public domain works, accessible....Google Book Search, with the results it provides users, is a massive, free directory to your publications. That directory includes snippets, which I know is a four-letter word with you. But I confess I see no difference between an online snippet, a card catalog, or my standing at Borders and thumbing through a book to see if it interests me, if it contains the information I need, or if it doesn’t really suit me. So what will Google Book Search, snippets and all, do for book sales? It will whet the appetites of users and drive them to libraries, bookstores, and online retailers to buy more books. I believe we are seeing an exciting new business model unfolding, and I can’t understand why any bookseller or publisher, especially scholarly presses with such narrow audiences, would oppose an approach that all but guarantees increased exposure....The bottom line, for me and for you, is that our publishing houses and our authors can only benefit financially and reputationally from the widest possible awareness of books and their availability....At its essence, the digitization project is about the public good. 
It transcends debates about snippets, and copyright, and who owns what when, and rises to the very ideal of a university – particularly a great public university like Michigan. This project is about the social good of promoting and sharing knowledge. As a university, we have no other choice but to do this project.
Update. Also see Andrea Foster, U. of Michigan President Defends Library's Role in Controversial Google Scanning Project, Chronicle of Higher Education, February 7, 2006 (accessible only to subscribers). Excerpt: "[I]n a question-and-answer session following Ms. Coleman's speech, several book publishers loudly disputed Ms. Coleman's assertion that the project is legal. They asserted that the enterprise would enrich Google's coffers while neglecting the rights of publishers and authors."
Swiss scientists sign up to open access, Swissinfo, February 7, 2006. An unsigned news story to accompany last week's flurry of Swiss OA activity. Excerpt:
Swiss scientific organisations have agreed to allow open access to their research information for all interested parties, free of charge. The joint signing of the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities will help break down barriers for accessing scientific knowledge. The declaration, launched in 2003, is the response from the science world to the new information-sharing opportunities offered by the Internet, a statement from the Swiss National Science Foundation said on Tuesday. The National Science Foundation, along with four other organisations, is one of the signatories of the Berlin Declaration. Open-access publishing allows readers to access, copy, and distribute research papers freely, subject to proper attribution of authorship. In treating science as a public asset for researchers, the objective is to "return science to the scientists" and stimulate new research ideas. "We want to open up our archives to a wider public because as taxpayers, they fund our work," Andreas Dick of the National Science Foundation told swissinfo. "Researchers will put whatever they have published online, making it easily available and fast-tracking the way from the journal to the public," he added. The Conference of Swiss University Libraries has already been canvassing for the widespread signing of the declaration for some time, mainly on cost-saving grounds. Open-access systems will offer an alternative to the rapid increase in prices for subscriptions to commercial magazines from scientific publishers. In all, around 2.5 million articles are published every year in 24,000 scientific magazines....In October 2003, the Berlin Declaration issued an open invitation to governments, universities, research institutions, funding agencies, foundations, libraries, museums, archives, learned societies and professional associations to sign up to the principle of open access.
Also see today's press release from the Swiss National Science Foundation (SNF), one of the new signatories to the Berlin Declaration.
Barbara Quint, HighBeam Introduces Free Full-Text Journal Articles, Information Today, February 6, 2006. Excerpt:
With unlimited access to 35-plus million full-text articles available to subscribers for $19.95 a month or $99.95 a year, HighBeam Research is already a considerable bargain, particularly when compared with traditional services charging pay-per-view on top of subscription fees. Now, HighBeam has made 1.5 million articles from its library available to anyone at no charge; the service does not even require registration information. To reach the new “freebies,” searchers conduct a standard HighBeam search and receive results reflecting the entire library with labels identifying the free and “premium” content. However, a Modify Results box to the left on the screen allows searchers to display only the free material.... HighBeam draws its content from arrangements with full-text aggregators, such as Thomson Gale and ProQuest, and increasingly from direct licensing from publishers. The free 1.5 million articles come from more than 200 sources selected for their user interest and availability. Most of the material is current as of a day or two; archives can go back as far as 20 years or only 2 or 3. Examples of available titles are BusinessWire, Financial Management, Science News, and USA Today. Patrick Spain, chairman and CEO of HighBeam Research, said that the free service will usually include all of the articles HighBeam carries from the 200-plus sources. In contrast, the full HighBeam collection of more than 35 million articles includes more than 3,000 business, trade, academic, special, and general interest publications. The addition of Knight-Ridder’s 30 newspapers will support local and regional news searches, while The Washington Post’s addition will offer national news archiving back to 1987. Spain stated that he planned to expand both the free and premium content on HighBeam.
Although the announcement of the new free full-text collection stated that it encompassed 200-plus sources, the actual count is 368....I asked Spain why he had chosen to change its policy and offer material without requiring either subscription or registration. He admitted that when he started the company, he never planned to offer anything totally free, but times had changed. “We were driven by two reasons, one from publishers and one from users. A number of our publishers have adopted the free model. For example, the Fortune line is now all free. We have had it in our premium area for a long time. The challenge became, ‘Is it fair to sell something that’s available elsewhere for free?’ As the Web becomes more transparent, we effectively can’t. The second is that users are increasingly unwilling to pay, which is why publishers are making more [available] free.” I asked Spain whether the free service might be withdrawn if it didn’t work out. He said, “No. We understand costs and will keep testing our new free content, but we’re unlikely to withdraw it. Basically we’re driven by a market we don’t control, both publishers who keep evolving in response to advertiser rates and user behavior and users who are increasingly aware that some advertiser will pay for what they want if they go looking for it.” He admitted: “If users hate ads, then we may evolve back to an ad-free environment, but if you want your site highly rated on Google and Yahoo!, free ranks higher and that keeps driving traffic.”...Spain told me, “An interesting thing is starting to happen. Free content gets people to look at our site and use our tools. While they’re getting the free material, they keep running into our premium content and see the tools that subscribers get. It seems to help us get subscriptions.”
Biology Direct is a new peer-reviewed OA journal from BMC. From today's press release:
BioMed Central is pleased to announce the launch of Biology Direct, a new online open access journal with a novel system of peer review. Biology Direct will operate completely open peer review, with named peer reviewers' reports published alongside each article. The journal also takes the innovative step of requiring that the author approach Biology Direct Editorial Board members directly to review the manuscript. The journal...launches with publications in the fields of Systems Biology, Computational Biology, and Evolutionary Biology, with an Immunology section to follow soon. Biology Direct considers original research articles, hypotheses, and reviews and will eventually cover the full spectrum of biology. Biology Direct is led by Editors-in-Chief David J Lipman, Director of the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at NIH, USA; Eugene V Koonin, Senior Investigator at NCBI; and Laura Landweber, Associate Professor at Princeton University, Princeton, NJ, USA. Lipman has long been interested in open access and has been central in the development of PubMed, GenBank and PubMed Central, the NLM's open access repository for literature in the life sciences.
Comment. I'm glad to see a new OA journal from BMC, glad to see experiments with peer review, and glad to see David Lipman, whom I respect very much, take a leading role in this. I just want to make my usual point that the openness of access and the openness of peer review are not intrinsically connected, even if there are synergies worth exploring. OA journals can use any kind of peer review, from the most traditional to the most innovative. No one associated with Biology Direct is denying that. I'm just trying to head off the misunderstanding that OA journals must use open forms of review, or worse, the misunderstanding that achieving OA must wait for peer-review reforms.
I've added a section on Open access by the numbers to the Wikipedia article on OA. At first I thought I'd keep this list on my page of lists. But while I knew I could start the list, I also knew I wouldn't have time to flesh it out or keep it up to date. Wikipedia was the perfect solution. The numbers on the list now are just a small sample of what we can eventually provide.
Please help enlarge the list and keep it current. If you need numbers for articles, presentations, or policy decisions, consult the list. If you don't know whether the numbers are up to date on the day you visit, then you can verify (and update) them by following the links on the page. I'm hoping this leads to an era of better-informed debate, journalism, and policy-making.
Stevan Harnad, Open Access vs. Back Access, Open Access Archivangelism, February 5, 2006. Excerpt:
Lower tolls are preferable to higher tolls, shorter embargoes are preferable to longer embargoes, longer temporary access is preferable to shorter temporary access, wider access is preferable to narrower access, but Open Access is still Open Access, which means free, immediate, permanent online access to any would-be user webwide, and not just to those whose institutions can afford the access tolls of the journal it happens to be published in. The measure of the percentage of OA is the percentage of current annual article output that is freely accessible online. The rest is merely measuring Back Access (BA). BA is welcome, but it is not OA; and not what the research community wants and needs most today. Research uptake, usage, impact and progress do not derive any benefit whatsoever from embargoes, delaying full access and usage. That is not what research is about, or for....Gold OA publishing is a welcome bonus; so is hybrid "open choice" optional gold. BA is welcome too; but it cannot and should not be reckoned as OA, any more than re-runs should be reckoned as fresh movies, hand-me-downs as fresh fashion, or left-overs as fresh fare. One of the biggest and most important components of the OA impact advantage, especially in fields that have already reached 100% OA, such as astrophysics, is EA (Early Access). One would think that earlier access merely brings earlier impact, not more impact. But Michael Kurtz's data shows that EA not only adds a permanent increment to citation counts, but to their continuing growth rate too. It is as if earlier usage branches early, and the branches keep branching and generating more usage and citations. Of course, this will vary with the uptake-latencies, time-constants and turn-around times of each field, but I doubt that progress in any field benefits from, or is even unaffected by, access delays, any more than it is likely to be immune to publication delays.
If a work is worth publishing today, it is worth accessing today, not just in 6 months, 12 months, or still longer. That is what needs to be counted and tallied if we are tracking the growth of OA today. If we want to maintain a separate tally for BA too, that's fine, but beside the point, because after the fact, insofar as OA and immediate research progress -- research's immediate priority today -- are concerned. BA may be useful to students, teachers and historians, but it is OA that is needed by researchers, today. Researchers are both the providers and the primary users of research: They (and their institutions and funders) are also the ones in the position to provide -- and benefit from -- immediate OA.
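Harnad's distinction between a one-time head start and a compounding growth advantage can be sketched numerically. The following is a toy calculation with invented numbers; it is not Kurtz's actual model or data, only an illustration of why a boost to the growth rate widens the citation gap over time instead of leaving a fixed increment:

```python
# Toy illustration (invented numbers, not Kurtz's model): if early access
# raises both first-year citations and the yearly growth rate of further
# citations, the gap versus delayed access keeps widening over time.

def cumulative_citations(years, first_year=10.0, growth=1.10):
    """Cumulative citations assuming each year's new citations grow proportionally."""
    total = first_year
    yearly = first_year
    for _ in range(years - 1):
        yearly *= growth      # "branches keep branching": proportional growth
        total += yearly
    return total

# Delayed access: baseline first-year citations and baseline growth.
delayed = cumulative_citations(10)
# Early access: a head start plus a slightly higher growth rate.
early = cumulative_citations(10, first_year=13.0, growth=1.13)

print(f"delayed: {delayed:.0f}, early: {early:.0f}, gap: {early - delayed:.0f}")
```

After ten years the gap is far larger than the initial three-citation head start alone would produce, which is the shape of the effect Harnad attributes to Early Access.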