News from the open access movement
Gregory Ippolito and three co-authors, Towards Open Access, Molecular Cancer, June 2, 2005. An editorial. Excerpt: 'Indeed it seems that an increasing number of subscription-based journals are exploring hybrid approaches, such as that used by PNAS, that enable authors to offer their content online by paying publication costs for open access. But the publishing world is uniquely devoid of market forces that might expedite this transition. One need only look to the skyrocketing subscription costs (print or online) for scientific and medical journals, and the inability of institutional libraries to afford them. A study by the NIH concluded that journal prices increased during the last decade at a rate that was over 6 times inflation. A recent editorial published in the New England Journal of Medicine acknowledges that some journal editors and publishers will perceive some aspects of the Public Access Policy as "potential threats" to their "revenue sources". In addition, some traditional publishers have assaulted Open Access journals and the NIH policy initiative with dubious arguments....As the number of Open Access journals and publishing houses grows, the scientific community is now in the position to 'vote' for one or the other model. We firmly believe that immediate and unrestricted access to scientific information will be the gold standard for scientific publication and urge every researcher to submit their manuscripts to Open Access journals. The growing number of freely available articles, which are archived in PubMed Central, marks the trend towards Open Access.'
Bill Dyszel, Google Print Brings Publishers Business, Publish, June 3, 2005. Excerpt: 'The recent release of the Google Print Service grabbed the attention of attendees at the Book Expo here [New York] Thursday. A panel discussion featured several publishing executives who reported on their experiences with posting searchable full-text versions of their titles for viewing on Google. While some of the publishers encountered early resistance within their organizations over fears of piracy and improper use, all eventually found that the benefits of making their content searchable and viewable far outweighed any risks.'
The European Commission has launched AthenaWeb, a portal of OA audiovisual information about European science. From yesterday's press release: 'The European Commission, in association with a number of professional media and science organisations, is launching an innovative web portal designed for audiovisual and scientific communities in Europe, to support their work in promoting and communicating about science. Functions of this new platform include an electronic library of science programmes, an online agenda of key events, a European science news service and a forum for co-productions and partnerships.' (Thanks to ResourceShelf.)
Update. AthenaWeb falls regrettably short of OA, especially for a publicly-funded site. Access is free only for registered users, and registration is free only for "professionals". I can't find the site's criteria for what counts as a professional. Perhaps students, autodidacts, and unprofessional curious minds will get a publicly-funded error message. (Thanks to Klaus Graf.)
Richard Poynder, No Stranger to Controversy, Open and Shut, June 3, 2005. Excerpt: 'Critics have been quick to point out that the threatened CAS Registry database was itself built with taxpayers' money — in the form of a grant from the National Science Foundation (NSF). They also argue that as a non-profit organisation CAS has over the years benefited substantially from the public purse in the form of tax concessions. As open-access advocate Jan Velterop put it on the Liblicense mailing list "It's not just the generous tax concessions that come with a non-profit status, but also the NSF grant (I hear it was in the order of $25 million) that enabled the ACS to establish a chemical registry system."...Those who have followed the development of CAS over the years, for instance, will recall that on 7th June 1990 the online service Dialog made a 36-page complaint about ACS to the Washington DC Federal Court, alleging unfair competition, and seeking $250 million in damages....Dialog argued that CRSD "has no substitutes and thus constitutes a separate and distinct market" which CAS was monopolising in contravention of anti-trust laws. Amongst the 10 claims made by Dialog were five alleging monopoly practices, one alleging unfair competition, and one relating to a subsidy of $15 million plus that had been provided by the NSF at the time the CAS database was being created (the same database and the same grant that critics of the ACS complaint against PubChem are now citing), and which Dialog claimed obliged ACS to license all data at a fair price. This obligation had been breached by CAS, alleged Dialog, both by its withdrawal of connection table data essential for graphical structure searching, and because CAS had for some years refused to licence its abstracts of the chemical literature....ACS responded to the lawsuit by countersuing on issues of accounting. The case was eventually settled out of court.'
This month marks the launch of R4L (Repository for the Laboratory), a JISC-funded, Eprints-based, open-access repository for data and documents from laboratory science in the UK. From the web site: 'This project will address the area of interactions between repositories of primary research data, the laboratory environment in which they operate and repositories of research publications into which they ultimately feed (through documented interpretation and analysis of the results and in explicit linking and citation of the data sets). It will develop prototype services and tools to address the issues of working with, disseminating and reporting on experimental data. In collaboration with scientific equipment manufacturers the project will develop methods to make raw experimental data available and richly annotated with metadata, as it is generated in the laboratory. The possibilities for aggregating heterogeneous raw experimental data from different sources and experiments, via effective management of the repository for the laboratory, will also be explored and prototype tools developed to enable, manipulate and derive reports for publication purposes. It will also engage in discussions with publishers and societies to determine anticipated requirements.'
The presentations from the CNI-JISC-SURF conference, Making the strategic case for institutional repositories (Amsterdam, May 10-11, 2005), are now online.
A critical mass of Spanish research institutions have purchased memberships in BioMed Central. From today's press release: 'Spain has taken huge strides in supporting Open Access publishing, with over 67 universities, hospitals and research institutions becoming BioMed Central members in recent months. The membership agreements cover the cost of publication, in BioMed Central's 130+ Open Access journals, for all researchers, teachers and students at member institutions. On acceptance, articles will be immediately and freely available online to all, in accordance with BioMed Central's Open Access policy. BioMed Central has recently formalised agreements with four regional health authorities in Spain: Aragon, Madrid, Galicia and Asturias. Madrid, the most recent health authority to come on board, agreed membership for two years for 22 institutions, thanks to the Consejeria de Sanidad y Consumo de Madrid / Agencia Pedro Lain Entralgo. In Aragon, Instituto Aragones de Ciencias de la Salud agreed membership with BioMed Central for all five hospitals in the health authority. In Galicia, eight hospitals became members via Servicio Galego de Saude, while the universities of Galicia are also members through the BUGalicia Consortium....In addition, the Spanish Research Council has taken out membership for all 13 of its biomedical institutions countrywide.'
In yesterday's issue of SOAN I said that the four journals of the American Diabetes Association (ADA) required a six-month embargo on public access for any of their articles deposited in PubMed Central as part of the NIH public-access policy. I was right at the time I did my research but wrong at the time I mailed the issue. A week before I went to press, the ADA changed its policy in order to permit its NIH-funded authors to choose immediate public access. Thanks to Peter Banks, the ADA Publisher, for the correction. I thanked and congratulated the ADA on two mailing lists for its change of policy and I'm glad to repeat my thanks and congratulations here. The ADA is now the only publisher of non-OA journals that has announced a policy allowing its NIH-funded authors to request immediate public access through PMC. I hope other publishers will follow suit.
Michele Barbera and three co-authors, HyperJournal, PHP scripting and Semantic Web technologies for the Open Access, a presentation at the Workshop on Scripting for the Semantic Web (Heraklion, May 30, 2005). Abstract: 'In this article we present a high-level overview of the HyperJournal project, an effort to provide novel possibilities both in Scientific Publishing and in access to Scientific Contributions, according to the Open Access movement guidelines. All the work has been implemented using the PHP Web Script language and interfacing with Java modules such as Sesame and RDFGrowth. Such interfaces, here illustrated, are of general use for projects with similar needs. While the HyperJournal project itself is in its infancy, a first release is already available for download and public use, thus representing one of the few real and deployable examples of Semantic Web applications.'
Update. This paper has now been self-archived at E-LIS.
On May 31, the Nature Publishing Group launched a new journal, Nature Chemical Biology (NCB). I'm not announcing this because NCB is OA (it's not), but because it relies on the beleaguered, OA PubChem. Excerpt from the press release: 'A unique feature of the journal's website is linkage of chemical compounds mentioned in research articles to the PubChem database, a new initiative of the US National Center for Biotechnology Information (NCBI) of the National Institutes of Health. This is the first example of a commercial publisher linking into PubChem, says Nature....The PubChem link allows readers of Nature Chemical Biology to go in a single click, from the mention of a molecule in a paper to a rich and growing collection of information about Chemical Structures and properties, and biological assay results, hosted by the NCBI. In addition, when a manuscript is accepted for publication in Nature Chemical Biology the acceptance process includes an automated deposition of the article's compound data to PubChem, and the creation of mutual web links between these PubChem records and the paper concerned. This will enhance the utility of Nature Chemical Biology for the scientific community by providing new data for the PubChem database.'
(PS: NPG clearly finds PubChem to be useful and has endorsed it in the most practical way by becoming invested in its survival. Can NPG make a public statement to defend PubChem from the lobbying blitz by the American Chemical Society?)
Update. The first issue of Nature Chemical Biology is now online. At the same time, NPG has removed the press release announcing the launch. So my link to it above is no longer working. But you can still find a copy e.g. at the LaboratoryTalk web site.
The Hidden Web at your fingertips, for engineering, mathematics and computing, a press release on EEVL Xtra (June 2, 2005) from its developers. Excerpt: 'EEVL Xtra is a brand new, free service which can help you find articles, books, the best websites, the latest industry news, job announcements, technical reports, technical data, full text eprints, the latest research, teaching and learning resources and more, in engineering, mathematics and computing. EEVL Xtra is for anyone looking for information in engineering, mathematics and computing. Academics, researchers, students, lecturers, practicing professionals and anyone else looking for information in these subjects should find it useful. EEVL Xtra cross-searches (hence the 'X' in Xtra) over 20 different collections relevant to engineering, mathematics and computing, including content from over 50 publishers and providers. It doesn't just point you to these databases, but rather it 'deep mines' them, so you can search them direct from EEVL Xtra....EEVL Xtra helps you find subject-based information. Many of the things you'll find through EEVL Xtra come from the 'Hidden Web', and are not indexed by search engines. In many cases, the full text of items found via EEVL Xtra should be freely available. In some cases, the full items are details of books, websites or articles. In some cases, the full text of items may be available to you if your institution subscribes to the publication.'
Rachel Deahl, Google Isn't the Only Digitizer in Town, Book Standard, June 02, 2005. Excerpt: 'Google is not the only player in the [book] digitization game. A number of other organizations, mostly academic institutions and other non-profits, are digitizing books as diligently as, if a bit more quietly than, Google. [Ed] McCoyd [Director of Digital Policy at the Association of American Publishers] said most publishers are not focused on these mostly non-commercial efforts, but they remain an interesting component of the effort to bring books to a computer screen near you. Carnegie Mellon's Million Book Project, also known as The Universal Library: a non-profit academic effort with the goal of putting a million books online, freely searchable, by 2007. Gloriana Sinclaire, Dean of Carnegie Mellon's University Library, says the program currently has 125,000 books digitized and available on the web. Launched four years ago, and funded by the National Science Foundation, The Universal Library is primarily focused on digitizing books in the public domain, though it does feature some more recent titles (most from university presses). Project Gutenberg: another not-for-profit digitization effort (the brainchild of Michael Hart), an online collection of ebooks that traces its history back to 1971. Like The Universal Library, the interface at Project Gutenberg is significantly lower-tech than Google Print's offering—Google allows readers to digitally flip through books (replicating any art and fonts) whereas both Project Gutenberg and The Universal Library offer text versions that don't have page numbers, much less virtual bindings --the goal of Project Gutenberg is to disseminate texts as freely as possible. The University of California and the University of Virginia: two universities that have digitized their collections on their own, amassing significant online libraries, as opposed to joining the Google Print for Libraries initiative.
And not all publishers have abandoned the DIY model, either. The National Academies Press has digitized many of its own titles, making them viewable on its website, with purchasing options. While such efforts from publishers remain the exception to the rule --McCoyd said he hadn't heard of any publishers digitizing their own titles-- the strategy may become commonplace in the future.'
A controversial bill in the California Assembly would require public schools to use shorter textbooks (200 pp. max) with appendices of URLs pointing to additional material. The bill has just been approved by the Assembly and moves to the Senate. For details, see Jim Sanders' story from the May 26 Sacramento Bee. (Thanks to LIS News.)
Update (June 23). The bill was tabled.
Google has launched SiteMaps to help webmasters make sure that Google --and other search engines-- can find and crawl their content. The best source of details so far is Danny Sullivan's interview with Shiva Shivakumar in yesterday's SearchEngineWatch. Shivakumar is the engineering director and technical lead for Google Sitemaps. Excerpt (quoting Shivakumar): '[How does this work?] Webmasters create XML files containing the URLs they want crawled, along with optional hints about the URLs such as things like when the page last changed, and the rate of change. They host the Sitemap on their server and tell us where it is. We provide an open-source tool called Sitemap Generator to assist in this process. Eventually, we are hoping webservers will natively support the protocol so there are no extra steps for webmasters. When a Sitemap changes, we support auto-notifying us so we can pick up the newest version....At this early stage, we cannot guarantee that we'll crawl or index all your URLs. But as we understand the data better, we hope to get more of the data into our crawl and indices....[Is this free?] Absolutely. Also, this is an open protocol [under a Creative Commons Attribution/Share-Alike license]. We are hoping all webservers and search engines adopt this protocol and benefit from the increased collaboration.'
(PS: OA repositories and journals should definitely try this.)
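For repository managers who want to try this, the Sitemap file Shivakumar describes is a small XML document listing each URL with optional hints. Here is a minimal sketch in Python of generating one; the repository URL and entries are hypothetical, and the namespace shown is the one later published at sitemaps.org (check the current protocol documentation before deploying):

```python
# Sketch: generate a minimal Sitemap XML file for an OA repository.
# The <urlset>/<url>/<loc>/<lastmod>/<changefreq> tags follow the
# published Sitemap protocol; the example URL is hypothetical.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(entries):
    """entries: iterable of (url, lastmod, changefreq) tuples."""
    ET.register_namespace("", NS)  # emit NS as the default namespace
    urlset = ET.Element("{%s}urlset" % NS)
    for loc, lastmod, changefreq in entries:
        url = ET.SubElement(urlset, "{%s}url" % NS)
        ET.SubElement(url, "{%s}loc" % NS).text = loc
        ET.SubElement(url, "{%s}lastmod" % NS).text = lastmod
        ET.SubElement(url, "{%s}changefreq" % NS).text = changefreq
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([
    ("http://repository.example.edu/eprint/1234", "2005-06-01", "monthly"),
])
print(xml)
```

The resulting file is hosted on the repository's own server and its location submitted to Google, which then crawls the listed URLs on its own schedule.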
Hiding the Data on Drug Trials, New York Times, June 1, 2005 (free registration required). An unsigned editorial. Excerpt: 'Any Americans gullible enough to believe that the drug industry can be trusted to report fully on what clinical trials it is sponsoring or what results were found must be sorely disappointed by recent developments. A government survey determined that three of the largest drug companies have effectively reneged on their pledges to list trials in a federal database. A report in yesterday's Times by Alex Berenson reveals that this intransigence also extends to a voluntary industry database....A public listing of trials is important to prevent drug makers from hiding results that reflect badly on their drugs while publishing only results that make their drugs look good. By law, the companies are supposed to register important trials with a government Web site. Most manufacturers are complying, but the three big obfuscators - Merck, GlaxoSmithKline and Pfizer - are often getting around the requirement by not naming the drugs they are testing, instead using phrases like "an investigational drug." Merck was the worst offender, failing to provide a drug's name some 90 percent of the time. Glaxo withheld a name 53 percent of the time, and Pfizer 36 percent of the time....A coalition of medical editors has just stiffened its announcement that leading journals will soon refuse to publish the results of any clinical trial that has not complied with tough international standards for transparency. That should apply useful pressure to recalcitrant companies. But the best hammer would be federal legislation to compel all companies to provide critical information when a trial is begun and full results when a trial is completed, with stiff penalties for noncompliance.'
HAL (Hyper Articles en Ligne) is a fairly new OA eprint archive from France's Centre pour la Communication Scientifique Directe (CCSD). It accepts deposits from just about any field of the sciences and humanities. When deposited eprints belong to any of the disciplines covered by arXiv, HAL automatically co-deposits them in arXiv. It also supports more metadata specificity than Dublin Core. One effect of the more structured metadata is that users can look at the eprints from a given lab or university and form other ad hoc collections. (PS: HAL's English-language interface is not yet as extensive as the French one.)
Alan Ryan, Fading Ivy, Financial Times, June 3, 2005. A joint review of three recent books on research universities: Jennifer Washburn's University, Inc., Ross Gregory Douthat's Privilege, and Richard Bradley's Harvard Rules. This excerpt focuses on Washburn's book: '[Washburn's third thesis] is that since the public spends a great deal of money on the research done in universities, the public ought to get a fair share of the benefit. Otherwise, the public pays for the research twice over, first through its taxes and then through the profits made by the companies that exploit the research. The question is what a fair return to the public actually is. Might the public get a better bargain if the results of research remain available to anyone who wishes to use them, as part of the intellectual "commons"? And might the public have grounds for thinking that, if private sponsors fund research, the quality of what is produced should be more carefully policed than at present? When more than 90 per cent of papers reporting sponsored research into the effectiveness of drugs report positive findings, and only 60 per cent of non-sponsored research do so, anxiety about the corruption of the researcher's judgment - even inadvertent - is not misplaced. The temptation is to denounce the wickedness of corporate capitalism, university administrators and the other "usual suspects". Washburn is too intelligent to succumb to that temptation. Universities have been starved of public funding over the past two decades and can hardly be expected to pass up the offer of private funds; and the pace of innovation in fields such as bio-technology is such that no company can pass up the chance to be involved in research. What is needed is not a retreat into an ivory tower but better regulation.'
On May 20, the Academic Senate of the University of California at Santa Cruz adopted a Resolution on Scholarly Publishing. (1) To control prices, when publishers propose "systemwide contracts for access to online content with prices which exceed the consumer price index by more than 1.5% in any one year averaged over five years," then the Library Committee of the Academic Senate will be asked to comment on the offer. (2) To prevent prestige-seeking from rewarding journals that charge exorbitant prices, the university will create a task force "to explore ways to meet the challenge of academic evaluation in an era when publication and performance possibilities are changing....It is hard for academic evaluations to avoid using the prestige of particular outlets, from journals to performance venues, as a metric for quality. By so doing, however, the power of some publishers and some performance venues, and their ability to charge high and rising prices, is maintained. This resolution proposes that an appropriate group of faculty be asked to find ways of circumventing this problem." (3) To help faculty negotiate more beneficial contracts with publishers, the university will "take urgent steps to explore the restructuring of the University's copyright policy to assert a collective right, under the direction of individual faculty, to distribute faculty work for research and teaching....Our intention is that scholarly work would remain the property of individual faculty, but faculty members would no longer have to struggle individually with publishers to retain the right to disseminate their work." (4) And to preserve digital scholarship, the university will "explore the establishment of an Office of Scholarly Communication or similar administrative unit to take responsibility for the persistent stewardship of all forms of scholarly communication."
Mark Chillingworth, OUP opens up author choice, Information World Review, June 3, 2005. Excerpt: 'Oxford Journals, the journal publishing arm of Oxford University Press (OUP), is embracing open access (OA) publishing with the launch of Oxford Open, an author pays publishing model for 21 of its journals. From July, authors will be given the choice of paying £1,500 and publishing their articles online immediately under the Oxford Open scheme, or remaining with the traditional publishing process....Martin Richardson, managing director of OUP's Oxford Journals division, said: "Oxford Open is a logical extension to our current open access experiments, and will allow us to collect valuable first-hand data on the demand for OA by authors." Charges will be discounted for institutions which subscribe to the titles that authors are submitting to. OUP will revise the subscription costs of the titles in 2007. Open Access champion Jan Velterop backed the OUP initiative, telling IWR that he believes Oxford Open is the most convincing OA adoption from a traditional publisher yet, because authors retain copyright. "The choice OUP is offering authors is meaningful. Funding bodies insist on the author retaining the copyright," he said. Springer, the first traditional publisher to offer an author pays model with its Springer Open Choice launched in July 2004, insists that authors sign over their copyright. Velterop believes Springer Open Choice is inhibitive of true open access publishing, because of its copyright demands, and the high level of fees it charges. OUP's £1,500 fee is acceptable to funding bodies, he said.'
The Internet Archive contains a version of a published article of mine (I've retained copyright), entitled "Cancer-related electronic support groups as navigation-aids: Overcoming geographic barriers". A "shadow file" of metadata for this version of the article was prepared with the aid of MyMetaMaker and a copy was self-archived in OpenMED, the new repository for documents relevant to research in the medical and allied sciences. This example illustrates a way in which open access can be provided to a document that's already in the Internet Archive.
Google has a "partial list" of open-access, full-text research articles written by current employees. The purpose is to woo new talent. But why not call it an institutional repository, make it OAI-compliant, and aim to make it complete? That would do at least as much to woo new talent. It would also benefit the old talent and serve everyone who conducts research on the same topics. For-profit companies will always generate proprietary research that they will not want to disclose. But when the company doesn't want to keep employee research secret or sell it, then why not OA? When companies employ scientists who publish research, then why not showcase their work and help the world at the same time? (Thanks to ResearchBuzz.)
The Bibliotheca Alexandrina has joined the Digital Library Federation. From yesterday's press release: '[The BA is the DLF's] first strategic partner from outside the United States or Europe. "I am delighted that the Bibliotheca Alexandrina has accepted our invitation to join," said David Seaman, executive director of the DLF. "We are a fast-moving consortium of very active academic digital libraries and the addition of this remarkable Egyptian library will enrich our collaborative work and inform our world view of digital library endeavors. The Bibliotheca Alexandrina is already working closely with DLF member institutions: it is a contributing member of the Million Books Library led by Carnegie Mellon, and Yale has just announced new funding for collaboration with the Bibliotheca Alexandrina to digitize early 20th century journals." Michael Keller, university librarian at Stanford University and president of the DLF's Board of Trustees, adds, "The Bibliotheca Alexandrina's digital library initiatives extend the leading edge of digital librarianship by the creation of new professionals, by experimentation, by portal development, and by the addition of content, which includes the digitization of 15,000 Arabic books annually, the development of the Digital Library of the History of Egypt, and the scanning of numerous image collections."...Egypt's Bibliotheca Alexandrina (BA), the new Library of Alexandria, was inaugurated in 2002 to recapture the spirit of the ancient Library of Alexandria, a center of world learning from 300 BC to 400 AD. The new Library and its affiliated research centers are devoted to using the newest technology to preserve the past and to promote access to the products of the human intellect.'
Jim Giles, UK research councils claim success for open-access publishing plan, Nature, June 2, 2005 (accessible only to subscribers). Excerpt: 'Britain's main public funders for research seem to have achieved the impossible — they've come up with a policy that pleases both sides in the debate over open-access publishing. But appearances can be deceptive. Behind public praise for the statement, some publishers are voicing fears that small journals will go out of business, which could put scientific societies at risk....Supporters of open access are claiming victory in the wake of rules drawn up by Britain's research councils, which distribute most government science funding. The policy has delighted them because it requires all council-funded papers be put in an open-access archive "as soon as possible" after publication. Other major funders of research around the globe, including the US National Institutes of Health (NIH), allow researchers to wait up to a year before depositing their work. Stevan Harnad, an advocate of open access and a cognitive scientist at the University of Southampton, believes that the UK policy's insistence on submission will make the use of open-access archives a regular part of academic life. "Once the history of this is written, this statement will be the single most important factor," he says....But a crucial change to the policy, made following complaints from publishers, could dilute the power of archives. After consulting on an initial draft issued last autumn, the councils changed the policy so that submissions to archives will be subject to the copyright and licensing arrangement of the journal publishing the paper. Publishing executives say privately that they can now rewrite their rules so that submission takes place after a delay of several months, which will protect their subscription revenues....The Wellcome Trust, Britain's biggest medical charity, is even more bullish about the idea. 
It said on 19 May that all papers produced using its money will have to be submitted to the NIH archive PubMed Central or to the British equivalent that is being developed. "Old journals sometimes cease to publish, but new ones spring up," says Mark Walport, the trust’s director. "I have some sympathy with the learned societies, but it is not the primary mission of funders to support them." The councils' statement still has to be "fine-tuned", say officials. Originally due for release this month, it has been put back until the summer, but is not expected to undergo significant changes before then.'
Sidebar to the article: 'The UK policy. Scientists will submit papers to subject-specific archives or to an equivalent run by their institution. The paper would only be the final text document accepted for publication, not the formatted version that is printed. If this causes a range of archives to proliferate, access to papers should still be straightforward. Scholarly search engines, such as that unveiled last year by Google, automatically look through institutional repositories, so users shouldn't need to know where an article is actually held.'
(PS: This article is about the long-awaited open-access policy of the Research Councils UK (RCUK). In mid-April the RCUK predicted that the policy would be released in mid-May and we're still waiting. However, there have been enough leaks that Nature can run a story on the responses to the leaks. The policy is exciting because it's so strong --mandating OA rather than just requesting it, and applying to all publicly-funded research rather than just biomedicine. Unfortunately it looks like the final stage of consultation watered it down, just as with the NIH policy. However, the NIH policy was watered down further. The RCUK policy may invite publishers to delay OA through their copyright agreements, but apparently it will still mandate OA and still apply across all the fields receiving public funding.)
I just mailed the June issue of the SPARC Open Access Newsletter. In addition to the usual round-up of news from the past month, it takes a close look at software that would help the cause of open access and journal policies on NIH-funded authors. It takes a briefer look at the new Oxford Open policy, the new Wellcome Trust open-access policy, the DARE program's Cream of Science, the ACS complaint against PubChem, the AAUP complaint against the Google library project, and a new crop of university resolutions endorsing open access.
Matthew Cockerill, Access all articles, The Guardian, June 2, 2005. An excellent introduction to OA. Excerpt: '"Sorry, but this article is available only to subscribers." Try to view a science journal article online and, more often than not, that is the message you will see. This is not just a problem for members of the public - scientists and medical practitioners face it every day. There are so many science journals that no library can afford to subscribe to them all. The internet has the potential to give researchers instant access to all the information they need, but this potential is not exploited because scientific journals still operate a subscription-based model inherited from the days of print publishing....For the past few years, however, change has been brewing. The research funders, who spend hundreds of millions of pounds each year on scientific research in the first place, have grown impatient with traditional publishers, who take an exclusive license to all the research findings that are published in their journals and then sell limited access back to the scientific community. Funders realise that the scientific literature represents a distillation of the knowledge that has been obtained at huge expense through the research they have paid for. The literature is an extremely valuable resource, so why should they surrender control of it to publishers who are responsible only for the final stage of the process? In many cases, funders do not even have full access to the research they themselves have funded: a recent study found that fewer than half of the articles resulting from NHS research grants end up accessible online to NHS employees. 
Scientists, too, want as many people as possible to see their research, since the wider the readership, the greater the impact the research will have and the more it will benefit their career....It is against this background that the Wellcome Trust, the UK's biggest non-governmental funder of biomedical research, has taken the historic step of announcing that, from October 1 2005, recipients of its funding will be required to deposit a copy of all resulting research articles in an online archive, and that this archive will make those articles freely available within six months of publication....It cannot be overemphasised how significant a change this represents for science.'
Susan Veldsman has written a report on the workshop, Institutional repositories: creating tomorrow's information infrastructure for today's scholarly community (Pretoria, May 11-13, 2005).
The web gets social, an editorial in the new issue of Nature Materials. Excerpt: '[A]lthough almost every corner of society is adopting RSS, blogs and other 'disruptive' technologies, scientists, and physical scientists, in particular, seem reluctant to embrace them....Connotea (open-source software developed by the Nature Publishing Group) and Cite-U-Like are 'social bookmarking' services. They are academic clones of the popular general public services Del.icio.us and its photo-sharing counterpart Flickr. With one click a scientist can store any URL or reference on Connotea, where they are publicly visible on the web. As a perk, the software goes off to publisher sites, finds associated metadata for the link such as author names, and displays this. Taking sharing further, users tag papers with whatever they feel best describes the article, for example, 'cold plastic deformation' or 'residual stress'. As more users use such services, they begin to discover that clicking on the tag 'cold plastic deformation' now brings up papers posted not just by you but by others. In this way communities and resource discovery develop. But as with ArXiv, eBay, or any service that depends on user input, the more who use it, the more valuable it becomes. Intense competition means, however, that researchers are often loath to use such tools. They don't want competitors to see what they are reading. Yet the value of greater sharing is obvious, in particular in cutting across disciplinary boundaries. Sharing nurtures serendipity. RSS is also much more than just an alerting service. Its real power kicks in when it is considered in the context of social software such as blogs and social bookmarking services. RSS allows you to find information. Blogs and tools such as Connotea and Cite-U-Like allow you to store information, and more importantly to share it with others in new ways. 
RSS then comes full circle, allowing you again to keep abreast of changes made by others to blogs or Connotea tags you are keeping an eye on. It's a sort of collaborative glue, like that Berners-Lee first had in mind....Scientists should not shirk from stepping outside the formal publishing system....Researchers should be leading the use of innovative tools, instead of lagging behind.'
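The mechanism the editorial describes can be sketched in a few lines: each user tags references, the service inverts those tags into a shared index, and following a tag then surfaces everyone's bookmarks, not just your own. The users, DOIs, and tags below are invented for illustration:

```python
from collections import defaultdict

# A toy model of social bookmarking a la Connotea or Cite-U-Like:
# (user, reference, tags) triples contributed independently.
bookmarks = [
    ("alice", "doi:10.1000/a", ["cold plastic deformation"]),
    ("bob",   "doi:10.1000/b", ["cold plastic deformation", "residual stress"]),
    ("alice", "doi:10.1000/c", ["residual stress"]),
]

# Invert the per-user tags into a shared tag -> references index.
tag_index = defaultdict(set)
for user, ref, tags in bookmarks:
    for tag in tags:
        tag_index[tag].add(ref)

# "Clicking" a tag brings up papers posted by any user, not just yourself.
papers = sorted(tag_index["cold plastic deformation"])
```

The network effect the editorial points to lives in that inverted index: every additional user enriches the results that every other user sees under a shared tag.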
WIPO is holding a two-week Online Forum on Intellectual Property in the Information Society (June 1-15, 2005). The forum focuses on 10 themes, the third of which is The public domain and open access models of information creation: at odds with the intellectual property system or enabled by it? The discussion has begun. Read the contributions and take part.
Richard Roberts, winner of the 1993 Nobel prize for Physiology or Medicine, has pulled out of an ACS conference because of its position on PubChem. He has permitted me to post a copy of his letter of withdrawal to SOAF. Excerpt: 'I regret that I am going to have to pull out of the ACS-CSIR conference in India next January. For some time now I have been deeply troubled by the actions of the ACS and this has finally reached breaking point with the violent opposition to the PubChem initiative at NCBI. I find myself no longer able to support anything that carries the imprimatur of the ACS. I was greatly troubled when ACS so vehemently opposed the Open Access initiative. This led me to resign my membership in the society after more than 20 years as a member....[T]he current opposition to PubChem is reprehensible and without any redeeming merit. As an advisor to PubChem I am aware of what they are trying to do and it is in no way a threat to anything that ACS is doing. Rather it complements those activities very nicely and provides for the biological community an important resource that is not provided by CAS. Furthermore, PubChem is keen to provide links to CAS and thereby enhance the usefulness of both resources. My only interpretation of the recent actions by the ACS Board and management is that it is no longer trying to be a scientific society striving towards the goals of its Congressional charter, which is to represent the best interests of the scientists who form its membership. Rather it seems to be a commercial enterprise whose principal objective is to accumulate money. The ACS management team might be well-advised to poll its members to discover if they are happy about the recent actions taken in their names....Frankly, the recent actions of the ACS are a disgrace to its image in the USA and around the world. They engender such bad feelings as to call into question the motivations of its leadership. 
I cannot in good faith support any of the activities of a body that has gone so seriously wrong.'
When Canada's Social Science and Humanities Research Council (SSHRC) funds scholarly journals, it uses the subscriber tally as a rough measure of worthiness. The result is discrimination against OA journals, regardless of their excellence. Almost exactly a year ago, I reported that International Review of Research in Open and Distance Learning faced this kind of discrimination. Gunther Eysenbach now reports that his own Journal of Medical Internet Research has faced the same discrimination. He and other members of the University of Toronto's SSHRC Consultation on Open Access have written a Response to SSHRC on its OA policies. The response recommends not only that SSHRC stop discriminating against OA journals but that it should fund only OA journals. It also recommends that SSHRC mandate OA archiving for all results of SSHRC-funded research not already published in OA journals. (Kudos to the Toronto consultation for these recommendations.)
The open-access Journal of Medical Internet Research (JMIR) now offers memberships --for individual scholars, universities, departments, libraries, and even societies and funding agencies (eight flavors in all). See the web site for the variety of prices and benefits. According to Gunther Eysenbach's blog, one funding agency to buy a membership is Health eTechnologies (an office of the Robert Wood Johnson Foundation). As a result of the membership, Health eTechnologies will encourage its grantees to publish in JMIR and up to two of its principal investigators per year may publish in JMIR without charge.
Mark Chillingworth, Google Scholar gets OpenURL links, Information World Review, June 1, 2005. Excerpt: 'Google has adopted OpenURL technology for its Google Scholar initiative, after academic information professionals expressed fears that students using the search engine will be unable to discover the resources available from their institutes. "Google Scholar needs OpenURL in order to link users to the appropriate copy - a subscription paid article - rather than the publisher's home page," said Ex Libris marketing VP Jenny Walker of the initial weaknesses in Scholar. Library automation (LA) applications from Ex Libris, Serials Solutions and Openly Informatics now provide OpenURL Link Resolving tools to Scholar following successful lobbying from vendors and universities. Google has confirmed that institutional collections will feature prominently within search results. Although Google Scholar surprised the information industry, Google is said to have been startled by the sector's complexities. "The library world is saturated with metadata, and so far Google has been able to dodge this," said JR Jenkins, product manager at Serials Solutions, adding that if Google is to be successful as an academic search resource, it will have to tackle the complex metadata which exists....As part of the OpenURL adoption, Google has called on libraries to provide their e-journal holdings before it will enable the links, a demand concerning many information professionals. Garry Horrocks, UK e-Information Group (UKeIG) chair said: "This could lead to an unfair advantage to Google and what would they do with all this information?"'
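What OpenURL actually does is carry an article's citation metadata to the user's own institutional link resolver, which then points to the "appropriate copy". A minimal sketch of building such a link in the OpenURL 1.0 key/encoded-value format (the resolver address and citation values below are invented for illustration):

```python
from urllib.parse import urlencode

def make_openurl(base, *, atitle, jtitle, issn, volume, spage):
    """Build an OpenURL 1.0 (KEV) link for a journal article.

    The rft.* keys describe the cited article; the resolver at `base`
    decides which copy (e.g. a subscribed one) to send the user to.
    """
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.atitle": atitle,
        "rft.jtitle": jtitle,
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,
    }
    return base + "?" + urlencode(params)

# Hypothetical resolver and citation, for illustration only.
link = make_openurl(
    "http://resolver.example.edu/openurl",
    atitle="Access all articles",
    jtitle="Example Journal",
    issn="1234-5678",
    volume="12",
    spage="99",
)
```

Because the link carries only metadata, not a destination, the same search result can send users at different institutions to different copies, which is exactly the gap in Scholar that Walker describes.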
On May 29, the National Library of China opened a portal to its digital resources. From the announcement: 'As the latest public service platform launched by the library, the D-portal combines 37 Chinese-language data banks, 77 foreign language data banks, 16,000-odd periodicals in both Chinese and foreign languages, as well as special resources including local records, Dunhuang documents, periodicals of the Republic of China (1912-1949) and doctoral dissertations and master's theses, all purchased or established by the national library. The library also inaugurated its sci-tech retrieval center on the same day, the only comprehensive one of its kind among Chinese libraries. The center will provide timely and authoritative service for the establishment and assessment of major sci-tech research projects, as well as for the application and approval of sci-tech awards at various levels.' (Thanks to ResourceShelf.)
Sue Bushell, Access Allowed, CIO Magazine, June 1, 2005. Excerpt: 'For much of this vast country's history the tyranny of distance has meant rural Australians have been pretty much denied access to our largely Canberra-based cultural institutions, and research has traditionally suffered most. Now, thanks to digitization and a Web presence, institutions like the National Library of Australia [NLA], the Australian War Memorial [AWM] and the National Archives of Australia [NAA] have found their mission transformed. With a new-found ability to deliver a service beyond their doors, their focus has drastically changed, from being mere providers of collections to providers of access....The secret of that collaboration? It has been underpinned, Missingham says, by developments like the Open Archives Initiative and the protocol for meta data harvesting developed by the NLA. But its main basis has been the subtle recognition that the cultural institution's major clients are many individuals who now have access to the Internet - authors, journalists, historians and academics, and also family historians as well as those with personal and recreational interests. If the AWM, the NAA and the NLA can work together to provide these clients with desired services, they can much better fulfil their mission.'
David Prosser, The Next Information Revolution - How Open Access will Transform Scholarly Communications, in G.E. Gorman and Fytton Rowland (eds.), International Yearbook of Library and Information Management 2004-2005: Scholarly Publishing in an Electronic Era, Facet Publishing, chapter 6, pp. 99-117. Abstract: 'Complaints about spiralling serials costs, lack of service from large commercial publishers, and the inability to meet the information needs of researchers are not new. Over the past few years, however, we have begun to see new models develop that better serve the information needs of academics as both authors and readers. The internet is now being used in ways other than just to provide electronic facsimiles of print journals accessed using the traditional subscription models. Authors can now 'self-archive' their own work, making it available to millions, and new open access journals extend this by providing a peer-review service to ensure quality control. SPARC and SPARC Europe play a prominent role in the new scholarly communication landscape as they encourage the progress of open access while working closely with scholars and scientists, who must recognize the benefits of change within academe in order for such progress to occur.'
Hindawi Publishing has launched the International Journal of Biomedical Imaging (IJBI), its seventh peer-reviewed, open-access journal. From today's press release: 'The scope of the journal covers data acquisition, image reconstruction, and image analysis, involving theories, methods, systems, and applications....The journal will use an open access business model based on Article Processing Charges, which are to be paid from the research budgets of accepted authors. Authors will be charged €100/page for pages 7 and up (there are no mandatory page charges for the first 6 pages) with a maximum of €800. The journal shall have an online edition, which is freely available with no subscription or registration barriers, and a print edition, which shall be priced at a level that simply covers the costs of printing and distribution. All articles published in this journal shall be distributed under the "Creative Commons Attribution License," which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited....Hindawi is planning to launch several more open access journals in the next few months.'
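The IJBI charge schedule quoted above (no mandatory charge for the first 6 pages, then EUR 100 per page, capped at EUR 800) reduces to a one-line formula. A small sketch, with the schedule's figures as defaults:

```python
def ijbi_page_charge(pages, per_page=100, free_pages=6, cap=800):
    """Article processing charge in euros under the stated IJBI model:
    the first 6 pages are free, EUR 100 per page thereafter,
    capped at EUR 800 (reached at 14 or more pages)."""
    return min(max(pages - free_pages, 0) * per_page, cap)
```

So a 6-page article pays nothing, a 10-page article pays EUR 400, and anything of 14 pages or more pays the EUR 800 maximum.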
The beta release of STIX fonts has been delayed until September 2005. From yesterday's press release: 'A group of scientific publishers has announced the target date of September 2005 for the beta release of STIX Fonts, with a goal of general release in November. STIX Fonts is a free, comprehensive set of special characters, mainly mathematical, intended for scientific, technical, and medical publishing. The successful completion of the Scientific and Technical Information Exchange (STIX) Fonts project is intended to alleviate the need for publishers to assemble symbols from a variety of fonts. Perhaps more important, when posted to a Web site, documents using STIX Fonts will be properly rendered, regardless of the fonts installed on a particular computer....By making the fonts freely available, the STIX project hopes to encourage the development of applications that make use of these fonts. In particular the STIX project will create a TEX implementation that TEX users can install and configure with minimal effort. TEX is a computer language designed for typesetting, with particular application to mathematics and other technical material.'
The Indian Medlars Centre has pre-launched a new service enabling authors to self-archive their articles. The new archive is christened OpenMED. OpenMED is a discipline-based international archive. It accepts both published and unpublished documents relevant to research in medical and allied sciences, including bio-medical, medical informatics, dental, nursing and pharmaceutical sciences. Submitted documents are placed into a submission buffer and become part of the OpenMED archive upon acceptance. The pre-launch was announced in a post to HIF-Net.
Alex Berenson, Despite Vow, Drug Makers Still Withhold Data, New York Times, May 31, 2005 (free registration required). Excerpt: 'When the drug industry came under fire last summer for failing to disclose poor results from studies of antidepressants, major drug makers promised to provide more information about their research on new medicines. But nearly a year later, crucial facts about many clinical trials remain hidden, scientists independent of the companies say. Within the drug industry, companies are sharply divided about how much information to reveal, both about new studies and completed studies for drugs already being sold. The split is unusual in the industry, where companies generally take similar stands on regulatory issues. Eli Lilly and some other companies have posted hundreds of trial results on the Web and pledged to disclose all results for all drugs they sell. But other drug makers, including Merck and Pfizer, release less information and are reluctant to add more, citing competitive pressures. As a result, doctors and patients lack critical information about important drugs, academic researchers say, and the companies can hide negative trial results by refusing to publish studies, or by cherry-picking and highlighting the most favorable data from studies they do publish. "There are a lot of public statements from drug companies saying that they support the registration of clinical trials or the dissemination of trial results, but the devil is in the details," said Dr. Deborah Zarin, director of clinicaltrials.gov, a Web site financed by the National Institutes of Health that tracks many studies.'
Open access online veterinary journal launches, Guardian Unlimited, June 1, 2005. Excerpt: 'The push to make research freely available on the web received another boost today when the open access publisher BioMed Central (BMC) launched BMC Veterinary Research, the first international open access journal to cover veterinary science and medicine....Professor David Eckersall, a BMC editorial board member from the University of Glasgow, said: "BMC Veterinary Research will be greatly welcomed by the research community involved with advancing veterinary science and medicine. The benefits of open access publishing, which has proved so successful in human medicine and biological sciences, will now be available for the wide range of specialities that are encompassed in veterinary research." Earlier this month, research indicated that Britain is already in the vanguard of the drive to make academic research freely available to anyone over the internet. While the US has more open-access archives - 127 - than any other country and Britain is second with 54, Sweden has the most archives relative to its population. By this measure, Britain is in third place and the US 10th in terms of open access provision.' (Thanks to Yong Liu.)
The University of California Office of Scholarly Communication has created a page on The American Chemical Society and NIH's PubChem, collecting the position statements, the major documents, and a list of actions that can support PubChem. It's a good place (along with OAN, of course) to monitor new developments in the unfolding story.
If these two new OA declarations weren't issued today, then at least I didn't learn about them until today.
Ivyspring international publisher currently publishes two Open Access journals, International Journal of Medical Sciences and International Journal of Biological Sciences. Both titles have recently been added to the collections at PubMed Central.
[Thanks to Jennifer Jentsch, PMCnews.]
FreeForAll was launched in October 2004 to provide free digital access to health journals for users in developing countries. The service is offered by an international consortium of libraries --so far, 34 libraries from 17 countries. FreeForAll delivers articles as PDFs, sometimes scanned from faxes, and makes use of existing document delivery services and ILL protocols. The costs are borne by the participating libraries. (Thanks to Sigma Xi for the alert and to Laurel Graham for answering my questions by email.)
In Luck's Music Library v. Gonzales, the DC Circuit Court of Appeals ruled last week (May 24) that it was constitutional to re-copyright works that had fallen into the public domain. Excerpt: 'Plaintiffs challenge the constitutionality of § 514 of the Uruguay Round Agreements Act...which implements Article 18 of the Berne Convention for the Protection of Literary and Artistic Works. The section establishes copyright in various kinds of works that had previously entered the public domain, and plaintiffs argue that any such provision violates the Copyright and Patent Clause of the U.S. Constitution. U.S. Const. art. I, § 8, cl. 8. Finding no such bar in the Constitution, the district court dismissed plaintiffs' claims....We review the district court's order de novo...and affirm.' (Thanks to Ann Bartow.)
William New, Medical R&D Treaty Debated At World Health Assembly, IP Watch, May 30, 2005. Excerpt: 'Experts attending the U.N. World Health Assembly this month debated a proposal to create an international treaty on medical research and development aimed at making medical treatment affordable globally while ensuring continuing innovation in the area. The proposed treaty was created in consultation with a variety of non-governmental and government experts, and in February, 162 politicians, academics and non-governmental organisations called on the World Health Organisation to consider the proposal. The treaty was discussed at a Consumers International briefing in Geneva on 19 May. "This is an attempt to look at this issue from the public health point of view instead of the commercial point of view," said James Love, director of the Consumer Project on Technology (a Consumers International member), and an architect of the proposed treaty....Dr. Tim Hubbard, head of human genome analysis at the The Wellcome Trust Sanger Institute and another treaty architect, emphasized that researchers have recognised the advantage of access to scientific research through the Internet, although he noted that "openness isn't just giving things away for free." But Hubbard said that the argument for patents on data has diminished even while patents are being filed by scientists more than ever. Hubbard added in a statement that "people are dying needlessly because of economic dogma."'
(PS: The treaty has a provision mandating OA to publicly-funded research. See Draft 4, Section 13.1. Disclosure: I am one of the drafters.)
Aliya Sternstein, Chemical publisher goes after NIH, Federal Computer Week, May 27, 2005. Excerpt: 'Officials at the two organizations [ACS and NIH] have exchanged letters, meetings and phone calls since 2004....NIH officials said they are confused as to why ACS insists PubChem will affect the organization's business when the two organizations' missions and audiences are different. "What is in common is a relatively small number of compound structures and names," said Christopher Austin, senior adviser to the translational research director at the NIH Chemical Genomics Center at the National Human Genome Research Institute. "ACS has gotten hung up on this. They have taken this, frankly, rather disingenuously, to implicate that PubChem duplicates CAS. CAS has 25 million structures. PubChem has about 850,000. PubChem is a subset. Not everything that is in CAS is relevant to biomedical research."...NIH officials understand that refocusing PubChem will slow medical progress. "It would have profoundly negative effects on this new paradigm of making medical discoveries, right at the time that it is just getting started....Unfettered access to a large number of different types of information is what allows fundamental new discoveries to be made," Austin said. "To kill this thing when it's still in the cradle would have a dramatically negative effect on medical discovery." The rational arguments that NIH has made to ACS have had virtually no effect, he said. "They are fundamentally not understanding that PubChem deals with an entire intellectual area that they know nothing about," he said. And NIH officials said that many of ACS' arguments are untrue. 
"PubChem does not come at the expense of basic research; it empowers basic research," said Larry Thompson, chief of the Communications and Public Liaison Branch at the National Human Genome Research Institute...."Clearly there would be no purpose to establishing a working group to further discuss our fundamental disagreement," wrote Madeleine Jacobs, executive director of ACS, in her last letter to NIH April 13. ACS has cut off the conversation, in favor of seeking a political solution. Rick Johnson, director of the Scholarly Publishing and Academic Resources Coalition, said, unlike ACS, NIH researchers are not hiring chemists to pore through patents to extract chemical names and structures. "They're taking on something that is not any threat to them and they are precluding an activity that will be key to returning on the NIH investment [in the human genome]...new drugs and better health care," he said. “What they want to do is neuter [PubChem] so it's useless to anyone. It's all about protecting the CAS franchise, not about what's best for biomedicine," Johnson added.'
Radiology has announced a policy for its NIH-funded authors. Excerpt: 'In response to the NIH policy, the RSNA [Radiological Society of North America] Publications Council recommended, and the RSNA Board of Directors approved, that: (1) authors of articles in Radiology will be encouraged but not required to specify a release date 12 months from the month of print publication; and (2) RSNA may, pending NIH's decision, establish a service for authors through which RSNA will send manuscripts to PubMed Central on behalf of the author if the author specifies the 12-month release date...."Some publishers have required authors to specify the 12-month release date because an earlier release jeopardizes subscriptions and thus the funding for a journal," explained RSNA Board Liaison for Publications and Communications Hedvig Hricak, M.D., Ph.D. "We understand that the NIH policy puts authors in a difficult position. Should they abide by the copyright statement they sign when submitting their manuscripts or should they agree to the NIH request that they provide a copy of their manuscript to PubMed Central immediately following acceptance? The RSNA Publications Council and the RSNA Board of Directors recognize that authors cannot treat a request from a granting agency lightly. That's why we've decided to make it as easy as possible for authors to comply."...RSNA is willing to provide an electronic copy of accepted manuscripts to PubMed Central, if the authors specify the 12-month release date; however, NIH has not yet decided if it will allow "third-party submissions."..."For most journals, and certainly for Radiology, the accepted manuscript differs from the revised, copyedited final version. The differences are often substantive and, thus, the manuscript on PubMed Central may contain errors," said Roberta E. Arnold, M.A., M.H.P.E., assistant executive director for RSNA publications and communications. 
"This is why RSNA is working with other publishers to develop a uniform statement that would appear on manuscripts deposited with PubMed Central. The statement would indicate the journal name and provide a Web link to the final published version," Arnold explained.'
Natali Helberger, A2K: Access to Knowledge – Make it happen, Indicare, May 30, 2005. Abstract: 'A2K stands for "Access to Knowledge" and is the acronym for a global initiative that took its start in 2004 and that is progressing quickly. The goal of the A2K initiative is to restore the unstable balance between the interests of holders of exclusive rights in creative content and users of such content. One element of the initiative is the drafting of a proposal for a treaty to protect and promote access to knowledge.'
Dan Hunter, Digital rights management and mass amateurization, Indicare, May 31, 2005. Excerpt: 'Open access and open source usually have no truck with DRM. Clearly the common view of DRM that it is about access control is inconsistent with both open access and open source philosophies. One cannot subscribe to open source or open access principles without accepting that the user is free to pass the material on to others, to read without cost, use and reuse, and so on. But as Poynder (2005) explains in an earlier INDICARE article, if one views DRM in its widest form, it is not necessarily inconsistent with open access. He makes the important point that open access authors still want to retain some rights, most notably the right of attribution, and he suggests this interest can be supported by DRM. Purists might argue that this can be achieved with digital watermarking, which is of course correct. But watermarking is a form of DRM; and this form of DRM happens to support the interests of open access. I agree here with Poynder....The vast majority of Creative Commons licenses that have been adopted to date (around 95%) require the licensee to attribute the work to its author, no matter what other conditions of use are attached. The lesson of this, and of various other examples of amateur content, is that the attribution interest is probably the most fundamental incentive of creativity in areas that are not driven by commercial concerns. It is possible then that a truly beneficial role for DRM exists in making attribution run with content, so that the author will know that her name will live as long as the content is being used. Of course this is not the traditional view of DRM, and indeed DRM generally speaking does not handle this particularly well....We should be careful therefore to assume that DRM is always bad, and that commercial use of DRM will always trend towards over-control of the content.'
David Whelan, Google's Scan Plan Hits More Bumps, Forbes, May 31, 2005. Excerpt: 'Google's utopian project has hit a few bumps in the five months since it was announced, and Google is in the rare position of having to defend itself. The idea made a lot of sense given Google's search expertise, its incredible stash of technology resources and its stated mission to organize the world's information. Google Print also complemented efforts within the company to develop perfect machine language translation, which means that some day a scanned book will theoretically be readable in any language, anywhere in the world....Givler says he wrote the letter because managers at Google would give him canned or evasive answers to questions he and other publishers posed. Publishers don't know yet how much content from a particular book will be available to Google under fair use. Or even whether Google's digital copies of copyrighted books are permissible in the first place, even if they never get shared. Google's Tom Turvey responded with a letter to the Web site Publisher's Lunch saying that publishers don't have to participate in its program. "Publishers remain in complete control of which books are displayed on Google, just as Web publishers are regarding inclusion of their Web sites in the index. For any books scanned at libraries, publishers may simply choose not to display these books within Google's index--no questions asked." But Givler says the company has never fully addressed the propriety of scanning the books in the first place, regardless of whether they let publishers opt out on the back end. "Google's really asking for trouble here," says Jeffrey Neuburger, an intellectual property lawyer at Brown Raysman in New York, who doubts that Google can make initial scans without permission....The company hopes [publishers] might eventually realize the copyright concerns are minor relative to the ability to hawk more books to searchers. 
According to a Google statement: "We believe there are many business advantages for publishers to participate in Google Print."..."While legal issues involving copyright law are always complex, we believe the project is wholly consistent with the core principle of copyright law," says James Hilton, the head librarian at the University of Michigan. "That principle, the very foundation of copyright law, is to promote the advancement and dissemination of knowledge." In other words, the scanning will probably go on.'
Jeffrey Young, From Gutenberg to Google: Five views on the search-engine company's project to digitize library books, Chronicle of Higher Education, June 3, 2005. Excerpt: 'It's been nearly six months since the company announced that it would work with five of the world's largest libraries in an effort to scan millions of books and make the full texts part of its popular search index. But some librarians and publishers say they are still in the early stages of understanding what the project's impact might be on their fields, and on scholarship in general. The Chronicle asked five key players to comment on the project and its meaning.'
Barbara Quint, Google Library Project Hit by Copyright Challenge from University Presses, Information Today NewsBreaks, May 31, 2005. Excerpt: 'Some might say it had to happen. Extending the Google Print program to the digitization of five of the world's largest university research libraries, including copyrighted as well as non-copyrighted material, would inevitably seem to lead to a challenge of copyright violation. Oddly enough, the challenge has come from the less commercial publishers --the nonprofit university presses....Together, AAUP members publish approximately 750 academic journals and 11,000 books per year and depend upon research libraries for their revenue. The AAUP is not the only possible combatant. Reports circulate that other publishers, e.g., John Wiley & Sons and Random House, have also protested to Google. The Publishers Association in the U.K. opened inquiries earlier this year. The Association of American Publishers (AAP), which initially seemed to approve the Google project, is expected to have the issue on the agenda for its board meeting in early June....Givler told me that publishers were very enthusiastic when first approached by Google as early as spring 2004 about the new Google Print program. However, when they heard about the expansion of the program through the digitization of library collections, members felt that Google had "bitten off what they would find difficult to chew."...He admitted the concept of Google Print was still "enormously seductive," especially for university presses with their usual small audiences, because it will enable these presses to reach "people who didn't know about the existence of a book they would want to read." Bottom line --Givler doesn't have immediate plans to take legal action. What he really wants is for Google to come to the table and negotiate. 
The letter was sent to "try to pressure them to talk, but so far, they haven't."...Kay Murray, general counsel for The Authors Guild...said that standard boilerplate language in the contracts between book publishers and their authors includes terms for copyrights to revert to authors when the publisher allows the book to go out of print....Dealing with authors directly could prove more daunting for Google than dealing with publishers....[T]he prospect of seeing authors reap the harvest for any future Google monetization opportunities might bring publishers into Google Print fast. One legal advisor mused that publishers sending their material to Google Print for digitization would have taken "an affirmative act" to bring books back into "print" or, in this case, "digital print."'
Kathleen E. Joswick, Electronic Full-Text Journal Articles: Convenience or Compromise, T.H.E. Journal, May 2005. Excerpt: '[O]nly prestigious journals are included in standard indexes. Inclusion in an electronic aggregator database, in contrast, is not a sign of quality but the sign of a business contract between the journal publisher and the database provider. Users of databases are no longer reviewing only the discipline's premier journals. On the contrary, in the race to provide access to the most journals, some databases obscure the best articles by overwhelming the user with material that is neither respected nor relevant. As a result, the responsibility to identify the most appropriate resources is shifted from the experts in the discipline to the users --often students who lack the skill to separate the acceptable from the unacceptable....Because many aggregated databases provide the article text rather than page reproduction, graphical features disappear from the articles. And articles stripped of graphs, photographs, charts, special characters, tables, or illustrations are not the same as the original versions. Even more deplorable, full-text databases do not always include every article from each issue of a journal. Cover-to-cover full-text is rarer than most vendors lead their customers to believe. Letters to the editor, book reviews, editorials, poetry, and brief articles are routinely skipped, while in some instances, so are major articles....Interestingly, comparisons of printed journals and their articles' availability in full-text databases have shown an inclusion rate as low as 60 percent....Users are often misled about the currency of the material offered in electronic databases....Many journals also embargo electronic access to their recent issues as an incentive to retain their print subscribers. It is often difficult to identify these embargoed or tardy journals from the lists provided by vendors.'
The PLEIADI Project is a new portal of OA research literature on deposit in Italian institutional repositories. "PLEIADI" is an acronym for Portale per la Letteratura scientifica Elettronica Italiana su Archivi aperti e Depositi Istituzionali (Portal for the Italian Electronic Literature in Open and Institutional Archives). The project is sponsored by two Italian university consortia, CASPUR and CILEA, working under the framework of AePIC, their electronic publishing initiative. For more details, see the PLEIADI Project Manifesto or today's announcement. (PS: This is an excellent service for all who use Italian research literature. Kudos to CASPUR, CILEA, and AePIC.)
Terje Sagvolden, Behavioral and Brain Functions: A new journal, Behavioral and Brain Functions, April 22, 2005. Editorial in the inaugural issue of a new OA journal from BMC. Excerpt: 'Behavioral and Brain Functions (BBF) is an Open Access, peer-reviewed, online journal considering original research, review, and modeling articles in all aspects of neurobiology or behavior, favoring research that relates to both domains. Behavioral and Brain Functions is published by BioMed Central....Behavioral and Brain Functions is published online, allowing unlimited space for figures, extensive datasets to allow readers to study the data for themselves, and moving pictures, which are important qualities assisting communication in modern science....Behavioral and Brain Functions' Open Access policy changes the way in which articles are published. First, all articles become freely and universally accessible online, and so an author's work can be read by anyone at no cost. Second, the authors hold copyright for their work and grant anyone the right to reproduce and disseminate the article, provided that it is correctly cited and no errors are introduced. Third, a copy of the full text of each Open Access article is permanently archived in an online repository separate from the journal....Open Access has four broad benefits for science and the general public. First, authors are assured that their work is disseminated to the widest possible audience....This is accentuated by the authors being free to reproduce and distribute their work, for example by placing it on their institution's website. It has been suggested that free online articles are more highly cited because of their easier availability. 
Second, the information available to researchers will not be limited by their library's budget....Third, the results of publicly funded research will be accessible to all taxpayers and not just those with access to a library with a subscription....Fourth, a country's economy will not influence its scientists' ability to access articles because resource-poor countries (and institutions) will be able to read the same material as wealthier ones (although creating access to the internet is another matter).'
The Association of American Medical Colleges (AAMC) has issued a public statement in support of PubChem. Excerpt: '[The AAMC] statement of September 30, 2003 applauding NIH Director Elias Zerhouni's Roadmap initiative,...noted that 'perhaps the most exciting component of the [Roadmap] plan are those proposals promoting the development of new research tools, ranging from libraries of chemical structures, small bioactive molecules, and imaging probes, all of which are to be made 'freely accessible' to the research community. NIH is now moving apace to construct these 'molecular libraries' and related resources that will provide biomedical researchers with new tools to study the functions of genes, biochemical pathways, and cells. 'PubChem' is the database component of the molecular libraries initiative, integrating output from the small molecule screening centers with other publicly available data sources, such as NIH's protein structure resources and the biomedical citation literature in PubMed Central. Like other facets of the Roadmap, the molecular libraries and PubChem were conceived after extensive consultation with hundreds of experts from the scientific community. NIH assures that these resources are not intended to duplicate or supplant established, commercially available resources. Rather, they are designed to leverage capabilities unique to NIH to develop resources that academic investigators or institutions would not be able to acquire on their own, and are similar in many respects to GenBank and PubMed Central, both of which provide public access without onerous financial, reach-through, or other restrictions. Based on the great potential of these resources to advance medical understanding and strengthen the public's health, the AAMC reaffirms its strong support for NIH's Molecular Libraries initiative, including the PubChem database.'
Peter Murray-Rust and Henry Rzepa have written An Open Statement on the Value of PubChem. Excerpt: 'We write as scientists committed to the sharing of chemical information on the public Internet....We wish to emphasize in the strongest terms the current and future value of the NCBI/NIH's PubChem to the scientific and medical community. (SPARC's recent statement outlines the concerns that PubChem may be closed down or severely restricted). We have been using the molecules in PubChem and promoting their value in research. The substances in PubChem are those critical for research in bioscience and healthcare....Until PubChem, virtually no chemical information was freely available (i.e. without a library or a subscription to an information supplier). It is generally not possible to look up freely the chemical formulae of common drugs, food additives, or materials in the environment. Yet much of this information was first published many decades or centuries ago. PubChem provides a reliable, instant resource for anyone. As an example of its value, the UK had a recent concern about a red dye in chili powder. From home we were able to use PubChem to find out what the chemical formula of this material is and what its reported toxicity and biological properties are. We know of no other freely available resource that we could have used with confidence....In our laboratories we are using PubChem for systematic research and are enhancing its value by publishing the results to the world. We have systematically computed the properties of over 200,000 molecules and published our peer-reviewed results freely. These properties are typical of those used in computer-aided drug discovery or the prediction of the safety of compounds. We have automated the process so that eventually all molecules in PubChem will have this information. Using InChI we have recently created a web site so that anyone can use search engines (e.g. 
Google(TM) or MSN(TM)) on this database without prior chemical knowledge. This is typical of the way in which information-driven science builds on, and enhances, existing knowledge. Even now it is very difficult for many bioscientists to read papers which include chemical names. We have therefore recently urged that scientific publishers should link their electronic publications to PubChem to help the reader understand the chemistry. Since PubChem also provides important biological background to many entries it enhances the scientific process, speeding it up and reducing the chance of error. We see PubChem as a universal tool for authors, helping them to reduce the chance of mistakes.'
Burt Helm and Hardy Green, Google This: "Copyright Law", Business Week, May 27, 2005. Excerpt: 'Google's library project would make digital versions of whatever libraries hand over -- including copyrighted books -- regardless of whether authors or publishers agree....Problem No. 1 is that Google's plan is a clear violation of copyright laws, according to publishers and many lawyers....[S]ome legal experts, such as Stanford law professor Lawrence Lessig, say there is some flexibility in the "fair use" exemption in copyright law, which makes allowances for limited reproduction for purposes such as criticism, scholarship, or research....If servers full of books were hacked, copyrighted material would be freely available all over the web....Another reason for the industry's vehemence is the way Google's digital library was announced. Just two months before, many major publishers had worked out their own agreements with the search giant to display selected titles, in a project called Print for Publishers. Similar services, like Amazon's "Search Inside the Book," have increased online sales. And many publishers remain excited about the prospects of Google's program doing the same. "We're very careful about protecting our content," says Kate Tentler, vice-president and publisher at Simon & Schuster Online. "But we do think this could be a great additional marketing and sales tool." But the library program has caught them off guard.'
Jeffrey Tucker, Should Journals Be Online? Ludwig von Mises Institute Blog, May 29, 2005. Excerpt: 'A friend is busy trying to persuade the publisher of an old-line journal to put the back issues online, and the publisher has a predictable response, albeit one that you hear less and less of these days: if I put the journal (substitute article, book, etc.) online people won't pay for it. Variation on the theme: if you give away the cow, people won't buy the milk. Here is the compressed case for going online: First, if it is true that people won't subscribe if it is available online, this surely doesn't apply to back issues of journals....Second, there turns out to be no strong empirical support for the idea that online and offline journals are necessarily substitutes, and not complements. This long study by Jordan J. Ballor in the Journal of Scholarly Publishing makes this point....Third, there is something very odd going on when a nonprofit supposedly dedicated to getting ideas out to the world decides, as a matter of policy, to withhold ideas from people pending payment when, in fact, the marginal costs of providing that information to people approach zero, as they often do on the web. If there are large and growing revenues stemming from an offline-only system, one can see a case for the status quo, but that hardly applies to any publication nowadays. Offline-only, nonprofit publishers are losing money so they might as well try something different. Fourth, these days, a journal without a serious web presence is threatened with extinction by irrelevance. In contrast, a journal with a large and vibrant web presence just might discover that it had previously underestimated the extent of its market. This is certainly what the Mises Institute has found. The market for libertarian and Austrian ideas does not number in the hundreds or thousands but in the hundreds of thousands and millions.' (Thanks to Kimmo Kuusela.)
(PS: The fourth rationale is cast as an argument for going online, not necessarily for going OA. But the same argument applies to going OA.)