News from the open access movement
Michel Bauwens, The Peer to Peer Foundation needs your cooperation, P2P Foundation, January 7, 2006. Excerpt:
The Foundation aims to develop a knowledge base concerning three related social movements: (1) those in favour of participation, i.e. the P2P movement proper, which refers to peer to peer as the relational dynamic at work in distributed networks and the largest possible equipotential participation; (2) those in favour of the open paradigm (open access, open source, open ‘everything’), which refers to the conditions needed to make participation a reality: open access to knowledge in all its forms; and (3) the Commons movement, in particular the creation of a Digital Commonwealth, which refers to the institutional format being created by P2P processes: universal common property regimes. Our aim is to find more interconnection between the different aspects of this ‘three-legged stool’. The foundation wants first of all to create a knowledge base for this, through our newsletter, our blog, and the various directories available at the P2P Foundation website. After some time, we would like to favour the creation of communities around these knowledge bases, not just online but physically as well.
Ove Bäck, Anders Vahlquist, and Agneta Andersson, Acta Dermato-Venereologica becomes its own Publisher and Moves Towards Open Access, Acta Dermato-Venereologica, November 1, 2005. Not even an abstract is free online, at least so far.
S.B. Montgomery and eight co-authors, ORegAnno: an open access database and curation system for literature-derived promoters, transcription factor binding sites and regulatory variation, Bioinformatics, January 5, 2006.
Abstract: Motivation: Our understanding of gene regulation is currently limited by our ability to collectively synthesize and catalogue transcriptional regulatory elements stored in scientific literature. Over the past decade, this task has become increasingly challenging as the accrual of biologically-validated regulatory sequences has accelerated. To meet this challenge, novel community-based approaches to regulatory element annotation are required.
Lorenz, The Anthropology Year 2005, antropologi.info, January 6, 2006. A review of major developments in 2005, with this note on OA:
Open access to research material is crucial here, and this topic was widely discussed. Although a survey by the American Anthropological Association revealed minimal willingness to post one’s own work online, more and more papers do appear online. Kerim Friedman put his dissertation in anthropology online before it was published as a book. The most recent open access initiative is the full-text journal Ecological and Environmental Anthropology at the University of Georgia.
Matt Pasiewicz has podcast interviews with Duane Webster, Executive Director of the Association of Research Libraries (ARL), and Steve Wheatley, Vice President of the American Council of Learned Societies (ACLS). Both interviews touch on OA, among other topics. Unfortunately, I don't have time to listen to them and there are no transcripts.
Update. Matt also has a podcast interview with Glen Newton, acting group leader of the National Research Council of Canada's CISTI Research. In the interview, Newton "compares and contrasts the superstructures supporting research in the US and Canada, shares thoughts about Canada's National Consultation on Access to Scientific Research Data, and offers some commentary on the open source, open content and open access." Again, no transcript.
Daniel Terdiman, Wikipedia's co-founder eyes a Digital Universe, News.com, January 6, 2006. Excerpt:
Known as Digital Universe, [Larry Sanger's new] project is an attempt to present a diverse collection of information on just about any topic imaginable. Some will be links to other Web resources, while some will be citizen journalism. But the highest-profile part of the project is likely to be its encyclopedia. And while its entries will be written by the general public, the project is distinguishing itself from Wikipedia by having many entries vetted and certified as accurate by subject-area experts. Thus, the Digital Universe will attempt to become the largest and, its founders hope, most reliable source of freely accessible, publicly created information on the Web. To that end, the project has already begun lining up a series of Ph.D.s to serve as "stewards." To pay them for their services, the Digital Universe Foundation has lined up more than $10 million in initial funding.
Richard Jones, Theo Andrew, and John MacColl, The Institutional Repository, Chandos Publishing, 2006. (Thanks to SHERPA.) From the publisher's summary:
This book discusses the concept of the Institutional Repository (IR) and examines how IRs can be set up, maintained and embedded in general institutional working practice. Specific reference is made to capturing certain types of research material, such as e-theses and e-prints, and to the issues involved in obtaining the material, ensuring that all legal grounds are covered, and then storing the material in perpetuity. General workflow and administrative processes that may arise during the implementation and maintenance of an IR are discussed, as are the different models for IR management that have been adopted worldwide. Finally, a case study of the inception of the Edinburgh Research Archive takes the reader through the long path from conception to completion of an IR, examining the highs and lows of the process and offering advice to other implementers. This gives the book the opportunity to draw on extensive practical experience in unexpected areas such as mediated deposit.
(PS: The book is available from Amazon UK for £39.95 but not yet from Amazon US.)
David Goodman, Kristin Antelman, and Nisa Bakkalbasi, Meta-analysis of OA and OAA Manual Determinations, a preprint, self-archived January 6, 2006.
Abstract: Stevan Harnad's group and ours have reported several manual measurements in order to evaluate the accuracy of Chawki Hajjem's robot program, which has been used extensively by Harnad's group. Our group has now prepared an overall meta-analysis of the manual results.
J.W. Grzymala-Busse, Infoscience technology: the impact of internet accessible melanoid data on health issues, Data Science Journal, January 5, 2006.
Abstract: In this paper the development of a new internet information system for analyzing and classifying melanocytic data is briefly described. The system also has some teaching functions and improves the analysis of datasets by calculating values of the TDS (Total Dermatoscopy Score) parameter (Braun-Falco, Stolz, Bilek, Merkle, & Landthaler, 1990; Hippe, Bajcar, Blajdo, Grzymala-Busse, Grzymala-Busse, & Knap, et al., 2003). Calculations are based on two methods: the classical ABCD formula (Braun-Falco et al., 1990) and the optimized ABCD formula (Alvarez, Bajcar, Brown, Grzymala-Busse, & Hippe, 2003). A third method of classification uses quasi-optimal decision trees (Quinlan, 1993). The internet-based tool enables users to make an early, non-invasive diagnosis of melanocytic lesions. This is possible using a built-in set of instructions that animates the diagnosis of the four basic lesion types: benign nevus, blue nevus, suspicious nevus and malignant melanoma. The system is available on the Internet.
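(For context, the classical ABCD rule mentioned in the abstract combines four dermatoscopic sub-scores into the TDS with fixed weights. The sketch below uses the widely cited Stolz coefficients and the conventional cut-off values; both are background assumptions here, not details taken from the paper itself.)

```python
def total_dermatoscopy_score(asymmetry, border, color, structures):
    """Classical ABCD rule: weighted sum of four dermatoscopic sub-scores.

    Typical ranges: asymmetry 0-2, border 0-8, color 1-6, structures 1-5.
    """
    return asymmetry * 1.3 + border * 0.1 + color * 0.5 + structures * 0.5

def classify(tds):
    # Conventional cut-offs: <4.75 benign, 4.75-5.45 suspicious, >5.45 likely malignant.
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "malignant"
```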
Timothy E. Eastman and five co-authors, eScience and archiving for space science, Data Science Journal, January 5, 2006.
Abstract: A confluence of technologies is leading towards revolutionary new interactions between robust data sets, state-of-the-art models and simulations, high-data-rate sensors, and high-performance computing. Data and data systems are central to these new developments in various forms of eScience or grid systems. Space science missions are developing multi-spacecraft, distributed, communications- and computation-intensive, adaptive mission architectures that will further add to the data avalanche. Fortunately, Knowledge Discovery in Database (KDD) tools are rapidly expanding to meet the need for more efficient information extraction and knowledge generation in this data-intensive environment. Concurrently, scientific data management is being augmented by content-based metadata and semantic services. Archiving, eScience and KDD all require a solid foundation in interoperability and systems architecture. These concepts are illustrated through examples of space science data preservation, archiving, and access, including application of the ISO-standard Open Archive Information System (OAIS) architecture.
D. Haglin and three co-authors, A tool for public analysis of scientific data, Data Science Journal, January 5, 2006.
Abstract: The scientific method encourages sharing data with other researchers to independently verify conclusions. Currently, technical barriers impede such public scrutiny. A strategy for offering scientific data for public analysis is described. With this strategy, effectively no requirements of software installation (other than a web browser) or data manipulation are imposed on other researchers to prepare for perusing the scientific data. A prototype showcasing this strategy is described.
Norman Paskin, Digital Object Identifiers for scientific data, Data Science Journal, January 5, 2006.
Abstract: The Digital Object Identifier (DOI) is a system for identifying content objects in the digital environment. DOIs are names assigned to any entity for use on Internet digital networks. Scientific data sets may be identified by DOIs, and several efforts are now underway in this area. This paper outlines the underlying architecture of the DOI system, and two such efforts which are applying DOIs to content objects of scientific data.
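(The naming model in Paskin's abstract is simple enough to sketch: a DOI is an opaque "prefix/suffix" name, and resolution is delegated to a resolver such as the public doi.org proxy. The helper names below are illustrative, not part of any DOI library.)

```python
from urllib.parse import quote

DOI_PROXY = "https://doi.org/"  # public DOI resolver proxy

def parse_doi(doi):
    """Split a DOI into its registrant prefix and item suffix."""
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10.") or not suffix:
        raise ValueError("not a DOI: %r" % doi)
    return prefix, suffix

def resolver_url(doi):
    """Build the proxy URL that redirects to the object's current location."""
    parse_doi(doi)  # validate before building the URL
    return DOI_PROXY + quote(doi, safe="/")
```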
Expert Opinion on Drug Safety is launching an OA letters section to show a wider range of opinion on drug safety. From today's press release:
Since the withdrawal of Vioxx, issues surrounding drug safety, adverse events and pharmacovigilance have become more important than ever, leading to a plethora of debate and discussion regarding drug safety issues. The fine balance between benefit and risk means that there are many differing opinions from experts spanning different disciplines including pharmacology, clinical medicine, pharmacoeconomics and regulatory affairs. Over the last year, Expert Opinion on Drug Safety has seen an increase in the amount of feedback received. To accommodate this, the journal has launched a correspondence section, in which readers can address issues arising from reviews published in the journal. This correspondence section also allows the author to reply, thus stimulating interesting debate - an important forum for discussion. All letters published in the correspondence section are freely available online.
(PS: Some OA is better than no OA. But do OA letters to the editor deserve a press release?)
Since last summer ProQuest has been changing PQDD (ProQuest Digital Dissertations) to PQDT (ProQuest Dissertations and Theses). Here's a glimpse of what's coming in the next phase of the migration:
Open Access Dissertations & Theses in PQDT: as institutions and authors choose to make dissertations and theses available on an Open Access basis, PQDT will display them and make them available for free download. Users will also have standard purchase options for Open Access titles, for example if they wish to have a printed copy sent to them.
And coming in an even later phase:
Release of “Dissertations & Theses@” - (currently called “Current Research@”), this option will allow schools which publish their dissertations & theses through UMI/ProQuest to view all of their own school’s titles free of charge in the ProQuest platform. Titles from the school which are in PQDT in full-text PDF format will be available as well for free instant download. Qualifying schools will not need a PQDT subscription to access their titles via “Dissertations & Theses@”.
Fiona Godlee, The BMJ is evolving, BMJ, January 2006. An editorial. I can tell from the free online comments that the editorial explains why BMJ will reduce its OA content. But unfortunately, the editorial itself is part of the new regime and only the first 150 words are free for non-subscribers, at least so far.
Update. Here are some details I just got by email from BMJ, presumably sent to all registered users:
From January 2006, access to bmj.com will no longer be freely available during the first week of publication. However, the original research articles will remain freely available from the day of publication as part of our commitment to providing open access to peer reviewed research. Other selected articles will also be freely available where we feel these are of major public health importance. All other content will be behind access controls for the year following publication and will be available only to subscribers. For more information, [click here].
Update. I just got the text of the editorial. Excerpt:
Strengthening the research we publish means doing what all good journals are doing to attract the best research in their field. We aim to focus on research that will help doctors make better decisions. We aim to provide a great service to authors, offering speed, useful critique, and courtesy, and context setting if your work is published. And we need to shout a little more about our unique selling points for authors: our broad international readership in paper and online; our high standards of peer review and publication ethics; and our ability to give you as much space as you need on bmj.com while also providing a shorter, more readable, version of your work online and in print. In addition, uniquely among the five major general medical journals, we provide open access to all of our original research articles with free full text available online from the day of publication (although with the business models for scientific publishing in flux, we are keeping these policies under review). We also feed these articles straight to PubMed Central, the most prominent open access archive.
Infotrieve's ArticleFinder is now free of charge. From yesterday's press release:
Infotrieve, Inc., today announced that it had converted ArticleFinder, its online scientific, technical, and medical (STM) database with more than 26 million citations and eight million abstracts from over 54,000 journals, to a free access model. The move provides scientists and researchers, who work for corporations and are subject to different copyright regulations than their academic counterparts, with an end-to-end solution for conducting STM searches across literature from multiple providers. The solution seamlessly retrieves full-text scholarly journal articles that they need on a pay-per-view basis.
(PS: Note that only the search is free. Access to found articles is either pay-per-view or prepaid through the searcher's institutional license. Articles with free online abstracts are marked on the hit page. Are any serious scholarly search engines still charging for search itself?)
Declan Butler, Mashups mix data into global service, Nature, January 5, 2006 (accessible only to subscribers). Excerpt:
Will 2006 be the year of the mashup? Originally used to describe the mixing together of musical tracks, the term now refers to websites that weave data from different sources into a new service. They are becoming increasingly popular, especially for plotting data on maps, covering anything from cafés offering wireless Internet access to traffic conditions. And advocates say they could fundamentally change many areas of science — if researchers can be persuaded to share their data. Some disciplines already have software that allows data from different sources to be combined seamlessly. For example, a bioinformatician can get a gene sequence from the GenBank database, its homologues using the BLAST alignment service, and the resulting protein structures from the Swiss-Model site in one step. And an astronomer can automatically collate all available data for an object, taken by different telescopes at various wavelengths, into one place, rather than having to check each source individually. So far, only researchers with advanced programming skills, working in fields organized enough to have data online and tagged appropriately, have been able to do this. But simpler computer languages and tools are helping....The biodiversity community is one group working to develop such services. To demonstrate the principle, Roderic Page of the University of Glasgow, UK, built what he describes as a “toy” — a mashup called Ispecies.org. If you type in a species name it builds a web page for it showing sequence data from GenBank, literature from Google Scholar and photos from a Yahoo image search. If you could pool data from every museum or lab in the world, “you could do amazing things”, says Page. Donat Agosti of the Natural History Museum in Bern, Switzerland, is working on this. He is one of the driving forces behind AntBase and AntWeb, which bring together data on some 12,000 ant species. 
He hopes that, as well as searching, people will reuse the data to create phylogenetic trees or models of geographic distribution. This would provide the means for a realtime, worldwide collaboration of systematicists, says Norman Johnson, an entomologist at Ohio State University in Columbus. “It has the potential to fundamentally change and improve the way that basic systematic research is conducted.” A major limiting factor is the availability of data in formats that computers can manipulate. To develop AntWeb further, Agosti aims to convert 4,000 papers into machine-readable online descriptions. Another problem is the reluctance of many labs and agencies to share data. But this is changing. A spokesman for the Global Health Atlas from the World Health Organization (WHO), for example, a huge infectious-disease database, says there are plans to make access easier. The Global Biodiversity Information Facility (GBIF) has linked up more than 80 million records in nearly 600 databases in 31 countries. And last month saw the launch of the International Neuroinformatics Coordinating Facility....Page and Agosti hope that researchers will soon become more enthusiastic about sharing. “Once scientists see the value of freeing-up data, mashups will explode,” says Page.
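(The mashup pattern behind Page's Ispecies.org "toy", one species name fanned out to several public sources, can be sketched as plain URL construction. The GenBank endpoint below is NCBI's public E-utilities search service; the other URLs and the function name are only an illustration of the idea, not Page's actual code.)

```python
from urllib.parse import urlencode

def species_mashup_urls(species):
    """Build the per-source query URLs an Ispecies-style mashup would fetch."""
    return {
        # Sequence data: NCBI E-utilities search of GenBank's nucleotide database
        "genbank": "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
                   + urlencode({"db": "nucleotide", "term": species}),
        # Literature via Google Scholar
        "scholar": "https://scholar.google.com/scholar?" + urlencode({"q": species}),
        # Photos via an image search
        "images": "https://images.search.yahoo.com/search/images?"
                  + urlencode({"p": species}),
    }
```

A real mashup would fetch each URL, parse the responses, and assemble one web page; the point is that every source is reachable with nothing more than an HTTP query.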
Stephen Downes has a good comment on Thomas Bacher's doubts about OA.
Numerous points I could pick on, but I'll limit myself to this one, which comes up a few times: "In fact, the press would give away all scholarship if it could find a financial backer to allow this. However, the reality is that costs need to be recovered." One wonders, is the university library under the same constraint? Does it have to earn back the millions it spends on subscriptions? Obviously not - so why not spend the acquisitions budget on publishing instead? Stop paying to buy publications, and start paying to produce them.
(PS: I commented on Bacher's article in a blog posting on Tuesday.)
Dean Giustini, Who cites whom in open access? UBC Google Scholar Blog, January 4, 2006. Excerpt:
A physician up for promotion and tenure asked if Google Scholar could be used with confidence to see who had been citing him. The gold standard in citation tracking and impact factors...is still the ISI Web of Knowledge (WoK). WoK's strengths are high standards and rigorous content selection. However, it is expensive, accessible to subscribers only, and does not track books. Google Scholar tracks books and articles, but its standards are much lower than WoK's (i.e., duplication of results and inflated numbers). Librarians and researchers must learn to live with the lowered bibliographic standards brought on by open access. Open access trumps bibliographic control in the new web ecology. I am still waiting to hear whether MEDLINE will do selective indexing of open access sites in 2006. I expect the WoK or Web of Science to follow suit with a similar announcement this year.
Comment. The answer to the tenure candidate's question has everything to do with the quality of citation tools and almost nothing to do with OA. It may be true today that the priced tool is better than the free tool, but that's contingent and may not last. In any case, the priced tool is selective in its coverage and probably doesn't give all the citations to the tenure candidate's work. OA literature makes citation tracking easier for newcomers to this field, especially those without deep pockets, and we can expect steady improvement in citation harvesting tools for OA literature. But it doesn't follow that as OA literature gains ground, the free tools will surpass the priced tools in quality. Again, this is contingent. First, the priced tools can harvest citations from OA literature as easily as the free tools, if they choose to do so. Second, free tools like Google Scholar also index non-OA sources and over time may or may not do so more comprehensively than the priced tools.
The December issue of Access Magazine is now online. This issue has stories on the book-scanning projects from the British Library and Library of Congress, the Kaufman-Wills report, and the DC Principles Coalition proposal for scaling back the NIH public-access policy. The story on the Kaufman-Wills report does not cover the report's post-publication addendum, which retracts and clarifies some of the report's negative conclusions about peer review at OA journals.
Peter Jacso, As we may search – Comparison of major features of the Web of Science, Scopus, and Google Scholar citation-based and citation-enhanced databases, Current Science, November 10, 2005. (Thanks to Stevan Harnad.) Excerpt:
As Garfield pointed out, traditional indexing has serious limits, and adding more indexers would not be a panacea. ‘Were an army of indexers available, it is still doubtful that the proper subject indexing could be made.’ He added that ‘by using authors’ references in compiling the citation index, we are in reality utilizing an army of indexers, for every time an author makes a reference, he is in effect indexing that work from his point of view’. Technology certainly improved the efficiency of some parts of human indexing, but the ever-increasing indexing quotas of indexers often make the intellectual process look like an assembly-line operation, resulting in declining quality. Current developments validate his vision big time....I am already looking forward to what I shall write about in 2015, which will be not only [Garfield's] 90th birthday but also the 50th anniversary of the conference paper in which he pondered the feasibility of automating citation indexing. This idea has recently become one of the hot issues in information science. Unfortunately, G-S [Google Scholar] gives a bad name to autonomous citation indexing. It shows a lack of competence in, and understanding of, the basic issues of citation indexing. G-S fails even at implementing the most basic Boolean OR operation correctly (Figure 9). Riding on the waves of the regular Google software, which is great for processing the unstructured heap of billions of Web pages, G-S cannot handle even the meticulously tagged, metadata-enriched few million journal articles graciously offered to it by many publishers for free. Some bright minds who designed and implemented autonomous citation indexes [notably, CiteSeer and CiteBase] and citation-parsing tools have clearly proved that citation indexing can be automated successfully if one has some of the intellect, foresight, drive and stamina of Gene Garfield, to whom I wish a happy 50th/80th.
In a posting yesterday on the Open... blog, Glyn Moody reflects on how OA initiatives may preserve Tibetan culture in the face of Chinese repression. Excerpt:
The only consolation is that however brutal China's treatment of Tibet itself becomes, Tibetan culture will live on. As well as a considerable number of Tibetans living in exile around the world (chiefly in India) who keep the flame alive, there are now a number of projects, some major international collaborations, to digitise the unique Tibetan cultural heritage. Once again, the world of bits offers a partial counterbalance to some of the terrible losses taking place in the world of atoms.
James Boyle, Two database cheers for the EU, Financial Times, January 2, 2006. Excerpt:
The European Commission recently did something amazing and admirable. It conducted an empirical evaluation of whether an EU initiative was actually doing any good. The initiative in question was the Database Directive [of 1996], which required the creation of a broad new Community-wide “sui generis” intellectual property right over compilations of facts. The report honestly describes this as “a Community creation with no precedent in any international convention.” Using a methodology similar to the one I described in an earlier column on the subject, the Commission found that “The economic impact of the “sui generis” right on database production is unproven. Introduced to stimulate the production of databases in Europe, the new instrument has had no proven impact on the production of databases.” In fact, their study showed that the production of databases had fallen to pre-Directive levels and that the US database industry, which has no such intellectual property right, was growing faster than the EU’s. The gap appears to be widening....Commission insiders hint that the study may be part of a larger – and welcome – transformation in which a more professional and empirically-based look is being taken at the competitive effects of intellectual property protection. Could we be moving away from faith-based policy in which the assumption is that the more new rights we create, the better off we will be?...[W]hy only two cheers? Well, while the report is a dramatic improvement, traces of the Commission’s older predilection for faith-based policy and voodoo economics still remain. The Commission coupled its empirical study of whether the Directive had actually stimulated the production of new databases with another intriguing kind of empiricism. 
It sent out a questionnaire to the European database industry asking if they liked their intellectual property right – a procedure with all the rigour of setting farm policy by asking French farmers how they feel about agricultural subsidies....Setting intellectual property rights too high can actually stunt innovation....In its conclusion, the report offers a number of possibilities, including repealing the Directive, amending it to limit or remove the sui generis right while leaving the rest of the Directive in place, and keeping the system as it is. The first few options are easy to understand. Who would want to keep a system when it is not increasing database production, or European market-share, and indeed might be actively harmful? But why would we leave things as they are? The Report offers several reasons. First, database companies want to keep the Directive. (The report delicately notes that their “endorsement...is somewhat at odds with the continued success of US publishing and database production that thrives without...[such] protection” but nevertheless appears to be “a political reality”.) Second, repealing the Directive would reopen the debate on what level of protection is needed. Third, change may be costly. Imagine applying these arguments to a drug trial. The patients in the control group have done better than those given the drug, and there is evidence that the drug might be harmful. But the drug companies like their profits, and want to keep the drug on the market. Though “somewhat at odds” with the evidence, this is a “political reality.” Getting rid of the drug would reopen the debate on the search for a cure. Change is costly – true. But what is the purpose of a review, if the status quo is always to be preferred?
Comment. What's the OA connection? First, faith-based IP laws harm research and access to research. Intelligent reforms would help both, even if, strictly speaking, OA can succeed without IP reform. Second, the EU database directive is a large step toward the copyrighting of facts. There is already pressure for the US to harmonize with the EU on this point. But it would be much better, for all the parties and for all the reasons that Boyle discusses, if the EU gave up the special protection of databases and harmonized with the US instead. Let's keep facts free and make the copyrighting of them a legal taboo.
John Savarese, The Impact of Electronic Publishing, Campus Technology, January 1, 2006. Excerpt:
[A] publishing milestone occurred this year. In a world where the top scientific journals can cost subscribers over $10,000 per year, an online science journal with a subscription price of $0 won a 13.9 impact-factor rating from the prestigious Thomson Scientific...citation-counting service (Institute for Scientific Information), which acts as the Nielsen ratings of science publishing. That rating placed PLoS Biology among the top journals in its category, although it was only two years old. The non-profit Public Library of Science currently publishes four journals that embody the principles of the Open Access movement, making peer-reviewed medical and scientific research available worldwide for free, under the Creative Commons license. In place of subscriptions, the authors themselves (or their funding agencies) cover the costs of online publishing, to the tune of about $1,500 per article. The Directory of Open Access Journals lists 1,761 publications that meet its criteria as “free, full-text, quality-controlled scientific and scholarly journals.” A second approach to open access publishing is called “author self-archiving.” Following this so-called “green road,” the author publishes the article in a traditional journal, but retains the rights to post it in an open-access repository. Harnad, who edits the American Scientist Open Access Forum, has advocated this approach. He has argued that self-archiving frees up research more quickly than waiting for every journal (and every scientist) to convert to a new business model. Maximize access to research and you maximize its impact, goes the argument. [Savarese then gives details on some OA repositories and related tools and policies.]
Thomas Bacher, Another View on ‘The Access Principle’, Inside Higher Ed, January 3, 2006. Bacher starts with the interview questions put to John Willinsky (author of The Access Principle) on December 20 and offers his own answers. Bacher is the director of the Purdue University Press.
Comment. Bacher's doubts about OA would carry more weight if he didn't misunderstand OA so frequently. (1) OA advocates do not say that all scholarly information should be free. The focus of the OA movement is on scholarship for which authors do not expect to be paid, such as journal articles. (2) OA advocates do not say that OA literature is costless to produce or that its production costs do not need to be recovered. Nor do they say that funding or subsidy opportunities are the same in every field; on the contrary. (3) It's not true that "easy retrieval" of OA literature "is not available". OA literature is much easier to discover and retrieve than TA literature; that's the point. Literature in OAI-compliant repositories is indexed by OAI-compliant search engines like OAIster and is increasingly being indexed by mainstream search engines like Google, Yahoo, and Scirus. (4) OA advocates object to six-month embargoes as much as Bacher does. OA advocates are battling the embargoes, not erecting them. (5) OA is not about bypassing peer review. It's about removing access barriers to peer-reviewed research.
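(The interoperability behind point (3) rests on the OAI-PMH protocol: a repository exposes its metadata at a base URL, and any harvester, OAIster included, can pull records with simple HTTP GET requests. A minimal sketch of building such a request follows; the repository base URL is illustrative, but the verb and parameter names are from the OAI-PMH v2.0 specification.)

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **kwargs):
    """Build an OAI-PMH harvesting request URL (protocol v2.0)."""
    return base_url + "?" + urlencode(dict(verb=verb, **kwargs))

# Harvest Dublin Core records, the metadata format every
# OAI-PMH repository is required to support.
url = oai_request("https://repository.example.edu/oai",
                  "ListRecords", metadataPrefix="oai_dc")
```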
Responding to the Challenges Facing Scholarly Communications, five white papers and a draft policy released for comment by the University of California Academic Council on December 14. All six documents are relevant to OA, in different degrees:
During University of California negotiations with publishers of scholarly works in 2004, it became clear to UC faculty that the current models of scholarly communication had become unsustainable. UC Librarians and budget officers had seen this crisis approaching for some years. But as long as library budgets could be managed and access to the most critical work could be maintained, faculty members were largely insulated from the growing crisis. When it became clear, in the face of falling university budgets and rising costs of publications, that the UC community’s access to new knowledge would progressively be limited, and that the access by others to UC-produced scholarship would similarly be limited, the Academic Council (effectively the Executive Committee of the UC Academic Senate) established a Special Committee on Scholarly Communication (SCSC) to consider what role the faculty should take in addressing these important issues. The accompanying five short papers and appendices are the result of SCSC’s work. The papers define and explain the faculty’s view of changes that could improve dissemination of scholarly work to enhance the discovery and communication of new knowledge, and best serve the public interest. The current model for many publications is that faculty write articles and books, referee them, edit them and then give them to a publisher with the assignment of copyright. The publisher then sells them back to the faculty and their universities, particularly to university research libraries. While there clearly are costs of publication, a number of publishers (particularly, but not always, for-profit corporations) earn munificent profits for their shareholders and owners. However, maximizing profits for these latter groups may work to the detriment of faculty, educational institutions and the public.
Meanwhile, opportunities to reduce production and distribution costs and to create innovative forms of publication and dissemination are increasingly manifest, and enabled by networked digital technologies, new business models, and new partnerships....One [white paper] discusses copyright issues, and recommends that faculty authors adopt the practice of granting to publishers non-exclusive copyright of their research results, while retaining copyright for other educational purposes, including placing work in open access online repositories....We feel that faculty, University administration, publishers and societies can work collaboratively not only to improve and sustain dissemination of scholarship, but can materially improve it using new technology. It is the Academic Senate’s intention to work actively with the University of California Administration to press for and enact the changes outlined in these papers, and to encourage their wide adoption throughout the world, both by other faculties and universities, and by the publishers of our scholarly work.
Excerpt from the white paper on journal publishing:
The Academic Senate calls upon – and seeks partnerships with – those who publish scholarly journals to:  Seek only the copyrights necessary for first publication;  Concentrate on adding value to, rather than ownership of, scholarship;  Pursue innovation to improve scholarly communication systems;  Avoid monopoly pricing;  Provide transparent financial information;  Enable ongoing access to the persistent scholarly record; and  Provide full information about peer review and copyright policies and processes.
Excerpt from the white paper on copyright management:
We call upon UC faculty and scholars at other institutions to exercise control of their scholarship, and their institutions to support this behavior, in at least the following ways:  UC and other faculty members must manage their intellectual property in ways that ensure the widest dissemination of works in service to education and research. Specifically, and with the understanding that copyright is actually a bundle of rights that can be separately managed, we urge faculty to transfer to publishers only the right of first publication, OR at a minimum, retain rights that allow postprint archiving and subsequent non-profit use.  As part of copyright management, faculty shall routinely grant to The Regents of the University of California a limited, irrevocable, perpetual, worldwide, non-exclusive license to place the faculty member’s scholarly work in a non-commercial open-access repository for purposes of online dissemination and preservation on behalf of the author and the public.  The University must explore and develop support services to assist faculty to manage their copyright and disseminate their scholarship.  University stakeholders must continue to partner, explore, and create a set of information management services including, but not limited to, alternative modes of publishing and disseminating information that allow broadest access at the lowest sustainable cost to the scholar, students and the public.
Excerpt from the proposed copyright policy:
In order to facilitate scholarly communication and maximize the impact of the scholarship of UC faculty, the Academic Council’s Special Committee on Scholarly Communication (SCSC) proposes that the Academic Council consider the following recommended UC copyright policy change: “A faculty member’s ownership of copyright is controlled by the University of California Policy on Ownership of Copyright. University of California faculty shall routinely grant to The Regents of the University of California a limited, irrevocable, perpetual, worldwide, non-exclusive license to place the faculty member’s scholarly work in a non-commercial open-access online repository. In the event a faculty member assigns all or a part of the faculty member’s copyright rights to a publisher as part of a publication agreement, the faculty member must retain the right to grant this license to the Regents.” Faculty can opt out of this agreement for any specific work, or invoke a specific delay before such work appears in an open-access repository....No income will accrue to the Regents, the University or the Academic Senate by this non-exclusive copyright license.
Comment. I can't find the deadline for comments or the address to which comments should be sent. If I were at the U of California, I'd send supportive comments immediately to both the Academic Council and the Special Committee on Scholarly Communication. I might recommend a simplification of the policy, e.g. one that merely requires faculty to deposit their peer-reviewed journal manuscripts in the UC repository immediately upon acceptance for publication. But even without this kind of streamlining, the policy is strong and the accompanying statements of principle are excellent. If you have colleagues at the U of California, please alert them to these documents and urge them to send supportive comments with or without reservations.
Stevan Harnad et al., Model Self-Archiving Policies for Research Funders, Open Access Archivangelism, January 2, 2006. Excerpt:
Two model self-archiving policies for public (and private) research funders have been added as links to the sign-up page of the Institutional Self-Archiving Policy Registry. The recommended policy model is the Stronger Version. The Weaker Version is only intended in cases where there is delay in getting the Stronger Version adopted. The policy models were drafted collaboratively by Alma Swan, Arthur Sale, Subbiah Arunachalam, Peter Suber and Stevan Harnad by modifying the Wellcome Trust Self-Archiving Policy to eliminate the 6-month embargo and the central archiving requirement. I append the Stronger Version below. The two items in which the Weaker Version differs are (2) and (g)....
Alex Golub, The AAA budget and publications, Savage Minds, January 1, 2006. Excerpt:
Over the break I’ve been reading John Willinsky’s The Access Principle: The Case for Open Access to Research and Scholarship. It’s a good book and I recommend it to anyone with an interest in Open Access issues but who can’t digest the massive stream of information that is Peter Suber’s Open Access News blog. In the case of the AAA [American Anthropological Association], I think the argument for open access is more or less won—given anthropology’s populist sensibilities and obsession with ‘relevance’ it’s not surprising that there is strong support (at least in my experience) for the AAA to make its journals open to the public. The biggest problem that open access advocates have is the business model—how will we pay the production costs of these journals? I was surprised, then, to see that in Appendix B of his book Willinsky included the annual publication budgets of a dozen or so scholarly associations—including the AAA. I have to admit that I had never thought of looking up any of these figures, or even wondered where they might be found. Willinsky’s data is based on the AAA’s tax forms for 2000. According to him the AAA made US$4,680,764 that year, US$637,950 of which came from publication revenue as well as US$6,679 in royalties. However, it cost US$790,113 to produce AAA journals. In other words, the AAA lost US$145,504 producing its journals in 2000....What does this all mean? I have absolutely no idea. I was just struck that I had never looked at or thought about these figures before—and this despite the fact that I have heard many, many rants from people on everything ranging from the incredible inefficiency and corruption of the AAA to endless griping about how the rank and file doesn’t understand that yes, it actually costs something to run the AAA. Seeing these figures at least provides some ballpark understanding of where the AAA is, and what kind of latitude it has to go to other places as it pursues different approaches to publication.
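(PS: A quick check of the quoted figures. Publication revenue plus royalties set against journal production cost implies a loss of US$145,484, twenty dollars short of the US$145,504 Golub quotes, presumably a small slip in one of the reported numbers. The variable names below are mine:)

```python
# Back-of-envelope check on the AAA's quoted 2000 journal figures.
revenue = 637_950 + 6_679   # publication revenue + royalties (US$)
cost = 790_113              # cost of producing AAA journals (US$)
print(cost - revenue)       # implied loss: 145484
```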
The Electronic Journal of Severe Storms Meteorology is a new peer-reviewed, open-access journal published by E-Journals of Meteorology, Inc. (Thanks to Weather or Not.) From the site:
The Electronic Journal of Severe Storms Meteorology (EJSSM) is an open-access, international, scientific, formal, online journal for the publication of original and updated research. Through peer reviewed notes and articles, EJSSM serves the community of meteorology that is concerned with severe storms, including both convective and nonconvective severe weather. EJSSM exists to improve understanding, prediction, preparedness and mitigation of all severe local storm hazards....EJSSM encourages open access. Accepted manuscripts are available to all with internet access immediately upon publication. Accepted manuscripts are subject to only minimal page charges, and lack of ability to pay does not preclude or hamper publication.
Bo-Christer Björk, Ziga Turk, and Jonas Holmström, The Scholarly Journal Re-Engineered: A Case Study Of An Open Access Journal In Construction IT, ITcon, December 2005.
Abstract: Open access is a new model for the publishing of scientific journals enabled by the Internet, in which the published articles are freely available for anyone to read. During the 1990’s hundreds of individual open access journals were founded by groups of academics, supported by grants and unpaid voluntary work. During the last five years other types of open access journals, funded by author charges have started to emerge and also established publishers have started to experiment with different variations of open access. This article reports on the experiences of one open access journal (The Electronic Journal of Information Technology in Construction, ITcon) over its ten year history. In addition to a straightforward account of the lessons learned the journal is also benchmarked against a number of competitors in the same research area and its development is put into the larger perspective of changes in scholarly publishing. The main findings are: That a journal publishing around 20-30 articles per year, equivalent to a typical quarterly journal, can sustainably be produced using an open-source-like production model. The journal outperforms its competitors in some respects, such as the speed of publication, availability of the results and balanced global distribution of authorship, and is on a par with them in most other respects. The key statistics for ITcon are: Acceptance rate 55%. Average speed of publication 6-7 months. 801 subscribers to email alerts. Average number of downloads by human readers per paper per month 21.
I just mailed the January issue of the SPARC Open Access Newsletter. This issue reviews open access developments in 2005, offers some predictions for 2006, and takes a close look at the CURES Act. The Top Stories section takes a brief look at the Ukrainian resolution to make OA a national priority, the new CERN task force on OA, the NeuroCommons initiative from Science Commons, new evidence of the OA impact advantage in 10 fields, the Wellcome Trust OA agreement with three publishers, and December developments on OA policy in the UK.
The December 2005 issue of Serials Review is now online. It contains a seven-article Special Focus on Publisher Initiatives with Developing Countries. These are enhanced-access initiatives like AGORA, AJOL, eIFL.net, HINARI, TEEAL --and OA in general. Only the TOC and abstracts are free online, at least so far. (Thanks to Charles W. Bailey, Jr.)
Matt Asay, What I Learned in 2005...Lesson #7: Safety in open access, InfoWorld, January 2, 2006. Excerpt:
[I was recently reading] Eusebius' Ecclesiastical History....Roughly 100 pages into it, I came across an interesting chapter detailing which books/epistles/etc. are counted as scripture, and which are not. The fascinating part of this is the role that authority plays in making such decisions. For instance, I've read a fair amount in the Old Testament Pseudepigrapha, as well as the New Testament Apocrypha, but these are not considered "official" books of scripture. Centuries ago, a small group of experts determined what was official, and what was not....I appreciate the role of experts as well as the next person, but I'd rather trust in numbers, in open access. This is, I suppose, a Wisdom of Crowds-type argument. But really it's more a testament to my personal faith in openness. I think rubbish generally outs itself over time, as does truth, scientific or otherwise. Give enough people access, and good things will prove themselves good, and bad will prove themselves bad....Access is critical, both to those who look and to those who don't. As I've written before, the option of source code access in open source serves as a useful surrogate for actual exercise of that choice, just as an open process for choosing scripture over apocrypha, open voting, open/free market economies, etc. provide for (or, at worst, enable) better informed, optimal results. No decision is best made blindly. No product is best defined, designed, and implemented in an information/feedback vacuum. Opening up source code means customers can place greater trust in the software they use even if they never read a single line of code, precisely because others can exercise this choice in their stead. I need to spend more time thinking about how to implement this openness in one's business model. I'm hoping that you, along with me, will spend a great deal of time working this out in 2006.
D. Vlieghe and six co-authors, A new generation of JASPAR, the open-access repository for transcription factor binding site profiles, Nucleic Acids Research, January 1, 2006.
Abstract: JASPAR is the most complete open-access collection of transcription factor binding site (TFBS) matrices. In this new release, JASPAR grows into a meta-database of collections of TFBS models derived by diverse approaches. We present JASPAR CORE, an expanded version of the original, non-redundant collection of annotated, high-quality matrix-based transcription factor binding profiles; JASPAR FAM, a collection of familial TFBS models; and JASPAR phyloFACTS, a set of matrices computationally derived from statistically overrepresented, evolutionarily conserved regulatory region motifs from mammalian genomes. JASPAR phyloFACTS serves as a non-redundant extension to JASPAR CORE, enhancing the overall breadth of JASPAR for promoter sequence analysis. The new release of JASPAR is available [here].
(PS: Although NAR is an OA journal, the January 2006 issue isn't yet online. So for the time being, only the abstract is free online.)
Jeffrey M. Drazen and Alastair J.J. Wood, Trial Registration Report Card, New England Journal of Medicine, December 29, 2005. Excerpt:
The academic establishment and patients have argued that when patients, motivated by altruism, participate (or even consider participating) in a clinical trial, they are entitled to understand fully all the options available to them in the various trials that are currently recruiting subjects. In addition, their participation in a clinical trial should result in generalizable knowledge that will be available to future patients and investigators to improve patient care. This can happen only when appropriate details of the clinical trial are made available to the public in a timely fashion. The Internet and public registries have made this possible. Some in industry have argued that to open their portfolio of clinical trials to public scrutiny, particularly the scrutiny of other drug companies, would put them at such a competitive disadvantage that they would be unable to bring new products to market. Congress, however, decided to encourage openness by enacting on November 21, 1997, Section 113 of the Food and Drug Administration Modernization Act (FDAMA 113)....In September 2004, the International Committee of Medical Journal Editors (ICMJE) announced that its journals would not publish the results of any ongoing trial that had not been appropriately registered in ClinicalTrials.gov or another qualified public registry by September 13, 2005. In this issue of the Journal, Zarin et al. provide a report card on compliance with these legislative and ICMJE requirements and the quality of the reporting that occurred before and after the ICMJE deadline for clinical trial registration....Zarin et al. show that there was a dramatic change in the number of trials registered during the summer of 2005. There can be no doubt that this spike was related to the ICMJE statement and deadline, because the rate of registration fell (though to a rate higher than that before the statement) after the deadline for registration passed. 
In addition, they show that the Intervention Name field was universally completed in a meaningful fashion when the trials were sponsored by academic institutions or the National Institutes of Health. In contrast, among trials registered by commercial sponsors, compliance with this field was variable....The vast majority of commercial entities provided meaningful data in most of their entries before the ICMJE statement and continued to do so during the summer of 2005. However, in the spring, some companies, such as Merck, GlaxoSmithKline, and Pfizer, provided meaningful entries in the Intervention Name field in an astonishingly low number of registrations. During the summer, Merck amended most of its meaningless entries to include clinically useful information in this field; by October they were in compliance in more than 99 percent of their registrations. GlaxoSmithKline and Pfizer are still using meaningless entries in the Intervention Name field in 21 percent and 11 percent of entries, respectively. This is puzzling, since most other companies are able to comply fully with the requirements...The second critical measure examined by Zarin et al. was the number of records with the Primary Outcome field completed. Here the data are less reassuring, and the performance of some companies remains abysmal. Of note, by October 2005, Novartis had completed this field only 3 percent of the time, and Merck only 20 percent of the time. Again, many of their competitors were in virtually full compliance, undercutting any argument that this failure reflects a commercial imperative....The ICMJE requirement that clinical trials be registered if they are to be considered for publication has been a resounding success. But the report cards for some companies would read "improved but could do better." We demand complete compliance, because trial registration makes moral sense. 
When patients put themselves at risk to participate in clinical trials, they do so with the tacit understanding that their risk is part of the public record, not merely the secret record of the sponsor. In our opinion, it is unacceptable for a trial sponsor not to register its trial in a complete, meaningful, and timely fashion. We call for all clinical investigators and patients to participate only in fully registered trials. This call has recently been echoed by the major organization representing academic medical centers in the United States — the Association of American Medical Colleges. If a company continues to register trials using meaningless data, with no respect for the registration process and the patients who participate in those trials, investigators and patients should refuse to participate. If a company realizes that secrecy and failure to register mean that it cannot meet its recruitment goals, it will quickly recognize that the consequence of that secrecy is commercial failure, not competitive success. We must monitor the companies that are currently noncompliant to ensure that all live up to the spirit of the law as reflected in meaningful clinical trial registration.
The editors of eWeek have reviewed their position on Google Library and reaffirmed it. Excerpt:
Google, through its Google Print project, once again came up with an idea for an innovative and useful service, only to raise the ire of copyright zealots. We're in the publishing business ourselves, and we firmly support copyright laws. Google Print seems to recognize these rights, even as it seeks to enable cyber-citizens to quickly search and find what they need. We think that, eventually, content creators will recognize what a boon such a tool can be in providing information, in helping to develop new content and in exposing the works of content creators to a far wider public than would otherwise be possible.
In the December 31 issue of Evidence in Motion, Deydre Teyhen summarizes an OA article on a 14-year longitudinal study of aerobic exercise. The article was published in BioMed Central's Arthritis Research & Therapy, but Teyhen learned about it from the February 2006 issue of Runner's World. She notes:
It is really exciting to see how open access journals are allowing research to trickle down faster to magazines meant for the general population...Very exciting! (PS: Exactly.)
Dean Giustini has compiled a list of the Top Five 2005 Search Trends in Medicine. Number 2 on his list is the search of OA repositories, for example through OAIster, Google Scholar, Scirus, and Wiki-Med. The other four quickly: medical blogging, daily alerts, digital multimedia, and online medical education.
Heather Morrison, The Dramatic Growth of Open Access: Dec. 31, 2005 Update & 2006 Predictions, Imaginary Journal of Poetic Economics, December 31, 2005. Excerpt:
The dramatic growth of open access continues! Many areas of open access, particularly OA journals and repositories, appear to be showing a very rapid growth rate of more than 40% annually. The total number of articles in repositories is showing a somewhat lower growth rate, just over 25% annually. My predictions for 2006: Open Access Journals: continued high growth rate....Institutional Repositories: very high growth in repositories, slower growth in articles / documents, a trend that will gradually reverse....
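(PS: For scale, if those quoted rates were to compound steadily year over year, a rough assumption, the implied doubling times are about two and three years respectively. A quick sketch, with a helper function of my own naming:)

```python
# Doubling times implied by Morrison's quoted growth rates,
# assuming constant annual compounding.
import math

def doubling_time(annual_rate):
    """Years to double at a constant annual growth rate (0.40 = 40%)."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.40), 1))  # OA journals/repositories, >40%/yr
print(round(doubling_time(0.25), 1))  # articles in repositories, ~25%/yr
```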
(PS: I'll publish my review of OA in 2005 and predictions for 2006 in tomorrow's issue of my newsletter.)
Dominique Babini, Latin America and the Caribbean Social Science Virtual Libraries Network. (Open access to full-text social science publications from Latin America and the Caribbean: the case of CLACSO’s virtual libraries network), in: Proceedings 50th Annual Conference of the Seminar on the Acquisition of Latin American Library Materials (SALALM, Hilton University of Florida Conference Center, Gainesville-Florida), 2005. Self-archived December 31, 2005.
Abstract: Emerging trends in academic e-publishing and e-libraries in Latin America and the Caribbean are more related to particular problems in the region --reduced number of copies printed, inter-library loans nearly nonexistent among cities and countries due to postal costs, discontinuity in library collections-- than to the dynamics of the international academic editorial business. This presentation describes how CLACSO, an academic network gathering 168 social science research institutions from 21 countries in Latin America and the Caribbean, is working towards a cooperative portal for open access to full-text publications of CLACSO’s network in support of education and research.