News from the open access movement
The International Journal for Educational Integrity is a new peer-reviewed, open-access journal published by the Asia Pacific Forum on Educational Integrity. The inaugural issue (December 2005) is now online. From the web site:
The journal challenges readers to consider the changing nature of education in a globalised environment, and the impact that conceptions of educational integrity have on issues of pedagogy, academic standards, intercultural understanding and equity. Articles of interest to the IJEI readership may include but are not limited to the following areas as they relate to educational integrity: plagiarism, cheating, academic integrity, honour codes, teaching and learning, university governance and student motivation. Submissions may include original research (including practitioner research), theoretical discussions and review papers....This journal provides open access to all of its content on the principle that making research freely available to the public supports a greater global exchange of knowledge. Such access is associated with increased readership and citation levels. The journal uses open source software, developed by the Public Knowledge Project, to help make open access economically viable, as well as to improve the scholarly and public quality of research.
C. Hajjem, S. Harnad, and Y. Gingras, Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact, IEEE Data Engineering Bulletin, 2005. Self-archived December 16, 2005.
Abstract: In 2001, Lawrence found that articles in computer science that were openly accessible (OA) on the Web were cited substantially more than those that were not. We have since replicated this effect in physics. To further test its cross-disciplinary generality, we used 1,307,038 articles published across 12 years (1992-2003) in 10 disciplines (Biology, Psychology, Sociology, Health, Political Science, Economics, Education, Law, Business, Management). We designed a robot that trawls the Web for full-texts using reference metadata (author, title, journal, etc.) and citation data from the Institute for Scientific Information (ISI) database. A preliminary signal-detection analysis of the robot's accuracy yielded a signal detectability d'=2.45 and bias = 0.52. The overall percentage of OA (relative to total OA + NOA) articles varies from 5%-16% (depending on discipline, year and country) and is slowly climbing annually (correlation r=.76, sample size N=12, probability p < 0.005). Comparing OA and NOA articles in the same journal/year, OA articles have consistently more citations, the advantage varying from 25%-250% by discipline and year. Comparing articles within six citation ranges (0, 1, 2-3, 4-7, 8-15, 16+ citations), the annual percentage of OA articles is growing significantly faster than NOA within every citation range (r > .90, N=12, p < .0005) and the effect is greater with the more highly cited articles (r = .98, N=6, p < .005). Causality cannot be determined from these data, but our prior finding of a similar pattern in physics, where percent OA is much higher (and even approaches 100% in some subfields), makes it unlikely that the OA citation advantage is merely or mostly a self-selection bias (for making only one's better articles OA). 
Further research will analyze the effect's timing, causal components and relation to other variables, such as, download counts, journal citation averages, article quality, co-citation measures, hub/authority ranks, growth rate, longevity, and other new impact measures generated by the growing OA database.
Andrea Foster, Wikipedia, the Free Online Encyclopedia, Ponders a New Entity: Wikiversity, Chronicle of Higher Education, December 16, 2005 (accessible only to subscribers). Excerpt:
Fans of Wikipedia, the popular online encyclopedia that anyone can edit, have proposed the creation of Wikiversity, an electronic institution of learning that would be just as open. It's not clear exactly how extensive Wikiversity would be. Some think it should serve only as a repository for educational materials; others think it should also play host to online courses; and still others want it to offer degrees. On a Wikiversity Web site, Cormac Lawler, a doctoral candidate in education at the University of Manchester, in England, says the mission of Wikiversity is to use the open-source model -- based on software that anyone is free to modify -- to develop learning materials, teach, conduct research, and publish. Collaborative learning would be stressed, and students themselves could determine course content and activities. Mr. Lawler, who is a lead proponent of Wikiversity, says he wants the project to focus on original research....The Wikimedia board last month asked proponents to clarify the project. It decided that Wikiversity would not be a host for online courses or promote itself as a degree-granting institution. But many hope the board will eventually reconsider its decision about courses. In the meantime, about 15 people have already created online courses on the Wikibooks Web site.
Marco Marandola died of a heart attack last week at the age of 36. From the announcement by Paola Gargiulo:
With much regret I have to inform you that Marco Marandola, an Italian copyright expert and electronic licensing consultant, suddenly passed away last week....He was a consultant for many international organizations of libraries and museums, including IFLA, EBLIDA, and ICOM. For more than ten years he was a strong advocate of the special position of libraries in copyright legislation and lobbied on copyright issues within the European Parliament, the European Commission and the World Intellectual Property Organization. Recently he became quite involved in the Open Access Movement, especially in Spain, where he had moved. He was well known not only in the Italian library community but also abroad. He was very much appreciated for his competence, generosity and gentleness. Marco will be greatly missed by all the people who had the chance to know him or to work with him.
(PS: I can add a personal note. Marco offered to translate my newsletter into Spanish, and finished the July 2005 issue before other obligations made it impossible for him to continue. I am profoundly saddened to lose such a committed friend and colleague at such a young age.)
Michael L. Nelson and Johan Bollen, If You Harvest arXiv.org, Will They Come? IEEE Technical Committee on Digital Libraries Bulletin, 2, 1 (2005). A poster with annotations. Excerpt:
The NASA Technical Report Server (NTRS) is an Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) compliant aggregator, harvesting from 17 repositories. When NTRS was created, there were few scientific, technical and medical (STM) OAI-PMH repositories, so non-NASA STM repositories were included: arXiv.org, BioMed Central, Energy Citation Database, and the Aeronautical Research Council (the UK equivalent of NASA's predecessor, NACA). In NTRS's simple search mode, only NASA repositories are searched. Advanced searches have the option of including non-NASA repositories. Thus users never receive non-NASA results unless they explicitly request them. We examined 13 months of NTRS log data. NTRS is instrumented to record when a user requests a download of the full-text content. Despite a large number of records, the Energy Citation Database, BioMed Central and arXiv.org contributed few downloads. ARC represents a significant number of downloads. This indicates users will select non-NASA repositories from the advanced search interface (logs show the advanced search is used twice as often as the simple search), and the prominence of both NACA and ARC suggests an interest in historical aeronautical publications. The subject matter of ARC is similar to the NASA repositories, suggesting NTRS remains aerospace-focused and the presence of other STM materials has yet to expand its user base. arXiv.org is the best-known OAI-PMH repository and is harvested by many OAI-PMH service providers, but its presence did not guarantee its use in NTRS.
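(PS: For readers unfamiliar with the protocol, OAI-PMH harvesting of the kind NTRS does reduces to issuing simple HTTP requests and parsing the XML responses. The following is a minimal illustrative sketch, not NTRS's actual code; the helper names are mine.)

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Dublin Core namespace used by the oai_dc metadata format.
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc", oai_set=None):
    """Build an OAI-PMH ListRecords request URL, the verb a
    harvester uses to pull metadata records in bulk."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if oai_set is not None:
        params["set"] = oai_set
    return base_url + "?" + urllib.parse.urlencode(params)

def record_titles(response_xml):
    """Extract Dublin Core titles from a ListRecords response."""
    root = ET.fromstring(response_xml)
    return [t.text for t in root.iter(DC_NS + "title")]

# Example: the shape of a request against arXiv's OAI-PMH endpoint.
url = list_records_url("http://export.arxiv.org/oai2")
```

A real harvester must also follow the resumptionToken returned in each response to page through large result sets, which this sketch omits.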
Jeroen Bekaert, Xiaoming Liu, and Herbert Van de Sompel, aDORe, A Modular and Standards-Based Digital Object Repository at the Los Alamos National Laboratory, IEEE Technical Committee on Digital Libraries Bulletin, 2, 1 (2005). A poster with annotations. Excerpt:
Over the last two years, the Digital Library Research and Prototyping Team of the LANL Research Library has worked on the design of the aDORe repository architecture aimed at ingesting, storing, and making accessible to downstream applications an ever growing heterogeneous collection of Digital Objects. The aDORe architecture is highly modular and standards-based.
Google: Search or Destroy? OpenDemocracy, December 16, 2005. The Australian National University brought five "librarians, lawyers, legislators and thinkers" together to discuss Google Library: Moyra McAllister, Roger Clarke, Chris Creswell, Sarah Waladan, Michael Handler, and Matthew Rimmer. This article is an edited transcript of their remarks, but the audio files are also available.
Fred Friend has written a report on yesterday's debate on OA in the House of Commons. Excerpt:
The Debate held on 15 December in the UK Parliament on the Report on Scientific Publications by the Science and Technology Committee (HC399) was disappointing and depressing. Disappointing because so many of the old myths about open access re-surfaced and depressing because the Junior Minister present took 20 minutes to say that the UK Government intends to do nothing. Nine Members of Parliament attended the Debate, not a large number but par for the course for a supposedly non-controversial topic. The full three hours allocated were used, and one disappointing feature was that (subjectively) around 85% of the time was spent on open access publishing, only about 10% on open repositories, and about 5% on trivia such as the fact that one MP has published in "Nature" while another has only published in Royal Society of Chemistry journals....The HC399 Report is a great tribute to the quality of the UK parliamentary system; the Debate on the Report did not live up to the quality of the Report.
Ed Oswald, Librarians Voice Support for OpenDoc, Beta News, December 16, 2005. Excerpt:
Five library associations voiced their support for the use of OpenDocument (ODF) in Massachusetts this week, sending a letter to William Galvin, the Commonwealth's Secretary of State. In it, the groups say the open source format is the best choice, as everyone has access to its specifications...."An important aspect of fostering access to information is ensuring that future generations will be able to read government information created today," the letter reads. The groups argued that although digital technology is creating new ways to access more information, it has made libraries' preservation role more difficult. Now, not only do these libraries have to store documents, but also ensure they have backwards-compatible applications to view them. The letter was backed by the American Association of Law Libraries, the American Library Association, the Association of Research Libraries, the Medical Library Association and the Special Libraries Association. The five groups together represent over 139,000 libraries in the United States employing 350,000 librarians.
C. Hajjem, Y. Gingras, T. Brody, L. Carr, and S. Harnad, Open Access to Research Increases Citation Impact, Technical Report, Institut des sciences cognitives, Université du Québec à Montréal, self-archived December 16, 2005.
Abstract: We analyzed the effect of providing 'Open Access' (OA; free online access to research articles) on their 'citation impact' (how often they are cited). Using a subset of the ISI CD-ROM database from 1992 - 2003, we compared, within each journal and year, articles to which their authors had (OA) or had not (NOA) provided open access by self-archiving them on the web. The number of OA and NOA articles and their respective citation counts were calculated within biology, business, psychology and sociology journals. The percentage of OA articles varied from 5-20% (mean and median, 12%). The citation counts (OA-NOA/NOA) showed a consistent OA advantage (mean 96%, median 73%) for all four fields and 28 subspecialties tested, varying from 25% to over 250%. An OA impact advantage has already been reported in the physical sciences and engineering (physics, computer science), but there was uncertainty about whether the same thing happens in other disciplines. Our data now show that both the biological and the social sciences show the OA advantage, and are hence likewise losing substantial amounts of potential impact for the 80-95% of their articles that are not yet self-archived. These results confirm that a mandatory self-archiving policy on the part of research institutions and funders would greatly enhance the impact of research results in all disciplines.
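(PS: The advantage percentages quoted in the abstract come from a simple ratio: within each journal and year, compare mean citation counts for OA and non-OA articles. A minimal sketch of that comparison, with variable names of my own choosing:)

```python
def oa_citation_advantage(oa_citations, noa_citations):
    """Percentage OA citation advantage, (OA - NOA) / NOA, computed
    over citation counts of articles from the same journal and year."""
    mean_oa = sum(oa_citations) / len(oa_citations)
    mean_noa = sum(noa_citations) / len(noa_citations)
    return 100.0 * (mean_oa - mean_noa) / mean_noa

# e.g. OA articles averaging 8 citations vs. 4 for NOA -> 100% advantage
advantage = oa_citation_advantage([10, 6], [4, 4])
```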
K. Antelman, N. Bakkalbasi, D. Goodman, C. Hajjem, and S. Harnad, Evaluation of Algorithm Performance on Identifying OA, Technical Report, North Carolina State University Libraries, North Carolina State University, self-archived December 16, 2005.
Abstract: This is a second signal-detection analysis of the accuracy of a robot in detecting open access (OA) articles (checking by hand how many of the articles the robot tagged OA were really OA, and vice versa). A first analysis, on a smaller sample (Biology: 100 OA, 100 non-OA), had found a detectability (d') of 2.45 and a bias of 0.52 (hits 93%, false positives 16%; Biology %OA: 14%; OA citation advantage: 50%). The present analysis on a larger sample (Biology: 272 OA, 272 non-OA) found a detectability of 0.98 and a bias of 0.78 (hits 77%, false positives 41%; Biology %OA: 16%; OA citation advantage: 64%). An analysis in Sociology (177 OA, 177 non-OA) found near-chance detectability (d' = 0.11) and an OA bias of 0.99 (hits 9%, false alarms -2%; prior robot estimate of Sociology %OA: 23%; present estimate: 15%). It was not possible from these data to estimate the Sociology OA citation advantage. CONCLUSIONS: The robot significantly overcodes for OA. In Biology 2002, 40% of identified OA was in fact OA. In Sociology 2000, only 18% of identified OA was in fact OA. Missed OA was lower: 12% in Biology 2002 and 14% in Sociology 2000. The sources of the error are impossible to determine from the present data, since the algorithm did not capture URLs for documents identified as OA. In conclusion, the robot is not yet performing at a desirable level; future work is needed to determine the causes and improve the algorithm.
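(PS: The detectability and bias figures can be roughly reproduced from the hit and false-positive rates alone, assuming d' is the difference of the normal-deviate (z) transforms and "bias" is the likelihood-ratio statistic beta; the latter is an assumption on my part, since the report does not say which bias measure it uses.)

```python
import math
from statistics import NormalDist

def signal_detection(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA) and likelihood-ratio bias
    beta = exp((z(FA)^2 - z(H)^2) / 2), from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, math.exp((zf ** 2 - zh ** 2) / 2)

# The first sample's 93% hits and 16% false positives give
# d' of about 2.47 and beta of about 0.55, close to the
# reported 2.45 and 0.52.
d_prime, beta = signal_detection(0.93, 0.16)
```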
Cyrus Farivar, The Web Will Read You a Story, Wired News, December 16, 2005. Excerpt:
This summer, Hugh McGuire was searching for free audio books online from his home in Montreal. He didn't find very much. So McGuire launched LibriVox by recruiting amateur readers to create audio files of works of literature. The project now includes almost two dozen complete works, including Joseph Conrad's The Secret Agent, Jack London's The Call of the Wild and other classic novels and poems....Like Project Gutenberg, which inspired McGuire to launch the project, LibriVox employs volunteers from around the globe to participate in recording works. Each book is divided up into chapters, and each person records one chapter, which usually ends up being about 20 or 30 minutes of audio. The files are hosted on Brewster Kahle's Internet Archive and are available in MP3 and OGG formats.
(PS: There's another connection to the Internet Archive. LibriVox is working with the Open Content Alliance to produce OA audio editions of the OCA's OA text editions.)
Michael Geist is criticizing the Association of Universities and Colleges of Canada (AUCC) for the limited range of questions it has put to Canadian party leaders. Excerpt:
The AUCC chose to ask essentially the same question in nine different ways. That question - how much are you prepared to spend on higher education? Now that is an important question and I'm glad that the AUCC is putting the funding issue on the national agenda. But it cannot be the only question. The education community must also use its position to focus on copyright reform, open access, the use of technology for distance education, digital libraries, and countless other issues that strike at the heart of teaching and research at universities across the country. The failure to raise even one of these issues is embarrassing.
Richard Poynder, A Real Tragedy, Open and Shut, December 16, 2005. Excerpt:
In writing my recent article about the Royal Society's position statement on open access I contacted a number of Fellows of the Society, including some of those who had written an open letter objecting to the "largely negative stance" taken in the statement. After publishing the article I received an e-mail from Professor Richard Roberts, chief scientific officer at New England Biolabs. Professor Roberts, who signed the open letter, had been travelling when I e-mailed my questions to him, so I was unable to incorporate his views into the article.... Professor Roberts is a Nobel Laureate, a Fellow of the Royal Society, a research editorial board member for the open access journal PLoS Biology, and senior executive editor of the journal Nucleic Acids Research, published by Oxford University Press.
Roy Tennant, The Open Content Alliance, Library Journal, December 15, 2005. Excerpt:
A year [after the launch of Google's book-scanning project] we still don't know much more about [its] procedures....By contrast, a similar initiative was recently announced about which we already know much more. Maybe that's why it's called the Open Content Alliance (OCA), put forward by the Internet Archive, Yahoo!, and a number of large libraries, including my employer, the California Digital Library. Microsoft shortly thereafter announced support as well, and additional libraries likely will join. Yahoo!, Microsoft, and the libraries themselves are paying the Internet Archive to digitize materials at 10¢ a page -- an excellent price for nondestructive scanning. The resulting files will be made available at the Internet Archive web site and likely at other locations....Since the OCA is focusing on out-of-copyright material, it is dodging the legal fight that Google is taking head-on. This means that all OCA content will be viewable in its entirety online. But the project goes further. The digitized files and their associated metadata will be available for complete downloading, thereby allowing anyone to create singular presentations of this material....The importance of this becomes clearer by visiting the Open Library site, where the Internet Archive has mounted a few dozen of the books already digitized. The method closely resembles paging through a physical book. Although this presentation may seem compelling, some potential drawbacks soon become apparent. It's difficult to jump to a particular chapter, for example, and other features such as searching and the all-important ability to magnify the page don't work yet. Still, if you do not like this orientation, you can create your own.
Clicking on “Details” while viewing an Open Library book pulls up a small window giving some core metadata about the title and a link to the Internet Archive site that allows anyone to download a PDF or DjVu format of the book, or even the entire package of digital files from which these presentations were created. These books, in other words, are as open and accessible as possible....It's unclear whether the OCA project will rival the Google Library project in size. Since it is easier for organizations to participate, the OCA will easily have more participants, but the Google project may lead in the number of digitized volumes if it fulfills its promise. Only time will tell. In any case, more digitized content is likely a better thing overall....Collaborations among participating libraries are also likely, if for no other reason than to minimize duplication. There are other opportunities for collaboration and not just among OCA libraries but with the “Google Five” and many other institutions involved with digitizing content. Open digitized content, after all, is a growing boon to all of our libraries and the users we serve.
Ben Crowell, All Systems Go: The Newly Emerging Infrastructure to Support Free Books, December 15, 2005.
Abstract: With the cost of college textbooks up 62% over the last decade, pressure is building for an alternative model of publishing: the free book. Five years ago, an author had to be very persistent --- maybe even a little crazy --- to try the new approach. But now a whole new infrastructure is springing up to make it easier.
From the body of the article:
Five years ago, people looked at me funny when I expressed my enthusiasm for free books. You mean like Project Gutenberg? Downloading Hamlet for free? No no, I explained, I was talking about books intentionally set free by their authors. Huh? You know, like Linux. Open-source books. You mean like that party game where you sit in a circle, and everybody takes turns making up the next part of the story? No no no, I'd explain, serious tomes on weighty subjects: calculus, Proust, cell biology.
Towards Knowledge Societies: UNESCO World Report, UNESCO, November 4, 2005. Excerpt:
[p. 26:] Without the promotion of a new ethics of knowledge based on sharing and cooperation, the most advanced countries’ tendency to capitalize on their advance might lead to depriving the poorest nations of such cognitive assets as new medical and agronomical knowledge, and to creating an environment that impedes the growth of knowledge. It will therefore be necessary to find a balance between protecting intellectual property and promoting the public domain of knowledge: universal access to knowledge must remain the pillar that supports the transition to knowledge societies....[p. 116:] [C]ost-free access is in no way equivalent to cost-free production of the knowledge in question....While researchers are bent on access and publishers on control, everyone has an interest in making the production of scientific publications both abundant and diversified....[pp. 169-70:] UNESCO has undertaken to “promote free and universal access to public domain information for the purposes of education, science and culture” and adopted to that end, in 2003, the Recommendation concerning the Promotion and Use of Multilingualism and Universal Access to Cyberspace....Knowledge itself, as an inexhaustible commons available to all human beings, is, if not a global public good (cf. Box 10.5), at least a “common public good”. For not only can knowledge not be regarded as a marketable good like others, but also knowledge only has value if it is shared by all....[p. 172:] If we accept that scientific knowledge is a “public good”, it follows that scientific data and information should be made as widely available and affordable as possible, since the benefits for society will be a function of the number of people able to share them....Scientists are worried that the excessive privatization and commercialization of scientific data and information is undermining the traditional sharing ethos of science by shrinking the public domain and threatening open access to global public goods, with a consequential loss of opportunity at both the national and international levels. What would the consequences have been for global health research if the human genome project had been commercialized, for example? Initiated by the United States Government in the late 1980s, the project was threatened by a corporate rival in 1998. At that point, the Wellcome Trust, a United Kingdom charity, teamed up with the United States Government, increasing massively its investment in the project so that its own Sanger Institute could decode one-third of the 3 billion “letters” that make up “the code of life”. Today, the completed sequences are freely available to the world’s scientific community....Whereas there has been a strong focus on new commercial opportunities using digitalized information and on the intellectual property rights issue, comparatively little attention has been devoted to the importance of maintaining open access to the source of upstream scientific data and of information produced in the public domain for the benefit of all downstream users....How do you preserve and promote access to public science without unduly restricting commercial opportunities and the legitimate rights of authors?...[p. 173:] ICSU and CODATA have established a joint ad hoc Group on Data and Information.
This Group drafted a core set of principles in June 2000 to support full and open access to data needed for scientific research and education (see Box 10.6)....Like private publishers, professional societies are searching for an optimum balance between open access and financial viability. Some professional societies and other groups have embraced the open access model, although the majority still tends towards a more protective approach....[p. 174:] ICSU’s core principles in support of full and open access to data:...Scientific advances rely on full and open access to data. Both science and the public are well served by a system of scholarly research and communication with minimal constraints on the availability of data for further analysis. The tradition of full and open access to data has led to breakthroughs in scientific understanding, as well as to later economic and public policy benefits. The idea that an individual or organization can control access to or claim ownership of the facts of nature is foreign to science....Legislators should take into account the impact that intellectual property laws may have on research and education. The balance achieved in the current copyright laws, while imperfect, has allowed science to flourish. It has also supported a successful publishing industry. Any new legislation should strike a balance while continuing to ensure full and open access to data needed for scientific research and education....[pp. 175-176:] Innovative models for low-cost access to online scientific information and data:...[on PERI, HINARI, eJDS, DATAD, Ptolemy Project, OAI, AGORA, and UNESCO's Virtual Laboratory CD-ROM Toolkit, PLoS, and JPGM].
The December issue of D-Lib Magazine is now online. Here are the OA-related articles.
David Allen Sibley and two co-authors, Sora is a bird but also a research tool, Charlotte Observer, December 15, 2005.
Soras are unobtrusive birds. A bit smaller than a robin, and a bit bigger than a bluebird, they generally spend their time hidden deep in dense marsh vegetation and are tough to get a look at. As such, they are not an obvious symbol of openness and visibility in the world of ornithology. Nonetheless, the Searchable Ornithological Research Archive has taken its name from the ready-made acronym provided by this small bird, and is set to become an important resource for anyone interested in the scientific study of birds. SORA is the avian contribution to the open-access movement, which makes scientific journal articles accessible over the Web, where anyone can read them. The project results from collaboration between the leading ornithological societies in North America and the University of New Mexico to create an archive of research published in the top bird journals. And unlike the real soras, the electronic library is easy to find on the [internet]: [here].
The January 2006 issue of Learned Publishing is now online. Here are the OA-related articles. Only abstracts are free online for non-subscribers, at least so far.
Birgit Schmidt, Open Access. Freier Zugang zu wissenschaftlichen Publikationen - das Paradigma der Zukunft? In Konrad Umlauf and Hans-Christoph Hobohm (eds.), Erfolgreiches Management von Bibliotheken und Informationseinrichtungen, Verlag Dashöfer, 2005, pp. 1-22. In German but with this English abstract:
For several years now there has been a strong call for open access – that is, unrestricted free online access to research articles for everyone. By presenting a typology of open access, we discuss the realization of open access journals using various combinations of business models. Expectations are high, but as business models are still in flux, new challenges arise for libraries dealing with "institutional memberships" and stagnating serials budgets.
Bongani M. Mayosi, SAMJ - Africa's top open access medical journal, South African Medical Journal, November 2005. An editorial. Not even an abstract is free online, at least so far. (The journal is OA, so I think the access problem is due to the fact that vol. 95, no. 11 is not yet online.)
Update (12/26/05). The article is now online and OA. Excerpt:
A revolution is taking place in the world of scientific publishing. In the traditional model of publishing scientific articles, the author raised money to conduct the research project, then submitted the paper to a scientific journal for consideration for publication; if the manuscript survived the brutal peer review process, the author would be required to assign copyright to the publisher and pay ‘page charges’ for publication of the article. Finally, the author (as reader) had to pay a subscription fee to the publisher of the journal in order to have access to his or her published paper! Authors, members of the public and funders of research are understandably in revolt against this apparent exploitation of authors and readers by traditional publishers who extract substantial profits from the production of scientific knowledge through the efforts and investments of others. This unfavourable situation has led to the rise of the ‘open access’ movement in scientific publishing....The winds of change are also sweeping through the South African Medical Journal and its publisher, the Health and Medical Publishing Group. The first step, taken several years ago, was to drop the page charge costs to authors. More recently, the full text of articles published in the Journal has become available on MEDLINE immediately on publication, free of charge to all readers. The SAMJ’s modernisation into a ‘free to publish’ and ‘free access’ publication has already had a noticeable impact on the quality, quantity, and international reach of papers submitted for publication....The SAMJ’s impact factor has been rising continuously over the past five years....The SAMJ is ranked number 1 among peer-reviewed medical journals in Africa, number 2 among comparable journals from Australasia (Medical Journal of Australia – impact factor 2, and New Zealand Medical Journal – impact factor 0.554), and number 44 among the 103 journals grouped in the ISI Medicine, General and Internal list.
Dana Blankenhorn, The academy vs. open source, ZDNet, December 14, 2005. Excerpt:
I recently noted, on another site, that a recent study by the Centre for Information Behaviour and the Evaluation of Research (CIBER) in England found that 96.2% of researchers preferred the "closed source" process of peer review over the "open source" process of open access, when evaluating the worth of academic papers. Rather than throw things on the Web and let a consensus emerge, in other words, researchers prefer having a few known authorities inspect the work before it's published by a known press. The credibility of authority, both the reviewer and the journal, are seen as more valid than the credibility of consensus. But look inside that study again. Nearly half believed that open access (OA) publishing would undermine the current system, with 41% saying that would be a good thing.
(PS: I appreciate that Blankenhorn is defending OA, but he perpetuates a harmful misunderstanding of OA by giving the impression that OA rejects peer review or that it entails peer-review reform and favors informal methods like those at Wikipedia. First, OA is about removing access barriers to peer-reviewed research, not about bypassing peer review. Second, removing access barriers and reforming peer review are independent projects. OA is compatible with every kind of peer review, from the most new and innovative to the most conservative and traditional.)
Update. Blankenhorn continues his defense of OA in a December 14 posting to Moore's Lore, this time without touching on peer-review issues. Excerpt:
Academic journals cost very little to print or distribute. They are produced, in fact, by researchers who agree to be part of the peer-review process. They are a bottleneck through which knowledge must pass before the rest of us get a crack at it. Yet these same journals are owned by for-profit publishers, who keep raising their prices, forcing universities to pay for them, often with government money....When private companies are allowed to gain monopoly profits, often paid for by government funds, and act as a bottleneck to knowledge, something is clearly very wrong. With apologies to Bergstrom and McAfee there are, in fact, several things schools could do: They could create competitors to the privately-run journals. They could demand payment for their professors' work on those journals, as the authors suggest. They could create a new method, acceptable to them, for creating peer-review products that are published online. That unavailable Library Journal article is a Clue. If you want to restrict access to academic journals, either before or after they've edited and approved a paper, you can do that. The Internet provides a highly flexible system, with a highly flexible set of business models, for doing that. Just stop passing off this refusal to consider "open access" and continuation of the present system as some kind of stand on principle. It is an economic issue. You're coddling monopolists.
Heather Morrison, The elusive art of costing (institutional repositories), Imaginary Journal of Poetic Economics, December 14, 2005. Excerpt:
Talk in open access circles of late has centred around the true costs of setting up and maintaining an institutional repository. The only accurate answer to this question, in my opinion, is: it depends - on a number of factors. At the low end of the cost range is the completely free institutional repository. An individual can easily download free software, such as GNU EPrints, using computing and internet facilities already in place for other purposes at work. The amount of volunteer labor involved also depends on how the IR is set up. Are authors allowed to deposit their own works, or is there a central vetting process?...Even with no budget at all, we can easily get an institutional repository up and running with what we have. In fact, this might be easier and simpler for the smaller and poorer library. Decisions, for example, are easier when one has fewer options to contemplate. This is another example of the Delightful Irony of open access: that the poor can afford what the rich cannot (or claim that they cannot)....The highest single per-repository cost would come with a central system housing a variety of different types of information for a large university. This operation may well require a fair bit of hardware, connectivity, security and authentication arrangements, staff, and space to house the computers and staff - plus administrative overhead, of course....To sum up, when we look at the wide variety of costs reported for institutional repositories - from practically nothing to $6,000, to hundreds of thousands of dollars - and ask which of these cost estimates is correct, there are two correct answers: all of the above, and it depends - how much money do you have, and what are you willing to spend?...[F]or more on this perspective, see my SOAF posting on this topic.
Richard Poynder, Not written in the stars, Open and Shut, December 14, 2005. The most detailed and comprehensive article to date on the Royal Society's 11/24 position statement on OA and the controversy it has spawned. Excerpt:
Two weeks later, on 7th December, 42 disgruntled Fellows of the Royal Society — including James Watson, the scientist who co-discovered the structure of DNA, and Sir John Sulston, who headed the British end of the human genome project — responded by sending an open letter to the president of the Society, Lord Rees of Ludlow. Expressing disappointment that it had taken a "largely negative stance on open access", the letter urged the Society to support, rather than seek to delay, the RCUK policy. In its turn, the Fellows' letter elicited a reply from Lord Rees. "We certainly do not, as your letter implies," he wrote to the dissident Fellows "take a 'negative stance' to open access. We are simply concerned that open access is achieved without the risk of unintended damage to peer-review, quality control and long term accessibility of the scientific literature." Lord Rees went on to list a number of specific issues he had with open access, and concluded that before the proposed RCUK policy was introduced "[W]e believe that a study should be commissioned to assess the relative merits of the various models that have been proposed under the rather broad banner of 'open access'". OA advocates were quick to point out that since the RCUK was proposing self-archiving, not new publishing models, the Royal Society's stance was based on a misunderstanding. "[M]ost of the RS doubts focus on the viability of OA journals even though the RCUK proposal mandates deposit in OA archives, not submission to OA journals," commented a frustrated Peter Suber, on his blog Open Access News. "I can't count the number of times this misunderstanding has been corrected." On the American Scientist Open Access Forum (AmSci), meanwhile, OA advocate Stevan Harnad was reminding list members that physicists have in any case been posting their papers into arXiv.org for fourteen years without any negative impact on journals. 
For that reason, he said, any further studies would be redundant, and would unnecessarily delay open access. "If 14 years of evidence of peaceful co-existence between self-archiving and journal publishing is not evidence enough, what is?" he asked. Calls for more evidence, however, have become a mantra that no self-respecting supporter of the existing system can resist. Speaking to the BBC's John Sudworth, for instance, the president of the Institute of Physics (and former vice president of the Royal Society) Sir John Enderby said: "What the Royal Society has said — which seems to me to be blindingly obvious — [is] that before we abandon an economic model which has served us terribly well over the years we should make sure that any replacement is sustainable." Once again, Sir John was clearly focused on economic models, not self-archiving.... What was new in the discussion, however, was a greater vehemence. After asking the Royal Society for a comment on the Fellows' letter, for instance, I received a surprise e-mail from the Royal Society's senior manager of policy communication, Bob Ward. Apparently convinced that he was unmasking the real villain of the piece he wrote: "[Y]ou may be interested to learn that the open letter from Fellows of the Royal Society on open access appears to have been at least partly co-ordinated by BioMed Central, a commercial publisher of open access journals. 
Matthew Cockerill, the publisher of BioMed Central, registered the domain name of the web page at which the open letter was posted for signature."..."It is no secret that BioMed Central and others helped to co-ordinate the letter (for example by registering the domain name that was used)," responded [Grace] Baynes [of BMC], adding indignantly: "Given that many of the FRS's concerned are on our boards, or edit our journals, it was in no way inappropriate for us to do so."...By now OA advocates were also keen to turn the allegation around, pointing out that the Royal Society had far more to gain from sinking the RCUK policy than BMC had from supporting it. "The Royal Society has a financial interest in maintaining subscriptions," commented Suber on his blog. "I believe that its subscriptions are not threatened by the RCUK policy. But if it wants to argue that its fears are justified, then it has to start by admitting its financial interest, which is much stronger than BMC's." In his usual colourful way, Harnad speculated that the only people in the Royal Society who actually had a problem with open access were those working in its journal publishing division. "I'll bet this is not really the voice of the RS at all: It's just the pub-ops tail wagging the regal pooch."...[David] Prosser [of SPARC Europe] asked the DTI [via the UK Freedom of Information Act] how often the Parliamentary Under-Secretary of State for Science and Innovation for the UK, and head of the DTI, Lord Sainsbury of Turville had met with publishers and researchers in the past two years. This time the unmasking was far more interesting — for what Prosser learned is that Lord Sainsbury has a special place in his heart for Sir Crispin Davis, the CEO of the world's largest STM publisher Reed Elsevier. 
As Suber explained on his blog, the FOIA request shows that "Lord Sainsbury met with OA opponents roughly twice as often as with OA proponents, and met with the Reed Elsevier CEO three times more often than with any other stakeholder." The FOIA documents also show, adds Suber, that "DTI apparently undertook no analysis of its own on OA." Far from being level, it seems, the playing field is heavily tilted in favour of rich and powerful publishers.
Jim Giles, Internet encyclopaedias go head to head, Nature, December 14, 2005. (Thanks to Declan Butler.) Excerpt:
Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries....[A]n expert-led investigation carried out by Nature — the first to use peer review to compare Wikipedia and Britannica's coverage of science — suggests that such high-profile examples [of Wikipedia errors] are the exception rather than the rule. The exercise revealed numerous errors in both encyclopaedias, but among 42 entries tested, the difference in accuracy was not particularly great: the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three. Considering how Wikipedia articles are written, that result might seem surprising. A solar physicist could, for example, work on the entry on the Sun, but would have the same status as a contributor without an academic background. Disputes about content are usually resolved by discussion among users. But Jimmy Wales, co-founder of Wikipedia and president of the encyclopaedia's parent organization, the Wikimedia Foundation of St Petersburg, Florida, says the finding shows the potential of Wikipedia. "I'm pleased," he says. "Our goal is to get to Britannica quality, or better."...In the study, entries were chosen from the websites of Wikipedia and Encyclopaedia Britannica on a broad range of scientific disciplines and sent to a relevant expert for peer review. Each reviewer examined the entry on a single subject from the two encyclopaedias; they were not told which article came from which encyclopaedia. A total of 42 usable reviews were returned out of 50 sent out, and were then examined by Nature's news team. Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. 
But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively....[T]o improve Wikipedia, Wales is not so much interested in checking articles with experts as getting them to write the articles in the first place. As well as comparing the two encyclopaedias, Nature surveyed more than 1,000 Nature authors and found that although more than 70% had heard of Wikipedia and 17% of those consulted it on a weekly basis, less than 10% help to update it.
In its accompanying editorial Nature endorses Wikipedia and asks scientists to help it out:
So can Wikipedia move up a gear and match the quality of rival reference works? Imagine the result if it did: a comprehensive, accurate and up-to-date reference work that can be accessed free from Manhattan to rural Mongolia. To achieve this, Wikipedia's administrators will have to tackle everything from future funding problems — the site is maintained by public donations — to doubts about whether enough new contributors can be found to increase the quality of the mushrooming number of entries. That latter point is critical, and here scientists can make a difference. Judging by a survey of Nature authors, conducted in parallel with the accuracy investigation, only a small percentage of scientists currently contribute to Wikipedia. Yet when they do, they can make a significant difference. Wikipedia's non-expert contributors are, by and large, dedicated to getting things right on the site. But scientists can bring a critical eye to entries on subjects they study, often highlighting errors and misunderstandings that others have unintentionally introduced. They can also start entries on topics that other users may not want to tackle. It is no surprise, for example, that the entry on 'spin density wave' was originated by a physicist....Nature would like to encourage its readers to help. The idea is not to seek a replacement for established sources such as the Encyclopaedia Britannica, but to push forward the grand experiment that is Wikipedia, and to see how much it can improve. Select a topic close to your work and look it up on Wikipedia. If the entry contains errors or important omissions, dive in and help fix them. It need not take too long. And imagine the pay-off: you could be one of the people who helped turn an apparently stupid idea into a free, high-quality global resource.
PS: I made a similar point in SOAN for July 2005:
If you're an expert on a certain topic, then make sure that Wikipedia includes the fruits of your expertise....You may not have a high opinion of Wikipedia, but there are two reasons not to let that stop you. First, it can become a self-fulfilling prophecy. If experts add or enhance articles to reflect their expertise, then Wikipedia will deserve respect to that extent. Second, Wikipedia is an increasingly common first stop, and probably last stop, for non-academic users looking for information. If you want to be visible to non-academic users, then it's an eyeball destination that you can easily join....Don't give up your standards, but don't judge this resource from mere presumptions without firsthand knowledge.
Update. Wikipedia has a page collecting the independent reviews of its accuracy, and the page now includes the Nature study. Nice touch: the page reports that all the errors noted in the Nature study have been tagged and will soon be corrected. Can Encyclopaedia Britannica do that?
M.L. Baker, New Brain Trust to Work Like the Web, CIO Insight, December 12, 2005. Excerpt:
Researchers poring over brain scans may soon have an easier time integrating that data with information about the genes and proteins that make brain cells tick. A software vendor and a nonprofit group are teaming up to create NeuroCommons.org, a free, shared repository of data and other tools to speed research on brain function and disease. Informatics company Teranode will provide an infrastructure and means to store disparate data in common formats. Science Commons, a project of the nonprofit corporation Creative Commons, will develop a community of users and experts, plus work to help create an intuitive interface to find and analyze content....There's a real need for a shared platform in neurology, said John Wilbanks, executive director of Science Commons. Separate research foundations exist to fund different rare diseases, but they cannot share information without running afoul of technical and legal complications. One hope is that researchers can gather preliminary evidence for their hypotheses using other researchers' datasets. NeuroCommons.org should also allow researchers to readily compare proposed mechanisms about what, how, and when various genes and proteins interact. Neurologists would use an interface much like a Web search engine, but instead of finding relevant Web sites, they would be able to find other researchers' datasets and protocols, as well as working models of how genes, proteins and brain regions interact. Even better, NeuroCommons.org could automate such tasks and analyze the results. Researchers would not need to spend days doing literature searches or hunting with several available databases for useful data, said Matthew Shanahan, CMO for Teranode. That's especially important as the number of proteins and genes associated with diseases swells. 
"The thought that a scientist can do that manually efficiently doesn't make sense; you really need the aid of software now."...Neurocommons.org is set up to be maintained by its community of users. Researchers will be able to annotate each others' data. Wilbanks hopes that, eventually, researchers will see contributing information to the semantic Web as part of their scientific duty, much like peer review. But he admits that it isn't yet part of scientific culture. "It's hard to get someone to take the time to say, 'I'm going to make my data reusable by someone that doesn't know me.' "
Also see Paul Krill, Semantic Web eyed for life sciences data, InfoWorld, December 9, 2005. Excerpt:
The Semantic Web involves a concept in which data from multiple sources and ontologies can be integrated into a single information space. Experiment design automation (XDA) software vendor Teranode, which focuses on software for life sciences, plans to collaborate with Science Commons to build a neurology repository for the Semantic Web. Called Neurocommons.org, the project will provide a free repository of neurology-related data, tools and pathway knowledge for use by public and private researchers. Science Commons is an effort launched to promote the free flow of scientific information. Teranode believes life sciences represents an ideal test case for the Semantic Web because life sciences data comes from a variety of sources, including brain images, robot-arrayed gene chips, machines sorting materials cell-by-cell and gene sequencers. Science Commons will use the Teranode XDA infrastructure for Neurocommons.org. All content will be available in the Resource Description Framework (RDF) format, allowing participating foundations to use a shared repository of research.
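The "single information space" idea behind RDF can be sketched concretely: every fact is a subject-predicate-object triple named by shared identifiers, so datasets from unrelated labs merge by simple union. The sketch below uses plain Python tuples rather than a real RDF toolkit, and all the gene, protein, and region names are hypothetical illustrations, not NeuroCommons data.

```python
# Triples extracted from a hypothetical brain-imaging dataset
imaging_triples = {
    ("gene:BDNF", "expressed_in", "region:hippocampus"),
    ("region:hippocampus", "part_of", "organ:brain"),
}

# Triples from a separate hypothetical protein-interaction dataset
protein_triples = {
    ("gene:BDNF", "encodes", "protein:BDNF"),
    ("protein:BDNF", "binds", "protein:NTRK2"),
}

# Integration is just set union; shared identifiers knit sources together.
graph = imaging_triples | protein_triples

def query(graph, subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None = wildcard)."""
    return sorted(
        (s, p, o) for (s, p, o) in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    )

# Everything now known about BDNF, across both sources:
for triple in query(graph, subject="gene:BDNF"):
    print(triple)
```

The point of the design is that neither dataset needs to know the other exists; as soon as both use the same identifier for BDNF, a cross-source query works.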
Yaffa Aharoni, Ariel J. Frank, and Snunith Shoham, Finding information on the free World Wide Web: A specialty meta–search engine for the academic community, First Monday, December 2005.
Abstract: The Web is continuing to grow rapidly and search engine technologies are evolving fast. Despite these developments, some problems still remain, mainly difficulties in finding relevant, dependable information. This problem is exacerbated in the case of the academic community, which requires reliable scientific materials in various specialized research areas. We propose that a solution for the academic community might be a meta–search engine which would allow search queries to be sent to several specialty search engines that are most relevant for the information needs of the academic community. The basic premise is that since the material indexed in the repositories of specialty search engines is usually controlled, it is more reliable and of better quality. A database selection algorithm for a specialty meta–search engine was developed, taking into consideration search patterns of the academic community, features of specialty search engines and the dynamic nature of the Web. This algorithm was implemented in a prototype of a specialty meta–search engine for the medical community called AcadeME. AcadeME’s performance was compared to that of a general search engine — represented by Google, a highly regarded and widely used search engine — and to that of a single specialty search engine — represented by the medical Queryserver. From the comparison to Google it was found that AcadeME contributed to the quality of the results from the point of view of the academic user. From the comparison to the medical Queryserver it was found that AcadeME contributed to relevancy and to the variety of the results as well.
(PS: Interesting approach. Unfortunately, it works better with subject-based repositories than with institutional repositories. If it could identify the subject-based sets or communities within IRs, that would not only improve its performance but make it compatible with the spread of IRs.)
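For readers curious how the fan-out-and-merge core of a meta-search engine like AcadeME might look, here is a toy sketch. The stub "engines" and their relevance scores are invented stand-ins, and the naive score-sorted merge does not attempt the paper's actual database selection algorithm, which weights engines by topical fit.

```python
def pubmed_stub(query):
    # Hypothetical stand-in for a medical specialty search engine;
    # a real implementation would issue an HTTP query and parse results.
    return [("pubmed", "Article on " + query, 0.9),
            ("pubmed", "Review of " + query, 0.7)]

def queryserver_stub(query):
    # Hypothetical stand-in for a second specialty engine.
    return [("queryserver", "Clinical notes on " + query, 0.8)]

ENGINES = [pubmed_stub, queryserver_stub]

def metasearch(query, engines=ENGINES, limit=10):
    """Fan the query out to every selected engine, then merge by score."""
    results = []
    for engine in engines:
        results.extend(engine(query))
    # Interleave by each engine's own relevance score (a naive merge).
    results.sort(key=lambda r: r[2], reverse=True)
    return results[:limit]

for source, title, score in metasearch("aspirin"):
    print(f"{score:.1f}  [{source}] {title}")
```

The interesting engineering in the real system lives in the step this sketch skips: deciding, per query, which specialty engines to send it to at all.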
CERN has launched an OA task force to coordinate actions by a group of physics publishers, laboratories, learned societies, funding agencies, and individual researchers. From today's press release:
A landmark decision has been reached on the future direction of scientific publishing. At a meeting hosted by CERN on 7-8 December, representatives of several major physics publishers, European particle physics laboratories, learned societies, funding agencies and authors from Europe and the US came together for the first time to promote open access publishing. Among the results of the meeting was the formation of a task force mandated to bring about action by 2007....Eighty participants attended the meeting, which follows CERN's signature of the Berlin Declaration in May 2004, and takes advantage of the particle physics community's heightened awareness of open access. The creation of the open access task force comes at a crucial time for the particle physics community. In 2007, CERN will launch the field's new flagship facility, the Large Hadron Collider [LHC], and wishes to make the results as widely available as possible. Commenting on the meeting, CERN's Director General Robert Aymar said: "The next phase of LHC experiments at CERN can be a catalyst for a rapid change in the particle physics communication system. CERN's articles are already freely available through its own web site but this is only a partial solution. We wish for the publishing and archiving systems to converge for a more efficient solution which will benefit the global particle physics community."
Richard Wray, Wellcome boost for open access, The Guardian, December 15, 2005. Excerpt:
Three major publishers of scientific research, including Oxford University Press, will today announce a deal with The Wellcome Trust, the world's second largest charitable funder of medical research after Bill Gates, that will see thousands of research papers available free to everyone over the internet....The Wellcome Trust has emerged as a major proponent of open access and mandates its researchers to place a copy of their finished articles on the web for everyone to see. Today the Wellcome Trust will announce that three publishers - Blackwell, Oxford University Press and Germany's Springer - have all agreed to change the licences their authors must sign so that research funded by Wellcome but published in their journals can be made freely available online as soon as it is published. The Wellcome Trust is among a number of medical research funders backing a multi-million pound digital research facility, modelled on the US-based PubMed Central, where these articles would be stored. News of the deal will provide support to Research Councils UK (RCUK) - which brings together Britain's eight public research funders. Earlier this year RCUK proposed mandating its researchers to get involved in open access but some traditional publishers attacked the move as putting scientific debate in jeopardy.
John Russell is using Google Base to host an OA bibliography of Georgia labor history. From his description:
When Google Base was released, it seemed like a good opportunity to create some sort of open-access database. I was working on a bibliography of Georgia labor history at the time (and still am), so why not put the citations online? I logged in and looked at the interface and it seemed like it would be a reasonably easy proposition. I created new item types for the bibliography....For each item record, I added basic metadata - what you would expect to find in any citation - and these are easily added under the category of “Details.” For the “Keyword” section, I created a controlled vocabulary: each item in the bibliography would have the keyword “georgia labor history” plus a number of pre-established (by me) terms: any geographic locations would be included (in the format of either “city/town (Georgia)” or “county name (Georgia)”), plus race, gender, slavery, unions, textiles, agriculture, lumber, strikes, company name....Lastly, I added what I think is a useful feature - if there is an OpenWorldCat record for a book, I’ve supplied that link (meaning that if you click on the title, you are taken to the OpenWorldCat record rather than the record that I created in Google Base); similarly, I supplied links to finding aids for the primary sources included (clicking on the title takes you to the online finding aid). The articles also include links, if available, to the full-text of the article (done by using database-supplied persistent URLs and/or creating an OpenURL via SFX) - I didn’t make the article title link directly to article full-text and only would if the article were freely available online....I also like the ability to provide a link to the search, so that I could create a web page with links to citations for Georgia labor history, or offer links to the refined searches (such as strikes or race). 
In general, I think Google Base provides a great opportunity for librarians and other groups to create an open access database....Remember, though, to keep a back-up record of all the citations you add as Google Base could be gone in a year!
(PS: Apart from courseware, this is the first academic use I've seen of Google Base. I don't plan to monitor all the uses of GB, but I'd still like to follow its uses for OA scholarship.)
OneWorld South Asia has launched an open-access, OAI-compliant repository. (Thanks to Narendra Deo.) From the site:
This initiative archives research work, working papers, articles and presentations on aspects of information and communication technologies for development. We encourage all authors to publish their respective work with due credit of authorship for the common benefit....OneWorld South Asia (OWSA), a New Delhi-based non-profit organisation with a network of 500 organisations, is dedicated to voicing the voiceless and exploring the role of ICTs in achieving the Millennium Development Goals (MDGs). In addition, OWSA undertakes several initiatives to build and strengthen Communities of Practice (CoPs) around the MDGs to facilitate knowledge and information sharing in the region.
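"OAI-compliant" means the repository answers the HTTP verbs of the OAI-PMH harvesting protocol, so any harvester can pull its Dublin Core records. A minimal sketch of the two halves of a harvest, building a request URL and parsing a record, using a placeholder base URL (not OWSA's actual endpoint) and an invented sample record:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL."""
    return base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})

print(list_records_url("http://repository.example.org/oai"))

# A fragment of the kind of Dublin Core record a repository returns
# (contents invented for illustration):
sample = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>ICTs and the Millennium Development Goals</dc:title>
  <dc:creator>Example Author</dc:creator>
</record>"""

root = ET.fromstring(sample)
ns = {"dc": "http://purl.org/dc/elements/1.1/"}
print(root.find("dc:title", ns).text)
```

Because every compliant repository answers the same verbs in the same XML, one harvester written once can aggregate OWSA's archive alongside any other OAI repository.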
Journals feel the change, RTDinfo: Magazine on European Research, November 2005. Unsigned. Excerpt:
The arrival of the internet has thrown into disarray all the operating principles behind scientific publishing that have traditionally governed the quality and integrity of knowledge as well as access to it. For almost five years now, a debate has been raging, involving not only the publishers but all those with a stake in scientific communication who are feeling the impact of the digital revolution....[The subscription-based system of distribution] is now coming under pressure from two quite different phenomena. On the one hand, the increase in publishing activity and in its cost, due to the exponential increase in knowledge, is leading to a saturation of the purchasing power of the public science libraries of universities and leading research centres. On the other hand, the digital revolution and the internet are bringing competition between printed and virtual media, while on-line access is radically changing the traditional management of scientific knowledge. Furthermore, for just under a decade now, very active groups of researchers have been campaigning not only for open access but also for free access. This combination of economic strangulation and digital revolution ultimately has implications for the public authorities and indeed for society as a whole. After all, is it not society that provides the ‘financial fuel’ necessary for science? And is it not therefore entitled to participate in the debate on this new world of access to knowledge?
Declan Butler, Getting GIS data into Google Earth, Declan Butler, Reporter, December 13, 2005. Excerpt:
Google Earth has set new standards for visualizing geographical information systems (GIS) data. Great for viewing the world’s sightseeing spots, your house, or the nearest hotels and restaurants near your business or holiday destination. But that’s a bit limited. The full extent of rich scientific, and other, GIS datasets often cannot yet be easily converted for viewing in Google Earth, because of differences in formats. Speak to anyone at various geographical or scientific databases these days and you often hear the same question: “How can we get our data into Google Earth?” New computing tools are now emerging, however, that are changing this situation....Meanwhile ESRI itself is scheduled to release in the first quarter of 2006 ArcGIS Explorer, a free visualization tool that observers are billing as a Google Earth killer; the screenshots, and comments from developers who have given it a tour, suggest it does all that Google Earth does, but much, much more — a Google Earth on steroids.
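The format gap Butler describes comes down to translating GIS records into KML, the XML dialect Google Earth reads. A minimal sketch, assuming simple point data only (the namespace below is the 2005-era KML 2.0 one; real GIS layers with projections, polygons, and attribute tables need far more than this, which is why dedicated conversion tools exist):

```python
def points_to_kml(points):
    """points: iterable of (name, longitude, latitude) tuples.

    Note KML writes coordinates as longitude,latitude - the reverse
    of the lat/lon order many GIS sources use.
    """
    placemarks = "\n".join(
        "  <Placemark>\n"
        f"    <name>{name}</name>\n"
        f"    <Point><coordinates>{lon},{lat}</coordinates></Point>\n"
        "  </Placemark>"
        for name, lon, lat in points
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://earth.google.com/kml/2.0">\n'
            '<Document>\n' + placemarks + '\n</Document>\n</kml>')

# Example: one point of interest, saved as a .kml file Google Earth can open.
print(points_to_kml([("CERN", 6.05, 46.23)]))
```

The sketch also shows why conversion trips people up in practice: a single swapped coordinate pair, or a dataset in a projected rather than geographic coordinate system, silently puts every placemark in the wrong spot.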
The University of North Texas (UNT) has launched an OA collection of Congressional Research Service (CRS) reports. (Thanks to Free Government Information.) From the web site:
The Congressional Research Service (CRS) does not provide direct public access to its reports, requiring citizens to request them from their Member of Congress. Some Members, as well as several non-profit groups, have posted the reports on their Web sites. This site aims to provide integrated, searchable access to many of the full-text CRS reports that have been available at a variety of different Web sites since 1990.
(PS: CRS reports are entirely funded by taxpayers and highly regarded for their thoroughness and quality. They ought to be OA. Several other sites already do what UNT is doing and host OA copies of leaked or released CRS reports. See the FAS collection, the Franklin Pierce Law Center collection, and Josh Ruihley's OpenCRS.)
On November 14, Lund University became the first university in Sweden with an OA policy. The English translation was just approved for distribution. Here's the policy in its entirety:
The Internet has radically changed the practical and economical possibilities for the dissemination of research results. It is vital for Lund University to utilize these new opportunities to increase the visibility of research production and thereby maximize access for other researchers, industry, and the general public. This will also increase the visibility and impact of Lund University researchers. Open access to publications leads to more usage and greater impact for the researchers. Many universities are working towards this goal with the support of The Association of Swedish Higher Education (SUHF). For Lund University, this implies the establishment of an infrastructure to advance and facilitate open access to publications by Lund University researchers. In the U21 group, a cooperative effort in this direction has already begun. This development means that an increasing number of accepted doctoral theses will be freely available on the Internet, unless prevented by copyright agreements. Finally, the goal is to increase the number of quality controlled articles in scientific journals, and in conference proceedings, deposited (parallel-published) in the University repository, together with other publications in the form of report series and working papers. Even though the individual researcher owns the intellectual property rights to his/her material, it is important that the Board of Lund University supports this development.
(PS: Kudos to all at Lund who had a hand in drafting and adopting this policy.)
Greg Tananbaum, I Hear the Train A Comin', Against the Grain, November 2005. An interview with Heather Joseph, Executive Director of SPARC. Excerpt:
While the issue of journal pricing was without a doubt the wake-up call that brought many of us to the table, it has been critically important for us to try to understand the broader context in which these pricing trends are occurring, as well as the wider consequences. By acknowledging that we’ve got adjustments to make on a system-wide level, I think that the stakes have become higher, but I do think that the potential benefits are well worth the risks. It’s been particularly notable to me to see the discussion of access to scholarly scientific research results gain such traction as a public policy issue, not only in the U.S., but worldwide (with the PubMed Central and Research Councils UK initiatives, among others)....The way that researchers interact with their data has always driven scholarly communication. The question of how to enable the widest possible use and, especially, re-use of data is, to me, one of the most exciting issues currently on the table....[T]he Open Access campaign has been arguably the most visible of SPARC’s education campaigns, and is certainly our most active current drive. This campaign is specifically designed to promote the awareness and adoption of Open Access models, and does this through a variety of different avenues. SPARC created a rich Web-based resource articulating the potential benefits of Open Access, and followed that up with a widely-distributed brochure targeted towards educating faculty — to date, nearly 20,000 of these brochures have been distributed (by request) to various universities. We’ve produced and published a two-part business planning guide for running Open Access journals, and coordinated and run workshops and forums on this topic. Additionally, SPARC became the publisher of the very popular Free Online Scholarship Newsletter, created and [written] by Peter Suber [now called the SPARC Open Access Newsletter]. 
It’s an incredibly vibrant program....The “Create Change” campaign, which was driven in close collaboration with the Association of Research Libraries, has had an impressive reach. Over the past 4 years, Julia Blixrud, SPARC’s Assistant Director for Public Programs, has given invited presentations related to this campaign on dozens of university campuses, not only in the U.S., but worldwide. We’ve also seen more than 50,000 supporting brochures requested by campuses for distribution to faculty members....While advocacy has always been one of SPARC’s major strategic activities, this program area really gained national and international attention with SPARC’s focus on Open Access....SPARC has been outspoken on policies that pertain to public access to federally funded research results, in particular on the recent NIH Public Access Policy. The focus on public access to federally funded research led SPARC to spearhead the formation of an unprecedented alliance of leading library groups, public interest organizations and patients’ advocacy groups, the Alliance for Taxpayer Access. This group quickly coalesced into a growing voice in the Open Access movement, calling for greater access to taxpayer-funded research to help drive the return on investment of public funds....I often hear the concern that small, society publishers who have traditionally been “good citizen” players in the scholarly communications arena are among those at greatest risk should funding agencies mandate a move to Open Access. As someone who has spent the majority of my career working to support scholarly societies, I am not unsympathetic to that concern.
I believe that SPARC is uniquely positioned to leverage its education and outreach programs to focus on identifying and implementing market-based initiatives that can help create the kind of market conditions in which scholarly society (and other non-profit publishing organizations) can continue to play a vital role....[W]henever I’m part of a discussion about the economics of the movement, the first thing I usually hear is “there is no proven Open Access business model,” and the second thing I usually hear is a claim that any Open Access model is likely to cause economic harm to some subset of the scholarly communications community. While I agree that much more work needs to be done to create viable, market tested models, I think these kinds of statements only look at half of the issue — the potential costs of Open Access. I would like to see us focus our energies on the other side of that equation — the potential benefits of an Open Access model. I think it will be important for us to find a way to examine, and to try and quantify what the potential return on investment is that we, as a society, can realize by making the results of scholarly and scientific research openly accessible. I think that generating some data on this side of the equation would be a very enlightening and important exercise....I think institutional repositories are potentially rich breeding grounds for new kinds of scholarly communication activities. A trick will be for the community to throw out conventional thinking when considering how to populate them.
Ana Radelat has a six-sentence note on the CURES Act in the December 11 Hattiesburg American and a two-sentence note in the Clarion Ledger, two Mississippi papers. The CURES Act is co-sponsored by Senator Thad Cochran of Mississippi. What's notable is that this is all the news coverage I've seen since the bill was announced on December 7.
It appears that the most extensive coverage of the bill to date is here in Open Access News. I'm no longer surprised when blogs outperform the MSM. But I'd like this bill to pass and for that it needs wider press. Isn't it newsworthy that CURES would spend $5 billion, create a new federal agency, devote itself to curing major diseases, and mandate open access to federally-funded medical research? (Note to U.S. Senators: The next time you introduce a significant bill, mention Brad Pitt, Angelina Jolie, or torture somewhere in the title.)
This year is the 10th anniversary of the launch of Project Muse. I wasn't going to blog the news, since PM isn't OA. But then I read this in Greg Rienzi's story for the Johns Hopkins University Gazette (December 12, 2005):
In just three years, Project Muse was able to break even....Project Muse's current annual revenue is approximately $9 million, with the lion's share of the profits returned to the publishers who own the participating journals. Keane said that Muse has also produced a modest surplus every year since 1999. "As a result, Project Muse has been generating its own working capital. This would be an excellent financial result for any business started in 1995, and it's a rare achievement in the world of academic library and university press ventures," she said.
Here's my question: Why not use the surpluses to amortize the costs of digitization and overhead so that, over time, PM can make more and more of its content OA?
John Battelle, Alexa (Make that Amazon) Looks to Change the Game, Searchblog, December 12, 2005. Excerpt:
Every so often an idea comes along that has the potential to change the game. When it does, you find yourself saying - "Sheesh, of course that was going to happen. Why didn't I predict it?" Well, I didn't predict this happening, but here it is, happening anyway. In short, Alexa, an Amazon-owned search company started by Bruce Gilliat and Brewster Kahle (and the spider that fuels the Internet Archive), is going to offer its index up to anyone who wants it. Alexa has about 5 billion documents in its index - about 100 terabytes of data. It's best known for its toolbar-based traffic and site stats, which are much debated and, regardless, much used across the web. OK, step back, and think about that. Anyone can use Alexa's index, to build anything. But wait, there's more. Much more. Anyone can also use Alexa's servers and processing power to mine its index to discover things - perhaps, to outsource the crawl needed to create a vertical search engine, for example. Or maybe to build new kinds of search engines entirely, or ...well, whatever creative folks can dream up. And then, anyone can run that new service on Alexa's (er...Amazon's) platform, should they wish....And there's no licensing fees. Just "consumption fees" which, at my first glance, seem pretty reasonable. ("Consumption" meaning consuming processor cycles, or storage, or bandwidth). The fees? One dollar per CPU hour consumed. $1 per gig of storage used. $1 per 50 gigs of data processed. $1 per gig of data uploaded (if you are putting your new service up on their platform). In other words, Alexa and Amazon are turning the index inside out, and offering it as a web service that anyone can mashup to their heart's content. Entrepreneurs can use Alexa's crawl, Alexa's processors, Alexa's server farm....the whole nine yards.
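The quoted fee schedule is simple enough to check with a little arithmetic. A minimal sketch, assuming only the four rates quoted above; the workload figures are hypothetical, chosen just to show how the consumption fees combine:

```python
# Illustrative cost estimate under the fee schedule quoted above:
# $1 per CPU-hour, $1 per GB stored, $1 per 50 GB processed, $1 per GB uploaded.
# The workload numbers below are made up for the example.

def consumption_fees(cpu_hours, storage_gb, processed_gb, uploaded_gb):
    """Total fees in dollars under the quoted per-unit rates."""
    return (cpu_hours * 1.0          # $1 per CPU-hour consumed
            + storage_gb * 1.0       # $1 per GB of storage used
            + processed_gb / 50.0    # $1 per 50 GB of data processed
            + uploaded_gb * 1.0)     # $1 per GB uploaded to the platform

# A hypothetical small vertical-search crawl: 100 CPU-hours,
# 10 GB stored, 500 GB of index data processed, 2 GB uploaded.
print(consumption_fees(100, 10, 500, 2))  # 100 + 10 + 10 + 2 = 122.0
```

Even a fairly heavy experiment, in other words, runs to low hundreds of dollars under these rates, which is what makes the offer look attractive to entrepreneurs.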
Jeffrey Trachtenberg and Kevin Delaney, HarperCollins Plans to Control Its Digital Books, Wall Street Journal, December 12, 2005. (Thanks to Barbara Fister.) Excerpt:
In the latest salvo in the fight over the future of books on the Internet, one of the country's biggest publishers said it intends to produce digital copies of its books and then make them available to search services offered by such companies as Google Inc., Yahoo Inc., Microsoft Corp. and Amazon.com, while maintaining physical possession of the digital files....HarperCollins Publishers Inc. hopes to head off the prospect of these big Internet companies taking charge of books that it has purchased, edited and published. Its move to digitize its active backlist of an estimated 20,000 titles and as many as 3,500 new books each year comes at a moment when technology companies and the publishing industry are wrestling over rights and economic models for books online. HarperCollins's effort to make search companies use its digital copies is an aggressive response to anxieties felt by publishers worried that they will lose control over their intellectual property. Along with a recent initiative by Bertelsmann AG's Random House, the initiative signals a growing desire by publishers to control and participate in some of the new online uses of their books. "Now is the time to build a digital infrastructure that will allow us to protect our rights and the rights of our authors," said Jane Friedman, chief executive of News Corp.'s HarperCollins Publishers. "We will make all of our books available digitally, but we will store the digital copies and license them out to those who want to use them." "We didn't like being seen as Luddites," she added. "We see what's going on, and we get it. We want to be the best collaborator, but we also want to take charge of our future."..."The difference is that the digital files will be on our servers," said Brian Murray, group president of HarperCollins Publishers. "The search companies will be allowed to come, crawl our Web site, and create an index that they can take away, but not the image of the page."
This would prevent such Internet companies from selling a digital copy of that book unless HarperCollins decided to partner with them as a retailer....Publishers were willing to give Amazon thousands of titles to digitize for Amazon's search-inside-the-book feature. Ms. Friedman says this helped boost her backlist sales by 6% to 8% annually. However, Ms. Friedman says she was caught off-guard by the Amazon Upgrade program, which hasn't yet gone into effect. "This raised issues about how to monetize the digital files," Ms. Friedman said. "Is ownership physical possession, or is ownership defined by intellectual property? Amazon had our digital copies, and they had the customers. But as publishers we want to set the price and terms for our products."
Comment. This isn't about OA, but I blog it because HarperCollins is showing that it accepts the evidence that free online full-text searching sells books. It's happy to let Google and Amazon do the indexing, provided that they have no way to sell the texts themselves. If HarperCollins can persuade other book publishers that they will gain from online indexing, then we're likely to see more book scanning and less resistance. Who conducts the scans and owns the files is a detail.
Michael Roy, Open Access to Scholarship: An Interview with Ray English, Academic Commons, December 11, 2005. Ray is the Library Director at Oberlin College. Excerpt:
There are now 61 signatures from Fellows of the Royal Society (FRS) on the open letter endorsing open access and the draft RCUK OA policy. The letter had only 42 signatures when it was first released on December 7.
If you are a FRS, you can sign the letter by sending an email to email@example.com.
David Prosser of SPARC Europe used the UK Freedom of Information Act to discover how often Lord David Sainsbury met with publishers and researchers in the past two years. Sainsbury is the Parliamentary Under-Secretary of State for Science and Innovation for the UK, a minister in the Department of Trade and Industry (DTI), and a key player in setting UK science and OA policy. Here's the answer:
Comment. Here are a few annotations. Sir Crispin Davis is the CEO of Reed Elsevier. Harold Varmus is one of the co-founders of the Public Library of Science (PLoS). The ALPSP is the Association of Learned and Professional Society Publishers. Ian Diamond is the Chief Executive of the Research Councils UK (RCUK). The BMA is the British Medical Association. For the timeline above, note that the House of Commons Select Committee issued its strong OA recommendations in July 2004; the DTI-coordinated government response rejected them in November 2004; and the RCUK released its draft OA policy for public comment in June 2005. Bottom line: Lord Sainsbury met with OA opponents roughly twice as often as with OA proponents, and met with the Reed Elsevier CEO three times more often than with any other stakeholder. DTI apparently undertook no analysis of its own on OA.
Berlin4, the fourth annual International Conference on Open Access within the tradition of the Berlin Declaration, was originally scheduled for October 4-7, 2005, but had to be postponed. It has now been rescheduled for March 29-31, 2006, in Golm, Germany. There will be a day of pre-conference workshops on March 28.
Update. When I first posted this note I mistakenly typed 2005 instead of 2006 for the new date. Sorry for the confusion.
J. Ding and four co-authors, PubMed Assistant: a biologist-friendly interface for enhanced PubMed search, Bioinformatics, December 6, 2005. Only this abstract is free online for non-subscribers, at least so far:
MEDLINE is one of the most important bibliographical information sources for biologists and medical workers. Its PubMed interface supports Boolean queries, which are potentially expressive and exact. However, PubMed is also designed to support simplicity of use at the expense of query expressiveness and exactness. Many PubMed users have never tried explicit Boolean queries. We developed a Java program, PubMed Assistant, to make literature access easier in several ways. PubMed Assistant provides an interface that efficiently displays information about the citations, and includes useful functions such as keyword highlighting, export to citation managers, clickable links to Google Scholar, and others which are lacking in PubMed. AVAILABILITY: PubMed Assistant and a detailed online manual are freely available [here] under a GPL (GNU General Public License).
Michael Geist, Make Internet an election issue, Toronto Star, December 12, 2005. Excerpt:
As local politicians go door-to-door in search of votes and the national party leaders prepare for this week's debates, the election campaign has thus far centred on each party's attempt to articulate a unique vision for the future of Canada. With this in mind, Canadians should jump at this rare opportunity to turn the leaders' attention to law and technology issues....In this election, two issues come immediately to mind — access and privacy....[The access] issue should also touch on access to knowledge initiatives. The Internet has the potential to tear down barriers to knowledge by embracing open-access research funding that would bring federally-funded research into the hands of millions of Canadians, committing to the creation of a national digital library that could emerge as a critical cultural export, and promoting online access to knowledge in Canadian schools without unnecessary new licensing schemes. The Liberals provided some support for open access funding, but were non-committal on other access issues; opposition parties should take a stand.
Today Science Commons officially launched NeuroCommons. From the announcement:
The NeuroCommons is a proving ground for the ideas behind Science Commons: open legal contracts and Open Access literature, advanced use of open-standards Semantic Web technology, and the construction of an open community involving all the stakeholders in scientific funding, research, and publishing....It represents an integrated testbed for Science Commons' work in Publishing and Licensing as well as the first investment in our efforts to create a Science Commons by a private company. NeuroCommons will:
- Use freely available literature and databases to make scientific knowledge, descriptions of biological materials, and data sets easier to use and find - we will connect a graph of neurological information and publish it in Semantic Web standard formats.
- Provide a web-based infrastructure for search, community-driven additions, and annotations.
- Lower the legal and technical barriers to finding and sharing knowledge and tools in the neurosciences.
NeuroCommons is funded by Teranode.
Update. Also see Teranode's press release (December 12, 2005).
Francesca Di Donato, Designing a Semantic Web Path to e-Science, in Giovanni Tummarello and Paolo Bouquet (eds.), Proceedings SWAP2005 - Semantic Web Applications and Perspectives, Trento (Italy), 2005. Self-archived December 11, 2005.
Abstract: This paper aims at designing a possible path of convergence between the Open Access and the Semantic Web communities. In section 1, it focuses on the problems that the current Web has to face to become a fully effective research means, with particular regard to the question of selection according to subjective quality criteria. Section 2 presents the main principles and standards behind the Open Access movement, and tries to demonstrate that the Open Access community is fertile ground in which to experiment with Semantic Web technologies. Finally, section 3 sketches a number of practical strategies and suggests the combination of existing tools for e-Science, in order to create a real Semantic Web of scientific knowledge.
Soutik Biswas, India hits back in 'bio-piracy' battle, BBC News, December 7, 2005. (Thanks to Subbiah Arunachalam.) Excerpt:
In a quiet government office in the Indian capital, Delhi, some 100 doctors are hunched over computers poring over ancient medical texts and keying in information. These doctors are practitioners of ayurveda, unani and siddha, ancient Indian medical systems that date back thousands of years. One of them is Jaya Saklani Kala, a young ayurveda doctor, who is wading through a dog-eared 500-year-old text book for information on a medicine derived from the mango fruit. "Soon the world will know the medicine, and the fact that it originated from India," she says. With help from software engineers and patent examiners, Ms Kala and her colleagues are putting together a 30-million-page electronic [open-access] encyclopaedia of India's traditional medical knowledge, the first of its kind in the world. The ambitious $2m project, christened Traditional Knowledge Digital Library, will roll out an encyclopaedia of the country's traditional medicine in five languages - English, French, German, Japanese and Spanish - in an effort to stop people from claiming them as their own and patenting them. The electronic encyclopaedia, which will be made available next year, will contain information on the traditional medicines, including exhaustive references, photographs of the plants and scans from the original texts. Indian scientists say the country has been a victim of what they describe as "bio-piracy" for a long time. "When we put out this encyclopaedia in the public domain, no one will be able to claim that these medicines or therapies are their inventions. Till now, we have not done the needful to protect our traditional wealth," says Ajay Dua, a senior bureaucrat in the federal commerce ministry....The sheer wealth of material that has to be read through for information is enormous - there are some 54 authoritative 'text books' on ayurveda alone, some thousands of years old. 
Then there are nearly 150,000 recorded ayurvedic, unani and siddha medicines; and some 1,500 asanas (physical exercises and postures) in yoga, which originated in India more than 5,000 years ago. Under normal circumstances, a patent application should always be rejected if there is prior existing knowledge about the product. But in most of the developed nations like United States, "prior existing knowledge" is only recognised if it is published in a journal or is available on a database - not if it has been passed down through generations of oral and folk traditions.
In January 2006, ParisTech will launch a large portal of open courseware. (Thanks to Francis Muguet.) From the press release (November 18, 2005):
The eleven ParisTech engineering institutions launched an ambitious project in November 2003, aiming at making available some of their educational resources (lecture notes, exercises, yearly archives, simulations, animations, course notes and videos). One target of this project is to promote the high-quality teaching provided by those institutions, in order to attract foreign students. Another goal of the project is to help bridge the digital divide by making available Open Access Educational Resources, in accordance with the recommendations of the World Summit on the Information Society (WSIS). This initiative appears in the WSIS stocktaking database.
Jia Hepeng, China launches campaign to boost local journals, SciDev.Net, December 7, 2005. Excerpt:
The Chinese government has launched a campaign to encourage Chinese researchers to publish their results in domestic — rather than international — journals, and to place their results in free archives. "We will gradually make scientists publish research that is funded by the government agencies in leading domestic journals," said Wu Bo'er, director of the Department of Facilities and Financial Support of the Ministry of Science and Technology, at a meeting last week (30 November). At the heart of the campaign is a new fund that will provide financial support to between 300 and 500 of the country's 5,000 scientific journals. The money, whose total amount has yet to be announced, is intended to help the journals improve their editorial and print quality. Some journals will be encouraged to publish in English....At present, papers published by Chinese researchers in journals quoted in the Science Citation Index can bring substantial rewards, such as professorships, research grants and even housing. This has encouraged Chinese scientists to publish their results in foreign journals. To reverse the trend, Wu told SciDev.Net that Chinese scientists may eventually be required to publish their work domestically....Yu Zailin of Peking University disagrees with a ban on publishing in international journals, saying it would impede academic progress. He suggests that instead, Chinese scientists could be asked to write a Chinese paper to be published simultaneously or shortly after their foreign publication....Wang Li, chief editor of Changchun-based Journal of Jilin University, raises another concern. While he accepts that the government's campaign may help to strengthen the country's leading journals, he warns that it could have a less beneficial impact on other scientific publications. 
"If all Chinese researchers are encouraged to submit their papers to the leading journals, the middle and small-level domestic ones could suffer," says Wang....The government will also fund an online database of the full text of all papers published in journals selected to receive financial support. Both researchers and the public will have access to the database.
Comment. I applaud the Chinese plan to improve many of its journals and provide OA to publicly-funded research and publicly-funded journals. However, I also share the concerns of Yu Zailin. One solution is to let researchers publish in the journals of their choice but expect them to deposit copies of their work in OA repositories. This is the approach taken by the NIH, Wellcome Trust, and RCUK. Researchers will still have an incentive to publish in the improved Chinese journals because of their new quality and new access policies.
Paul Krill, Semantic Web eyed for life sciences data, InfoWorld, December 9, 2005. Excerpt:
The Semantic Web is getting a boost in the life sciences arena. The Semantic Web involves a concept in which data from multiple sources and ontologies can be integrated into a single information space. Experiment design automation (XDA) software vendor Teranode, which focuses on software for life sciences, plans to collaborate with Science Commons to build a neurology repository for the Semantic Web. Called Neurocommons.org, the project will provide a free repository of neurology-related data, tools and pathway knowledge for use by public and private researchers. Science Commons is an effort launched to promote the free flow of scientific information....Science Commons will use the Teranode XDA infrastructure for Neurocommons.org. All content will be available in the Resource Description Framework (RDF) format, allowing for participating foundations to use a shared repository of research.
Kim Thomas wrote a short piece on Friday for Information World Review on the September CIBER report. I didn't blog it because it was very short and didn't say anything new. But I'm still bothered by the title that IWR gave the piece: Academic authors favour peer review over open access, and decided I had to say something. IWR knows perfectly well that the OA movement is about removing access barriers to peer-reviewed research, not about bypassing peer review. It also knows perfectly well that the CIBER report did not make this mistake. Because IWR has many accurate articles about OA to its credit, it should recognize that the title of the Thomas article is harmful and misleading. I hope it will publish a correction.
Cory Doctorow and Creative Distribution, Open Business, December 6, 2005. Excerpt:
[Q] All four of your novels are available for free online. Obviously a number of factors would have played a part in this decision. Do you think your motivation was primarily a “business” decision or a “political” one?
Jeff Ubois writes on the Television Archiving blog:
Peter Suber’s Open Access News is fast becoming one of my favorite sources of information. The focus tends to be on print and scholarly journals, but it covers the whole spectrum of open access issues, and draws from an amazingly diverse set of sources. I suspect the amazing variety of new approaches to open access print can inspire new models for video and television. (Thanks, Jeff.)
Dean Giustini asks, Is Google Scholar the least improved search engine of 2005? He's collecting reader opinions.
Jan Szczepanski has written an account of his tireless and much-appreciated work listing open-access journals. It began as an email to Heather Morrison, who posted it yesterday on OA Librarian. Excerpt:
Do you remember the ads for Postmodern Culture in the early 90's? The journal was not available on paper! I wanted to buy this important journal but couldn't. I never forgot that. In 1998 I made a study for the library on how many free e-journals existed and what they were worth. In the beginning of 1999 we presented a report. By we I mean some members of my staff at the Department of Humanities, where I was Head at the time....We made a study in two areas, music and philosophy. We found that the number of free e-journals was impressive and of high quality, well worth collecting. Since then I have continued collecting free e-journals in the humanities. In January 2002 the library created a local database for electronic journals and I started to put the free e-journals there as well. At the end of 2002 I had included over 300. In December that year I checked the statistics. They had been used 7.500 times, that is 25 times on average. This was impressive so I continued collecting. During 2003 I included 800 more living titles and 400 retrodigitized titles. Now the statistics showed that free e-journals had been used 28.000 times in total, 18 times per title on average....Everything exploded during 2005. By 2004 I had collected 2.400 titles; now, in the beginning of December 2005, I have 3.948 titles and 757 retrodigitized titles, 4.705 in total. The open access movement is not only STM journals fighting commercial publishers; it is also a very quiet but strong movement within the other culture, the humanities and social sciences. They are not competing with commercial publishers because these journals have never been extremely expensive. They start new journals because the technology is there and they are used to writing and working for free and want to communicate and give the world the results of their work....In May 2005 I contacted Peter Suber because I wanted to help my journals to be better known and used and disseminated. Peter helped me.
I had found out that it wasn't enough just to start up a free e-journal if nobody knows about it. So I thought, I will try to see to it that thousands of libraries all over the world will include them amongst their electronic Elsevier titles....What is the difference between the commercial packages and my titles? One of the most important differences is that I have titles from all over the world and in many more languages. Small countries are represented, other continents. I have broken the Anglo-American dominance! This feels good and right. And I have made humanities and social science free e-journals more visible. That gives also a good feeling. I have not earned a penny and for that I will get a reward in heaven....Why did I turn public? One of the reasons was that I thought DOAJ was working too slowly. [Jan has always shared his list with the DOAJ.] A bottle-neck! New titles were popping up daily and it's our duty as librarians to collect them and give them to our customers. The second reason was the UK government assertion that the open access movement had lost momentum. They were wrong.
Vauhini Vara, Project Gutenberg Fears No Google, Wall Street Journal, December 10, 2005. An interview with Michael Hart. (Thanks to Issues in Scholarly Communication.) Excerpt:
Internet giants like Google Inc. and Yahoo Inc. are making headlines with their rival plans to create online libraries of books. Long before those companies even existed, though, there was Project Gutenberg: an ambitious, offbeat effort to digitize classic books by typing them out by hand. The approach made a lot of sense back in 1971, when Project Gutenberg's founder Michael Hart was a student at the University of Illinois. He enlisted an army of volunteers to help in the effort, by pulling their own dusty volumes from attic shelves and transcribing them, word for word. The electronic versions were sent to Mr. Hart, who stored them on clunky university computers. Nearly 35 years later, Project Gutenberg has put more than 17,000 so-called e-books on its Web site. It continues to add more titles each week -- though most texts are now scanned rather than typed. Mr. Hart, an eccentric technologist and bibliophile, still shepherds the effort on a shoestring budget from his computer-filled home in Urbana, Ill....It's not that we don't want to work with [Google]. Google didn't want to have anything to do with us. They want to do their own project. All of these places can legally use all of our books. If Google put up all of our books, that would be fine. I would have gladly worked with Google. [How is Project Gutenberg different from Google Book Search?] Google is working from the top down. It's very centralized. Project Gutenberg is the opposite: It's decentralized, it's grassroots. From the consumer's point of view, if you're trying to get a quotation from a book, you could get the book from Project Gutenberg and cut and paste, say, the whole "Hamlet" soliloquy. On Google, you can't. Also, ours is totally non-commercial. You won't find advertising on any of our pages.