Open Access News

News from the open access movement

Saturday, March 25, 2006

Retrovirology added to ISI indexes

Retrovirology was recently added to the list of periodicals indexed in Web of Science. The journal was already indexed by PubMed, Embase, and Scopus.

Retrovirology -- Fulltext v1+ (2004+) BioMed Central | PubMed Central; ISSN: 1742-4690.

An OA journal for full detail on clinical drug trials

Douglas G. Altman and three co-authors, Trials - using the opportunities of electronic publishing to improve the reporting of randomised trials, Trials, March 23, 2006. The lead editorial in this issue.
Abstract (provisional): This editorial introduces the new online, open access, peer-reviewed journal Trials. The journal considers manuscripts on any aspect of the design, performance, and findings of randomised controlled trials in any discipline related to health care, and also encourages the publication of protocols. Trialists will be able to provide the necessary detail for a true and complete scientific record. They will be able to communicate not only all outcome measures, as well as varying analyses and interpretations, but also in-depth descriptions of what they did and honest reflections about what they learnt. Trials also encourages articles covering generic issues related to trials, for example focussing on the design, conduct, analysis, interpretation, or reporting.

Governments charging themselves for public information

It turns out that some UK government agencies pay the Ordnance Survey, another government agency, for access to publicly-funded geodata. See the details at Free Our Data.

PS: In the US, the highly-regarded and publicly-funded Congressional Research Service (CRS) reports are not generally OA. Citizens who want copies often have to pay private-sector publishers, as do agencies of the federal government.

How far will the gift culture spread?

Andy Updegrove, Where (if anywhere) are the Boundaries of the Open Source Concept? Consortium Standards Bulletin, March 24, 2006. Excerpt:

In the last several days there have been several stories in the news that highlight the increasing tension between ownership of intellectual property rights (IPR) and the opportunities that become available when broader, free access to those rights is made available.  The three articles that struck me as best proving this point were the announcement by Sun Microsystems that it had released the design for its new UltraSPARC processor under the GNU GPL, a speech by Tim Berners-Lee to an Oxford University audience in which he challenged the British government to make Ordnance Survey mapping data available at no cost for Web use, and reports that a Dutch court had upheld the validity of the Creative Commons license.  Each of these stories demonstrates a breach in traditional thinking about the balance of value to an IPR owner between licensing those rights for profit, or making those same rights freely and publicly available.

In the case of the Sun announcement, that breach is expansion of the open source methodology from software to silicon - a genetic leap, if you will, from one species of technology to another. Tim Berners-Lee’s challenge, on the other hand, is an example of the increasingly popular concept that "data wants to be free," and that the greatest societal benefit may result from allowing it to be so. And the Creative Commons victory demonstrates that traditional legal concepts can be adapted to successfully accommodate such new realities....

What this demonstrates is that the broad concept of open source is extensible into many types of situations, and may be managed in multiple ways. In the first case, the approach has moved from software to chip designs, and the initiative is organized on an open source software project model. In the second case, raw data is involved, and the delivery mechanism is through public (the Ordnance Survey example) or private (the Google example) means, for two entirely different motivations. In the third case, it is works of authorship of all types (literary, music, art, etc.) released by the individual author/owner, who may set the boundaries of that access through the simple means of referring to a specific variation of a publicly available license.

What this shows me is that the envelope of free use and public availability of IPR will continue to be pushed in more and more directions, and managed in more and more novel and situationally appropriate ways.  Crucial to this process will be the accumulating evidence in more and more domains that the owners of IPR may gain (indirectly) more by giving than selling (directly).  This is not as novel as might first be imagined.  IPR has always been a means to an end, rather than an end in itself....As more and more examples accumulate in increasingly diverse areas where IPR owners demonstrably gain by giving, it can be assumed that the owners of IPR in other areas will give thought to how the technique may be adapted to their own IPR assets and situations.  At some point, the inevitable tipping point will be reached, where an IPR owner will automatically consider which world she wishes her work to live in - open or closed, or in both, depending upon the specific use or user obtaining rights to use the IPR.  Is this inevitable?  Personally, I think it is.  This is one of those examples where the Internet really has "changed everything."...I strongly doubt that the open source concept will be applied to every area of endeavor, or that it will predominate in every area where it is applied.  But I also believe that we may be surprised at some of the areas not yet imagined where it springs up next....

Updegrove's article has spawned a Slashdot thread.

Spring issue of INASP Newsletter

The Spring issue of the INASP Newsletter is now online. (HTML edition.) This issue has articles on Sri Lanka's National Science Library and Resource Centre (NSLRC), Scholars Without Borders, the Asia-Pacific Information Network (APIN), the Editing and Publication Association of Bangladesh, MedKnow Publications in India, talks between INASP and FAO at the Tunis WSIS pre-conference, government-subsidized electronic access to journals in India, and publishing a first-class journal on a shoestring in Nepal.

Is Grey different from Green?

TextRelease has announced that GreyNet offers a New Road to Open Access. Excerpt:
Since the relaunch of GreyNet by TextRelease in 2003, authors both in the [Grey Literature] Conference Series as well as those contributing to The Grey Journal (TGJ) sign on to a “non-exclusive rights agreement”. The authors remain free to deposit their own work in other online repositories, which they deem fit. This non-exclusive rights agreement further allows GreyNet to negotiate licensing and cooperative publishing exchange of the full text and metadata contained in its in-house content base.

Comment. In an email accompanying the announcement, TextRelease says that the grey road is a third path beyond the gold and green roads to OA. I must disagree. I applaud TextRelease for letting its authors deposit their work in OA repositories. But that's the green road. Many different rights agreements with publishers are compatible with OA archiving and TextRelease's agreement is just another in the series.

Friday, March 24, 2006

Free ebooks for your PDA

A new site offers free ebooks for your PDA, iPod, ebook reader, or browser. Users may browse by author, title, category, language, or reader recommendations. Most titles are available in multiple formats, including ebook-reader formats and ordinary PDFs. (Thanks to the Scout Report.)

More on OA to biodiversity information

The International Institute for Sustainable Development (IISD) has published highlights of the eighth meeting of the Conference of the Parties to the Convention on Biological Diversity (Curitiba, Brazil, March 20-31, 2006). Excerpt:
The Secretariat introduced relevant documents (UNEP/CBD/COP/8/17, 17/Add.1, and 18). COLOMBIA stressed repatriation of information and, supported by many, collaboration with other initiatives. CANADA urged parties to provide free and open access to information and, supported by the EU, suggested reference to the Global Biodiversity Information Facility. CHINA and CAMEROON highlighted supporting national clearing-house mechanisms.

Beilstein Journal of Organic Chemistry included in PubMed Central

BioMed Central issued a press release today announcing the addition of the Beilstein Journal of Organic Chemistry (BJOC) to PubMed Central.

The Beilstein Journal of Organic Chemistry (BJOC) is now included in PubMed Central, the world's largest digital archive of freely available full-text journal literature. PubMed Central hosts over 200 journals, mainly in the life sciences and medicine. BJOC is one of the first chemistry journals to be accepted for inclusion in the database, which will bring its articles to the attention of an enormous worldwide readership of biomedical scientists.

BJOC is published by the Beilstein-Institut, a name synonymous with quality in organic chemistry for two centuries, in co-operation with BioMed Central, the leading global publisher of open access journals. As one of the first open access journals in chemistry, BJOC allows readers free access to all content immediately upon publication. BJOC has no publication charges, is fully refereed and offers rapid publication. Authors may submit their articles online to the Editor-in-Chief, Professor Jonathan Clayden of the University of Manchester, UK.

Dr Martin Hicks, Director of the Beilstein-Institut, said 'Our objective in publishing BJOC is to serve organic chemists worldwide by providing a high quality journal that serves their needs as authors and readers, without being a financial burden to their institutes. We are delighted that inclusion in PubMed Central will bring the high quality articles published in BJOC to the attention of the growing number of biomedical researchers with an interest in organic chemistry and chemical biology.'

Beilstein Journal of Organic Chemistry Fulltext v1+ (2005+) BioMed Central | PubMed Central; ISSN: 1860-5397.

More on the low compliance rate for the NIH policy

Janice Hopkins Tanne, Researchers funded by NIH are failing to make data available, BMJ, March 25, 2006 (only the first 150 words are accessible to non-subscribers). Excerpt:
Most researchers who are funded by the US National Institutes of Health (NIH) do not provide free public access to their papers by posting them on PubMed Central, says an NIH report. Mandatory posting may be necessary to ensure that free access is given in future, says the report. Prodded by federal departments and Congressional committees, the NIH last year announced a policy on public access to increase the availability of research that it funds. It asked researchers to submit their final, peer reviewed manuscripts to the PubMed Central database—the NIH's free digital archive of journal literature in the biomedical and life sciences—when their paper was accepted by a journal. However, less than 4% did so.

Richard Poynder interviews Eric Raymond

Richard Poynder has posted his interview with Eric Raymond, President Emeritus and Co-Founder of the Open Source Initiative. This is the latest installment of The Basement Interviews, Poynder's blog-based OA book of interviews with leaders of many related openness initiatives.

Public interest research

Carolyn Raffensperger, Reclaiming a Public Interest Research Agenda, On the Commons, March 23, 2006. Excerpt:
We – meaning you and I dear reader -- have paid for some really bad things through our public research dollars. Exhibit A: In the late 1990’s the research arm of the U.S. Dept of Agriculture teamed up with Delta and Pineland Co. to genetically engineer seed to make it sterile in the second generation, thus forcing farmers to buy seed every year. This rogue technology was named “Terminator Technology.” It’s not hard to see that Terminator was as anti-farmer and anti-consumer as any invention created. This hijacking of our research dollars prompted a group of us to draft a paper defining public interest research because we believed that when public money was involved, the public had a right to expect that research it funded would serve the public interest. It should add to the commonwealth and common health, not subtract from it. In that paper we said that public interest research “will be identified by its beneficiaries, the public availability of its results, and public involvement in the research. These key benchmarks identify public interest research:
  • The primary, direct beneficiaries are society as a whole or specific populations or entities unable to carry out research on their own behalf.
  • Information and technologies resulting from public interest research are made freely available (not proprietary or patented); and
  • Such information and technologies are developed with collaboration or advice from an active citizenry.”

Here are the OA-related excerpts from the paper Raffensperger wrote with 10 co-authors, Defining Public Interest Research (a 1999 white paper for the Science and Environmental Health Network; The Center for Rural Affairs; and the Consortium for Sustainable Agriculture, Research and Education):

How are the data and results of publicly funded research kept in the public domain? Are they made available through the internet, public libraries, journals, press releases for media stories? Who decides how such results are used? ...Some would say that keeping information in the public domain does not rule out profit. The computer industry is experiencing the benefits of freely available programs and operating systems developed by volunteers. In some cases, companies continue to invest in systems they will not be able to own, and both the public and the company profit from the development this stimulates. But others wonder whether research that results in financial gain to universities, hospitals, and corporations qualifies as public interest research. American agriculture and society as a whole have benefited from the freely available information coming from publicly funded experimental stations and universities. This has begun to change, however, as patent laws assign ownership to information developed at public expense. While the privilege of patenting genes and organisms encourages investment in research and marketing to exploit these technologies, it also directs public money to private gain. When public funds have supported any aspect of research, it is difficult to reconcile the issuing of patents and the sealing off of proprietary information with the public interest.

Dresden presentations on OA

Medinfo has a series of blog postings (in German) on the Netzwerk Bibliothek conference (Dresden, March 21-24, 2006). The posting for today summarizes several presentations on OA.

The state of OA archiving in the UK

Tom Wilson, Institutional open archives: Where are we now? Library and Information Update, April 2006. (Thanks to William Walsh.) Wilson is the editor of the OA journal, Information Research. Excerpt:
So, where are we with IOAs [Institutional Open Archives] in the UK? Five years is certainly long enough to determine a trend, and it is clear that the trend is for growth. In search of the trend, I examined the sites of those universities in the UK with IOAs (26 in all)....[PS: Omitting notes in sites included and excluded.] The cumulative growth curve looks impressive, but the picture revealed by the annual number of items recorded suggests that, rather than growing rapidly, the curve has levelled out....In fact, the data shows a very patchy record for the 22 archives, and one institution, the University of Southampton, holds more than 50 per cent of all items recorded over the period. Had I included departmental archives, the prominence of Southampton would have been even greater, as the Department of Electronics and Computer Science has an archive of 9,342 items – more than the total in the institutional archive – and if these were included Southampton would hold more than 75 per cent of the new total of 19,168 items....[I]n general, the humanities and social sciences are less well represented in IOA than are science, medicine and engineering....This suggests that universities in the UK may be finding it very hard to get the message of IOA through to all their constituent departments....By any measure it can hardly be claimed that the concept of open archiving has taken off in British universities and I don’t think that any of its protagonists would claim otherwise. The movement is at an early stage, with something in the order of 12 per cent of UK universities involved and with a minuscule proportion of the total research output covered by the IOA. For 2004, a search of the Web of Science for papers by authors whose address included ‘England’ produced 58,710 items and, when we exclude the Scottish universities from the table (since Scottish addresses were not searched for), we find that fewer than 2,000 of these have been archived in institutional archives....

Those institutions that are involved appear to be having difficulty in getting academics to contribute, perhaps because they are putting insufficient effort into the process, but also, perhaps, because the whole idea of self-archiving in institutional archives is based upon false assumptions about the behaviour of academic authors. Academics publish and the problem with the concept of an archive is that it is generally perceived as a mode of preservation, not a mode of publishing. Archiving also depends on the voluntary depositing of already published, or about to be published, material, and some strategy is needed to ensure that academics collaborate. A start has been made in the direction of motivating participation by the decision of the research councils to require the open archiving of all papers resulting from the research they fund....[PS: Omitting a call for OA journals that charge no fees on the author side or the reader side.]

Comment. Wilson is right to point to the draft RCUK policy as exemplary. But it has not yet been adopted and it appears to be weakening under the onslaught of publisher lobbying.

New publisher of OA textbooks

The Free Curricula Center (FCC) is a new service for producing and distributing OA textbooks and other teaching and learning resources. (Thanks to The Assayer.) From the site:
The Free Curricula Center (FCC) helps students worldwide reach their educational potential by producing and distributing university level curricula that can be copied freely and modified cooperatively. Specifically, FCC serves as a focal point for the development and sharing of textbooks, instructor guides, and other educational materials. These materials, called free curricula, are released at no cost into the public domain or under an "open source" style license. This license allows anyone to make and distribute copies without having to pay, and to modify the curricula as long as those modifications are released under the same terms. The Center helps its participants work together to create textbooks, instructor guides, and other materials for the subjects in which they have expertise. We do this by providing online tools to help educators collaborate successfully and by providing a space on the Internet where students can have free, easy access to their finished products. We also serve as a link to the resources of others, and mirror their material when permitted....In short, the Center wants to do for educational curricula what open source software has done for computing: focus cooperative efforts to bring about low cost, high quality alternatives to commercial products.

FCC currently offers two OA textbooks, one in philosophy and one in statistics. Submissions are evaluated by one of four committees, depending on the discipline. Authors may choose from a variety of open licenses. FCC also hosts several discussion lists on FCC issues in different fields and a (currently small) set of working papers on OA-related issues. It welcomes submissions, new members for its committees, participants in its discussion groups, and donations. For more details, see the FAQ.

Thursday, March 23, 2006

OA coverage in USA Today

Andrew Kantor, Net writing new chapter for science journals, USA Today, March 23, 2006. Excerpt:
While the Internet is certainly affecting how the mainstream media works, there’s another area that the anyone’s-a-publisher paradigm is affecting: the world of scientific journals. The place I used to work, the American Chemical Society, just laid off a bunch of people who put its journals together, outsourcing the operation to the company that prints them. The move is indicative of the pressure scientific organizations are feeling as a new generation of scientists enter the lab having grown up in an Internet world. The ACS and other science societies (as well as private publishers) make a lot of money selling science-journal subscriptions to university libraries....Scientists’ standing in their communities is determined by where they’ve published and how often those papers are cited. And believe me, these folks keep careful track of all those data. So publishing those journals is a great business: You get your content free, then charge university libraries thousands of dollars for subscriptions. In other words, colleges pay to receive the papers their own faculty has written....For a scientist, publications are currency. The more you publish, the more you’re worth in terms of the pecking order and - more importantly - the better shot you have at getting grants. And it’s that economy that the Internet is poised to shake up....

Scientific journals are notoriously expensive, and libraries are notoriously underfunded. The fact that those libraries are paying for material the publishers got free is galling to a lot of people.  What's emerging is what's called open-access science - something made possible by the Internet. With it, a scientist can make a paper public by putting an electronic copy where everyone can access it and granting the world the right to use, copy, and distribute it.  Organizations such as BioMed Central and the Public Library of Science help publish open-access journals. With these, the scientists pay to be published (using their grant money) and the publication is made available free to everyone.  That means you can go to BioMed Central and read any of the papers in any of the journals. You can't do the same at the ACS's site (despite the fact that your tax dollars paid for much of the research there)....

Even the format of scientific papers is likely to be changed by the Internet. The rise of the blog and other short-form media has ushered in the era of the quick hit. While there are plenty of long-format bloggers, the traditional blog entry is short and sweet, peppered with references to other sources....Formal, polished communication is making way for simple, blunt data. And scientists are not immune from appreciating this. What’s going to happen is that more scientists, especially younger ones, will begin self-publishing their results before they bother to write a formal paper. They’ll put them on their websites or blogs to solicit comments, and turn science into a conversation the way only modern technology allows. How will all this affect traditional scientific journals? That depends on how long it takes the next generation of scientists to take the reins in deciding what’s more valuable - the information or the popularity contest. Judging by the Internet’s history, it won’t be too long before we have an answer.

Intro to IRs for librarians

Margaret J. Pickton and Joanna Barwick, A Librarian's guide to Institutional Repositories, a preprint forthcoming from eLucidate.
Abstract: Institutional repositories (IRs) are a recent feature of the UK academic landscape. You may already have one at your workplace (in which case you might be better to skip to the next article); you will probably have heard the term being bandied about by your colleagues; you might even have come across one when trawling the web. But what is an IR? Should your institution have one? And if so, how would you go about creating it? These are some of the questions we hope to address in this short article.

Using copyrighted metadata to limit OAI harvesting

Should OAI-compliant repositories use copyrighted metadata and limit harvesting to those who have prior permission? Klaus Graf argues against the practice in this posting to InetBib (in German). For example, the Humboldt University Berlin uses copyrighted metadata in its repository and would like to forbid harvesting by commercial users. Graf argues that the HUB policy is counterproductive and that the existing literature on copyright and OAI (such as Gadd, Oppenheim, and Probets 2003) does not adequately explore these issues.
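For readers unfamiliar with the mechanics at issue: OAI-PMH harvesters fetch metadata records (typically Dublin Core), and any rights claim on the metadata itself travels inside those records, usually in a dc:rights element. The sketch below is a hypothetical illustration, not Humboldt's actual feed; the identifier, title, and rights text are invented. It parses a minimal ListRecords response with Python's standard library to show where a harvester would encounter such a restriction:

```python
import xml.etree.ElementTree as ET

# Namespaces used in OAI-PMH responses carrying Dublin Core metadata.
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# A minimal, invented ListRecords response. A real repository returns
# this XML for a request like
#   http://repository.example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Sample record</dc:title>
          <dc:rights>Metadata reusable for non-commercial purposes only</dc:rights>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def rights_statements(xml_text):
    """Return (identifier, rights) pairs for each record in a response."""
    root = ET.fromstring(xml_text)
    pairs = []
    for rec in root.findall(".//oai:record", NS):
        ident = rec.findtext(".//oai:identifier", default="", namespaces=NS)
        rights = rec.findtext(".//dc:rights", default="", namespaces=NS)
        pairs.append((ident, rights))
    return pairs
```

A real harvester would fetch the XML over HTTP (following OAI-PMH resumption tokens for large sets); the parsing step is the same. The point of contention Graf raises is whether a dc:rights restriction like the one above can or should bind downstream harvesters at all.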

Stevan Harnad on MIT's recent two steps toward OA

Stevan Harnad, Optimizing MIT's Open Access Policy, March 22, 2006. A response to the steps discussed at MIT's last faculty meeting (blogged here 3/21/06). Excerpt:
MIT has proposed two OA policy steps: compliance with the NIH Public Access Policy and seeking consensus on copyright retention....
(1) The two steps taken by MIT are a very good thing, compared to taking no steps at all, but:

(2) Step one, explicitly formalizing compliance with the current NIH Public Access Policy, is a retrograde step, as it helps entrench the flawed NIH policy in its present form (a request instead of a requirement, a 6-12-month delay instead of immediate deposit, and depositing in PubMed Central instead of in the fundee's own Institutional Repository).

(3) Step two, seeking university-wide agreement on copyright retention, is not a necessary prerequisite for OA self-archiving, and there will be time-consuming resistance to it, from both publishers and authors. Hence it is the wrong thing to target first at this time. (University of California is making exactly the same mistake.)

(4) What MIT should be doing is neither formalizing compliance with the current NIH policy nor giving priority to copyright retention (even though copyright retention is highly desirable).

(5) What MIT should be doing is to require that all MIT researchers deposit, in MIT’s IR, the final refereed drafts of all their journal articles, immediately upon acceptance for publication.

(6) In addition, MIT should encourage that all MIT researchers set access to each deposited full text as OA immediately upon deposit -- but leaving them the option of choosing instead to set access as Restricted Access (MIT-only, or author-only) if they wish, and fulfilling email eprints requests generated by the OA metadata by emailing the eprint to the requester for the time being. (Ninety-three percent of journals already endorse immediate OA self-archiving, so only 7% will require the option of eprint emailing of RA deposits.)

(7) Having adopted this two-part deposit/access-setting policy as a mandate, MIT can then go on to its two proposed steps, of complying with the provisional NIH public access policy (by allowing harvesting at the appointed date from MIT’s own IR into PubMed Central) and seeking an MIT consensus on copyright retention.

(8) Instead, going straight to (7) without first adopting and implementing (5) and (6) is a huge (and unnecessary) strategic mistake, and a bad model for other institutions to follow.

Comment. (i) I agree that the current NIH policy is flawed. But while we work on strengthening the policy, it's better for researchers to comply than not to comply. Researchers can deposit the same articles in their institutional repository, if they like. When PubMed Central is set up to harvest grantee articles from IRs, then we can recommend that grantees deposit in their IRs instead of, rather than in addition to, PMC. (ii) The MIT contract amendment does not seek to "retain copyright" for authors, only to retain the rights needed for OA and a few other important scholarly uses. It's true that authors don't need to retain full copyright in order to consent to OA, and it's true that most journals already give permission for postprint archiving (without the need for special negotiations or contract language). But something like the MIT contract amendment is needed for those journals that do not already permit postprint archiving. (iii) I fully support Stevan's recommendations in ##5-6.

Tim Berners-Lee calls for OA to publicly-funded data, starting with geodata

S.A. Mathieson and Michael Cross, Ordnance Survey challenged to open up, The Guardian, March 23, 2006. Excerpt:
The inventor of the world wide web has called for more open access to Ordnance Survey (OS) mapping data - and may get his wish later this year. Sir Tim Berners-Lee told an Oxford University audience last week getting "basic, raw data from Ordnance Survey" online would help build the "semantic web", which he defines as a web of data using standard formats so that relevant data can be found and processed by computers. "There's a moral argument that says, for a well-run country, we should know where we are, where things are, and that data should be available," he said. Berners-Lee said it may be reasonable for OS, the premier state-owned supplier of public sector information, to continue to charge for its high-resolution mapping. But even if licences were required, he added, OS should make its data open to manipulation. "I want to do something with the data, I want to be able to join it with all my other data," he said. "I want to be able to do Google Maps things to a ridiculous extent, and not limited in the way that Google Maps is." The guest lecturer said he had discussed this with OS. "They are certainly thinking about this and studying what they can do. OS is in favour of doing the right thing for the country, as well as maintaining its existence, so I think there's a fair chance we'll find mutual agreement." OS said it was considering opening access to data, through applications programming interfaces (APIs) for example, but only for non-commercial use. "If it happens, it will be in the next six months or so," said Ed Parsons, chief technology officer. Parsons said OS provides universities with access to its data. "It's about expanding this to non-academic research," he said. However, those using APIs would be barred from competing with OS's paying customers, even on a non-commercial basis. "We're constrained by competition law," said Parsons. The BBC's Backstage project, which allows non-commercial re-use of BBC material, is a possible model.

Berners-Lee said this debate was the first of many. "What happens with OS is going to be replayed with anonymised medical data, with data about all kinds of public things."...

Meanwhile, according to documents published last week, OS faces losing its official status as a fair trader unless it changes the way it licenses its geographical data. The Office of Public Sector Information (OPSI), a unit of the cabinet office charged with promoting fair access to crown copyright data, says there is substance to complaints from commercial mapping firms that OS has been "obstructive and slow" in licensing its data....[The report] highlights the policy of licensing data for specific uses only, which "appears to many potential and actual users as rigid and unreasonable and does not encourage re-use".

For background on the Ordnance Survey's argument that "we're constrained by competition law", see a second article in today's Guardian, by Charles Arthur, Government organisations under pressure to make money. It contains a list of UK government agencies that are "required by law to make back in revenues what they cost to run." The Ordnance Survey is one of them, as are the UK Hydrographic Office, UK Meteorological Office, and the Medicines and Healthcare Products Regulatory Agency. Many others are not, such as the Environment Agency, the British Geological Survey, and at least three of the research councils. (The article only lists three of the eight RCs, and all three are in the 'no' column.)

HyperJournal 0.5b

HyperJournal has released version 0.5b. HyperJournal is open-source journal management software for OA, OAI-compliant journals.
This version features:
  • Revamped interface, templates and styles.
  • Advanced roles for publishing workflow.
  • Issues management.
  • Basic "plugout" web service.
  • Several bug fixes.

Wednesday, March 22, 2006

BMC hatches 2 more independent OA jnls

BioMed Central helped another pair of independent Open Access journals begin publishing last week. Journal of Biomedical Discovery and Collaboration (March 13) and Philosophy, Ethics, and Humanities in Medicine (March 17) bring the total of independent OA titles hosted by BMC to 89.

Journal of Biomedical Discovery and Collaboration - Fulltext v1+ (2006+); ISSN: 1747-5333.

Philosophy, Ethics, and Humanities in Medicine - Fulltext v1+ (2006+); ISSN: 1747-5341.

Portal of Croatian OA journals

The Croatian Ministry of Science has launched a Portal of Scientific Journals of Croatia (also in Croatian). The portal provides access to 46 Croatian OA journals, with more to come. (Thanks to Tibor Tóth.)

HTML edition of Bailey's OA Bibliography

Charles W. Bailey, Jr. has issued an HTML edition of his Open Access Bibliography. It's the same text as last year's print and PDF editions, but the new edition lets you link directly to any of its major sections, such as Open Access Journals, Institutional Repositories, and Open Access Arrangements for Developing Countries. And it loads much faster.

Update (3/25/06). You can now link to every major section and subsection in the bibliography.

Moving from print to electronic

The December 05 / January 06 issue of Against the Grain is devoted to the Cancellation of Print Journals for Electronic Versions. It appears that each article focuses on non-OA ejournals.

Web 2.0 tools for OA dissemination of research

Ismael Peña, Web 2.0 and diffusion of research, March 19, 2006. A PPT presentation in Catalan but with this English-language abstract:
Buzzword or not, the Internet is changing and the so-called Web 2.0 applications might mean new ways to work in the research-education-diffusion field (i.e. the University field). This presentation's goal is to prompt reflection and to show a "good" practice in diffusion of research, drawing on Ismael Peña's experience in the area of Public policies for development and ICT4D at the Open University of Catalonia and the use of blogs, wikis and other tools in his ICT4D personal portal.

Doing science in a world of shared, voluminous data

Alexander Szalay and Jim Gray, 2020 Computing: Science in an exponential world, Nature, March 22, 2006. Excerpt:
[D]ata volumes are doubling every year in most areas of modern science and the analysis is becoming more and more complex....With data correlated over many dimensions and millions of points, none of the old steps — do experiment, record results, analyse and publish — is straightforward. Many predict dramatic changes to the way science is done, and suspect that few traditional processes will survive in their current form by 2020....As data volumes grow, it is increasingly arduous to extract knowledge. Scientists must labour to organize, sort and reduce the data, with each analysis step producing smaller data sets that eventually lead to the big picture. Analysing terabytes of data (one terabyte is 1,000 gigabytes) is a challenge; but petabyte data sets (of more than 1,000 terabytes) are on the horizon. One petabyte is equivalent to the text in one billion books, yet many scientific instruments, including the Large Synoptic Survey Telescope, will soon be generating several petabytes annually....
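Szalay and Gray's arithmetic is easy to check: if data volumes double every year, a terabyte-scale archive reaches petabyte scale in roughly a decade. A minimal sketch (Python, with illustrative numbers only; the function and constants are not from the article):

```python
import math

def years_to_reach(start_bytes, target_bytes, doubling_time_years=1.0):
    """Years until an archive that doubles every `doubling_time_years`
    grows from `start_bytes` to `target_bytes`."""
    return doubling_time_years * math.log2(target_bytes / start_bytes)

# Decimal units, as in the article (1 PB = 1,000 TB).
TB = 10**12
PB = 10**15

# A 1 TB archive doubling yearly reaches 1 PB in ~10 years.
print(round(years_to_reach(1 * TB, 1 * PB), 1))  # → 10.0
```

The same formula explains why the gap keeps widening: analysis tools and networks that improve only linearly fall exponentially far behind the data.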

Procedures already involve instruments and software with myriad parameters. It is difficult to capture all the model numbers, software revisions, parameter settings and process steps in an enduring format. For example, imagine a measurement taken using a DNA-sequencing machine. The output is cross-correlated with a sequence archive (GenBank) and the results are analysed with Matlab. Fully documenting these steps would be arduous, and there is little chance that someone could repeat the exact procedure 20 years from now; both Matlab and GenBank will change enormously in that time. As experiments yield more data, and analysis becomes more complex, data become increasingly difficult to document and reproduce. One might argue that complex biological experiments have always been difficult to reproduce, as there are so many variables. But we believe that with current trends it is nearly impossible to reproduce experiments. We do not have a solution for this problem, but it is important to recognize it as such....Standards are essential at several levels: in formatting, so that data written by one group can be easily read and understood by others; in semantics, so that a term used by one group can be translated (often automatically) by another without its meaning being distorted; and in workflows, so that analysis steps can be executed across the Internet and reproduced by others at a later date....

Many scientists no longer 'do' experiments the old-fashioned way. Instead they 'mine' available databases, looking for new patterns and discoveries, without ever picking up a pipette. But this data-rich approach to science faces challenges. The speed of the Internet has not kept pace with the growth of scientific data sets. And so large data archives are becoming increasingly 'isolated' in the network sense — one can copy gigabytes across the Internet today, but not petabytes. In the future, working with large data sets will typically mean sending computations to the data, rather than copying the data to your workstation. But the management of distributed computations raises new questions of security, free access to public data and cost....The publication process itself is increasingly electronic, with new ways to disseminate scientific information (such as the preprint repository arXiv). But there is, as yet, no standard for publishing large volumes of data. Paper appendices cannot hold all the data needed to reproduce the results. Some disciplines have created their own data archives, such as GenBank; others just let data show up, and then disappear, on individual scientists' websites. Astronomers created the International Virtual Observatory Alliance, integrating most of the world's medium and large astronomy archives. This required new standards for data exchange, and a semantic dictionary that offers a controlled vocabulary of astronomy terms. To encourage data sharing, it should be rewarded. Public data creators and publishers should be given credit, and archives must be able to automatically provide provenance details. Current databases have a long way to go to achieve this ideal.

PS: Also see other Nature articles from the same issue on 2020 Computing (all OA). The Szalay-Gray article above is based on a longer report from Microsoft Research.

Finding v. developing good open content for education

Tom Hoffman, Bootstrapping Open Curricula: A Practical Proposal, eSchool News, March 20, 2006. Excerpt:
There is a lot of talk about generating curricula using open source methodologies, that is, roughly, allowing free distribution of curriculum materials while inviting outside contributions to the work, both on the grassroots level and engaging some big guns like IBM and the Hewlett Foundation. While I'm certainly philosophically supportive of these ideas and initiatives, I haven't seen anything yet that is sufficiently grounded in practical experience with either open source software development or curriculum design to get very far off the ground....

[T]o get high quality open curriculum, we need one of two things. One route is to wait for a cadre of latter-day Nancie Atwell cum Linus Torvalds figures who will create ongoing bodies of work on the web under open licenses....I don't have any particularly bright ideas at this point for facilitating that process in the short term.

The second route would be to find some curriculum currently under a proprietary license that might be a good candidate for re-licensing under an open content license. Let me give you an example of a good candidate for this process. Ideally, it would be an innovative curriculum developed in a partnership between leading academics and classroom teachers that represents the best practices, tested over a period of years. It should have a deep philosophical underpinning, so the project maintains a consistent focus....It should be savvy in its approach to media and technology, thus of interest to the teachers most likely to be attracted to open content online. A likely candidate would also have recently fallen out of print, having prematurely succumbed to changing pedagogical fashion and politics. The chunk of curriculum initially released should be around a year's worth; enough to unfold a complete, coherent sequence but not more than a first wave of readers and contributors can wrap their heads around. A nice vision, but could we find such a thing? Happily, yes. I submit to you Pacesetter, a set of capstone courses for high school seniors in English, Math and Spanish launched in 1993 by the College Board as an alternative to their AP courses and, from what I can tell from their website, essentially shut down last year....

What we need now is some of the institutions that are gabbing about this stuff, foundations like Hewlett and corporations like IBM, to step up to the plate and make a proposal to The College Board to open up the Pacesetter curriculum. There are lots of plausible models for this. The most obvious one is probably Sun's StarOffice/OpenOffice.org setup. The College Board could maintain the Pacesetter trademark and sole rights to offer Pacesetter assessments. The curriculum materials would be turned over to a non-profit that would put them on the web under a different name under a Creative Commons license. The College Board would always have the option to fold contributed materials back into Pacesetter and revive it as a commercial product and assessment if they chose to do so. There is no reason this can't happen. It makes perfect sense.

OA web site on evolution wins, fundamentalists lose

Two Berkeley professors and the NSF may continue to display and maintain their OA web site on evolution even though fundamentalists complained in federal court that it violated their religious beliefs.

Comment. This case is a good example of a slam-dunk that's nevertheless critically important to win. If it had gone the other way, it could have crippled science, especially OA science. See my comments on it from last November, when the case was filed.

Paying eight times for taxpayer-funded geodata

Chris Hancox, a GIS Officer at an unnamed council in the Anglian region of England, thinks that UK taxpayers pay eight times for publicly-funded geodata. The Free Our Data blog has posted his letter:
I look after all the maps for the council where I work and yes, even government departments and councils etc have to pay for Ordnance Survey data. Local government has interesting scenarios where the taxpayer will pay three times or more for Ordnance Survey Data. One of the most interesting scenarios is Planning Applications.

1st payment to OS: if a member of the public wants to submit a Planning Application they can buy a site plan map, usually from the council (cost of about £25 for 4 x A4 sheets) or other OS licenced data reseller.

2nd payment to OS: the Planning Authority (local council) also has to buy its map base from Ordnance Survey every year, as part of what is called the Mapping Services Agreement (MSA).

3rd payment to OS: the member of the public also pays for Ordnance Survey data as part of their normal taxes.

There is also a 4th payment (which is the biggest scandal) that goes to Ordnance Survey and the Post Office, to use our council-created and council-maintained Local Land and Property Gazetteer (LLPG) or local address database. Even though all the councils create and maintain their own address gazetteer, we have to pay the OS and Post Office for the privilege of using that address data. The OS says that it owns the copyright of the position of the address, and the Post Office says it owns the copyright of the address (because it adds the postcode). Councils therefore have to pay a per-click cost to OS and Post Office to use the council-created addresses on our own website address lookup facilities. The irony about all this is that the local council creates the address in the first place (Street Naming and Numbering sections) and gives (for FREE) this information (including site plan) to the Post Office and Ordnance Survey - so they are in essence charging the local council for its own information. Therefore the public have to pay the Council to create the address (Street Naming and Numbering dept) and then pay again to the OS and the Post Office for the right [for the council] to use it. Now, if the council wants to use Addresses (LLPG) created by a neighbouring council(s) (eg for cross-council working/Partnership working etc) the cost goes up even more.
The taxpayer has to pay the council to create the address data (Street Naming and Numbering dept), OS and Post Office to use the address, and again if used on a web address search facility. Then a neighbouring council will have to pay extra money to the OS and Post Office to use the same address (if it’s not in its postcode area) and finally pay again to use the address in an address search facility on its website.

In total the taxpayer could pay up to five times for the one address that the council gives to the OS and Post Office for FREE. Finally, taxpayers putting in a planning application online for one building/address using OS data could be paying a total of up to eight times for that one address/building data. Now that is scandalous.

OA and non-OA databases link

The OA PubChem from the US National Library of Medicine and non-OA DiscoveryGate from Elsevier MDL are now linked. DiscoveryGate subscribers get full access to both systems. PubChem users get free searching of both systems, and full access to PubChem results, but can only click through to DiscoveryGate content if they have a license. For example, see how DiscoveryGate structures show up in PubChem searches. From today's announcement:
Elsevier MDL and the National Institutes of Health (NIH) have linked databases to make it easier and quicker for academic, government, pharmaceutical and other researchers to obtain more comprehensive information, improve decision making and support discovery breakthroughs. The NIH’s freely available PubChem database of small molecule data, designed to support links to outside chemical information resources, is now cross-indexed with the Compound Index hosted on Elsevier MDL’s DiscoveryGate® platform. “Now, researchers can immediately tell if information related to a specific chemical compound exists in the other system,” said Steve Bryant, director of the PubChem project. “From DiscoveryGate, links connect researchers directly to the relevant information from PubChem. Likewise, researchers using PubChem can link to additional information in DiscoveryGate (appropriate licenses required for accessing DiscoveryGate data sources). This integrated information provision improves the scientific community’s ability to rapidly conduct thorough research.” Lars Barfod, CEO of Elsevier MDL, added: “The value of DiscoveryGate is focused information, relevant to specific research questions, from a network of indexed and linked data sources. This collaboration with the NIH to index PubChem extends the scope of available content, and will help researchers find critical information while avoiding unnecessary, repetitious searches of multiple systems or data sources.”

PS: This isn't the first time a non-OA service has linked up with PubChem. Nature Chemical Biology from the NPG did so in June 2005.

Pilot LOCKSS project in the UK

JISC has announced a pilot LOCKSS project to preserve ejournals in the UK. From today's announcement:
The move in recent years towards provision of scholarly journals in electronic form has greatly enhanced the access to and availability of scholarly publications. However the arrangements for preserving long-term access to electronic journals are far from satisfactory. When subscribing to electronic journals, libraries no longer possess a local copy as they did with printed journals. They effectively lease the content of the electronic journals they subscribe to by remotely accessing it on publishers' servers over the computer network. The problem with this common practice is that access to entire back runs of electronic journals could be lost to academic libraries when subscriptions are cancelled or when journals cease publication. The uncertainty of continuing access is a major barrier preventing libraries from moving to electronic-only subscriptions. The recent endorsement of the statement "Urgent Action Needed to Preserve Scholarly Electronic Journals" by organisations such as the Association of Research Libraries (ARL) and the International Coalition of Library Consortia (ICOLC) highlights the concern in the scholarly community over the long-term future of scholarly electronic journals. There is consensus that a solution to the e-journal archiving problem is urgently needed and that a technically and financially sustainable solution requires collaboration between libraries and publishers. Lots of Copies Keep Stuff Safe (LOCKSS) is a tool developed at the Stanford University Libraries to address the issues surrounding e-journal archiving and preservation. Over 80 libraries and 60 publishers world-wide now work together through the LOCKSS Alliance to preserve persistent access to a wide range of content ranging from commercial subscription content to non-profit open-access e-journals.

OA momentum in France

Herbert Gruttemeier, The way to Open Access: French strategies to move forward, Library and Information Service (Tushu Qingbao Gongzuo), 50, 1 (2006) pp. 27-33.
Abstract: In France, the movement in favour of open access to scientific research output is getting increasingly coordinated and supported at the political level. The CNRS, leading research organization in Europe and signatory of the Berlin Declaration, has an evident strategic role to play in this development. Various initiatives that have emerged in the French academic world in recent years have led, for example, in early 2005 to the joint announcement, by four major research institutions, of a common policy to promote open access to published material and other types of digital resources, and to set up institutional archives. The article highlights some key issues of this policy, gives an overview of the current and past CNRS involvement in Open Access and describes the principal functions, as well as the related challenges, of the future institutional repositories.

Tuesday, March 21, 2006

Two steps to support OA at MIT

At its March 15 faculty meeting, the MIT faculty discussed two OA-related topics: complying with the NIH public access policy and using an MIT amendment to modify standard publishing contracts and let authors retain key rights. Details in today's report from the MIT News Office:
Concerned that taxpayer-funded research is not accessible to the general public because of the tightly controlled, proprietary system enforced by some journal publishers, the National Institutes of Health (NIH) is asking every NIH-funded scientist who publishes results in a peer-reviewed journal to deposit a digital copy of the article in PubMed Central (PMC), the online digital library maintained by the NIH. Not later than 12 months after the journal article appears, PMC will then provide free online access to the public.

Director of Libraries Anne J. Wolpert and Vice President for Research Alice Gast discussed with the faculty MIT's response to this issue, which has been to support NIH researchers in complying with the policy, and also to enable any MIT researcher to use a more author-friendly copyright agreement when submitting articles for publication. "The overwhelming majority of work produced by you is licensed back to you, and you can't always use your own work in the way you want to use it," Wolpert told the faculty. Copyright exemptions that were carefully crafted to allow the academy to teach and do research are steadily being superseded by intellectual property regimes that were developed for the benefit of the entertainment industry. "What Disney wants, the academy gets, whether it suits your interests or not," Wolpert said.

Among the reasons for universities to support open access is the high cost associated with renting access to journals, which for MIT alone has grown in the past decade from $2.6 million to more than $6 million a year....

An amendment that can be attached to any publication's copyright agreement was disseminated to principal investigators in February. "We have to wait and see how this plays out and see what feedback we get from publishers," Gast said. The goal is for MIT as an institution to work out agreements with publishers rather than make individual researchers fight their own battles. More information, as well as the amendment, which would override the publisher's copyright agreement, is available online [here] and other MIT web sites. "There is a distinct feeling among our counterparts at large private and public institutions that if MIT takes a reasonable and principled position on this issue, other institutions will be encouraged to do likewise," Wolpert said.

The MIT contract amendment is closely related to the SPARC Author's Addendum drafted for the same purpose. The MIT amendment gives authors (among other things) the non-exclusive right to copy and distribute their own article, to make derivative works from it, and to deposit the final published version in an OA repository. MIT is the first university I know of to present its faculty with a lawyer-drafted contract amendment for the purpose of retaining the rights needed to provide OA to their own work. Kudos to all involved. By adopting the amendment widely, MIT faculty could change the default for faculty with less bargaining power.

Nicholas Cozzarelli, 1939-2006

Nicholas Cozzarelli, molecular biologist at Berkeley and Editor-in-Chief of the Proceedings of the National Academy of Sciences (PNAS), died at his Berkeley home on Sunday. He was 67. From the obituary in the UC Berkeley News:
As editor-in-chief since 1995 of the main publication of the National Academy of Sciences, Cozzarelli changed the journal's methods for reviewing submitted manuscripts, according to Botchan, making it a scientific publication that now vies for prestige with the best. "Nick deserves almost the entire credit for making PNAS a high impact journal," said Arthur Kornberg, Nobel laureate and Stanford University professor emeritus of biochemistry. Cozzarelli was a post-doctoral fellow in Kornberg's laboratory from 1966 until 1968.

Cozzarelli also was a champion of open access publishing, that is, allowing the public, not just a journal's paid subscribers, to read scientific articles. PNAS was one of the first publications to alter its policy and allow free public access via the Internet, although its high-profile competitor publications have yet to follow suit....He was on the editorial boards of many publications, including the first open access journal, the Public Library of Science (PLoS) Biology.

PS: On OA issues, and many others, Cozzarelli was a leader among editors of high-prestige, high-impact journals. He grasped the argument for OA very early and his willingness to experiment with PNAS was bold and canny. He will be missed.

More on OA to grey literature

The Spring issue of Grey Journal is devoted to OA. The articles themselves are not OA, at least so far, but here's the TOC and part of the editor's introduction from the email announcement:
Last December, delegates from sixteen countries worldwide met at GL7 in Nancy, France to address the principles of open access as they apply to grey literature. Information professionals from sectors of government, academics, and business presented results of research projects intended to facilitate open access to grey resources. These results no doubt will allow for the further assessment of information policies both within and outside their respective organizations.

While there was general consensus on the benefits of open access, there were clear differences in how the principles of OA would be implemented. This ranged on a broad continuum between the position of The Royal Society on the one side and that of the Wellcome Trust on the other.

GreyNet seeks to remain in the forefront of this movement, but at the same time feels committed to keeping lines of communication open between these farthest positions....

Since the relaunch of GreyNet by TextRelease in 2003, authors both in the GL Conference Series and those contributing to The Grey Journal (TGJ) sign on to a “non-exclusive rights agreement”. The authors remain free to deposit their own work in other online repositories as they deem fit. This non-exclusive rights agreement further allows GreyNet to negotiate licensing and cooperative publishing exchange of the full text and metadata contained in its in-house content base.

In this issue of TGJ, a number of papers mainly from GL7 have been selected and brought together under the theme “Grey Matters for OAI”....

  • Cees de Blaaij (Netherlands), Public funded research and Open Access: Perspectives and policies
  • Stefania Biagioni (Italy), Assisting scientists to make their research results world wide freely available: an experience begun in the 90s
  • Rosa Di Cesare, Daniela Luzi, and Roberta Ruggieri (Italy), Open archives and SIGLE participation in Italy:  Is there a subtle link between the two experiences?
  • Mohammad Reza Ghane (Iran), A Survey of Open Access Barriers to Scientific Information:  Providing an Appropriate Pattern for Scientific Communication in Iran
  • Hyekyong Hwang and three co-authors (South Korea), Patterns of Research Output produced by Scholarly Communities in South Korea
  • Manorama Tripathi, H.N. Prasad, and S.K. Sonker (India), Open Access to Grey Literature: Challenges and Opportunities at the Banaras Hindu University in India

Optics Express - soaring citation rate

Optics Express, the Open Access journal of the Optical Society of America, "...was recognized by Essential Science Indicators as having the highest percent increase in total citations in the field of Physics in both September 2005 and January 2006."

PS: It's not difficult to understand the explosive growth in citations to high quality, peer-reviewed research which is available without barriers.

JISC and UKOLN work on IRs

Rachel Heery, Institutional Repositories, Focus on UKOLN, March 2006. Excerpt:
Deployment of repositories within institutions means many information professionals are facing unfamiliar implementation, configuration, and policy decisions. Institutional repositories might be considered as digital libraries in miniature, yet whilst many of the information management and technical issues are similar, repositories introduce the potential for significant changes in the workflow of delivering content from author to reader. After all, this is what open access is about. One of the areas of interest within the JISC Digital Repositories Programme is exploring how the user will interact with the emerging network of relatively small (at this stage) content collections. Will this be through search, or through citation linking? These end-user interfaces will rely on a level of interoperability in the way content is managed across the different repository software platforms, as well as on consistent identification of objects within the repositories. The JISC Digital Repositories Support team working across CETIS (Centre for Educational Technology Interoperability Standards) and UKOLN is addressing some of these interoperability issues, initially focusing with UK repository developers on the deposit of content into repositories, and with an international working group looking at a wider range of repository interfaces.
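The interoperability Heery describes rests in large part on the OAI-PMH protocol: any compliant repository answers a small set of standard URL queries with a standard XML response, so harvesters and search services can work across platforms. A minimal sketch in Python (the repository endpoint below is hypothetical, chosen only for illustration; the helper names are mine, not from any repository software):

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Dublin Core namespace used in oai_dc metadata records.
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def build_listrecords_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL for a repository endpoint."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urllib.parse.urlencode(params)

def extract_titles(oai_xml):
    """Pull Dublin Core titles out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [el.text for el in root.iter(DC_NS + "title")]

# Hypothetical repository endpoint, for illustration only.
url = build_listrecords_url("http://repository.example.ac.uk/oai")
print(url)  # → http://repository.example.ac.uk/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Because every repository exposes the same verbs and the same minimal Dublin Core record format, a single harvester like this can aggregate many small institutional collections — which is exactly the network effect the JISC programme is probing.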

University press and library collaborate on OA publishing

The Georgetown University Press (GUP) has entered a partnership with Digital Georgetown, "the digital hub for Georgetown University's scholarship and research initiatives." Together they will produce OA publications from GUP's print back catalog and maybe even from new material. From yesterday's story in the Blue & Gray, the school paper:
There is a movement within many academic circles to make such materials more accessible by posting them online. A new partnership between Georgetown University Press and Digital Georgetown is a step in that direction. Digital Georgetown is an online hub for scholarship materials that launched in 2004 and is directed by Joan Cheverie, a former reference librarian at Lauinger Library. In recent months, Cheverie has been working with Hope Smith-LeGro, electronic editor at GU Press, to put collections of previously published research online. Their first project has been to post the proceedings from GURT, the Georgetown University Round Table on Languages and Linguistics. In the past, the roundtable's work has only been published in book form. Recognizing that some scholars may be searching for specific information, GU Press wanted to provide the materials in a format that is readily accessible. "We really view ourselves as an academic resource," Smith-LeGro said. GURT conferences from 1999 to 2001 are now online and free for any user of the site; conferences prior to 1999 are in the process of being transferred to digital format, Cheverie said. GU Press will continue to sell GURT books and will post the content online five years after the print date....Although publishing scholarly materials online is a developing trend at universities worldwide, the collaboration between an academic press and the library system is rare, Cheverie said. "We've found a great synergy in this partnership because we have such complementary missions," she said.

Creative Commons, Google Library, and Open Content Alliance

Walt Crawford, Building the Econtent Commons, EContent, March 21, 2006. On Creative Commons, the Google Library Project, and the Open Content Alliance. Excerpt:
What do you get when you combine a four-year-old licensing system and two possibly complementary projects to digitize substantial quantities of print information? With luck, a substantial ecommons: millions of digital items that can be used directly and as the basis for derivative works without infringing copyright. These projects should also result in full-text indexing for millions more items that won't be freely available online but can be acquired through libraries and booksellers....Pulling these threads together, OCA encourages use of Creative Commons licenses whenever that makes sense. That makes it more likely that a good deal of copyright material will be available under appropriate license, since Creative Commons licenses offer carefully drawn ways to "give away" some copyright control without losing copyright. Google isn't part of this combination yet, but it wouldn't take much to make the public domain works part of the greater whole. Creative Commons takes one tack toward building a commons of econtent (and physical content). OCA uses Creative Commons and the many open standards developed to share information where it can, and works to make major resources available to all without injury to any. There will be more such projects—not to undermine the rights of writers and publishers, but to provide a commons that we can use and derive new creations from.

OA journal seminar

Webcasts of the POSOA Seminar on Open Access Journals (Toronto, March 9, 2006), are now online. POSOA is the University of Toronto's Program on Open Source and Open Access.

Richard Poynder interviews Richard Stallman

Richard Poynder has posted his interview with Richard Stallman. This is the latest installment of The Basement Interviews, Poynder's blog-based OA book of interviews with leaders of many related openness initiatives. Excerpt:

RP: What about the Open Access movement, which wants to see all scholarly papers made freely available on the Internet. Do you support that?

RS: Yes, and I am a signatory of the Budapest Initiative. I have, by the way, been urging them to make the freedom to redistribute a central part of their demands. It is not enough to have one site where everyone can download the information: people must be free to set up mirror sites too. However, when it comes to scientific papers I don't think people should have the freedom to publish modified versions; modified versions of someone else's scientific article are not a contribution to society.

Comment. (1) The freedom to redistribute is built into the BOAI vision of OA, which would remove permission barriers as well as price barriers. Some OA initiatives use central archives rather than distributed archives, but even they do not exclude the freedom to redistribute. (2) For some elaboration on Stallman's views on modifying scientific papers, see my newsletter for May 15, 2002: "Richard Stallman told me that he sees no good reason to use the GPL or copyleft for scientific journal articles....GPL makes more sense for software manuals or textbooks, where new developments create a need to modify the original text. But articles that report the result of an experiment, or the observations of a scientist, should not be modified."

Mass digitization conference

The presentations and webcasts from the conference, Scholarship and Libraries in Transition: A Dialogue about the Impacts of Mass Digitization Projects (Ann Arbor, March 10-11, 2006), are now online. (Thanks to Elizabeth Winter.)

Manifesto for Critical Information Studies

Siva Vaidhyanathan, Critical Information Studies: A bibliographic manifesto, Cultural Studies, March/May 2006. Excerpt:
This paper takes measure of an emerging scholarly field that sits at the intersection of many important areas of study. Critical Information Studies (CIS) considers the ways in which culture and information are regulated by their relationship to commerce, creativity, and other human affairs. CIS captures the variety of approaches and bodies of knowledge needed to make sense of important phenomena such as copyright policy, electronic voting, encryption, the state of libraries, the preservation of ancient cultural traditions, and markets for cultural production. It necessarily stretches to a wide array of scholarly subjects, employs multiple complementary methodologies, and influences conversations far beyond the gates of the university. Economists, sociologists, linguists, anthropologists, ethnomusicologists, communication scholars, lawyers, computer scientists, philosophers, and librarians have all contributed to this field, and thus it can serve as a model for how engaged, relevant scholarship might be carried out. CIS interrogates the structures, functions, habits, norms, and practices that guide global flows of information and cultural elements. Instead of being concerned merely with one's right to speak (or sing or publish), CIS asks questions about access, costs, and chilling effects on, within, and among audiences, citizens, emerging cultural creators, indigenous cultural groups, teachers, and students. Central to these issues is the idea of ‘semiotic democracy’, or the ability of citizens to employ the signs and symbols ubiquitous in their environments in manners that they determine....

[C]ontroversies over copyright, technology, corporate control over information, and access to knowledge easily flow across newspaper pages, generating widespread curiosity about these issues. As a result, even the most high-level, advanced work within CIS can find its way into surprising and exciting places....[M]any CIS scholars have reached beyond spheres of scholarly discourse to influence both general public perceptions and specific policy matters. Public interest organizations such as the Electronic Frontier Foundation, Public Knowledge, and Creative Commons employ CIS scholarship when pursuing their agendas in courts, legislatures, international governing bodies, and the public sphere....By all indications, CIS has succeeded in changing the terms of the conversation about issues such as copyright, cultural policy, and the relationships among democracy, culture, and technology. More practically, CIS has helped generate the ‘open content’ and ‘open journals’ movement, which allows authors and artists to retain more control over the ways that publishers exploit their work and enables authors to ‘lock content open’. CIS not only has made its arguments, it has lived its arguments....Beyond the text of the scholarship, the commitment to positive liberty comes through most clearly in the projects and experiments that facilitate access to and use of scholarship and information: chiefly the development and proliferation of open access journals, open courseware, open curricula, and open archives....

Too often, academic leaders forget their ethical duty to the community of scholars and world citizens at large. They rabidly protect their ‘intellectual property’ to the detriment of the scholarly world (and the species) as a whole, and as such many suffer from what I call the ‘Content Provider Paradox’ (Vaidhyanathan 2002a). In addition, scholars themselves often overreact to perceived ‘threats’ that someone is teaching ‘their’ course or relying too heavily on ‘their’ data. This is an unhealthy and anti-intellectual disposition magnified by the general tenor of the times. Foolishly, however, scholars continue to sign away all their rights to their scholarly work to commercial publishers, who then sell their work back to their libraries at great cost. Recognizing this absurdity, some scholars have insisted on publishing their work with Creative Commons licenses, ensuring that the general public and not just patrons of expensive research libraries may read, quote, and improve on their work. And the Open Journals movement, led by the Public Library of Science and the Science Commons, also promises to let scholars contribute to the greater good while ensuring effective peer review and distribution of work (Harnad 2004). Still, many tenure committees outside of the sciences have yet to learn that open journals are better and that the commercial journal publishing process as it now stands is unethical. Indeed, it will take many years to wake scholars from the false consciousness of the academic-publishing industrial complex. Meanwhile, every scholar committed to CIS should insist on retaining some of her or his rights to publications and making them available as widely and cheaply as possible. Demanding that a publisher allow the use of a Creative Commons license is a start. 
If a few senior scholars withhold publication from unethical journals, then the publishing world will have to negotiate and concede that Creative Commons offers no threat to their business but greater opportunity to attract consumers. And if they do not, then scholars should found their own open journals through scholarly associations and sever ties with commercial publishers. In this way, CIS scholars can change more than the conversation about culture, control, commerce, and copyright. They can affect the workings of an industry in flux and better serve their mission to educate and illuminate the remarkable times in which we live.

OA edition of book on digital libraries

William Y. Arms, Digital Libraries. An OA edition of the book from M.I.T. Press (January 2000), with a new preface by the author (June 2005).

ALPSP study of OA archiving and journal cancellations

Mark Ware, ALPSP survey of librarians on factors in journal cancellation, ALPSP, March 2006. A 64 pp. report on the effect of OA archiving on library decisions to cancel journal subscriptions. The report (in print and PDF) costs £45/$80/€100 for ALPSP members and £90/$160/€200 for non-members. From the ad:
The question of whether self-archiving of preprints and/or postprints by journal authors is likely to have a significant impact on journal subscription numbers is currently a hotly debated issue of considerable policy importance for scholarly publishers. The moves by funding bodies and some institutions to request or require authors to deposit postprints has given more urgency to this issue as the archives are now likely to grow in number and more importantly in their content. This study was commissioned by ALPSP to ascertain what are the major factors contributing to journal cancellations, and thus to provide some new information for a debate that has inevitably so far been short of data.

The study consisted of an online questionnaire. The wording was originally developed by ALPSP and Mark Ware Consulting, and then subject to review by a number of experienced librarians. The sample was obtained by posting requests to a number of listservs such as liblicense and SerialST. The sample was thus a self-selected one from a non-random group (those who chose to join the listservs) and this does represent a limitation of the study. Nonetheless, a good response of 340 completed questionnaires was received, which we roughly estimate represents a response rate of 4-7%, perhaps reflecting the degree of interest in the topic.

Monday, March 20, 2006

Nature editorial calls for OA database of avian flu data

Dreams of flu data, Nature, March 16, 2006 (accessible only to subscribers). An unsigned editorial. Excerpt:
"Confidentiality of sensitive national outbreak surveillance data assured!" This prominent guarantee on the website of the South East Asian Nations Infectious Diseases Outbreak Surveillance Network says it all. Open sharing of data often ends when it could compromise trade or other national interests....

[T]here is no comprehensive database of outbreaks of infectious diseases. We have better data on galaxies 10 billion light years away than on human cases of avian flu in China or Vietnam. Yet the world is imperilled by outbreaks, wherever they happen....At the moment, plague, cholera and yellow fever are the only notifiable diseases that countries must report to the World Health Organization (WHO). Vietnam has often reported human cases of avian flu months after the event, and outbreaks in animals have been concealed in many countries....Data on outbreaks in poultry are even more sparse, and mostly come from the World Organisation for Animal Health (OIE). Someone at the UN Food and Agriculture Organization (FAO) is maintaining a file of cumulative bird outbreaks from OIE and other data, and is making it available to researchers and journalists. But it is incomplete, lacks good location data and contains errors. Genetic data are also lacking. When samples are sequenced, the results are usually either restricted by governments or kept private to an old-boy network of researchers linked to the WHO, the US Centers for Disease Control and Prevention, and the FAO. This is a far cry from the Human Genome Project, in which all the data were placed in the public domain 24 hours after sequencing. Many scientists and organizations are also hoarding sequence data, often for years, so they can be the first to publish in academic journals. With the world facing a possible pandemic, such practices are wholly unacceptable. Nature and its associated journals are not alone in supporting the rapid prior exposure of data when there are acute public-health necessities.

Three cheers, then, to Ilaria Capua of the Tri-Veneto Region Experimental Animal Health Care Institute in Italy, who last month threw down the gauntlet to her colleagues by refusing to put her latest data on Nigeria and Italy in these private networks. Instead she uploaded them to GenBank [PS: which is OA] and called on her colleagues worldwide to do likewise. Only in this way can researchers establish and track the global pattern of the evolution of the bird-flu virus. Imagine scientists anywhere being able to log on to a publicly available, searchable Internet database, updated in real time, with full clinical and sequence data on each human case, and accurate and complete poultry data. Dream on....The world badly needs [an OA] database for outbreaks of avian flu. But international agencies are failing to rise to the challenge.

Steve Bryant on PubChem

David Bradley, Interview with Steve Bryant, Reactive Reports, Issue 53, March 20, 2006. Bryant is a Senior Investigator in the Computational Biology Branch of the National Center for Biotechnology Information at NIH. Excerpt:

[Q] Why do you personally feel PubChem is important?

[A] ...[T]he most valuable thing we can do with computers and molecular databases [is to] make the information as accessible to researchers as we can. There was a whole world of information about the bioactivities/properties of small molecules that was not included in our retrieval systems in as good a way as it could have been. I thought it would be worthwhile to do, as it would have a major impact on research.

[Q] Why shouldn't researchers pay for this information?

[A] ...[I]t is interesting to look back 25 years or so when it became technically possible to use computers to make molecular databases. The biologists made GenBank and Protein Databank as public repositories, but in the chemical world at the same time the same technology was used to create commercial information services. So, there were two models about information access, although why that should be no one can really say. One factor may have been that at that time, biologists didn't think of sequences or proteins as commercial products whereas in chemistry there was a long history of paying for information about molecules because there were obvious commercial opportunities. When it comes to molecular libraries, the decision of Francis Collins, Director of the National Human Genome Research Institute at NIH, and colleagues was to follow the biological model of the Human Genome Project and GenBank and to make the information freely available....

More on EBSCOhost links to OA content

Barbara Quint interviewed EBSCO VP Michael Gorrell about recent changes at the company. Here's a bit on OA:
As for future plans, Gorrell indicated that the functionality in the service lets people link out to open access sources already. He also said that EBSCOhost reaches a lot of open source publications. Customized links let people search OpenURL links. Gorrell explained that EBSCO knows that people want access to wider content, the kind often supplied by Google Scholar. The company has plans underway, including some that librarians should find “very pleasant.” Stay tuned.

PS: EBSCO started linking to OA content at least as early as January 2004.

OA for Canadian statistics

Helen Clarke reports that Statistics Canada is moving from a TA model to an OA model for its electronic publications. To recover its print costs, it will still charge for its print publications. The transition should be complete by April 24, 2006. Clarke quotes from an SC memo that I can't find at the SC web site.

A path to 100% OA

Stevan Harnad, Opening Access by Overcoming Zeno's Paralysis, forthcoming in Neil Jacobs (ed.), Open Access: Key Strategic, Technical and Economic Aspects, Chandos Publishing, 2006. Self-archived March 19, 2006.
Abstract: Open Access (OA) means free access for all would-be users webwide to all articles published in all peer-reviewed research journals across all scholarly and scientific disciplines. 100% OA is optimal for research, researchers, their institutions, and their funders because it maximizes research access and usage. It is also 100% feasible: authors just need to deposit ("self-archive") their articles on their own institutional websites. Hence 100% OA is inevitable. Yet the few keystrokes needed to reach it have been paralyzed for a decade by a seemingly endless series of phobias (about everything from piracy and plagiarism to posterity and priorities), each easily shown to be groundless, yet persistent and recurring. The cure for this "Zeno's Paralysis" is for researchers' institutions and funders to mandate the keystrokes, just as they already mandate publishing, and for the very same reason: to maximize research usage, impact and progress. 95% of researchers have said they would comply with a self-archiving mandate; 93% of journals have already given self-archiving their blessing; and those institutions that have already mandated it are successfully and rapidly moving toward 100% OA.

Lawsuits hobble recruiting for the Google Library program

Australian libraries are reluctant to join the Google Library Program until the lawsuits against it are resolved. Munir Kotadia has details in today's ZDNet Australia.

Two OA journals forthcoming from Libertas Academica

Libertas Academica is recruiting editorial board members for two forthcoming OA journals: Oncogenomics and Biomarker Insights. Pass the word to colleagues who might be interested.

The site says nothing about Oncogenomics, but already has a web site for Biomarker Insights. Excerpt:

Biomarker Insights is a peer-reviewed, open-access research journal where those engaged in biomarker research can turn for rapid communication of the latest advances in the application of biomarkers toward the discovery of new knowledge, and toward the clinical translation of that knowledge to increase the efficacy of practicing clinicians....The speed of reporting advances in biomarkers, the open access model, and the highest editorial standards with which Biomarker Insights is produced will have the combined effect of maximizing the benefit to the advance of the field.

Challenge to OA critics

The Free Our Data blog has noted that Tim Berners-Lee endorses the project (for OA to publicly-funded geodata in the UK) and promises details next week in The Guardian. Stay tuned. At the same time, however, the bloggers make this point:
The interesting point about Sir Tim, of course, is that he could have patented his work in developing the hypertext protocol (what if CERN had had a requirement that workers’ ideas were patented?) and perhaps made a lot of money - although equally, the Web would not have been taken up with the same excitement if one had had to pay a licence fee for every web page served or link clicked. Sir Tim said as much in 2004 (original article seems to have disappeared.)

The content you’re looking at now is an illustration of how freeing publicly-paid and generated data - in this case, how to implement the hypertext protocol (http to you and me), developed at a taxpayer-funded particle physics laboratory - can lead to individual and commercial implementations whose value far exceeds those that could have been realised by charging.

We’re glad to know that we’re thinking along the same lines as the man who invented the Web. But can anyone think of a counter-example - of patented or copyrighted ideas around data that have been taken up on a huge scale? To win the argument for this campaign, we need to consider any counter-example.

Mandating OA to theses and dissertations

Arthur Sale, The impact of mandatory policies on ETD acquisition, a preprint forthcoming in D-Lib Magazine, 12(4). Self-archived March 15, 2006.
This paper analyzes the data now available in Australia's coordinated Electronic Theses and Dissertations gateway to show the impact of high-level institutional policy decisions on population of the individual repositories. The paper shows that just like research article repositories, voluntary ETD deposition results in repositories collecting less than 12% of the available theses, whereas mandatory policies are well accepted and cause deposit rates to rise towards 100%. Modeling of the PhD and Master process in Australia is also carried out to indicate the delays and liabilities to be expected if mandatory policies are applied only to newly enrolled candidates.

Update (4/18/06). The D-Lib version of the article is now online.

Beyond traditional journals

Bryan Lawrence has blogged some notes on his Oxford seminar on communicating science. Excerpt:
I...concluded the big question is really how to deal with self publishing and peer review outside the domain of traditional journals, because I think for many their days are numbered (possibly apart from as formal records). Most of the ensuing discussion was predicated on the assumption that I was recommending blogging as the alternative to "real" publishing, despite the fact that earlier I had introduced the RCUK position statement on open access and I then went straight on to introduce Institutional Repositories and the CLADDIER project. So, let me try and be very explicit about the contents of my crystal ball.

The days of traditional journals are numbered, if they continue to behave the way they do, i.e. [a] Publishers continue to aggregate, and ignore the (declining) buying power of their academic markets. [b] They do not embrace new technologies. [c] They maintain outdated licensing strategies. [d] Two outstanding exemplars of journals moving with the times (for whom this is not a problem) are: Nature...[and] Atmospheric Chemistry and Physics....Early results and pre-publication discussion will occur in public using blog technologies!...Data Publication will happen, and then we will see bidirectional citation mechanisms (including trackback) between data and publications. By extending trackback to include bidirectional citation mechanisms and implementing this at Institutional Repositories (and journals) we will see traditional citation resources becoming less important....

To sum up: [1] Many journals will die if they don't change their spots. [2] Trackback linking will become very important in how we do citation. [3] Post publication annotation will become more prevalent. [4] Blogging (technologies) will add another dimension to scientific discourse.

Filling an IR: strategic serendipity

Dorothea Salo, Marketing an IR, Caveat Lector, March 18, 2006. Excerpt:

I would strongly recommend a cheap, agile, multifaceted, flexible IR marketing campaign over a single sweated-over Master Communication Plan.  The simple reason is uncertainty. Tying an IR to a single message (as many marketing plans insist that you do) presumes the existence of one message that will be effective. Even a plan that allows for tailoring different messages to different audiences presumes that you know how to identify your audiences and what messages will work with each one.  I’m quite convinced this is not so. Of the people I’ve hooked to the IR I run, a few liked the preservation aspects of it, several went for the increased impact factor, some just wanted a place to keep their finished work organized, and one or two agreed with the altruism arguments. I can’t see any correlation between willingness to participate in an IR and discipline, career status, previous ties with the library, type of research performed, or any other typically-salient characteristics of research faculty.

There may be some underlying order to all this that I simply haven’t seen. The best I can reply is that nobody else has yet seen it either, aside from the obvious knowledge that the sciences are moving faster than the humanities. (And yet… I’m working with an English professor on some stellar 19th-century-lit work too specialized to find a print publisher. There is room to maneuver, even in the humanities.)...

So that’s my advice. Don’t bother with long involved planning sessions. Don’t bother with marketing committees at first (though later on, it may well help to share information). Brainstorm a page of ideas, pick some to try, and try them. When some don’t pan out, pick others. Embrace serendipity. Listen to and act on what people tell you about the IR, and about faculty beliefs and practices. And have fun! Laugh! I’ve caught a few people, I firmly believe, just because I enjoy and believe in what I’m doing and it shows when I talk about it.

Bethesda Statement in Polish

Bozena Bednarek-Michalska has translated the Bethesda Statement on Open Access Publishing into Polish.

Gems of OA lit in chemistry

The inaugural issue of the ChemRefer Newsletter (March 2006) identifies a handful of high-quality, open-access articles and journals in chemistry.

Indian OA repository nominated for Stockholm Challenge

Frederick Noronha, Indian net archive named finalist for Swedish award, Hindustan Times, March 20, 2006. Excerpt:
OpenMED, an open access Internet archive for research works on medical and allied sciences that is hosted by an Indian government body, has been nominated as a finalist for the prestigious Stockholm Challenge 2006 award.

Backed by the National Informatics Centre, OpenMED allows authors and researchers to self-archive their scientific and technical documents. The Stockholm Challenge is a prominent global networking programme for information and communication technology (ICT) entrepreneurs and its award is given away in the Swedish capital in the summer. It aims to show how IT "can improve living conditions and increase economic growth in all parts of the world." Over the years, some 3,000 projects have been submitted for this award....

Naina Pandita, senior technical director and project coordinator of the ICMR-NIC Centre for Biomedical Information at New Delhi, explained: "It accepts both published and unpublished documents relevant to research in the medical and allied sciences including bio-medical, medical informatics, dental, nursing and pharmaceutical sciences." OpenMED accepts preprints (pre-refereed journal papers), post-prints (refereed journal papers), conference papers, conference posters, presentations, technical reports and departmental working papers and theses. In case of non-English documents, descriptive data such as information of the author, title, source and the abstract and keywords are included in English...."We expect it to promote self-archiving and open access to papers or scholarly publications in these fields," said a member of the project team...."(Our) objective is to encourage the self-archiving culture amongst medical professionals in India. The goal is to preserve valuable research publications for future medical researchers and, side-by-side, to publicise research being conducted in the country," the project's promoters said in the application that made it to the final round in Scandinavia. This project, argue its promoters, is based on E-Prints, free downloadable software, and this archive can be easily replicated. In future, the project plans to launch Open Access Journals as a non-commercial venture.

PS: OpenMED is hosted by the ICMR-NIC Centre for Biomedical Information, Bibliographic Informatics Division, National Informatics Centre. Congratulations to everyone at NIC.

Sunday, March 19, 2006

More on patenting facts

Michael Crichton, This Essay Breaks the Law, New York Times, March 19, 2006. An op-ed. Excerpt:
The Earth revolves around the Sun. The speed of light is a constant....Elevated homocysteine is linked to B-12 deficiency, so doctors should test homocysteine levels to see whether the patient needs vitamins. Actually, I can't make that last statement. A corporation has patented that fact, and demands a royalty for its use. Anyone who makes the fact public and encourages doctors to test for the condition and treat it can be sued for royalty fees. Any doctor who reads a patient's test results and even thinks of vitamin deficiency infringes the patent. A federal circuit court held that mere thinking violates the patent.

All this may sound absurd, but it is the heart of a case that will be argued before the Supreme Court on Tuesday. In 1986 researchers filed a patent application for a method of testing the levels of homocysteine, an amino acid, in the blood. They went one step further and asked for a patent on the basic biological relationship between homocysteine and vitamin deficiency. A patent was granted that covered both the test and the scientific fact. Eventually, a company called Metabolite took over the license for the patent....[W]hat the Supreme Court will focus on is the nature of the claimed correlation. On the one hand, courts have repeatedly held that basic bodily processes and "products of nature" are not patentable. That's why no one owns gravity, or the speed of light. But at the same time, courts have granted so-called correlation patents for many years. Powerful forces are arrayed on both sides of the issue....

[T]he human genome exists in every one of us, and is therefore our shared heritage and an undoubted fact of nature. Nevertheless 20 percent of the genome is now privately owned. The gene for diabetes is owned, and its owner has something to say about any research you do, and what it will cost you. The entire genome of the hepatitis C virus is owned by a biotech company. Royalty costs now influence the direction of research in basic diseases, and often even the testing for diseases. Such barriers to medical testing and research are not in the public interest. Do you want to be told by your doctor, "Oh, nobody studies your disease any more because the owner of the gene/enzyme/correlation has made it too expensive to do research?"...

Companies can certainly own a test they have invented. But they should not own the disease itself, or the gene that causes the disease, or essential underlying facts about the disease. The distinction is not difficult, even though patent lawyers attempt to blur it....Unfortunately for the public, the Metabolite case is only one example of a much broader patent problem in this country. We grant patents at a level of abstraction that is unwise, and it's gotten us into trouble in the past. Some years back, doctors were allowed to patent surgical procedures and sue other doctors who used their methods without paying a fee....Companies have patented their method of hiring, and real estate agents have patented the way they sell houses. Lawyers now advise athletes to patent their sports moves, and screenwriters to patent their movie plots....Such a situation is idiotic, of course. Yet elements of it already exist. And unless we begin to turn this around, there will be worse to come....

OSI's OA work in Eastern Europe and Ukraine

Iryna Kuchma, Melissa Hagemann, and Rima Kupryte, Open access: OSI and eIFL’s work on the international level, activities in Ukraine on the national level, a presentation (in English) at the 6. seminar za knjižnice u sustavu znanosti i visoke naobrazbe [6th Seminar for Libraries in the System of Science and Higher Education] (Zagreb, March 3-4, 2006).
Abstract: An overview is given of open access activities funded by the Open Society Institute and backed by eIFL. From general information about OSI, eIFL, and open access activities, we move to practical solutions in a few countries included in OSI programs. Finally, the situation in Ukraine is described, i.e. the national open access strategy, from negotiations and activities at the governmental level to solutions for end users.

Moshe Vardi on OA in computer science

Marianne Winslett interviews Moshe Vardi in the March issue of SIGMOD Record. (Thanks to Computational Complexity.) Excerpt:
How can computer science go about changing its publication culture? Are there areas that move just as fast as we do, and have journal papers and conferences, but conferences are not the primary vehicle? I have questions about the basic model of scholarly publications. And I find it fascinating that it is difficult to have a conversation about this on a big scale, and make changes on a big scale. We are very conservative. It is interesting that computer science has been one of the slowest disciplines to move to open access publications. Other disciplines are way ahead of us in using online publications.

PS: Vardi is one of the editors of the OA journal Logical Methods in Computer Science.

More evidence that OA to full-text books increases sales of print editions

Peter Sayer, Google scans French literature for Book Search support, InfoWorld, March 17, 2006. Excerpt:
Google has at least one supporter [among French publishers] in the form of Michel Valensi, founder of publishing company Editions de l'Eclat, who boasts of being the first French publisher to sign up for Google's partner program. Partners' books are searched in the same way, but a whole page of the book is displayed, rather than just a fragment, with links to online stores carrying the book. Google has scanned 100 of Eclat's books, which have been searched 60,000 times leading directly to 600 online sales, Valensi said. The Google deal is just an extension of what Valensi has been doing since 2000, though, when he published the first free, online edition of a book he was also selling on paper -- a French translation of Giovanni Pico della Mirandola's 15th century text "Oration on the Dignity of Man." In the previous six years, Valensi said, he had sold just 1,500 copies of the book, but in the six years since he began giving it away online, he has also sold 6,000 paper copies. Now, his company's Web site links to about 35 such books, and in time he hopes to offer the whole catalog this way.

More on Connotea tagging of OA articles in Eprints

Timo Hannay, Linking Up Research Papers Using Tags, You're It! March 16, 2006. Excerpt:

Back in my first post to this blog, I said that over here at Nature we’re interested in the question of "…how far tagging can take us in tackling the (formidable) information organisation needs of modern science." Today we’re starting on a cool (I think) new experiment that might help provide some early answers. Many of you are no doubt familiar with Matt Biddulph’s wonderful mock-up of the BBC Radio 3 website as it might work with embedded del.icio.us functionality. (See in particular Matt’s Flash movie here.) Inspired by this, we’ve just released some code that adds the same type of functionality (but this time for real) to ‘institutional repositories’ (IRs) - websites that scientists and other academics use to share their work with each other. One general problem with IRs is that, notwithstanding services like Google Scholar, a lot of their content isn’t very easy to find, and it certainly isn’t easy to browse between related items in different repositories. Our new code aims to improve things by allowing IR users to tag articles and see links to related content, all from within the IR web page itself. Behind the scenes, the software communicates with del.icio.us and/or Connotea (Nature’s own social bookmarking service for scientists). Since Connotea is open source, it will also work with any instance of Connotea Code. The good folks at the University of Southampton’s Electronics and Computer Science Department have now put this code on their institutional repository, creating our first real-world installation (yeah!)....

The recommendations (which are generated based mainly on tag co-occurrence) already seem OK to me, but they should get better as more links and tags get entered into the system.  There’s lots of different IR software out there, and our code currently only works with EPrints, which we chose because it’s very popular, is written in a language (Perl) that we’re familiar with, and has a friendly development team just down the road from us. If you’re the administrator of an EPrints repository then you can get instructions and code from here. I’m told that it’s a doddle to install.
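Hannay notes that the recommendations are generated mainly from tag co-occurrence. As a rough illustration of that idea -- not Connotea's actual code; the article IDs, tags, and function name below are invented -- ranking articles by how many tags they share with a target article can be sketched in a few lines of Python:

```python
from collections import Counter

# Hypothetical sample data: each article ID maps to the set of tags
# users have applied to it (not real Connotea data).
article_tags = {
    "eprint:101": {"open-access", "repositories", "tagging"},
    "eprint:102": {"open-access", "self-archiving"},
    "eprint:103": {"tagging", "folksonomy", "repositories"},
    "eprint:104": {"peer-review"},
}

def recommend(article_id, article_tags, top_n=3):
    """Rank other articles by the number of tags they share with the target."""
    target = article_tags[article_id]
    scores = Counter()
    for other_id, tags in article_tags.items():
        if other_id == article_id:
            continue
        overlap = len(target & tags)  # size of the shared-tag set
        if overlap:
            scores[other_id] = overlap
    return [aid for aid, _ in scores.most_common(top_n)]

print(recommend("eprint:101", article_tags))  # → ['eprint:103', 'eprint:102']
```

A real system would also weight shared tags by rarity (a rare shared tag says more than a common one), which is consistent with Hannay's point that the recommendations should improve as more links and tags are entered.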

Examples of user freedom under OA

OA doesn't just remove access barriers for individual users connecting to individual articles online. Here are two good examples of how OA sets users free to make less common uses -- without asking permission and without paying a fee. From the March 16 issue of the PLoS E-Newsletter for Institutional Members:

All content in PLoS journals is immediately freely available online under the terms of the Creative Commons Attribution License, which allows reproduction, distribution, derivative works, and commercial use as long as the source of the content and terms of the license are properly cited.

Ten thousand Members of the European Cancer Community Will Receive PLoS Article
An essay in PLoS Medicine, "When Clinical Trials Are Compromised: A Perspective from a Patient Advocate," is being reproduced by Cancer World, the scientific magazine of the European School of Oncology, which is distributed to over 10,000 members of the European cancer community and is also available free of charge online.

The World Health Organization Puts PLoS Article on CD for Distribution
A neglected diseases article from PLoS Medicine, "Rapid-Impact Interventions: How a Policy of Integrated Control for Africa’s Neglected Tropical Diseases Could Benefit the Poor," is being put on a CD that the World Health Organization (WHO) is distributing free of charge. The target audience for the CD includes Ministries of Health; district and hospital managers; management, medical, nursing and paramedical training institutions; and non-governmental organizations in developing countries, particularly where Internet access remains limited or non-existent.

Maximizing public benefit from publicly-funded science

Carolyn Raffensperger, The Tragedy of the Anti-Commons in Science, On the Commons, March 18, 2006. Excerpt:
Few argue with the premise that the public should be the primary beneficiaries of publicly funded research and technology development. But how they should benefit is the subject of hot debate. The U.S. government’s approach has been to facilitate the transfer of technology into the market by privatizing it. The idea is that the technology will be made available to the public only if a company can have exclusive access to it and effectively protect its investment. Since the 1980s Congress has passed numerous pieces of legislation to streamline the transfer of drugs, agricultural materials and other technologies into private hands. The most important single law is the Bayh-Dole Technology Transfer Act. This approach has resulted in what some have called the tragedy of the anti-commons -- excluding people from a non-depletable commons, resulting in its under-use. For instance, NIEHS and other agencies spend millions of dollars on pharmaceuticals and then transfer them to private companies who profit from the publicly funded research. Patents and licenses prevent other companies from making cheaper, more accessible versions of the drugs. By making the public pay twice, the market is actually a method for excluding the public from the very things it paid to have created....

Technology transfer and the privatization of publicly funded research are terms of the old social contract. If we are going to establish a new social contract, the relationships between the public and scientists and corporations will need to change substantially. There are better ways to provide direct benefits to the public in exchange for their research dollars. We can imagine -- and create -- a science commons where scientists are supported to investigate the most pressing questions and the public benefits from that research. If we do so, we are more likely to avoid both tragedies: the anti-commons tragedy of exclusion from the good things research creates or the commons tragedy of environmentally destructive technology. Maybe then science would be a true public good.