Open Access News

News from the open access movement

Saturday, June 20, 2009

OA journal ranks #1 in its field for impact

Gunther Eysenbach, Open Access journal JMIR rises to top of its discipline, Gunther Eysenbach's Random Research Rants, June 20, 2009.  Excerpt:

I am still shaken and thrilled by yesterday's big news: The Open Access publication Journal of Medical Internet Research (JMIR), which I created 10 years ago, has now established itself as THE leading peer-reviewed journal in the field of ehealth, or as I prefer to put it, for "health and health care in the Internet age". Yesterday, on June 19th, 2009, the Impact Factor rankings for 2008 were published by Thomson Reuters....The Impact Factor for JMIR in 2008 is now an amazing 3.6 (up from 3.0 last year, and 2.9 the year before). This has to be seen against the background that medical informatics journals are typically not cited very well and have impact factors between 1 and 2.

Perhaps the biggest news due to its high symbolic value is that JMIR is now the top, number one ranked journal in its discipline, and has finally officially overtaken JAMIA, the official Journal of the American Medical Informatics Association (2008 IF 3.4), which has been in the #1 spot in this discipline for decades. For a small, independent, low-budget journal this is a major achievement and truly a David vs Goliath situation. AMIA is probably the most influential scientific society in the medical informatics field, and its journal JAMIA enjoys significant backing by the association. JAMIA is owned and published by Elsevier. I may be wrong on this (leave a comment!), but to my knowledge this is the first time in history that an independent Open Access journal takes the top spot in its discipline, overtaking the long-term top journal in a JCR (Journal Citation Reports) category....I know that the Impact Factor has its problems as a metric, but Impact Factors continue to be a valuable measure of a journal’s quality for authors, librarians and societies, and the high impact of JMIR sends a clear message to traditional publishers as well as to societies in terms of what Open Access publishing means for impact.

JMIR is now ranked the top (#1) journal in the medical informatics category (out of 20 journals), and second (#2) in the health sciences & services category (out of 62 journals), by Impact Factor....

The new top position in the field means that we will be getting even more submissions....

Comment.  The OA impact advantage helps journals, not just authors.  TA journals may have their reasons not to convert to OA, but they can't pretend that there's nothing in the other pan of the scale.  Congratulations to JMIR and Gunther.

Updating the rights-based approach to OA

The Internet Rights and Principles Coalition of the Internet Governance Forum (IGF) is organizing the wiki-based revision of the Internet Rights Charter, originally drafted by the Association for Progressive Communications. The charter supports OA for publicly-funded research as a right.

Also see our post on the charter's last revision in November 2006.

OKD in French

Friday, June 19, 2009

Wikisource as a repository

Tim Armstrong, Using Wikisource as an Alternative Open Access Repository for Legal Scholarship, Info/Law, June 19, 2009.  Excerpt:

I delivered my “Crowdsourcing and Open Access” presentation earlier today at CALICon09....

There are plenty of sites in the world that aim to serve as repositories for legal scholarship. Some of them are run by particular law schools and serve to advertise scholarship produced by that institution’s faculty. Others, like SSRN, aggregate scholarship from a variety of sources. Wikisource differs from all of them in that its mission is broader: Wikisource doesn’t want to be a scholarly archive, it wants to be a library. The very breadth and generality of that objective, however, gives Wikisource some advantages as an open-access repository that I don’t think have been adequately explored elsewhere.

To illustrate the point, I put my recent piece on the DMCA up on Wikisource.  Here it is: Fair Circumvention, 74 Brook. L. Rev. 1 (2008). The Wikisource version, I think, improves in a number of interesting ways over the PDF version available at SSRN.

  • It includes the full text of the article, searchable, indexable, and cut-and-pasteable, on a single web page. All of which makes the article more useable and easier to find by people (including legal generalists, who might not be acquainted with SSRN) who are doing research in this area. The text is indexed by Google.
  • Wikilinks to primary source materials make it easy to verify the research. If I have mischaracterized, say, the (in)famous Universal City Studios v. Reimerdes DeCSS case, you can find out easily, because Reimerdes is also on Wikisource, just a click away. Most of the statutes cited in the piece are available, too. As more primary source authorities are added to the site, the number of links from the article can also grow. Those primary source materials would be excluded from a site that aspired only to archive research; their easy accessibility on Wikisource, in contrast, makes the research better.
  • Easy authentication and pinpoint citation because the original page scans from the published version are preserved alongside the digitized text, just a click away using the page number links that appear in the left-hand margin of the site.   (The page numbers are anchors, too, making it easy to create external links that point directly to a particular page of the article—for example, here’s p. 5).

Doing it this way entails a little extra effort, although as I tried to illustrate during my CALI talk, a certain amount of that effort can be crowdsourced. There is also a legal issue involved in ensuring that the applicable license permits the work to be hosted on Wikisource. Still, as a proof of concept, I think using Wikisource as a legal scholarship repository holds some interesting possibilities. Would be happy to hear any feedback.

Comment.  I like the idea of linking from the wiki edition to the unmodifiable pages of the published edition --what I called a "quality ratchet" when Open Medicine introduced the idea earlier this month.  I also like the idea of breaking out of PDF prison into just about any other format (but especially HTML, XML, or wiki).  Moreover, Wikisource is open to deposits from anyone, embracing scholars who don't have a repository at their institution or in their field. 

Funding for OA, in light of the Bentham case

Dorothea Salo, Opportunity in opprobrium, Caveat Lector, June 13, 2009.

... So one way to look at this unpleasant situation is as an information problem. If that suggests to you that I think librarians have a role in solving it, you know me entirely too well. In fact, I think we have to get a handle on it, because we are and will continue to be some of the organizations funding gold OA. Imagine the mess, if a well-regarded academic library funneled money to a Bentham! ...

Not a mess I would want to be in. But taking a stand gets sticky, too, because (as the wrangle at Maryland demonstrates) the last thing any academic service center wants to get involved in is telling faculty where they can and can’t publish. As gold OA takes on increasing importance, anyone with funds to disburse toward author fees may well land—or be perceived as having landed—in precisely that position. How do we even begin to think about that?

Well, one way is to think of ourselves as research funders, not unlike the NIH or the Wellcome Trust. If we’re paying the money, we deserve a say in where it goes, and we’re well within our rights to say that the like of Bentham or SJI is right out. As librarians, we make collection-development and purchasing decisions based on assessment of information quality, right? (Yes, yes, “when not prevented from doing so by Big Deals and similar less-than-savory practices,” granted.) This is the same thing, just at a different point in the process. It shouldn’t be a problem.

Of course, I’ve just begged a huge question. How do we know about the like of Bentham or SJI? Or, to make the question less black-and-white, what about double-dipping hybrid journals, the ones that will cheerfully take your money to make an article OA, but won’t adjust their subscription fees by a single penny in proportion to uptake of the OA option? Arguably, libraries have a survival interest in not funding those!

I think OASPA’s response to the Bentham situation points to part of the way forward. If OASPA membership becomes a seal of approval for all-OA publishing operations, then it’s dead simple for any library that funds author fees to hold to a policy of “if it ain’t OASPA, we ain’t paying.” ...

On the state of search tools for OA content

Dave Haden, Open access search?, Jurn blog, June 12, 2009.

... [A] search for “open access” was discouraging. There are about twenty “living-dead” [Google custom search engines] from 2006, and no large ones updated after 2006 (so far as I could tell from a quick visit).

Pouring out all this open access content is all very well, but where’s the competition and development in open access search?

And where are the simple common standards for flagging open content for search-engine discovery and sorting, for that matter?

Now of course I’m viewing things from the outside, as an independent curator and social entrepreneur, not a librarian or OA evangelist. But it seems to me that burying your PhD thesis deep in a repository cattle-car — seemingly with only a few keywords, an ugly template and an impenetrable URL for company — isn’t serving it or the author very well. ...

I did find a large Google CSE for Economics. ...

Wiley on disaggregating higher ed.

David Wiley, The Disaggregated Future of Education, slides, Dean’s Research Lecture at the University of Saskatchewan (Saskatoon, June 11, 2009). See also Wiley's notes on the presentation.

New OA suite of images for language learning

The Culturally Authentic Pictorial Lexicon is a recently expanded and re-launched OA suite of images to assist language learning. The images use the Creative Commons BY-NC license. (Thanks to Creative Commons.)

New additions to database of public domain works

Jonathan Gray, New developments on Public Domain Works!, Open Knowledge Foundation Blog, June 18, 2009.

We have now completed a major load of data into the Public Domain Works database:

There are now 125318 persons, 12840 items and 299141 works in the database. The data we have there comes primarily from two sources: people and book data from Philip Harper’s NGCOBA and recordings data from the online discographies provided by KCL’s CHARM project.

We also have a load more sound recordings data (~ 600k items) almost ready to go courtesy of Edward Betts and the Open Library. (And we are yet to even get started on the BBC GRAMS data …)

Also work on the public domain calculators is still ticking over. Gisle Hannemyr recently put together a first draft of a copyright flowchart for Norway. ...

New tool for searching journal policies

The CAUL Australian Institutional Repository Support Service has launched an Open Access Policies Search, which searches journal copyright policies across SHERPA/RoMEO, OAKList and AcqWeb.

Alma Swan interview

Sarah Bartlett has interviewed Alma Swan in a 35:26 podcast from Project Xipos.  From the blurb:

In this podcast I talk with Alma Swan from Key Perspectives. Over the past 5 years in which Open Access publishing has undergone considerable growth, Alma has been a reference point, providing useful and authoritative research and analysis into the progress of the Open Access movement, drawing upon her experience in both scholarly communications and the natural sciences. We begin by talking about trends in the core area of Open Access publishing - namely journal articles - and then extend our view to the potential growth of Open Access e-Books. We also look at the emerging area of open research data, examining both the potential benefits to research and to society as a whole, as well as considering the significant challenges to be overcome. This podcast provides a useful snapshot of the Open Access movement as well as insights into its future.
See also Alma’s blog, OptimalScholarship.

Realistic futures in which universities would save money from journal conversions to OA

Bill Hooker, Cost to libraries: OA vs TA, Open Reading Frame, June 18, 2009.  Excerpt:

In 2004, Philip Davis carried out a study of library costs in which he estimated the average subscription cost/article for a subset of ARL libraries and compared this with a range of estimated author-side fees for Gold OA, in order to determine whether libraries might pay more or less if all journals switched to OA. Here I've tried to update that study using information that wasn't available back then.

Davis set the spreadsheet up to make it easy to update his assumptions and recalculate (kudos!), and Peter Suber (among others) pointed out that at least the following assumptions should be updated:

  1. all OA journals charge author-side fees
  2. the full cost of OA fees will be borne by libraries
  3. TA journals charge no author-side fees

We now have five different studies (one recently confirmed, improved and updated) showing that in fact the majority of OA journals do not charge author-side fees. The highest proportion of no-fee journals is in the DOAJ psychology subset (90%) and the lowest is in the chemistry subset (49-58%); the most recent analysis of the entire DOAJ showed 70% no-fee.

We also know that research funders are increasingly willing to foot the bill for OA....A recent RCUK report showed that 45% of authors publishing in fee-based OA journals had their costs covered by their research funders.

Rather than pick a single number for either of these updates, I've plotted the fraction of the OA cost borne by libraries against the number of institutions at which OA is predicted to cost more than, the same as, or less than the TA model. The fractional cost borne by libraries is the product of (100 - %covered by funders)(%OA journals charging fees). (See Figs 1 and 2 below.) ...

...[T]he NIH is paying, on average, about $500/article in page charges. Since this is the largest sample we have, I've used this figure to update the spreadsheet. I added $500/article to the calculated serials expenditure/article and compared this adjusted TA cost/article to the OA costs.

I've updated two further aspects of Davis' spreadsheet. First, we now have better information about the actual range of author-side fees charged by those OA journals that do charge them. Rather than Davis' $2500 - $5000 range, I've used $1300 (PLoS ONE) to $3000 (most of the high-profile hybrid programs). If the adjusted TA cost/article falls within this range, the prediction is that the OA and TA models cost about the same from a library point of view.

Second, Davis assumed that the scholarly literature made up 50% of library serials expenditures. I don't know where this figure came from (the spreadsheet refers to a report which does not give any further information), but I think the real value is closer to 90%. My reasoning is based on my observation (see Table 2) that the average unit cost of a curated list of scholarly journals from UCOSC is about ten times the average unit cost of "all serials" from ACRL, ARL and NCES datasets....

Summary of updates.

  1. plot fractional cost borne by libraries to account for %OA journals that don't charge fees and % OA costs borne by research funders (or other bodies)
  2. add $500/article to TA model costs to account for author-side fees charged in addition to subscriptions
  3. predicted OA fee range = $1300 to $3000
  4. assume scholarly literature makes up 90% of serials expenditure

The updated spreadsheet is here, and the end result is this: ...[PS: Omitting Figure 1.]

At a fractional cost of 0.8, there are no libraries at which OA is predicted to cost more than the TA model, and at a fractional cost of 0.3 the OA model is predicted to cost less than the TA model at all 113 libraries.

To see how the %fee and %funder proportions affect the fractional cost borne by libraries, I constructed a simple matrix and highlighted the two cutoff points shown on the graph above: ...[PS: Omitting Figure 2.]

As you can see, there are a number of perfectly reasonable combinations which result in a fractional cost of 0.3 or less, at which all the libraries in the sample would save money under the OA model. (This, by the way, is exactly what Peter Suber predicted.)
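The fractional-cost formula Hooker states, and the cutoffs he reports for the 113-library sample, can be sketched in a few lines of Python. The grid values below are illustrative assumptions, not Hooker's exact inputs; only the formula (libraries' share = %journals charging fees × (100% − %covered by funders)) and the 0.3/0.8 cutoffs come from the excerpt.

```python
# Sketch of Hooker's Figure 2 matrix (omitted above): the fraction of Gold OA
# publication fees falling on libraries, over a grid of assumptions about
# (1) the share of OA journals charging fees and
# (2) the share of those fees covered by research funders.
# Grid values are illustrative; Hooker's exact grid is not reproduced here.

def fractional_cost(fee_share, funder_share):
    """Fraction of OA publication costs borne by libraries."""
    return fee_share * (1 - funder_share)

fee_shares = [0.2, 0.3, 0.4, 0.5]       # % of OA journals charging fees
funder_shares = [0.0, 0.25, 0.45, 0.6]  # % of fees paid by funders

for fee in fee_shares:
    cells = []
    for fund in funder_shares:
        fc = fractional_cost(fee, fund)
        # Per the excerpt: at 0.3 all 113 sample libraries save money under
        # OA; at 0.8 none is predicted to pay more than under TA.
        mark = "*" if fc <= 0.3 else " "
        cells.append(f"{fc:.3f}{mark}")
    print(f"fees {fee:.0%}: " + "  ".join(cells))
```

Starred cells mark combinations at or below the 0.3 cutoff, i.e. the "perfectly reasonable combinations" at which every library in the sample is predicted to save money.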

Comments.
  • I've waited three years for someone to update the deeply misleading Cornell calculation.  I'm gratified by the results and grateful to Bill.  I'm especially grateful that he did the new calculation in the matrix format I recommended (Figure 2), which shows the consequences for university budgets under a range of different assumptions about (1) the percentage of OA journals charging publication fees and (2) the percentage of funders willing to pay those fees.  This way, we don't merely replace specific false assumptions with specific new ones.
  • This calculation doesn't bear directly on university deliberations to adopt green OA mandates.  But it should block a scare tactic which the publishing lobby has used in the past to push back against university support for both green and gold OA.

Update (6/20/09).  Bill has updated his calculation to reflect some new information and correct an error.  Excerpt:

...[The new information] brings the estimated average author-side fee to $1250, well in line with the individual journal estimates I made and the published figures I found....

However! There is a flaw in my reasoning! ...

The new figures also show that the fractional cost [of publication fees borne by libraries] has to drop below 0.2 before all 113 libraries are predicted to save money in an OA model. That still seems to me to fall within a realistic range, given that 70% of journals in the DOAJ don't charge author-side fees and 45% of researchers in a recent RCUK study had their OA fees covered by their research funders, for a fractional cost of 0.135.

Nonetheless, it's worth taking a quick look at the libraries which are predicted to pay about the same in the OA and TA models. At a fractional cost of 0.4, they are...[14 high-output schools].  At a fractional cost of 0.3, only...[6 high-output schools] remain in the "pay about the same" category....

More on OA at U. Calgary

Andrew Waller and Mary Westell, Open Access initiatives at the University of Calgary, Letter of the LAA, preprint; self-archived June 15, 2009. Abstract:
This article briefly describes the suite of Open Access initiatives at the University of Calgary. These include an institutional repository, OA activities in the University of Calgary Press, the Synergies project, the Open Access Authors Fund, and an OA mandate for Libraries and Cultural Resources.

Updates on OpenWetWare

Drew Endy, State of the OWW, OpenWetWare Community, June 12, 2009. (Thanks to Michael Nielsen.)

... We currently maintain funding from the US National Science Foundation in support of OWW. To clarify one point in the recent and fantastic article by Jakob Sukale, the NSF grant expires 30 April 2010. This grant currently pays for Bill [Flanagan]’s salary and our server costs. We are currently underspending on this grant and I will likely ask for a no-cost extension which, if granted, could extend our existing funding runway to April 2011. ...

OWW maintains a statistics page here. There are ~6000 registered OWW users (roughly doubling over the past year). About 50 different users make edits to OWW pages on any given day. About 500 unique users make edits each month. Over 100,000 unique visitors browse OWW each month. This is incredible!

From a different perspective, OWW is incredibly small. We also represent a broader experiment in changing the process of research that is very much in a fragile intermediate stage of its development. ... Stated differently and from a personal perspective, I would currently be hard pressed to make a successful argument that supporting and using OWW has made the research in my own laboratory significantly better, as judged by our traditionally published results. On the one hand, we had a great experience using OWW as a platform for developing a shared reference standard for measuring promoter activity in vivo. On the other hand, using OWW as it exists today has led to increased frustration with the slow inanities to be found within the conventional research publication process, while simultaneously and naively reducing the pressure to publish more formally and enabling others outside the (v. small) OWW community to “borrow” results without giving credit. Perhaps this shouldn’t be surprising. All said, I’m more invested in OWW than ever before, and am convinced that we are figuring out a new way to do research. We just have a lot of work to do in order to make the transition complete. ...

See also our past posts on OpenWetWare.

Author's perspective on an OA book

Douglas J. Amy, Adventures in Web Publishing, Inside Higher Ed, June 18, 2009. (Thanks to Steve Foerster.)

Two years ago I was confronted with a problem faced by many academics. An author of three previous scholarly books, I had written a manuscript intended for a much wider and more general audience. Called Government is Good: An Unapologetic Defense of a Vital Institution, the book was a response to the conservative campaign to label government as “bad,” and the ongoing Republican effort to cut taxes, slash social programs, and roll back regulations protecting consumers, workers, and the environment.

Unfortunately, I could not find a popular press to take it on. And while a few university presses expressed interest, I was concerned that their relatively small budgets would mean little advertising and thus little readership by the general public. Then it occurred to me that there might be another way to get a larger audience for the book: put it up on the Web. Not just a sample chapter or two, but the whole book. It took a while to get comfortable with this idea. It would mean giving up royalties and losing the academic imprimatur of a published book. But the potential payoff of a much larger readership was tempting, so I took the plunge.

I quickly realized that simply putting up 300 manuscript pages onto the web as a plain PDF file would be pretty unappealing to most potential readers. So I got a small grant from my college and hired a Web design firm to turn the book into a Web site with an esthetically appealing format. ...

I launched the site – Government Is Good – in the fall of 2007 with absolutely no idea of how it would do. Today, I’ve had over 75,000 visitors to the site. Only half of those stayed long enough to read some of the material, but that is still an impressive number. I can safely say that more people have read this online material than have read my other three books combined. Two of these books were published by university presses and were considered successful. But for these publishers, good sales are often measured in the hundreds – numbers which now seem very modest in comparison to the tens of thousands of readers who have visited my Web site.

Besides the larger readership, there have been several other interesting, and unanticipated, advantages to going this route. For example, I’ve had readers from over 50 countries. ... This kind of broad geographical readership would clearly not have happened with a conventionally published book.

I have also received a surprising amount of feedback on my work. ...

Even more intriguing has been seeing how my site has been talked about in online discussion groups. ...

Web publishing also makes the material much more accessible for classroom use. Other political science professors have been able to assign parts of this book for their courses without having to get permission, charge a fee, or put it on reserve in the library. They merely put the Web address in their syllabi – simple and cheap. ...

Comparing author-side fees at OA and TA journals

Bill Hooker, Author-side fee comparison: OA vs TA, Open Reading Frame, June 18, 2009.  Excerpt:

I've posted a couple of times about the misconception that all OA journals charge author-side fees, and each time I've mentioned the Kaufman-Wills study which found that 75% of the toll-access journals they examined charged author-side fees in addition to subscription charges. I thought it would be useful to compare author-side fees charged by OA and TA journals.

It's easy to work out what OA and hybrid journals charge; BMC maintains a detailed list of publisher article processing charges.... 

What is much more difficult to determine is how much the average author is paying in author-side fees at toll-access journals, because the charge for a given article depends on number of pages and/or color figures, and in some cases also on whether supplementary information is included.

Below are a few examples; in each case for which I calculated a figure, I extracted the page and figure counts manually from a single issue. This is far too small a sample to be representative, but I'm just trying to get some kind of feel for the numbers. Further, the published figures I managed to find (indicated by footnotes) are consistent with my "calculated guesses". Also, the NIH estimates (scroll to section L) that it spends "over $30 million annually in direct costs for publication and other page charges" and produces "roughly 50,000 - 70,000 manuscripts", which means that the NIH is paying, on average, about $500/article in page charges. If around 8% of all new articles are Gold OA, that number goes up to about $543/article. If the Kaufman-Wills 75% figure is representative, then the average author-side fee being charged is $666/article, or $724/article if the %OA is taken into account. (Note that the %OA adjustment might be spurious because we don't know how much of the estimated $30 million is going to Gold OA fees.) ... [PS: Here omitting the actual  numbers.]
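The chain of per-article adjustments in that paragraph is easy to reproduce from the quoted figures. This sketch assumes the midpoint of the 50,000-70,000 manuscript range; note that the excerpt's $666 and $724 appear to be truncated rather than rounded ($667 and $725).

```python
# Back-of-envelope check of the NIH page-charge arithmetic quoted above.
nih_spend = 30_000_000   # "over $30 million annually" in page charges
manuscripts = 60_000     # midpoint of the 50,000-70,000 range (assumption)

per_article = nih_spend / manuscripts   # average page charge, ~$500/article

gold_oa_share = 0.08     # ~8% of new articles are Gold OA
charging_share = 0.75    # Kaufman-Wills: 75% of TA journals charge fees

ta_adjusted = per_article / (1 - gold_oa_share)   # ~$543 across TA articles only
fee_journals = per_article / charging_share       # ~$667 at fee-charging journals
both = ta_adjusted / charging_share               # ~$725 with both adjustments

print(f"${per_article:.0f}  ${ta_adjusted:.0f}  ${fee_journals:.0f}  ${both:.0f}")
```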

Update (6/21/09).  See Bill's update of this calculation.

Comparing the results of four IR surveys

Tom Singarella and Paul Schoening, Institutional Repositories (IR) Survey Summary, 2008.  Apparently a preprint.  (Thanks to Charles Bailey.)

Abstract:   This is a comparison of four short surveys on institutional repositories (IRs) that were distributed online to the regular membership of AAHSL (academic health sciences libraries) on June 14, 2005; June 26, 2006; June 21, 2007; and June 24, 2008 (closed on July 25, 2008).

Case study in challenging a copyright transfer agreement

Stuart Shieber, “Don’t ask, don’t tell” rights retention for scholarly articles, The Occasional Pamphlet, June 18, 2009.  Excerpt:

A strange social contract has arisen in the scholarly publishing field, a kind of “don’t ask, don’t tell” approach to online distribution of articles by authors.  Publishers officially forbid online distribution, authors do it anyway without telling the publishers, and publishers don’t ask them to stop even though it violates contractual obligations. What happens when you refuse to play that game?  Read on....

An author has a simple solution to the quandary of whether to distribute through a publisher’s access-limited mechanism or freely online: Do both.  Unfortunately, publishers typically restrict authors from this approach through contractual limitations stipulated in copyright assignment forms.

This brings us to the strange social contract....

The standard system for scholarly communication is thus based on widespread contractual violation and fraud.

Why don’t publishers police their contractees more carefully, as the RIAA does...?  We can only speculate that the fear of upsetting their content providers trumps their need to maintain control over the content itself, given that there is no evidence that the online availability is hurting their revenues....

Nonetheless, individual authors still breach contracts regularly as they act to maximize their career advancement possibilities.  To many, including myself, this state of affairs is untenable.  I am not willing to routinely violate contracts in this way.  Consequently, I and others have for some time reconciled the two distribution mechanisms explicitly, by amending the contractual conditions of copyright assignments.  For many years, I have as a matter of course refused to sign copyright assignment forms that do not give me the right of noncommercial online distribution of my work. Originally, I would use alternative copyright assignments that I wrote myself.  More recently, I have been attaching the SPARC addendum to publishers’ assignment forms, and then the Science Commons addenda that superseded it.

In the many years that I have been routinely replacing or modifying copyright assignments, I have never had a complaint (or even an acknowledgement) from a publisher.  In retrospect, this may make sense.  Since the contractual modification applies only to a single article by a single author, it is unlikely that anyone looking for copyright clearance would even know that all copyright hadn’t been assigned to the publisher.  And in any case publishers must realize that authors act as if they have a noncommercial distribution license whether they formally retain one or not.

I say that I’ve never had a complaint from a publisher, and that has been true with one exception.  This post describes that singular case....

I describe my experience in challenging an irrational and detrimental license clause, and how it spiraled into a battle that resulted in the publisher changing its policy for the journal as a whole. My experience is certainly not unique but accounts are rare, so I encourage others to share their experiences with successful (and unsuccessful) rights retention negotiations with journal publishers in the comments section....

[PS:  Here omitting the long and fascinating story.]

What is the moral of this story?  First, all participants — including the editorial board and editor-in-chief of the journal, the managing editor, the publisher’s staff — were people of good will. They all acted in ways they thought in the best interest of the institutions they represented and the larger missions of those institutions.  However, to a great extent, they may not have fully thought out the connections between the policies they acted under and the missions.  The editorial board may not have realized that the journal’s policy embargoed author distribution; certainly the journal’s contributors didn’t, or chose to ignore it.  The publisher may not have realized the inconsistencies between the journal policies and the facts-on-the-ground.

But it is also apparent that authors are far too acquiescent in the process of rights retention with publishers.  We are overly willing to accept the rulings of publishers as a fait accompli.  Despite the fact that publishers assert that their policies are supported by their editorial boards, editorial boards are in fact responsive to reasoned arguments.  And although a negotiation for rights retention between an author and a large commercial publishing company asymmetrically disfavors the author, one in which the author is supported by the editorial board is a different matter entirely.  This example calls for taking advantage of rights retention negotiations to enlist editors and editorial boards in the process of expanding access to scholarly articles in a way consonant with law, moving past the “don’t ask, don’t tell” social contract.

Thursday, June 18, 2009

Presentations from Italian conference

The presentations from CIBER seminar (Palermo, June 3-5, 2009) are now online. Some are on OA. (Thanks to Fabrizio Tinti.)

On CLOCKSS trigger events

Victoria Reich, From Dark Archive to Open Access: CLOCKSS Trigger Event Lessons, Against the Grain, April 2009. Only this description is OA, at least so far:
What is a [CLOCKSS] trigger event and when do these materials become available to us?
See also our past posts on CLOCKSS.

On OpenWetWare

Jakob Sukale, Don’t Hide your Research, Share it!, Lab Times, April 7, 2009. (Thanks to Michael Nielsen.) Description:
Technology is slowly allowing scientists to take information distribution back into their own hands, as demonstrated by OpenWetWare. Not only does it allow researchers to publish bits of information that do not fit in with the conventional channels of journals, but it also serves as a platform to interact with other researchers in an environment that is not owned by a profit-seeking company.
See also our past posts on OpenWetWare.

More on calculating the size of the public domain

Rufus Pollock, The Size of the Public Domain, miscellaneous factZ, June 9, 2009.

... Having already obtained estimates of the number of items (publications) produced each year based on library catalogue data, our next step is to convert this into an estimate of the “size” of the public domain. (NB: as already discussed, “size” could mean several different things. Here, at least to start with, we’re going to take the simplest and crudest approach and equate size with number of publications/items.)

The natural, and most obvious, approach here is to go through our 1 million+ items and compute their public domain status (as discussed in this earlier post). Unfortunately, as detailed there, this is problematic because we often have insufficient information in library catalogues with which to compute PD status with certainty — in particular, author death dates are frequently absent. Thus, it will be necessary to fall back on some approximate method.

For example, we can base PD status on simple publication dates: if a book was published, say, 140 years ago it is very likely in the public domain — for it to be in copyright its author must have lived more than 70 years after the book came out (remember copyright lasts for life plus 70 years in the EU)! Conversely, any publication less than 70 years old is almost certainly not in the public domain. For periods in between we can assume some proportion of publications are PD, starting close to zero for more recent items and rising towards one for older ones. A calculation along those lines is provided in the following table: [Note: omitting table.] ...

So, based on the assumptions regarding PD proportions given in the table, there are somewhat over 600 thousand PD books according to the holdings of Cambridge University Library (of which just over half, approx. 390k are from before 1870). The British Library dataset is approx 4x as big as Cambridge University Library and the numbers scale up roughly proportionately giving a total of over 2.4 million items.

Of course this is a fairly crude approach based purely on publication date and it could be improved in a variety of ways, most notably by using the authorial birth date information which is usually present in catalogue data (we can also use death date information where present). This will be the subject of the next post.
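The date-based estimate Pollock describes can be sketched in a few lines. The period bins, PD proportions, and holdings counts below are hypothetical stand-ins for the omitted table (the counts are loosely modeled on the Cambridge figures quoted above, the proportions are invented), chosen only to illustrate the arithmetic.

```python
# Illustrative sketch of the date-based public-domain estimate.
# Bins, proportions, and counts are hypothetical, not Pollock's data.

# Assumed share of items in the public domain, per publication period
pd_proportion = {
    "pre-1870": 1.0,   # >140 years old: effectively certain to be PD
    "1870-1899": 0.9,
    "1900-1919": 0.6,
    "1920-1939": 0.2,
    "post-1939": 0.0,  # <70 years old: almost certainly in copyright
}

# Hypothetical catalogue counts per period for one library's holdings
holdings = {
    "pre-1870": 390_000,
    "1870-1899": 120_000,
    "1900-1919": 150_000,
    "1920-1939": 200_000,
    "post-1939": 400_000,
}

# Weighted sum: expected number of public-domain items
pd_estimate = sum(holdings[p] * pd_proportion[p] for p in holdings)
print(f"Estimated public-domain items: {pd_estimate:,.0f}")
# → Estimated public-domain items: 628,000
```

Refining the estimate, as the post's conclusion suggests, would mean replacing the flat per-period proportions with probabilities derived from author birth and death dates where the catalogue records them.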

See also our past posts on Pollock's related work (1, 2).

UK report on orphan works and digitization

Naomi Korn, In from the Cold: An assessment of the scope of ‘Orphan Works’ and its impact on the delivery of services to the public, report prepared for the Strategic Content Alliance and the Collections Trust, April 2009. See also this blog post by the Strategic Content Alliance. (Thanks to Research Information.) From the executive summary:

... The flow of public sector content and the maximisation of the potential of its value is being disrupted by both the resources necessary to manage copyright and, in particular, Orphan Works. Despite the recognised extent, impact and problem of Orphan Works, particularly for digitisation activities across the globe, there has been a lack of credible evidence to evaluate the scale of the problem across the public sector in the UK. The absence of such an evidence base means that it is nearly impossible to address this problem legislatively and/or through the implementation of suitable licensing schemes. It also means that the problem cannot be managed nor solutions sought to prevent the occurrence of these works in the future.

In recognition of the substantial obstacles created by Orphan Works across the public sector, as well as the lack of a statistically viable evidence base to underpin any potential solutions, the Collections Trust and the Strategic Content Alliance have been working together on a joint initiative to assess the impact of Orphan Works on the delivery of services to the public. The ‘In from the Cold’ project is the first research of its kind surveying the extent of Orphan Works across the UK’s public sector, drawing on international responses as well as qualitative data from over 80 UK-based public sector bodies. ...

The scale and impact of Orphan Works across the public sector confirm that the presence of Orphan Works is in essence locking up culture and other public sector content and preventing organisations from serving the public interest. Works of little and/or variable commercial value but high academic and cultural significance are languishing unused. Access to an immense amount of this material, essential for education and scholarship, is consequently badly constrained, whilst scarce public sector resources are being used up on complex and unreliable ‘due diligence’ compliance. Without any kind of UK or European Union-wide legal certainty, there will remain a major risk for all users of Orphan Works. The quantity of Orphan Works and their impact is only accelerating as content is being created and digitised without adherence to any single internationally recognised standard for capturing provenance information.

The data and anecdotal feedback suggest that many public sector organisations are themselves unsure as to the extent of the problem, and that staff awareness and understanding are often limited. There are also suggestions that often works are selected for digitisation based on the fact that they do not pose any copyright issues, thus creating a black hole of 20th century content. These issues stress the need for an informed and skilled public sector to deal with all the issues associated with copyright-related materials, the necessity for access to resources to deal with Orphan Works, and an informed and proportionate understanding of the nature of the risks associated with the use of these works.

It is crucial that policy makers recognise the problems that public sector bodies face in managing and providing public access online to a vast range of works in copyright (including Orphan Works), and create a suite of appropriate legislatively based solutions. Whether the answer is a UK or an international one, involving a change in practice and interpretation and/or a change in legislation, this is clearly a matter of urgency. Without these legal safeguards, the contribution of public sector content to a global digital landscape will continue to be severely curtailed and the levels of public resources to manage copyright will be unacceptable.

OA should be part of a Canadian digital action plan

Michael Geist, In Search of A Canadian Digital Action Plan, Michael Geist, June 16, 2009. See also Geist's related newspaper column (as published in the Toronto Star and Ottawa Citizen, or Geist's longer draft).

In recent months, there has been growing support for a national digital strategy. ...

Since broad principles rarely generate action, the government should forego the conventional strategy and move directly to an action plan with specific deliverables. ...

After years of closed, "walled garden" approaches, the world is embracing the benefits of openness. The City of Vancouver recently adopted an openness policy that establishes a preference for open standards, open source software, and open government data. The federal government should do the same, promoting the use of cost-effective open source software and the benefits of commercial and civic activity around accessible government data. ...

The openness principle should also cover access to taxpayer-funded research. In recent months, the United States and the European Union have taken strong steps toward making their research openly available, with legislative mandates that require researchers who accept public grants to make their published research results freely available online within a reasonable time period.

In Australia, Senator Kim Carr, who serves as the Minister for Innovation, Industry, Science and Research, has remarked that "to the maximum extent practicable, information, research and content funded by the Australian governments ... should be made freely available over the Internet as part of the global public commons. This should be done while the Australian Government encourages other countries to reciprocate by making their own contributions to the global digital public commons." Canada can ill-afford to remain a bystander as other countries create an open global science commons. ...

Canada could also get on with the job of creating a national digital library by digitizing millions of Canadian books for the benefit of Canadian authors and the broader public. Moreover, groups like the CBC and the National Film Board should be working to digitize thousands of hours of Canadian film, television shows, and radio programs. ...

New manifesto on the commons

Reclaim The Commons is a manifesto launched at the World Social Forum (Belem, Brazil, January 27-February 1, 2009). (Thanks to Bienes Comunes.)

Humankind is suffering from an unprecedented campaign of privatization and commodification of the most basic elements of life: nature, culture, human work and knowledge itself. In countless arenas, businesses are claiming our shared inheritance - sciences, creative works, water, the atmosphere, health, education, genetic diversity, even living creatures - as private property. A compulsive quest for short-term financial gain is sacrificing the prosperity of all and the stability of the Earth itself. ...

Aggressive intellectual property policies harm those suffering from neglected diseases or who cannot purchase patented medicines, reduce cultural diversity, limit access to knowledge and education, and promote a global consumerist culture. ...

As more citizens discover this reality, a new vision of society is arising - one that honors human rights, democratic participation, inclusion and cooperation. People are discovering that alternatives and commons-based approaches offer practical solutions for protecting water and rivers, agricultural soils, seeds, knowledge, sciences, forests, oceans, wind, money, communication and online collaborations, culture, music and other arts, open technologies, free software, public services of education, health or sanitation, biodiversity and the wisdom of traditional knowledge.

The signers of this Manifesto, launched at the World Social Forum of 2009, call upon all citizens and organizations to commit themselves to recovering the Earth and humanity's shared inheritance and future creations. ...

June issue of JSP

The June issue of the Journal of Scholarly Publishing is now online.  Here are the OA-related articles (accessible only to subscribers, at least so far):

  1. Ji-Hong Park, Motivations for Web-Based Scholarly Publishing: Do Scientists Recognize Open Availability as an Advantage?

    Abstract:   The open availability of journal articles is expected to encourage scholars to publish in Web-based publishing venues, as it may provide more visible and wider dissemination of their research. Some studies, however, report no evidence of such relative advantages, although an advantage may be conferred by other factors. Despite emerging disputes about the effects of open availability, scholars' perceptions of the phenomenon are not well understood. Do they recognize the advantages of open availability? Or do they consider other factors more important? This study sought to answer these questions by examining reasons why scholars publish in open-access venues and the extent of their motivations. To accomplish this goal, results were tallied from a Web-based survey of 1104 scientists around the world. The data analysis identified eleven relevant motivational factors: six attitudinal factors, two perceived control factors, and one demographic factor. Together, these factors significantly influenced the intention to adopt open-access publishing. Factors related to social influence and perceived topical compatibility appeared to be insignificant. The influence of attitudinal and perceived control factors, however, varied based on tenure status. The biggest difference between tenured and untenured groups was the rank of perceived visible advantages, implying that open availability has different levels of significance depending on tenure status.

  2. Jingfeng Xia, Library Publishing as a New Model of Scholarly Communication

    Abstract:   This article briefly compares the history, current practices, and trends of library publishing and institutional repositories, but focuses on journal publishing by academic libraries. By introducing some foreign university publishing models, it recommends an institutional concentration, rather than a subject orientation, of library journals and suggests a diversification of library publishing.

White House office asks for input on managing federal information

Michael Fitzpatrick, Transparency: Access to Information, OSTP Blog, June 10, 2009.

The Federal government is the largest single producer, collector, consumer, and disseminator of information in the United States. Providing meaningful access to this information is a key goal of President Obama’s Open Government Initiative.

As part of the Open Government Initiative, the President tasked Office of Management and Budget (OMB) Director Peter Orszag with issuing an Open Government Directive to Federal agencies. The OMB Directive will be informed by recommendations being developed by the Chief Technology Officer. Consistent with the President’s goals of promoting government transparency, participation, and collaboration, the public has been invited to offer ideas and suggestions.

With this blog post, OMB’s Office of Information and Regulatory Affairs (OIRA) is pleased to join this discussion. One of OIRA’s core responsibilities is to oversee the implementation of Federal information resource management policies. As we think ahead to the development of OMB’s Open Government Directive, we’d like your opinions and comments. ...

OIRA is interested to hear the public’s thoughts on two matters in particular, OMB circular A-130 and [the Freedom of Information Act].

1. OMB Circular A-130, “Management of Information Resources,” is OMB’s principal guidance to agencies on information resources management in general, and on information dissemination principles in particular. Your views on the current Circular, and your suggestions to improve it, are invited. Specifically:

  • Are the basic assumptions and considerations laid out in A-130 which form the basis for the policy in it accurate and up-to-date for today’s environment?
  • Are there any critical gaps or holes which need to be addressed?
  • Are any of the sections out-of-date to the point where they encourage information policy principles which are no longer effective, but instead counterproductive?
  • Given the emphasis on information policy and capital planning of IT investments which are subject to those policies, does the current relationship between these two areas in A-130 need to be updated or altered?
  • Besides the high level principles, should A-130 contain more specific actions for agencies to carry out in order to encourage better adoption of efficient and effective policies for management of an agency’s information resources? ...

Comment. This is the closest yet that the Obama administration's Open Government Initiative has come to addressing public access to government-funded research. See Trosow's argument that Circular A-130 extends to extramural research funded by the federal government.

See also our past posts on Circular A-130.

Low-cost digitization projects in eIFL countries

Repke de Vries and Arnold Hirshon, eIFL Case Studies on Low Cost Digitisation Projects, eIFL, undated but recent. 

See esp. the chapter on Results of Digitisation Projects and Access.  Table 8 in that chapter ("Means of User Access to the Digitised Content") shows that in 10 of the 12 surveyed countries, users don't have to pay for access to the resulting digital editions.

NPG permits text-mining on green OA manuscripts

Nature Publishing Group allows data- and text-mining on self-archived manuscripts, a press release from NPG, June 18, 2009.  Excerpt:

Nature Publishing Group (NPG) will explicitly permit academic reuse of archived author manuscripts. Head of Content Licensing David Hoole announced the development today at the OAI6 meeting in Geneva, Switzerland. Researchers can now data-mine and text-mine author manuscripts from NPG journals archived in PubMed Central and other academic repositories.

"NPG supports reuse for academic purposes of the content we publish. We want the excellent research that we publish to help further discovery, and recognize that data-mining and text-mining are important aspects of that," said David Hoole.

Under NPG's terms of reuse, users may view, print, copy, download, and text- and data-mine the content for the purposes of academic research. Re-use should only be for academic purposes; commercial reuse is not permitted. Full conditions are available [here].

The re-use permissions apply to author manuscripts, of articles published in NPG's journals, which have been archived in PubMed Central, UK PubMed Central (UKPMC) and other institutional and subject repositories. The terms were developed in consultation with the Wellcome Trust, the leading biomedical research charity....

"The Wellcome Trust is supportive of NPG's efforts to make archived content more reusable," said Sir Mark Walport, Director of the Wellcome Trust. "This is an important development because it shows that reuse can be facilitated, independent of business model, for text-mining and academic research." ...

NPG's re-use terms will be included in the metadata of these archived manuscripts.

This development is the next step in NPG's support for self-archiving. NPG's License to Publish encourages authors of original research articles to self-archive the accepted version of their manuscript, for public access six months after publication.

Forty-three journals published by NPG offer a free Manuscript Deposition Service to help authors fulfil funder and institutional mandates for public access....

NPG is currently working with partners to expand this to other repositories using the SWORD protocol.

Comment.  OA supporters have disagreed on whether text-mining is covered by fair use (or fair dealing etc.) or whether it requires fresh permission.  Regardless of where you came down on that, it's good to have explicit permission.  (On the other hand, if permission is unnecessary, then it wouldn't be good if researchers and publishers began to believe that it was; but that's a different issue.)  I regard this as a small but welcome step beyond gratis green OA to libre green OA.

Special issue of MBI on institutional and disciplinary repositories

The new issue (vol. 9, no. 1) of GMS Medizin-Bibliothek-Information is devoted to the Green Road to Open Access - Institutionelle und fachliche Repositorien.  (Thanks to the Informationsplattform Open Access.) 

All 12 articles are relevant.  Two are in English and the rest in German.

EThOSnet presentations

The presentations from EThOSnet Web Services Day (London, June 2, 2009) are now online.

Recent news on PSI

Wednesday, June 17, 2009

Case study of a journal using OJS

Michael Felczak, Richard Smith, and Rowland Lorimer, Online Publishing, Technical Representation, and the Politics of Code: The Case of CJC Online, Canadian Journal of Communication, June 2008.  Not new but newly OA; CJC offers OA after a 12-month moving wall.

Abstract:   The Canadian Journal of Communication (CJC) began to experiment with online technologies in 1994, in part as a response to the increasing commodification of scholarship by commercial academic publishers. This article reviews and reflects on the CJC’s online publishing efforts over the past decade and suggests that online publishing technology is a site of struggle that is situated by and situates academics, publishers, and readers along interdependent axes of agency, citizenship, and commodification. Today, the CJC uses and contributes to the Open Journal Systems (OJS) publishing technology developed by the Public Knowledge Project. We argue that academic-initiated undertakings such as OJS and the Canada-wide Synergies project present academics with strategic opportunities to define and control online scholarly publishing.

More OA documents from Oriental Institute backlist

The Oriental Institute of the University of Chicago has released OA editions of about 50 documents from its backlist, many out of print, since our last post on the Institute in April. (Thanks to Charles Ellwood Jones.)

See also our past posts on the Oriental Institute.

OA journal drops paper edition

The journal Greek, Roman, and Byzantine Studies has posted this announcement, undated but apparently recent. (Thanks to Charles Ellwood Jones.)

Volume 49 (2009) will be the last volume of GRBS printed on paper. Beginning with volume 50, issues will be published quarterly on-line on the GRBS website, on terms of free access. We undertake this transformation in the hope of affording our authors a wider readership; out of concern for the financial state of our libraries; and in the belief that the dissemination of knowledge should be free.

The current process of submission and peer-review of papers will continue unchanged. The on-line format will be identical with our pages as now printed, and so articles will continue to be cited by volume, year, and page numbers.

Our hope is that both authors and readers will judge this new medium to be to their advantage, and that such open access will be of benefit to continuing scholarship on Greece.

Comment. The announcement frames the change as a conversion to OA, but I've titled the post "OA journal drops paper edition" because I believe that's more accurate. Archived versions of the journal site show that the journal has provided OA to complete issues for several years. But it may be the case that the OA edition previously was delayed relative to the paper edition, and that now there will be no such delay.

Elsevier lobbying UK universities to derail OA archiving

Zoë Corbyn, Publisher 'threat' to open access, Times Higher Education, June 18, 2009.  Excerpt:

A multinational journal giant is understood to be courting vice-chancellors in an effort to win their support for an alternative to open-access institutional research repositories.

Elsevier is thought to be mooting a new idea that could undermine universities' own open-access repositories. It would see Elsevier take over the job of archiving papers and making them available more widely as PDF files.

If successful, it would represent a new tactic by publishers in their battle to secure their future against the threat posed by the open-access publishing movement.

Most UK universities operate open-access repositories, where scholars can voluntarily deposit final drafts of their pay-to-access journal publications online. Small but growing numbers are also making such depositions mandatory.

An internet posting earlier this month alerted repository managers to Elsevier's move. "Rumours are spreading that Elsevier staff are approaching UK vice-chancellors and persuading them to point to PDF copies of articles on Elsevier's web-site rather than have the articles deposited in institutional repositories," the memo, on a mailing list operated by the Joint Information Systems Committee, said.

"The argument being used is that this will be cheaper than maintaining full text within repositories. If these reports are true, my guess is that Elsevier is using these arguments to undermine deposit mandates." The author of the post, Fred Friend, a consultant and former library director, said he wanted repository managers to be aware of the situation.

He said a repository operated by a journal publisher could set access conditions that undermine the needs of researchers and make it hard to "mine" the data....

Stevan Harnad, a professor at the University of Southampton who champions institutional repositories, said he was not surprised by the development. "If vice-chancellors are persuaded to adopt this policy, it would give repository access only to an unsatisfactory version (PDFs will not enable re-use for research purposes) and access on Elsevier's terms," he said.

Deborah Shorley, director of library services at Imperial College London, said she was not aware of Elsevier's activities, but added that "we have to make sure the control remains in the right place, which is with researchers".

Shira Tabachnikoff, director of corporate communications at Elsevier, confirmed that preliminary discussions had taken place with some institutions but would not go into detail on their nature.

"Institutional repositories might not be the best way for institutes to showcase their research," she said. "These discussions are about working with them to find improved methods."

She added that problems with institutional repositories include the archiving of incomplete papers and manuscripts containing errors, and the duplication of costs.


Comments.

  • When universities launch OA repositories and policies to fill them, they do it for a reason.  They will not reverse course and turn control over to Elsevier instead.  Deborah Shorley at ICL responded exactly as I'd hope universities would respond.
  • Also see Fred Friend's original memo from June 2.

More OA resources on H1N1 flu

See also our past posts on H1N1 flu.

Video on OA in Spanish

Miguel Barrera Maureira, Open Access - Rol e Importancia, June 16, 2009; a 30-minute video on OA, in Spanish.

Interview with IssueLab

Jane Park, IssueLab’s Lisa Brooks on Opening Up Research, Creative Commons, June 16, 2009. Interview with Lisa Brooks of IssueLab. Also see the same interview in comic form.

... IssueLab is an open source archive of research produced by nonprofit organizations, university-based research centers, and foundations. We track research across thirty-four social issue areas. Research contributors categorize their works in up to three issue areas and further sub-categorize as needed.

Archiving is part one; part two is dissemination. Daily we get in touch with people (nonprofit professionals, researchers, policy professionals, academics, etc.) who have expressed interest in the work we collect. As well, we start new relationships with people interested in social policy, or the sector, or research, or all of the above. We maintain a number of communication channels including our website, RSS news feeds (one per issue area plus a comprehensive give-me-everything-you’ve got feed), e-newsletters, we Twitter, we have a Facebook fan page, we run a LinkedIn policy discussion group. We also have an Open Archives Initiative-compliant data provider set up for data sharing. And we have data partners that carry titles from our archive that fit with their mission. ...

IssueLab is an open access archive; it would be ludicrous of us to create content and make it difficult or impossible for people to access and share it. We use a Creative Commons license because we want to share what we do. We follow Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) in our data collection practices for the same reason. We chose BY-SA because it makes it crystal clear what people can do with our content — share and/or remix — and we want people to do just that. ...
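Since the interview mentions exposing records via OAI-PMH, here is a minimal sketch of what a harvester does with a ListRecords response. The XML below is a hand-made sample in simplified Dublin Core form, not output from IssueLab's actual endpoint; a real harvester would fetch pages over HTTP and follow resumption tokens.

```python
# Minimal sketch: extract Dublin Core titles from an OAI-PMH
# ListRecords response.  The sample is hand-made and simplified
# (real oai_dc wraps fields in an oai_dc:dc container); a real
# harvester would fetch this XML from the repository's endpoint.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

sample = """
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc xmlns="http://purl.org/dc/elements/1.1/">
          <title>Example Report on Social Policy</title>
          <creator>A. Researcher</creator>
        </dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

root = ET.fromstring(sample)
records = list(root.iter(OAI + "record"))          # one per archived item
titles = [t.text for t in root.iter(DC + "title")]
print(len(records), titles)
```

Because every OAI-PMH provider serves the same envelope, the same few lines work against any compliant archive, which is exactly why data partners can carry IssueLab titles without bespoke integration work.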

See also our past post on IssueLab.

Recommended OA science resources

Barry Brown and Paul Piper, Freely Available Science Information Resources on the Web, Searcher, June 2009. Only this description is OA, at least so far:
Librarians facing budget cutbacks should check out these science resources available on the internet—and usually for free—recommended by Barry Brown, science librarian, and Paul Piper, reference librarian, who say the depth of information, from journal articles to gray literature, on the open web these days is quite impressive.
See also this list of links referenced in the article.

Forthcoming OA journal on autism

Autism Insights is a forthcoming peer-reviewed OA journal published by Libertas Academica. See the publisher's announcement. The journal's launch is anticipated by September 2009. Authors retain copyright and articles are published under the Creative Commons Attribution License. The article-processing fee is $1395, subject to discount or waiver.

Presentations from Open Repositories conference

The presentations from Open Repositories 2009 (Atlanta, May 18-21, 2009) are now online.

Presentations from Council of Science Editors meeting

The presentations from Show Me the Data — The Science of Editing and Publishing (Pittsburgh, May 1-5, 2009) are now online.

New issue of Learned Publishing

The July 2009 issue of Learned Publishing is now online. See especially:

Language in OA texts is like language in TA texts

Karin Verspoor, K. Bretonnel Cohen, and Lawrence Hunter, The textual characteristics of traditional and Open Access scientific journals are similar, BMC Bioinformatics, June 15, 2009.  Provisional abstract:

Background:  Recent years have seen an increased amount of natural language processing (NLP) work on full text biomedical journal publications. Much of this work is done with Open Access journal articles. Such work assumes that Open Access articles are representative of biomedical publications in general and that methods developed for analysis of Open Access full text publications will generalize to the biomedical literature as a whole. If this assumption is wrong, the cost to the community will be large, including not just wasted resources, but also flawed science. This paper examines that assumption.

Results:  We collected two sets of documents, one consisting only of Open Access publications and the other consisting only of traditional journal publications. We examined them for differences in surface linguistic structures that have obvious consequences for the ease or difficulty of natural language processing and for differences in semantic content as reflected in lexical items. Regarding surface linguistic structures, we examined the incidence of conjunctions, negation, passives, and pronominal anaphora, and found that the two collections did not differ. We also examined the distribution of sentence lengths and found that both collections were characterized by the same mode. Regarding lexical items, we found that the Kullback-Leibler divergence between the two collections was low, and was lower than the divergence between either collection and a reference corpus. Where small differences did exist, log likelihood analysis showed that they were primarily in the area of formatting and in specific named entities.

Conclusions:  We did not find structural or semantic differences between the Open Access and traditional journal collections. Research on Open Access full-text articles should generalize to the biomedical literature as a whole.

Comment.  This is not a surprising result.  But it opens the door for NLP researchers to take full advantage of the freely available and rapidly growing sample served up by the OA movement.
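For readers curious about the lexical comparison the abstract describes, here is a toy sketch of Kullback-Leibler divergence between word-frequency distributions. The two miniature "corpora" and the add-one smoothing are my own simplifications for illustration, not the paper's actual methodology.

```python
# Toy illustration of comparing lexical distributions with
# Kullback-Leibler divergence.  Corpora and smoothing are invented
# simplifications, not the method used in the paper.
import math
from collections import Counter

def kl_divergence(p_counts, q_counts):
    """D(P || Q) over the shared vocabulary, with add-one smoothing."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + len(vocab)
    q_total = sum(q_counts.values()) + len(vocab)
    d = 0.0
    for w in vocab:
        p = (p_counts.get(w, 0) + 1) / p_total  # smoothed probabilities
        q = (q_counts.get(w, 0) + 1) / q_total
        d += p * math.log(p / q)
    return d

# Two tiny "corpora" with near-identical vocabularies
oa = Counter("the gene expression profile shows the pathway".split())
ta = Counter("the gene pathway analysis shows the profile".split())
print(kl_divergence(oa, ta))  # small value for similar word distributions
```

A divergence near zero, as the study reports for its OA and traditional collections, indicates that the two corpora use words with very similar relative frequencies.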

Another idea for building OA into the Google Book Settlement

Peter Eckersley, Google Book Search Settlement: Foster Competition, Escrow the Scans, EFF, June 11, 2009.  (Thanks to Charles Bailey.)  Excerpt:

There is mounting concern in some quarters that the Google Book Search settlement (see previous posts here, here, and here) could have anticompetitive effects. Everyone (including Google) seems to agree that, all else being equal, we shouldn't want a world where Google is the only entity that is scanning and providing online access to books, particularly the majority of out-of-print books whose owners can't be found (i.e., "orphan works").

So what would be necessary to create a marketplace with an opportunity for real competition? Obviously, entities other than Google will have to be able to get the same kind of blanket copyright license on comparable terms. Unfortunately, the proposed settlement makes Google the only company that can get a blanket license that covers orphan works — that issue has received considerable attention.

But those who are worried about market entry and long-term competition in this arena should also be thinking about another thing competitors need: access to the scans themselves.

The raw scans themselves should not be subject to copyright protection. But if Google hoards the scans, preventing bulk copying (with either legal or technical measures), then competitors will be forced to spend millions to re-scan the very same books in order to compete with Google. This not only is a barrier to entry, but also entails enormous long-term social waste — do we really want a world where every book needs to be re-scanned, over and over, by anyone who wants to enter this market?

Of course, Google (and anyone else) who wants to undertake the expensive task of scanning books should be entitled to some opportunity to recoup their costs. But it's hard to see why that should translate into an eternal barrier to entry for others. It makes no sense to require the 5th or 10th or 99th newcomer to spend millions to scan books that have already been scanned multiple times.

One good compromise might be to require that anyone who takes a blanket license (whether under the Google Book Search settlement, or under any legislation that might expand the settlement to others) must deposit a copy of the raw scans that they create with the Library of Congress or with the entity that administers the blanket license (e.g., the Books Rights Registry). After a period of years, let's say 14, the term of the Founder's Copyright, those scans should be made available at no cost to any others who take the relevant copyright licenses.

This would not only encourage market entry and competition in the online digital books arena, but would also foster innovation in the field. There's nothing that encourages digital innovation quite like access to an enormous dataset....

It makes no economic sense for us to force every future pair of graduate students who want to experiment with the book dataset to spend those hundreds of millions of dollars before they can launch their new startup. On the other hand, Google deserves some fair reward for navigating the obstacles and getting the books scanned. A compromise like a 14-year escrow rule might be just the way to achieve that.


  • I like this idea very much, and not just because it supports OA and not just because it's a compromise that stands a chance of being adopted.  Many, and perhaps most, of the books Google is scanning are provided by public universities and libraries which acquired and curated them using public funds.  (Even the private universities and libraries providing books are supported by public subsidies through untaxed property and tax-deductible contributions.)  Eckersley's argument that the job is too large to repeat is sound -- i.e., the non-exclusivity of Google's rights is just a formality.  But it can be buttressed by the argument that the public already has an investment in this project and should get more out of it than the opportunity to buy access from a new private monopoly.
  • By my count, it's the third proposal for building support for OA into the Google Book Settlement.  For the others, see (1) Charlie Nesson's proposal from April that a cut of the revenue from Google-scanned orphan works should fund an Open Access Trust, and (2) the proposal from Germany's Coalition for Action: Copyright for Education and Research (Aktionsbündnis:  Urheberrecht für Bildung und Wissenschaft), earlier this month, that Google should support OA for non-commercial purposes for at least some Google-scanned books.  If I'm missing any, please drop me a line.

Update.  Jennifer Howard tells me that in her podcast interview with Google's Adam Smith (June 15, 2009), she asked whether Google could be persuaded to put CC licenses on its scanned books.  Smith said it would, if that's what authors wanted.

Good question, good answer.  It's a start, even if comparatively few authors are interested in CC-based OA for their books.  Google should build this option into the settlement, set up a way for authors to register their interest, and then (if the settlement is approved) carry it out. 

On the other hand, we're not likely to get a large volume of OA from the Google settlement unless it comes from the orphan works, where we'll never know what the authors want.

Tuesday, June 16, 2009

On blogs, mailing lists, etc. in science

A lengthy discussion, in progress, on the role of blogs, mailing lists, and the like in scholarly communication:

An OA mandate for the Harvard Graduate School of Education

Harvard Graduate School of Education Votes Open Access Policy, a press release from the Harvard Graduate School of Education, June 16, 2009.  (Thanks to Ray English.)  Excerpt:

The faculty of the Harvard Graduate School of Education (HGSE) voted overwhelmingly at its last faculty meeting to allow the university to make all faculty members' scholarly articles publicly available online. The resolution makes HGSE the fourth of Harvard's 10 schools to endorse open access to faculty research publications. The Faculty of Arts and Sciences, the Harvard Law School, and the Harvard Kennedy School all passed similar policies in recent months.

"The field of education and the mission of libraries have always been aligned in efforts to bring knowledge to as many people as possible. With the open access resolution, the work of the faculty at the Harvard Graduate School of Education will now be available to all -- especially those who seek to improve the quality of education worldwide," said John Collins, librarian of Gutman Library at HGSE.

As a result of the resolution, HGSE faculty will now provide their scholarly articles to the Harvard Office for Scholarly Communication for deposit in an open access digital repository that is currently under development. When the repository launches later this year, the contents will be freely available to the public, unless an author chooses to embargo or block access. The policy makes rights sharing with publishers and self-archiving the default, while allowing faculty to waive Harvard's license on a case-by-case basis, at the author's discretion.

Professor Kurt Fischer said, "Educational researchers and leaders seek to share their knowledge and findings with educators, researchers, and anyone who is interested. Unfortunately, the current situation in publishing severely restricts access. The Open Access policy moves toward making writings available to anyone who can benefit from them."

Comment.  The momentum continues to grow, and you can see where it's going.  The new mandate follows the pattern set by the Harvard Faculty of Arts and Sciences (analyzed here), the Law School, and the Kennedy School of Government.  Kudos to all.  More later, including the text of the policy.

Update.  The text of the policy is now online.  It is virtually identical to the three previously adopted Harvard policies.


Re-released OA search tool

Paula J. Hane, Deep Web Tech Relaunches, Information Today, June 15, 2009.

... [Deep Web Tech] has relaunched Originally released in 2005 as a search engine focused on providing access to publicly searchable journal literature, now boasts a greatly expanded set of searchable collections and Deep Web Technologies' next-generation federated search engine. ... provides a single point of access to more than 400 high-quality, publicly searchable science and technology collections with a new, robust user interface specifically designed for advanced scientific research. "Featured collections" are included, which search major science search portals, including the E-Print Network. also searches ScienceConferences, a portal providing access to some of the best conference proceedings. Each of these portals returns its best 200 search results to These results are aggregated with the results returned by individual sources. is divided into 15 categories, including Chemistry, Earth and Environmental Sciences, Health and Medicine, and Physics. Categories have also been created for Science News and Patents. Users can search any number of categories, searching all collections within the categories selected, or choose specific collections within a category to narrow their search.'s categories are managed by volunteer moderators who help the team select the best, most authoritative collections to include in each category. The company says it is seeking moderators for several categories. ...

The company also announced the availability of OpenSearch browser plug-ins for one-click searching of the major scientific information portals. Users can easily add any of these portals to their browser's search engine box ...

Last fall, librarian Roddy MacLeod of Heriot-Watt University in the U.K. posted to the library's blog a list of 9 recommended science search engines. "These will usually give much more focused search results than Google," he wrote. Significantly, five of the nine search engines chosen are sites built by Deep Web Technologies. ...

3 new delayed OA journals

Three more journals have launched their sites. Each will be delayed OA, with some backfiles already available. (Thanks to Charles Ellwood Jones.)

New tool for text-mining PubMed

Sam Zaremba, et al., Text-mining of PubMed abstracts by natural language processing to create a public knowledge base on molecular mechanisms of bacterial enteropathogens, BMC Bioinformatics, June 10, 2009. (Thanks to Free Culture & Archiving Planet.) Abstract:

Background: The Enteropathogen Resource Integration Center (ERIC) has a goal of providing bioinformatics support for the scientific community researching enteropathogenic bacteria such as Escherichia coli and Salmonella spp. Rapid and accurate identification of experimental conclusions from the scientific literature is critical to support research in this field. Natural Language Processing (NLP), and in particular Information Extraction (IE) technology, can be a significant aid to this process.

Description: ERIC has trained a powerful, state-of-the-art IE technology on a corpus of abstracts from the microbial literature in PubMed to automatically identify and categorize biologically relevant entities and predicative relations. These relations include: Genes/Gene Products and their Roles; Gene Mutations and the resulting Phenotypes; and Organisms and their associated Pathogenicity. Evaluations on blind datasets show an F-measure average of greater than 90% for entities (genes, operons, etc.) and over 70% for relations (gene/gene product to role, etc.). This IE capability, combined with text indexing and relational database technologies, constitutes the core of ERIC's recently deployed text mining application.

Conclusions: The ERIC Text Mining application was recently launched online on the ERIC website. The information retrieval interface displays a list of recently published enteropathogen literature abstracts, and also provides a search interface to execute custom queries by keyword, date range, etc. Upon selection, processed abstracts and the entities and relations extracted from them are retrieved from a relational database and marked up to highlight the entities and relations. The abstract also provides links from extracted genes and gene products to the ERIC Annotations database, thus providing access to comprehensive genomic annotations and adding value to both the text-mining and annotations systems.
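PS: For readers unfamiliar with the evaluation metric cited above, the F-measure is the harmonic mean of precision and recall. A minimal sketch (the function name is mine, not ERIC's):

```python
def f_measure(true_positives, false_positives, false_negatives):
    """F1 score: harmonic mean of precision and recall, computed from
    raw counts of correct and incorrect extractions."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)
```

So an extractor that found 90 of 100 true entities while making 10 spurious extractions would score an F-measure of 0.9, in line with the entity figures reported.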

CiteSeerX adds OAI-PMH support

Pradeep Teregowda, OAI Service available, CiteSeerX Bulletin, March 16, 2009.

OAI services have been enabled for CiteSeerX. You can use harvesters to download CiteSeerX metadata. The data is available in the Dublin Core format. It includes all the documents in the CiteSeerX collection.

New draft of Open Database License

Open Database License (ODbL) v1.0 Release Candidate 2 Available, Open Data Commons, June 15, 2009.

The Open Database License (ODbL) v1.0 "Release Candidate 2" is now available ...

As expected there haven’t been many changes from the first Release Candidate. The two main alterations are:

  1. Removal of section 4.7 related to reverse engineering. This may be reintroduced in later versions but has been left out here in order to remove any possible concerns about license compatibility on Produced Works.
  2. Explicit statement that derivative databases used in the creation of Publicly Available Produced Works are also subject to share-alike.

With the completion of this second round of comments we believe this text is now in final “1.0” form. In order to allow interested individuals and communities time to review the latest set of changes, as well as to provide an opportunity to catch any last-minute “bugs”, we are going to provide one final, brief comment period closing on Friday 19th of June at 1200 GMT. ...

In preparation for the 1.0 release we have also continued to improve the FAQs as well as providing a new open data guide. ...

See also our past posts on the Open Database License.


In two weeks (on July 1) I'll significantly curtail my blogging.  The blog itself will continue, and Gavin will continue at something like his current pace.  But I'll start my new position at the Berkman Center and will only be able to post sporadically. 

Don't forget that even now, while I'm actively blogging, I recommend the OA tracking project (OATP) over OAN as a comprehensive source of OA-related news.  OAN is a selective subset of that news, and after July 1 it will be even more selective. 

If you follow the news on OAN, you should consider following the news on the OATP as well.  You can subscribe to the OATP news feed by RSS or email.  Or you can read it on a web page organized like a blog with the most recent items first.  If you want to improve the OATP news feed, please consider joining the project as a tagger.

For more detail on OATP, including its consequences for OAN, see my article in the May SOAN.

History of MEDLINE and ERIC

Sharon A. Weiner, Tale of two databases: The history of federally funded information systems for education and medicine, Government Information Quarterly, July 2009. Only this abstract is OA, at least so far:
Access to scholarly information in the disciplines of education and medicine occurred primarily through the simultaneous development of two bibliographic databases. The Education Resource Information Center (ERIC) originated as a resource designed to be comprehensive in its inclusion of peer-reviewed and unpublished literature for the entire education community. MEDLINE began as a resource of selective materials for physicians and researchers. Today, ERIC includes selected peer-reviewed literature directed primarily to researchers and practitioners, although others use the database, while MEDLINE is a vast information system serving all health professionals and consumers. This literature analysis of their policy history shows important differences in their evolution. Application of the Multiple Streams Framework can help in formulating possible explanations for the different developmental paths. These paths include: the degree of centralization or decentralization of the information system's organizational structure; the stability of the organizational mission; and the success of assessment strategies, federal budgetary support, and bias toward science in federal policy-making. These two government-supported databases served as models for a plethora of other databases. However, one was successful in acquiring funding from the outset, while the other continually suffered deficiencies in support. The importance of each to public welfare should have been obvious, but was not.

Honor for Malcolm Read

Malcolm Read, Executive Secretary of JISC, has been made an Officer of the Order of the British Empire (OBE) for his 16 years at JISC, including his leadership on OA.  From today's press release:

...Malcolm is strongly involved in driving policy and strategy development in the use of digital technology in post-16 education and research. His work at JISC has involved setting-up and supporting infrastructure provision and innovation programmes. He also has a strong interest in the cultural aspects of deploying ICT and promoting access to open research, education resources and the development of open innovation....

PS:  Also see our past posts on Read, focusing specifically on his OA work.

Wellcome policy on OA for orphan works

Christy Henshaw, Orphan Works, Wellcome Library Blog, June 16, 2009.  Excerpt:

...The Wellcome Library is full of artistic and literary works that are still in copyright....

In practice, most pre-20th century in-copyright materials are considered “orphan works” – items where the current copyright owners are impossible to identify or trace. However, orphan work status may also apply to more recently published works. The British Library, for example, estimates that 40% of in-copyright works are orphan works.

Designating an item an “orphan work” does not change its legal status (it is by definition in copyright), and there can be risks in reproducing orphan works. Copyright holders may, quite rightly, demand the destruction of any copies of their works, and the payment of compensation for any revenue lost as a result of the reproduction.

The Wellcome Trust supports an open access policy with regard to its digital materials and aims to make as much as possible available freely online, whilst at the same time respecting copyright law. Orphan works are a difficult area that must be handled with care but which en masse provide a valuable contribution to the research community.

Recognising this value, the Wellcome Library undertakes due diligence to establish whether a 20th or 21st century work is indeed an orphan work. Traceable copyright holders are contacted (sources vary depending on type of material) and asked for permission. If no response is received, and no other potential copyright holder can be identified, the item is considered an orphan work, and mounted online. If a copyright holder did, subsequently, come forward and request that the image be removed, the Library would do so (see the Library’s take down policy)....

OAD list of OA journal funds

I just moved my list of OA journal funds to the Open Access Directory (OAD) for community editing. 

Richard Poynder posted a version of it last week and called for help in completing it.  I join the call, and hope that readers with new or better information can improve the list.  

Monday, June 15, 2009

Video of SCOAP3 Webcast

A video of the June 10 SPARC/ACRL Webcast, SCOAP3: An opportunity to create change, is now available. From the description:
In follow up to the recent release of a new set of Frequently Asked Questions on the SCOAP3 initiative, SPARC and ACRL are pleased to host Dr. Salvatore Mele, Head of Open Access at CERN and spokesperson for SCOAP3, at a live Web cast to explore the process of committing to the consortium, establishing its governing board, the project’s conditions for the call for tender, and to answer remaining questions.

Papers and notes from ETD conference

The presentations from ETD 2009 (Pittsburgh, June 10-13, 2009) are now online. Many of the papers are on OA.

See also the conference blog and Peter Murray-Rust's critiques (e.g. 1, 2):

... The whole meeting seems to be asleep about the urgency to liberate these theses into the digital Open. ...

[W]e are sitting on a goldmine of scientific information in academic theses and we are deterred from using them by copyright FUD. There is an implicit assumption that copyright is one of the god-given commandments – it seems almost revered here. ...

... Although there was some appreciation of the fact that theses now had a wider readership, there was little discussion of how they could enhance the visibility of theses. ...

The answer is simple. Create Open Theses in HTML and publish them. Use IR’s if you think that’s a useful way of making them permanent – but it’s not required. ...

New no-fee OA pharmacy journal

The Journal of Young Pharmacists is a new peer-reviewed OA journal published by the InPharm Association and Medknow. (Thanks to Vikas Anand.)

There are no article-processing charges. The inaugural issue is now available. Reuse for "reasonable non-commercial purpose" is permitted with attribution.


SHERPA/RoMEO API upgraded

Peter Millington, SHERPA/RoMEO API Upgrade and Future Development, posted to the SPARC Open Access Forum, June 12, 2009.

SHERPA has just released Version 2.4 of the RoMEO Application Programmers' Interface (API). The new version uses a totally new algorithm and is faster than earlier 1.x versions. It also supplies data for the fields that were missing in earlier versions - paid open access, and compliance with research funders' mandates.

A full list of the changes can be seen in Appendix D of the updated documentation ...

Combining content from OA journals and Wikipedia

Matthew Cockerill, Wikipedia and open access journals - now more compatible than ever, BioMed Central Blog, June 11, 2009.

... I was especially happy to hear the ... announcement last month that Wikipedia's content will soon be switching from its current licensing scheme (the GNU Free Documentation License) to a Creative Commons license – specifically the Creative Commons Attribution/Share-Alike License (CC-BY-SA).

This new license chosen by Wikipedia is a variant of the Creative Commons Attribution License (CC-BY), which is used by BioMed Central and many other open access publishers. The difference between the two is that the version used by Wikipedia requires that any derived work that includes the material must be similarly licensed.

What this means in practice is that it is now straightforward, from a licensing perspective, for any organization whether commercial or non-commercial to create derivative works incorporating both open access research articles and Wikipedia content, and to distribute these combined works under the CC-BY-SA license. ...

CiteSeerX now searches within tables

Lee Giles, CiteSeerX indexes tables, posted to American Scientist Open Access Forum, June 11, 2009.

CiteSeerX now provides indexing and ranking of tables in documents. This new feature will soon be released in open source as part of the CiteSeerX open source project. Currently, nearly a million tables are indexed.

In addition, a demo of the data extraction from tables in pdf files can be found at [link] ...

See also this blog post by Pradeep Teregowda:
... Table search allows users to search embedded tables of documents in the CiteSeerx collection. Table caption, reference text and footnotes are indexed for each table. Ranking of table search results can be based on relevance, year and the number of citations to the corresponding document in CiteSeerx.
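PS: The post doesn't say how the three ranking modes are implemented; as a hedged sketch, ranking results by relevance, year, or citation count amounts to a keyed sort over result records (the field names here are hypothetical, not CiteSeerX's):

```python
def rank_tables(results, by="relevance"):
    """Sort table-search results in descending order of the chosen key.
    Each result is a dict; 'score', 'year', and 'citations' are
    illustrative field names."""
    key_funcs = {
        "relevance": lambda r: -r["score"],
        "year": lambda r: -r["year"],
        "citations": lambda r: -r["citations"],
    }
    return sorted(results, key=key_funcs[by])
```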

More on OA at York U.

Andrea Kosavic, Reaching Out Beyond York’s Borders: Contributing to the Global Research Library, YULibrary News, Spring 2009.

... At York University, we host a repository called YorkSpace which serves as our publicly accessible digital library of scholarship. The YorkSpace team is available to address copyright questions and to help those that would like to ensure access to their research.

When compiling an online CV or a list of recent articles for your departmental profile, links to your articles that reside within the library’s electronic resources will not be accessible to those outside of the York community since a Passport York account is required for access.

If you deposit your articles in YorkSpace, you can easily link to them with no access restrictions. Added benefits include improved discovery of your work, institutional context and the library’s commitment to the persistence and preservation of your work. ...

From the sidebar:

The [York University] Libraries are actively supporting the principles of Open Access. ...

If you would like a workshop for your graduate students on the implications of open access and their academic career, please contact Mark Robertson, Associate University Librarian, Information Services.

Our Scholarly Communications Initiative adopts a strong advocacy role in raising awareness within the York community around open access and author rights issues. Within the broader Canadian context, York, as a member of the Canadian Research Knowledge Network (CRKN), fully endorses the CRKN Statement on Alternative Publishing Modes & Open Access.

York University Libraries has long provided financial support for a number of Open Access initiatives including [SPARC, the Stanford Encyclopedia of Philosophy, the Directory of Open Access Journals, the Internet Archive, BioMed Central, Bioline, Public Library of Science] ...

See also this article on Creative Commons from the same issue.

Examining the impact of the PEER project

Gaz Johnson, PEERing through the scholarly publishing gloom, UoL Library Blog, June 11, 2009.

... I can see how PEER may well produce some interesting information and reports on the European repository and publishing scene. However, as with so many of these large initiatives, I’ve yet to spot where the directly applicable and readily employable outputs for repository people will be. Is PEER to act as a lobbying service on our behalf? No. Will PEER mediate discussions twixt the various stakeholders? Maybe. Will PEER change the way our repository functions? In some way I guess.

Perhaps it is too early to pour cold water on what PEER can, may or will achieve – but I’ve seen these big EU wide initiatives before (I’m thinking of DRIVER) which have had only a minor impact within the UK HEI repository community. Worthy work for sure, but so much at a nebulous, Ivory tower strata rather than a practitioner level. ...

New OA philosophy collection

The OA Georg Henrik von Wright online collection will be released on June 16. The collection consists of 46 articles and essays by the philosopher von Wright. The release will be marked by a Webcast, also on June 16. (Thanks to Rasmus Rendsvig.)

New Australian telescope to produce OA data

Australian National University, SkyMapper surveys the southern skies, press release, May 25, 2009. (Thanks to Keith Lyons.)

SkyMapper, Australia’s first new optical research telescope for 25 years, and the first to conduct a comprehensive digital map survey of the southern skies, was officially launched today at the Siding Spring Observatory facility of The Australian National University. ...

SkyMapper is a state of the art telescope which has been custom built to undertake the Southern Sky Survey – the first ever systematic digital map of the southern skies. Over the next five years the telescope will take detailed pictures of the entirety of the southern sky. In the process it will produce 400 Terabytes of data – equivalent to 100,000 DVDs – and that data set will be freely available to astronomers via the Internet. ...

From the project site:
... The data taken by the SkyMapper telescope will be shared with astronomers around the world via the Virtual Observatory initiative, so that every possible use can be made of this resource. ...

Launch of EOS web site

Bernard Rentier has announced the launch of the web site for Enabling Open Scholarship (EOS).  Read his announcement in French or Google's English.  EOS is the successor to EurOpenScholar (also EOS), which was launched in October 2007.  The first EOS was European, while the second is global.  Rentier is the rector of the University of Liege and the Chair of the new EOS.

The official launch of the organization itself, as opposed to the web site, should follow shortly.  The EOS advisory board is meeting in Brussels today to make the final arrangements.

From the new EOS web site:

EnablingOpenScholarship (EOS) is a membership organisation for universities and research institutions. The organisation is a forum for raising and discussing issues around the mission of modern universities, particularly with regard to the creation, dissemination and preservation of research findings.

The context for the establishment of the EOS forum has been: ...

Anyone who is interested in enrolling their institution as a member, or in attending an EOS meeting or briefing session, is invited to email the convenor of the group, Dr Alma Swan (contact details) .

PS:  One of the top priorities for EOS will be to help universities adopt effective OA policies.  With that in view, note the very strong optimal institutional Open Access policy and FAQ at the EOS web site.  Also see our past posts on the new and old EOS.


New OA series for computer science conferences

Announcing the Leibniz International Proceedings in Informatics (LIPIcs) series, announcement, April 2, 2009. (Thanks to Luca Aceto.)

Schloss Dagstuhl Leibniz Center for Informatics (LCI) establishes a new series of conference proceedings called Leibniz International Proceedings in Informatics (LIPIcs). The objective of this series is to publish the proceedings of high-quality conferences in all fields of computer science, and LCI institutes an Editorial Board to oversee the selection of the conferences to be included in this series.

The proceedings in the LIPIcs series will be published electronically and will be accessible freely and universally on the internet, with authors keeping their copyrights, and under an open access license guaranteeing free dissemination. To face the cost of electronic publication, a one-time fee will be required from the conference organizers. This fee will be kept to a minimum, intended to cover the costs of LCI, thanks in particular to a sharing of the workload between LCI and the conference organizers.

Applications for publication in the LIPIcs series are already welcome, and are to be addressed to the Scientific Director of LCI. ...

The volumes in the LIPIcs series will be published electronically under an open access license. No copyright transfer to LIPIcs will be required. LIPIcs will provide a choice of possible licenses to conference organizers (initially: the Creative Commons licenses that were used by STACS and FST&TCS). A minimum requirement for a license to be eligible is that it allows for identical reproduction of the papers as long as credit is given to their authors. In particular, printing these papers must be allowed, as well as non-commercial distribution and posting on websites. ...

The fee for each conference aims at covering the cost of processing the corresponding volume of proceedings. In the initial period, this fee is set at 250 euros. The experience gained with the first few conferences will allow a more precise estimate of the cost of this series for LCI, and the fee will be revised accordingly. Until better insight is available, the fee will remain at most equal to 750 euros. ...

See also, by way of comparison, our past post on a new refereed OA repository for conference proceedings in computer science.

Universities with OA journal funds

Richard Poynder, Gold OA Funds, Open and Shut?, June 14, 2009.  Excerpt:

I am trying to establish how many research institutions and funders have created Gold Open Access (Gold OA) author funds, and would be grateful for input from others.

I am aware that the Wellcome Trust announced a scheme for paying OA publication fees for its grantees in 2006. But what other funders have introduced such schemes?

So far as research institutions are concerned, Peter Suber kindly provided me with the following list of those he knows have created Gold OA funds:

University of Amsterdam
University of Calgary
University of California, Berkeley
Delft University of Technology
ETH Zurich
Griffith University
University of Helsinki
Institute of Social Studies (Netherlands)
Lund University
University of North Carolina, Chapel Hill
University of Nottingham
University of Tennessee, Knoxville
Texas A&M University
Tilburg University
Wageningen University and Research Center
University of Wisconsin

However, I do not think this list is complete. I understand, for instance, that the University of Oregon has also created a Gold OA fund.

There are also some universities currently considering creating Gold funds, including, I am told, both Cornell University and University College London (UCL).

In the light of current discussion on AmSci, it might also be useful to know how many research institutions have both set up a Gold OA fund and introduced a Green self-archiving mandate.

After reviewing the list above, Stevan Harnad suggested that only two (ETH Zurich and the University of Helsinki) of the 85 research institutions that have introduced a Green OA mandate have also created Gold funds, although if we add the University of Oregon the figure would be three; and if UCL created a fund it would be four....

But are there any other research institutions or funders with Gold OA funds that are not listed above? Might an equivalent to ROARMAP (which tracks Green mandates) be a useful way of tracking the introduction of Gold funds?

Comment.  On the last question, I'm taking steps to add this list to the OAD.  Stay tuned. 

OA preprints with press embargoes

What's the point of depositing a paper in arXiv with the annotation, "Submitted to Nature. Under press embargo"? 

If a publisher won't consider a paper which has already been published or publicized (that is, if it follows the Ingelfinger Rule), will this hand-waving satisfy it?  Either way, why should authors indulge publishers who adopt the rule?  Why should readers indulge authors who try to follow the rule by putting an embargo on an OA preprint?

There's a good discussion thread on this cluster of questions at Discover.

An OA repository for a Hungarian funding agency

The Hungarian Scientific Research Fund (Országos Tudományos Kutatási Alapprogramok, OTKA) has launched an OA repository.  (Thanks to ROAR.)

Putting a price on legal precedents

Don’t Mess With Texas, When It Comes to Memorandum Opinions Anyway, Advocate's Studio, June 12, 2009.  Excerpt:

...[I]n 2003, the Texas legislature barred the use of unpublished legal opinions in civil cases, but authorized the use of memorandum opinions in their place. In 2008, the Legislature took matters one step further by giving the memorandum opinions issued since 2003 full precedential value.

So, what’s the problem you ask? Well, these fully binding, precedential memorandum opinions are only accessible by Westlaw or Lexis. Cha-ching! ...

New PLoS ONE collection on prokaryotic genomes

PLoS ONE has launched a new collection on prokaryotic genomes. From the announcement:

There are many reasons why PLoS ONE is ideally suited to publishing this rich genomic data. Since every peer-reviewed article…

  • is published open access and freely accessible online – so it can be read by everyone
  • is available in XML (HTML/PDF) format – so it can be read by machines
  • pulls in other relevant data such as citations, blog coverage and bookmarks – so you see more of the picture
  • and allows user discussion through rating and commenting – so you can contribute to the debate

Notes from open data meeting

The notes and transcript from the first meeting of the Working Group on Open Data in Science (online, June 2, 2009) are now available.

New version of Social Networking Extensions for EPrints

Version 0.3.2 of SNEEP (Social Networking Extensions for EPrints) has been released. (Thanks to Charles Bailey.)

See also our past posts on SNEEP.

JURN is now indexing 3,000 journals

JURN is now indexing more than 3,000 OA journals.

See also our past posts on JURN.

New French OA index of online journals

Mir@bel (Mutualisation d'Informations sur les Revues et leurs Accès dans les Bases En Ligne) is a new OA index of online journals, especially Francophone journals in humanities and social sciences. (Thanks to Fabrizio Tinti.) The search includes an option to limit results to journals with full text and/or to OA results.

Sunday, June 14, 2009

More on the OA fund at the U of Calgary

Andrew Waller, One year (almost) with the Open Access Authors Fund, a presentation at the University of Ottawa Training Week, May 25-29, 2009.  Self-archived June 12, 2009. 

Abstract:   This presentation described the origin of and policies and procedures relating to the Open Access Authors Fund at the University of Calgary. The activities of the Fund in its first year were presented and discussed. Other Open Access activities at the University of Calgary were also briefly discussed.