Open Access News

News from the open access movement


Saturday, November 01, 2008

Portal of Francophone national digital libraries launches

The Réseau francophone des bibliothèques nationales numériques [Francophone Network of National Digital Libraries] has launched its portal. See the October 18, 2008 press release (in French). (Thanks to Olivier Charbonneau.) A rough translation of excerpts from the press release:

... The network makes real the double mission of long-term preservation and wide diffusion of the documentary heritage of French-speaking areas. The design and the execution of the portal were entrusted to the Bibliothèque et Archives nationales du Québec. ...

Five principles guide the libraries' activities:

  • non-exclusive access to the digital collections for search engines;
  • guarantee of gratis access to the public for public domain documents;
  • maintaining the digital files in the public domain and guarantee of their long-term preservation;
  • multilingual access to the collections;
  • certification by the national libraries of the entirety and the authenticity of the documents put online.

This unique project of preservation and adding value to a heritage that is often poorly accessible and sometimes threatened with disappearance is developed with the active assistance of the Organisation internationale de la Francophonie ...

[At the portal], Web users will thus have the opportunity to consult newspapers, journals, books, and maps, as well as digitized archives from the collections of ten or so institutions from French-speaking places ...

See also the recent French government report, France Numérique 2012, which alludes to the portal.

PS Update (12/13/08). The RFBNN is sponsored by the national libraries of Belgium, Canada, France, Luxembourg, and Switzerland, the provincial library of Quebec, and the Library of Alexandria.  (Thanks to the IFLA ITS Newsletter.)  The Francophone countries committed themselves to launch the RFBNN in Article 43 of the September 2006 Bucharest Declaration.

Google begins scanning at Columbia

Columbia’s Mass Digitization Project with Google is Underway, press release, October 28, 2008.

Columbia University Libraries have begun work in partnership with Google, Inc. to digitize selected public domain printed volumes in the University Libraries’ collections. The Libraries are selecting from the hundreds of thousands of public domain volumes in the collections that may be legally copied and mounted online, including works in a wide variety of languages and scripts. Each volume will be unavailable for a period of time while it is being scanned. ...

This multi-year project with Google will provide faculty, students, scholars, and readers around the world with an unprecedented ability to search, locate, and read books from the University's collections. Once our books and journals are digitized, they will be available through Google Book Search, with links added directly from CLIO to the digitized versions as the project moves forward. Columbia will preserve the digital versions for future users as well. ...

See also our past post announcing the partnership. Columbia also signed an agreement with Microsoft and the Open Content Alliance.

Collecting stories of researchers and repositories

Les Carr is collecting stories of researchers' interaction with repositories:

Practitioners, developers and researchers in the repository community have been asking whether repositories are effective at appealing to their primary stakeholders: researchers. It would be great to have a collection of success stories - anecdotes of how repositories have been able to improve the lot of researchers ...

Please can you email me a short (1 paragraph) success story (or stories!) about how your repository improved the experience of some researchers at your institution. They could be in the form of a user testimonial or described in your own words. I am not looking for tales of mass conversions and hysteria, just very practical stories of repository benefit as experienced by individuals. I will collect these together and make them available for repository managers and others to use in their marketing and advocacy. ...

Another overview of OA

Charles Oppenheim, Electronic scholarly publishing and open access, Journal of Information Science, August 1, 2008. Only this abstract is free online, at least so far:
A review of recent developments in electronic publishing, with a focus on Open Access (OA), is provided. It describes the two main types of OA, i.e. the 'gold' OA journal route and the 'green' repository route, highlighting the advantages and disadvantages of the two, and the reactions of the publishing industry to these developments. Quality, cost and copyright issues are explored, as well as some of the business models of OA. It is noted that whilst so far there is no evidence that a shift to OA will lead to libraries cancelling subscriptions to toll-access journals, this may happen in the future, and that despite the apparently compelling reasons for authors to move to OA, so far few have shown themselves willing to do so. Conclusions about the future of scholarly publications are drawn.

Social and technical aspects of archiving a digital project

Catherine Howell, Reflection and Selection: Creating a digital project archive, catherine's blog, October 21, 2008. (Thanks to Fabrizio Tinti.)

At the end of any project, the time comes to wrap up work and, hopefully, to prepare project outputs for dissemination and archiving. We’re working with the folks from CTREP to create the digital archive for the Learning Landscape Project. ...

CTREP is a project under JISC’s Repositories and Preservation programme. At Cambridge, the CTREP team is working to integrate our VLE, Sakai/CamTools, and our institutional repository, DSpace@Cambridge. (Up in the wilds of Scotland, our CTREP colleagues at the wonderfully-named University of the Highlands and Islands are doing related integration work with TETRA / Fedora). The idea is that DSpace will appear as “just another folder” in the Resources area of a CamTools site, and that pushing items from Resources into DSpace will, eventually, be a simple matter of drag-and-drop. Metadata will be pulled in automatically, along with each individual resource item—although I’ve got a lot to learn about how that works, exactly. ...

Various editorial processes happen before an item even makes it through from the project’s worksite to the second, archival site. Inevitably, our archival site does not reflect the messy “reality” embodied in the project worksite, and perhaps especially in the project Wiki. But an archive has to balance the desire to keep everything with the ultimate goal of usability ...

EDUCAUSE statement on openness

EDUCAUSE has released (October 1, 2008) a statement on openness:

A central pillar of the academic community is its commitment to the free flow of information and ideas. This commitment to sharing is essential to scholarly discovery and innovation. ...

The academic—and, by extension, social—value of unfettered intellectual exchange finds expression in technologies, applications, and approaches that foster sharing, collaboration, and open access to knowledge and resources. ... In an IT context, examples include:

  • Open standards and interoperability
  • Open and community source software development
  • Open access to research data
  • Open scholarly communications
  • Open access to, and open derivative use of, content ...

As the higher education technology association, EDUCAUSE embraces the value of openness. EDUCAUSE will work with its community and others to facilitate discussions on where open technologies, applications, and approaches are needed and how best to achieve them. EDUCAUSE will also look for opportunities, consistent with its mission and member service obligations, to support such efforts and to itself adopt open approaches. ...

Magazine supplement on JISC activities

The November 2008 issue of Library & Information Update contains a supplement on JISC's activities. (Thanks to Fabrizio Tinti.) Some relevant articles:
  • Getting the right message across [interview with outgoing JISC Chair Ron Cooke]

    ... The infrastructure for e-science and data sharing poses another challenge. Data is being generated in unprecedented volumes. ...

    ‘The data deluge has all sorts of implications. Data is cheaper, and sometimes free, which leads on to open access. JISC is doing a lot here, and it is behind the creation of the Strategic Content Alliance ...’

  • Procuring content for the community [interview with JISC CEO Lorraine Estelle]

    ... As part of the Caspar project, JISC Collections is providing copyright advice for institutions creating e-learning courses for undergraduates. This involves clearing copyright for third-party content to be included and managing the intellectual property in the new content, putting all the right agreements in place through a new Open Education User Licence, so that other institutions can use it. ...

  • A national e-content strategy and framework: the work of the Strategic Content Alliance

    Setting up and deploying a UK content framework is ‘a key strategic objective of JISC’, says Stuart Dempster, Director of the Strategic Content Alliance. ...

    Primarily through the website and the mechanism of affiliate membership (organisations should provide content ‘for the public good’), the Alliance allows public sector organisations to share expertise and knowledge. ...

    Underlying all the work is the principle of open access. However, it is still ‘an aspiration that needs to be supported by a business model’. ...

  • International collaboration and global infrastructure

    ... Pan-European partnerships between libraries and library consortia are becoming increasingly common. They include the Sparc Europe open access initiative; the Dart-Europe portal to give access to European research theses (sponsored by Liber, the Association of European Research Libraries); and Driver, which hopes to create a pan-European infrastructure for repositories. ...

  • Digital Libraries in the Classroom

    [Rachel Bruce, Programme Director for the JISC Information Environment:] ‘One of the two major messages in the JISC strategy is to have a world-class structure – it’s not just about libraries, but the digital infrastructure to facilitate research. ...

    ‘The other is creating a layer of scholarly resources and enabling it to be used in multiple ways across the network. That’s where digital repositories, open access and open standards come into play. ...

    ‘There is also an emerging idea of aggregation of data and resources. We should be making our data and resources available by using open standards and standard APIs (application programming interfaces), so people can build their services on top.

    ‘At the moment, the most mature dataset in repositories is the academic paper. Obviously, we’re interested in open science and open data as well. ...’

  • Embedding subject librarians in research departments [interview with Simon Coles, manager of the e-Crystallography Data Repository]

    A different funding model [for librarian-researcher collaboration] would be for research departments to ask for ‘a pot of cash’ to display the results of their funded research. As there is no extra money, this might eventually result in a reduction of central grant to the library. ‘Rather than spending money on licences to collections from the publishers, you would be paying for material to be made available from the ground up.’

    It is unlikely to happen overnight, though, because ‘there hasn’t been the broad uptake of open access that some expected’. ...

    If funding comes via different routes, and journal articles are for status, not the Research Assessment Exercise, emphases in the scholarly communication chain will change.

    It should also mean that researchers will be able to start to look at the data that interests them most. Much of it does not involve conventionally published output, but ‘a whole load of grey stuff that can go into institutional repositories.

    ‘In some respects it has greater value to the researcher than any reprint of a published article. About 50 per cent of the time I’m interested in the data behind it and probably won’t even read the words. That is why institutional archives being made openly available can be quite a big win.’ ...

  • The problem with the future

    ... [O]ne area that TechWatch thinks is really changing the game is less a single technology, more a state of mind.

    ‘The rise of open source has led to new ways of developing software through disparate communities,’ says [TechWatch Project Manager Gaynor Backhouse], adding that the basic idea of ‘open innovation’ is now starting to permeate through into other areas such as biology. ‘We are witnessing a profound change in the way that knowledge is produced. ...

  • Reaching out to business and the local community through lifelong learning

    Working closely with business and the community is a policy priority of the Higher Education Funding Council for England. ...

    But now, attempts to open access out further, and promote training, business and community engagement, are being thwarted by the licensing regime for digital resources. The rules govern not only the electronic materials themselves but also the software needed to read or use them.

    The restrictions by rights-holders are bringing libraries – certainly, their frontline staff – into conflict with the very people they want to welcome. ...

  • Repositories take-up: cultural barriers

    Digital repositories are part of JISC’s e-Infrastructure programme, and a huge amount of technical resource and advice has been made available for setting them up. In Britain there are 132 functioning ones according to the OpenDOAR project. Only about 35 institutions which might expect to set up a repository have not already done so. ...

    The real problems are cultural ones, according to Pete Cliff, Research Officer with the Repositories Support Project at Ukoln, the technical and innovation agency, largely funded by JISC.

    The drive to set up institutional repositories came out of the open access (OA) movement. But proponents of OA then ‘just left it’.

    ‘What they didn’t say is what services repositories are offering to institutions, how it would benefit them. They assumed the benefits were self-evident. But academics have often seen only the drawbacks. They say: why do you want me to give my stuff away?’ ...

    Ukoln has found that advocacy is a key element. ‘We need to talk about services, to say what we’re offering the researcher and the academics. Otherwise they won’t get it. ...’
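Several of the interviews above (notably Rachel Bruce's) point to open standards and standard APIs as the way to let people build services on top of repositories. The canonical repository example is OAI-PMH, which exposes metadata as XML. As a rough illustration only — the sample response below is invented, not drawn from any real repository — here is how a harvester might pull Dublin Core titles out of a ListRecords response using the Python standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up OAI-PMH ListRecords response for illustration
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Electronic scholarly publishing and open access</dc:title>
          <dc:type>article</dc:type>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

# Namespace prefixes used in the queries below
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def titles(xml_text):
    """Extract all dc:title values from an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

print(titles(SAMPLE))  # ['Electronic scholarly publishing and open access']
```

Because the protocol and the oai_dc metadata format are open standards, the same few lines work against any conformant repository, which is exactly the "build your services on top" point made above.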

OAD list of author addenda

The Open Access Directory (OAD) just opened a list of Author addenda.  Excerpt:

An author addendum is a proposed modification to a publisher's standard copyright transfer agreement. If accepted, it would allow the author to retain key rights, especially the right to authorize OA. The purpose is to help authors who are uncomfortable negotiating contract terms with publishers or who are unfamiliar with copyright law and don't know the best terms for a modification to support OA. Because an addendum is merely a proposed contract modification, a publisher may accept or reject it.

The list launches with 15 different addenda, but there are undoubtedly more out there.  Remember that OAD is a wiki, and counts on users to keep its lists comprehensive, accurate, and up to date.


Friday, October 31, 2008

80% of French research institute's recent publications are OA

Ifremer, Over 80% of 2005-2008 Ifremer's publications in Open Access, announcement, undated but this week. Ifremer is the Institut français de recherche pour l'exploitation de la mer [French Research Institute for Exploitation of the Sea]. (Thanks to Morgane Le Gall.)

In August 2005, Ifremer launched its institutional repository, Archimer. This repository is now [offering] more than 3700 documents available for free on the Internet [including] more than 80% of [the] international publications co-written by Ifremer since the opening of the repository.

Indeed, since August 2005, Ifremer co-published 990 articles referenced in the Web of Science®. 812 of these 990 publications are freely available in Archimer, almost 82%.

Copyright rules applied to these 990 publications can be classified as follows:

  • 31 articles were [published] by publishers who were not yet listed on the website Sherpa/Romeo ... and needed to be contacted,
  • 40 articles were [published] by publishers who forbade the registration of their publications in an Open Archive ...
  • 177 articles were [published] by publishers that allow self-archiving of their own PDF files,
  • 742 articles were published by [publishers] that limited the right of self-archiving [to] the [author's final manuscript]. The drafts of 613 of these 742 items were collected and recorded.

This good result is linked to the involvement of the Ifremer library service in the operation of this archive. The library staff itself handles the preparation and deposit of publications into Archimer: [Ed.: describes how library staff track, collect, and deposit publications.] ...

Comment. It's not completely clear to me, but I think the publications in question here were written by the institute staff, not institute grantees -- i.e. intramural, not extramural, researchers. (I don't know if the institute funds external researchers.) Update: Le Gall confirms that the publications are by the institute's staff; it doesn't fund extramural researchers.

See also our past post on Archimer.
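As a quick arithmetic check on the figures quoted above, the four copyright categories do sum to the 990 Web of Science-referenced articles, and the 812 freely available items come to roughly 82%:

```python
# Figures from the Ifremer announcement quoted above
categories = {
    "publisher not yet listed in Sherpa/Romeo": 31,
    "self-archiving forbidden": 40,
    "publisher's own PDF allowed": 177,
    "author's final manuscript only": 742,
}
total = sum(categories.values())
open_in_archimer = 812  # items freely available in Archimer

print(total)                              # 990
print(f"{open_in_archimer / total:.1%}")  # 82.0%
```

So the announcement's "almost 82%" is consistent with its own breakdown.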

Here comes OA

The November issue of Genome Technology contains a five-part cover story on OA by Meredith Salisbury.  Here are the articles, with GT's own blurbs:

How Google Book Search will change

After announcing its settlement with publishers and authors, Google launched a page on The Future of Google Book Search.  Excerpt:

...It will take some time for this agreement to be approved and finalized by the Court. For now, here's a peek at the changes we hope you'll soon see.

  1. Book Search today
  2. How Book Search will change
  3. Three types of books
  4. The Book Rights Registry
  5. Libraries and universities
  6. Looking forward

From #2, How Book Search will change:

...Until now, we've only been able to show a few snippets of text for most of the in-copyright books we've scanned through our Library Project. Since the vast majority of these books are out of print, to actually read them you'd have to hunt them down at a library or a used bookstore....

This agreement will allow us to make many of these out-of-print books available for preview, reading and purchase in the U.S. Helping to ensure the ongoing accessibility of out-of-print books is one of the primary reasons we began this project in the first place, and we couldn't be happier that we and our author, library and publishing partners will now be able to protect mankind's cultural history in this manner....

This agreement will create new options for reading entire books (which is, after all, what books are there for).

  • Online access

    Once this agreement has been approved, you'll be able to purchase full online access to millions of books. This means you can read an entire book from any Internet-connected computer, simply by logging in to your Book Search account, and it will remain on your electronic bookshelf, so you can come back and access it whenever you want in the future.

  • Library and university access

    We'll also be offering libraries, universities and other organizations the ability to purchase institutional subscriptions, which will give users access to the complete text of millions of titles while compensating authors and publishers for the service. Students and researchers will have access to an electronic library that combines the collections from many of the top universities across the country. Public and university libraries in the U.S. will also be able to offer terminals where readers can access the full text of millions of out-of-print books for free.

  • Buying or borrowing actual books

    Finally, if the book you want is available in a bookstore or nearby library, we'll continue to point you to those resources, as we've always done.

International users

Because this agreement resolves a United States lawsuit, it directly affects only those users who access Book Search in the U.S.; anywhere else, the Book Search experience won't change. Going forward, we hope to work with international industry groups and individual rightsholders to expand the benefits of this agreement to users around the world....

More comments on the Google-Publisher settlement

Here are some more comments from the press and blogosphere.

From Reyhan Harmanci at the San Francisco Chronicle:

...[Allen] Adler, of the [AAP], compared the new independent, not-for-profit Book Rights Registry to the music industry's American Society of Composers, Authors and Publishers (ASCAP), which monitors and compensates musicians for live and recorded performances of their music. "It's the same concept - a central entity that protects rights holders through third-party licensing," he said....

San Francisco Electronic Frontier Foundation staff attorney Corynne McSherry said she is "still digesting" the agreement but had some early thoughts:

"I will tell you, frankly, that I kind of wish this case had gone to litigation. I think Google had a great fair-use defense," she said. "A ruling from the court would have been good for everyone. It potentially could have fostered other offerings, based on that legal certainty" if Google had won.

Brewster Kahle, founder of the Internet Archive, which is based at the Presidio and has partnered with Yahoo, Microsoft and 135 libraries to create the Open Content Alliance, said the agreement moves libraries "toward a monoculture."

"One company is trying to be the library system," Kahle said, speaking of Google's plans to create a subscription service for library collections. "This is not good for a society that is built on free speech. Let's have the World Wide Web rather than the iTunes of books."

From Mathew Ingram at MathewIngram.com:

This settlement is a huge step forward for online and electronic access to books. As Google has repeatedly argued, this will make it substantially easier for authors and publishers to find, distribute and monetize out-of-print books — in effect, creating or enhancing a “long tail” for book publishing. It will also make it easier for people to purchase electronic books, and for libraries to provide electronic access to books in their collections for readers and researchers alike (as part of the settlement, Google will provide free access to millions of scanned books through public libraries and universities)....

From Mike Madison at Madisonian:

...Has Google backed away from an interesting and socially constructive fair use fight in order to secure market power for itself?  I wrote early on that I would be disappointed if Google didn’t see the case through to judgment, and at one level, yes, I am disappointed.

But there is a big silver lining for me.  The proposal offers a new and larger set of questions, questions that have surrounded Google generally for some time but that the proposal puts into more concrete focus:  Are we seeing the early stages of the beginning of the end of copyright law as we know it?  The “standard” account of copyright, if such a thing still exists, posits a statutory allocation of interests between authors and readers, followed by institutional arrangements in specific contexts (fair use, voluntary licensing, collective rights management, compulsory licenses) to tweak that allocation at the margin, where problems arise.  It has been my sense for some time that in many information policy debates, the default statutory arrangement no longer commands automatic attention as the presumptive center of the copyright universe.  Institutional and disciplinary interests and arrangements of various sorts (technical architectures, commercial enterprises, new institutions such as open source licensing and Creative Commons) have not displaced the statute entirely, but instead have begun to push the statute to a place where it negotiates for attention as a normative landmark.  Fighting over the scope of section 106 (the copyright owner’s exclusive rights) and section 107 (fair use) sometimes seems very 20th century.  I suspect that the Google Book Search settlement will reinforce and perhaps accelerate that trend....

From the University of Michigan in a press release (separate from the joint university press release):

Why does the University of Michigan support this settlement agreement?

On balance, we believe the agreement is consistent with the Library’s mission and serves the public interest by providing unprecedented access to these materials. The agreement offers our Library the opportunity to do the following:

  • Make it possible for our academic community to find and use the full text of millions of books online.
  • Protect our holdings against loss, damage, or deterioration. For example, in the event of a catastrophe such as Hurricane Katrina, which destroyed thousands of volumes at New Orleans area libraries, we would have digital surrogates for print materials.
  • More easily create a resource that academic researchers can use to perform large-scale analysis such as data mining or computational linguistics, analyses of a sort that would not be permitted through a generic web interface such as Google Book Search....

What does the settlement mean for the HathiTrust?

The HathiTrust has been designed first and foremost as a collaborative preservation archive for materials in libraries, and would have fulfilled this role, whether or not Google and its plaintiffs had settled their dispute. The agreement, nevertheless, permits the establishment of a library digital copy of works digitized by any or all libraries (not just Michigan) under the terms of the settlement. The HathiTrust will make it possible for libraries to collaborate in this critical work, providing a secure, stable and permanent home for digitized copies of library materials....

From Neil Netanel at Balkinization:

...[C]opyright holders will have the right to opt-out of the Project for any given book, but the default rule will be that Google may display 20% of the text of copyrighted out of print books and may sell access to viewing the entire text online. Google will also continue to be able to display and make available for user download the full text of public domain books in response to user's search queries. However, Google will no longer display short snippets of copyrighted books that remain in print without first obtaining copyright holder permission. Portions larger than short snippets of such books will also be made available for display, online viewing, and download per agreement with each copyright holder. (The settlement agreement actually uses the term "commercially available" rather than "in print," suggesting that books that are made available solely online, such as through Amazon.com's Kindle Books service, will be deemed to be "in print" for purposes of the settlement.)...

So in many ways the proposed settlement is a win-win-win-win (for Google, the copyright holders, the libraries, and the public). But there are some causes for concern as well. Perhaps most importantly, the settlement leaves undecided the issue of whether Google's scanning of the entire books and display of snippets is a fair use. Many observers, including me, believed that the courts would ultimately hold that it is a fair use, and thus set important precedent establishing that such "transformative uses" of copyrighted works -- uses that serve the shared goals of copyright and the First Amendment -- do not infringe copyright. Google's settlement for a $125 million payment and abandonment of its fair use defense (as well as its agreement to stop displaying short snippets of copyrighted in print books without obtaining copyright holder permission), may well leave others in a far weaker position to enter the market for online book searches and digital archives and may make it more difficult to claim that such uses of books do not harm a potential licensing market, which claim carries considerable importance for successfully asserting fair use. The proposed settlement also provides that Google may not enjoy the benefit of developments in fair use doctrine that bear on its Book Search Project, so Google has no incentive to support other book archive and search services' fair use claims....

[T]he bottom line is that Google is left with a de facto monopoly over this "universal library" service and, as I have discussed in a recent article, potential competitors face a higher barrier to entry than if Google had fought and prevailed on fair use (or if Congress enacts a statutory license for such uses)....

From Chris O'Brien at the Mercury News:

When I heard Google had settled its feud with book publishers, I knew exactly whom I wanted to call first: Brewster Kahle, the digital librarian who is the founder of the Internet Archive....

Kahle, who was also critical of the [Google] plan, helped put together the Open Content Alliance, a competing venture of libraries and tech companies such as Yahoo that sought to scan millions of books and make them available for free.  Google's plan was to build a new kind of bookstore. Kahle and the alliance want to build a new kind of library....

[H]ad [the settlement] changed Kahle's view of Google's program?  Nope.

"When Google started out, they pointed people to other people's content," Kahle said. "Now they're breaking the model of the Web. They're like the bad old days of AOL, trying to build a walled garden of content that you have to pay to see." ...

But Kahle and the Open Content Alliance have a better vision....[It] is trying to determine how to create digital copies of in-copyright works that you can "borrow" for a limited time for free, in the same way you check out a book from the library today....

From Wade Roush at xconomy:

...[The settlement] promises to free Google to move forward with its ambitious library digitization effort, which will put a vast collection of literature at the fingertips of students, researchers, and at least a few public library patrons. It should also placate the Chicken Littles in the publishing industry, who have spent years using every available means, including the Google lawsuit itself, to obstruct the sharing of knowledge enabled by the digital revolution.

But for readers —the group whose interests are closest to my own heart, and the only major class of stakeholders in the lawsuit whose interests weren’t being protected by a team of well-paid attorneys— the Book Search settlement contains some major disappointments....I’m saddened by the gap between the level of open access to literature that was considered possible when Google first launched its project to digitize millions of library books and what we’re probably going to get as a result of this agreement.

Specifically, the settlement seems to put an end to hopes that the Google Library Project would result in widespread free or low-cost electronic access to books that are out of print but have not yet passed into the public domain....

It quickly became clear that the plaintiffs in the lawsuits would sooner see out-of-print books remain in limbo forever than sacrifice one penny of potential profit to Google....

It may surprise you that, as a writer, I’m on Google’s side in this dispute. But my point of view is that decent writers can always find ways to get paid for their work. They shouldn’t have to leech off the people who have the vision and the expertise to bring out the latent value in the world’s common heritage of information. More generally, I continue to be astonished by the hostility so many writers and publishers display toward Google, which, to my mind, is the best thing to happen to intellectuals since the First Amendment....

And there’s another provision of the settlement that spells out, to me, just how parsimonious the plaintiffs’ attitude really is. Under the agreement, the authors and publishers give Google permission to provide every public library in the United States with free access to the books database. That sounds great, on the surface....But...[i]f you read the agreement, you’ll see that it restricts each public library to exactly one Google terminal....

That, to me, about sums it up. Even in this digital age, the organizations representing authors and publishers are saying that free access to out-of-print books should be restricted to people who can a) make the physical journey to a library and b) beat their neighbors to the computer room.

There’s something fundamentally medieval about the philosophy that seems to have guided the plaintiffs through the entire Google lawsuit: namely, that profits can only be protected by imposing scarcity. One gets the sense that if they could, the authors and publishers who sued Google would do away with libraries altogether—and that the bloody Internet would be next on their list. Fie on Google, fie!

From Dugie Standeford at Intellectual Property Watch:

...The deal has implications outside the US, said UK intellectual property lawyer Laurence Kaye. It “shows that Google’s activities beyond pure search are within the boundaries of copyright and sets the scene for more licensing deals,” he said. These could be “machine-to-machine” direct licences such as those under the Automated Content Access Protocol (ACAP), arrangements between individual content owners and Google, licences granted by collecting societies, or something else, Kaye said.

“It’s a good day for copyright,” Kaye added.

ACAP Chair Gavin O’Reilly welcomed the settlement, saying it “paves the way for all rights holders, regardless of their chosen business model or rights management systems, to get the appropriate reward for their efforts - while at the same time ensuring the widest possible access.”

The agreement shows that it is possible for industries with diverging interests in the digital environment to find mutually beneficial solutions, said a spokesman for the European Commission Information Society and Media Directorate-General. Consensus is a key element of the Commission’s content online initiative, he said....

From Stanford University in a press release:

Stanford has joined with the University of Michigan and the University of California in supporting a proposed legal settlement that could allow their libraries to digitize millions of books through the Google Book Search project....

“With other libraries, those of the University of California and the University of Michigan, we have been negotiating for almost two years with Google and the plaintiffs to shape this agreement for the public good,” said Michael A. Keller, Stanford university librarian, director of academic information resources, founder and publisher of HighWire Press and publisher of Stanford University Press....

“I think this proposed settlement will break the logjam that has locked up orphan works for so many years,” said Walter Hewlett, a former trustee and member of Stanford’s ad hoc committee on the Google Book Search project....

While the universities have not unanimously agreed to all aspects of the proposed settlement, they believe it is favorable overall to the principles and intentions that led them to join the program....

The project would create a first-ever database of both in-copyright and out-of-copyright (public domain) works on which scholars can conduct advanced research (known as “the research corpus”). For example, a corpus of this sort would allow scholars in the field of comparative linguistics to conduct specialized large-scale analysis of language, looking for trends over time and expanding our understanding of language and culture.

The project also would enable the sharing of public domain works among scholars, students and institutions. Not only would scholars and students at other universities be able to read these online, but this would make it possible to provide large numbers of texts to individuals wishing to perform research....

EU strengthens its support for OA

EU supports open access to scientific and scholarly information, an announcement from SURF, October 29, 2008.  Excerpt:

The European Commission has thrown its weight behind the movement to make science and scholarship more transparent and socially responsible. The European Commissioner for Science and Research, Janez Potočnik, supports the call for open access, which will make scientific and scholarly information freely available via digital storage areas (“repositories”) on the Internet. SURF has been pressing for open access since 2004 and actively promotes this development in the Netherlands. Mr Potočnik has now written to SURF’s director, Wim Liebrand, telling him that the Commission will encourage all recipients of EU subsidies to make published scientific/scholarly articles available to the public. This will prevent similar research being duplicated, thus saving researchers time and resources. Mr Liebrand is extremely gratified by the EU’s support: “After years of verbal support for the idea that the results of publicly financed research should also be publicly accessible, the EU is now actually taking steps to make that idea a reality.”

Mr Potočnik also speaks highly of the powerful open access initiatives by Knowledge Exchange, the European partnership of national education and research institutions, which resulted in the Berlin Declaration, a widely supported call for public availability of publicly financed research results. The European Commission has taken the petition to heart and the Seventh Framework Programme for Research and Technological Development (“FP7”) includes a pilot project for open access. The programme obliges researchers to make the results of subsidised research available via a digital repository. The pilot project is evidence of the European Commission’s commitment to making the results of research carried out within FP7 available as widely and effectively as possible with the aim of achieving the optimum impact both inside and outside the world of science and scholarship.

The Commission is also helping to build up the infrastructure for providing access to scientific/scholarly information. Examples of this action include financing infrastructural projects such as DRIVER (Digital Repository Infrastructure Vision for European Research) and a variety of studies to examine the effect of new business models for scientific publication. Mr Potočnik concludes that the Member States intend formulating joint policy on access to scientific/scholarly information....

Comment.  As SURF says, the EU announced a pilot OA project in August 2008.  What it didn't mention is that the pilot project mandates OA for only 20% of the EU's research budget for 2007-2013.  That's why it matters that Potočnik told Liebrand that "the Commission will encourage all recipients of EU subsidies to make published scientific/scholarly articles available to the public" (emphasis added).  The other good sign here is Potočnik's public statement that "Member States intend formulating joint policy on access to scientific/scholarly information".


Harvard repository and FAQs

The Harvard University Library has apparently launched the institutional repository to support its OA mandate.  I say "apparently" only because the link isn't working at the moment.

The repository is called DASH, and at least Harvard has launched the DASH Rights and License FAQ and DASH Procedural FAQ.  We may not learn what "DASH" stands for until the repository site is back up.

Update (11/3/08).  Also see Dorothea Salo's comments on what the FAQs reveal about the Harvard policy and its implementation.

An OA encyclopedia of business informatics

Oldenbourg Wissenschaftsverlag has published an OA encyclopedia of business informatics, the Enzyklopädie der Wirtschaftsinformatik.  For details, read the press release (October 14, 2008) in German or Google's English.  (Thanks to the Informationsplattform Open Access.)

German podcast on OA

Katja Mruck and Günter Mey discuss OA (in German) in a 104-minute podcast from Küchenradio, October 14, 2008.  Mruck and Mey are editors of the OA journal Forum Qualitative Sozialforschung.  (Thanks to Anthropnetworking.)

More on journal secrecy and vaporware

Robin Peek, Maturing of Open Access: With Growth Comes Growing Pains, preprint of a column forthcoming in the December issue of Information Today.  Posted October 30, 2008.  Excerpt:

Open access (OA) has had quite a good year....

Still, with maturity it was inevitable that new issues would emerge....

Of particular concern is the management of some of the young OA journals that are now appearing. Without question we have already seen that OA journals can be as well managed as toll-access journals, whether they charge a fee or not. Springer would certainly not be purchasing the BMC journals, which are OA, if they were not well run and well respected....

Good journals require good editors, editorial boards, referees, and a flow of quality papers. If a young journal has any chance to gain traction it has to have an editor that is committed to its well-being and who also can gather a board that will add their reputation and editorial assistance to the project....

So...I was shocked when I was invited -- as were, apparently, casts of thousands -- to join an undertaking called Scientific Journals International (SJI)....SJI espouses a “quadruple-blind” peer review system....To achieve this the editors will not be revealed by this company so that they can’t be influenced (what does that mean, bribery or maybe legal threats? I don’t know.) That is not how the academic game is played. Editors become editors of journals because of the prestige; it’s not something they won’t tell someone about. “I just became editor of this journal, but it’s a secret, don’t tell anyone.” ...

Another practice that [some of] these new journal publishers (SJI and others) engage in is that they purport to publish, say, “100 journals,” but when you visit the web site the majority are mere placeholders with notes like “coming soon” (or even “consider adding your content here”). ...I hope the day does not come when I must review a promotion report and find the candidate associated with a false front of a journal.

Yes, there are people who sign up for these new upstart editorial boards, and I am sure for lots of reasons. OA is cool these days and some may feel they are genuinely helping the cause....

Of course there are people who will sign up for almost anything to pad a vita....

A journal should not be considered “published” until it has actual content. And a proposed journal that does not get any content for a year should be dismantled and its editorial group disbanded for failing to deliver. Further if there was no content then there was no journal and it should be stricken from one’s vita. If you never reviewed an article, solicited an article, or even had an editorial board meeting, you have not been on an editorial board; to say otherwise is a falsehood. Maybe that’s what “quadruple blind” peer review was meant to blind us all to.

OA for research administrators

The presentations from the SRA International (Society of Research Administrators International) 2008 Annual Meeting (National Harbor, Maryland, October 12, 2008) are now online.  See especially the session on OA, Introduction to Open Access Publishing for Research Administrators.  (Thanks to the BMC blog.)

Digital Archimedes manuscript released libre OA

The Walters Art Museum, host of the Archimedes Palimpsest -- a manuscript of treatises by Archimedes of Syracuse -- posted its digital images of the manuscript online on October 29, 2008. The images and supplementary information are OA and licensed under the Creative Commons Attribution 3.0 license. See also the announcement. (Thanks to Glyn Moody.)

Comment. It's fuzzy whether the scans are copyrightable anyway (see Bridgeman v. Corel). But it's a good gesture regardless.

Costs of limiting PubMed searches to Free Full Text

Mary M. Krieger, Randy R. Richter, and Tricia M. Austin, An exploratory analysis of PubMed's free full-text limit on citation retrieval for clinical questions, Journal of the Medical Library Association, October 2008.  Abstract:  

Objective:  The research sought to determine (1) how use of the PubMed free full-text (FFT) limit affects citation retrieval and (2) how use of the FFT limit impacts the types of articles and levels of evidence retrieved.

Methods:  Four clinical questions based on a research agenda for physical therapy were searched in PubMed both with and without the use of the FFT limit. Retrieved citations were examined for relevancy to each question. Abstracts of relevant citations were reviewed to determine the types of articles and levels of evidence. Descriptive analysis was used to compare the total number of citations, number of relevant citations, types of articles, and levels of evidence both with and without the use of the FFT limit.

Results:  Across all 4 questions, the FFT limit reduced the number of citations to 11.1% of the total number of citations retrieved without the FFT limit. Additionally, high-quality evidence such as systematic reviews and randomized controlled trials were missed when the FFT limit was used.

Conclusions:  Health sciences librarians play a key role in educating users about the potential impact the FFT limit has on the number of citations, types of articles, and levels of evidence retrieved.

Comment.  In short, work for OA but don't assume that all valuable literature is already OA. 
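The limit the study measures is easy to probe yourself via NCBI's E-utilities: PubMed applies its free-full-text subset when the filter `free full text[sb]` is appended to a query. A minimal Python sketch of that comparison (the ESearch endpoint and filter syntax are NCBI's documented ones; the sample query is an illustrative placeholder, not one of the study's four clinical questions):

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(query, free_full_text_only=False):
    """Build a PubMed ESearch URL, optionally restricted to PubMed's
    free-full-text subset (the limit whose cost the study measures)."""
    term = query + " AND free full text[sb]" if free_full_text_only else query
    return ESEARCH + "?" + urlencode({"db": "pubmed", "term": term, "retmax": "100"})

# The same clinical question, searched with and without the FFT limit:
unrestricted = esearch_url("low back pain physical therapy")
restricted = esearch_url("low back pain physical therapy", free_full_text_only=True)
print(unrestricted)
print(restricted)
```

Fetching both URLs and comparing the `<Count>` elements in the XML responses reproduces the study's basic comparison; the 11.1% figure quoted above is what that ratio came to across their four questions.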


Thursday, October 30, 2008

OA journal launches repository for article-related datasets

The Optical Society of America has launched a repository to host datasets associated with articles published in its journals, including the OA journal Optics Express. OSA is calling the initiative "Interactive Science Publishing" ("ISP"). See this email from M. Scott Dineen:

The Optical Society has just started to use a DSpace-based system to host datasets associated with peer-reviewed journal articles. The OSA "MIDAS" system was launched with the help of [the National Library of Medicine] and Kitware, Inc., on a DSpace platform. Our first articles with datasets were published earlier this month with 3D image data (TIFF stacks, RAW, DICOM, and similar formats). ...

Our first special issue of Optics Express with associated datasets is available here.

See also the September 30, 2008 press release announcing OSA's partnership with NLM. From the press release:

... The joint OSA/NLM pilot ISP project will publish three to four focus issues in OSA journals in 2008 and 2009 with articles that include large datasets as primary components. The articles will be open access, as will tools for viewing and analyzing the datasets. The articles and data will be published in OSA’s journals, but will also be deposited in PubMed Central, NIH’s free digital archive of biomedical and life sciences journal literature.

See also the project's wiki.

Comments.

  • There are two parts here: MIDAS, the DSpace-based repository software, and the OSA ISP software, used for viewing the files downloaded from MIDAS.
    • MIDAS is based on FOSS, including DSpace, but based on Kitware's product page, there doesn't seem to be any plan to release MIDAS as open source. (DSpace's license doesn't have a copyleft clause that compels derivative software to be free.)
    • OSA's ISP software, which is only available for Windows and Mac OS, doesn't seem to be open source, either. In fact, the gratis version offers only limited functionality; from the FAQ:
      Access to full OSA ISP authoring functionality is freely available for 30 days following activation. After 30 days, the software reverts to reader mode. In reader mode, one can interactively view data associated with OSA ISP articles but cannot load other data or use the authoring tools.
      However, the FAQ also says it's not necessary to use OSA's ISP software:
      We recommend use of the free OSA ISP software for viewing ISP image datasets. ... However, the source data as provided by the author is available in the MIDAS repository and can be loaded into any capable software application. ...
  • As to the data themselves, here are the terms of use:
    ... You may use the datasets for research purposes, provided that Author(s) are given proper credit as the source of the data, in a manner consistent with generally accepted scientific principles. ...
Update. See also Kitware's press release on the ISP software.

PALINET launches digitization initiative

PALINET, a regional library network in the U.S., launched a digitization project on October 21, 2008. See the press release or this story in Library Journal:

... [T]he PALINET regional library network recently announced its Mass Digitization Collaborative, supported in part by a grant from the Alfred P. Sloan Foundation. Through the project, PALINET member libraries will be able to scan and digitize selected texts as the result of an ongoing partnership with the Internet Archive and its regional network of digitization centers.

Participants will receive high-quality versions of the digital editions, which will also be made freely available through Archive.org. The goal, according to Catherine C. Wilt, PALINET’s executive director, is to make available more than “20 million pages of text from PALINET members,” equivalent to approximately 60,000 books.

Only works free of copyright restrictions and with existing metadata are eligible for the project. Moreover, member institutions are “strongly encouraged” to select unique texts and items of local and regional significance. ...

Presentation on U. Calgary's OA fund

Andrew Waller, Open Access and the Open Access Authors Fund, presentation at the University of Calgary, October 28, 2008. Abstract:
This presentation covers some of the basic elements of Open Access and briefly discusses the recently-established Open Access Authors Fund at the University of Calgary. Presented to faculty members and graduate students of the Faculty of Law, October 28, 2008.

New OA journal of epigenetics

Epigenetics & Chromatin is a new peer-reviewed OA journal published by BioMed Central. See the October 30, 2008 announcement. The article-processing charge is £1180 (€1485, $1875), subject to discounts and waivers. Authors retain copyright to their work and articles are available under a Creative Commons Attribution License. The inaugural editorial is now available.

EDUCAUSE launches interest group on openness

Colleen Luckett, EDUCAUSE Involvement Opportunity: New Openness Constituent Group, EDUCAUSE blog, October 9, 2008.
EDUCAUSE has launched the new Openness Constituent Group, which focuses on the emergence and adoption of open technologies, practices, policies, and initiatives, and how they affect the delivery and support of education. ...
From the group's description:

From critical IT services to educational content, distributed models based on openness are challenging higher education's traditional approaches. The Openness Constituent Group focuses on the emergence and adoption of open technologies, practices, policies, and initiatives, and how they affect the delivery and support of education. Topics include but are not limited to free and open source software, open content, open educational resources, open courseware, open standards, and management practices such as open business and enterprise 2.0.

This group meets at the EDUCAUSE Annual Conference and uses the electronic discussion list to discuss issues throughout the year. ...

On CC and collecting societies

Catherine Saez, Improbable Match: Open Licences And Collecting Societies In Europe, Intellectual Property Watch, October 28, 2008.

... French authors still cannot put their work under free licences, such as Creative Commons, for non-commercial use while being members of [French collecting society] Sacem, they said. Some European collecting societies are trying to find a compromise. ...

French authors give their exclusive rights to Sacem, including on non-commercial use. “We have discussed for years with Sacem without any luck, but our colleagues in the Netherlands and in Denmark are working with [the Dutch and Danish collecting societies] Buma Stemra and Koda to try to achieve an effective compatibility between Creative Commons and collective management,” [Mélanie Dulong of Creative Commons France] said.

In the United States, collecting societies do not have exclusive rights on the works of authors, so the compatibility problem does not arise, she added. ...

See also our past posts on collecting societies.

Update. See also the post at the Creative Commons blog.

Update on WIPO Development Agenda

William New, WIPO Pitches Proposed Programme Of Strategic Realignment, Intellectual Property Watch, October 29, 2008.

A few weeks after its new director general took office, the World Intellectual Property Organization has announced a programme of strategic realignment in the secretariat aimed at opening up the organisation and improving the focus on customer service. ...

[A] notable change to the organisational structure is the addition of a Development Agenda Coordination Division, reporting directly to the director general. This division is headed by Pushpendra Rai. ...

See also our past posts on the Development Agenda.

More on digital copies of public-domain art

Art museums are not obliged to provide OA to digital copies of their public-domain art.  But what if they have committed themselves to OA? 

Klaus Graf reports that the Staatliche Kunstsammlungen Dresden has signed the Berlin Declaration and chosen to sell digital copies of its public-domain art.  Read his post in German or Google's English.

OAD is six months old

From the Open Access Directory:

The OAD has reached its six-month birthday! We have grown from six lists to thirty-four, which shows the strength and passion of the OA community.

Thank you to everyone who supported us, contributed lists, added content, and helped get the word out.

Robin Peek, Editor; David Goodman, Associate Editor; Athanasia Pontika, Assistant Editor; Terry Plum, Technology Coordinator. Editorial board members: Charles Bailey Jr., Leslie Chan, Heather Joseph, Melissa Hagemann, Peter Suber, Alma Swan, and John Wilbanks.

Siva Vaidhyanathan on the Google settlement

Siva Vaidhyanathan, My initial take on the Google-publishers settlement, at The Googlization of Everything, October 28, 2008.  Excerpt:

...This registry would serve as a helpful database through which scholars and publishers may find rights holders to clear rights. As of today, there is no good database for such book rights for most of the books published in the 20th century. So this has the potential to be a major boon to research and publishing. In addition, it can help rights holders accrue royalties (meager though they might be) by exploiting a market that currently does not work efficiently or effectively -- reprints or selections from out-of-print works. Google is doing what the U.S. Copyright Office should have done years ago. As usual, Google is making up for public failure -- the opposite of market failure.

• Google will offer (with nasty digital rights management) full-text copies of out-of-print books (presumably only those books published by members of the AAP, thus excluding university presses, independent and small presses, vanity presses, etc.) for sale as downloads.

• Google will offer much better access to many out-of-print works still under copyright. Right now Google offers these texts in useless "snippet" form. There would be much richer and broader access under the settlement....

In addition, this settlement, if it goes through, dodges that great copyright meltdown that I had feared. I did not want to see Google lose this suit in court. And I was confident it would. Google lawyers assured me that they were even more confident they would prevail. And they are smarter than I am. But clearly both sides saw real risk in continuing toward a courtroom showdown.

However, back when Google debuted the library scanning part of the Book Search program, many of my fellow copyright critics celebrated the fact that a big, rich, powerful company was taking a stand to make fair use stronger. Well, it looks like that never did happen. Fair use in the digital world is just as murky and unpredictable (not to mention unfair and useless) as it was yesterday.

But what about the problems and pitfalls of this settlement? I have asked Google folks the following questions:

• Isn't this a tremendous anti-trust problem? Google has essentially set up a huge compulsory licensing system without the legislation that usually makes such systems work. One of the reasons it took a statutory move to create compulsory licensing for musical compositions was that Congress had to explicitly declare such a consortium and the organizations that run it (ASCAP, BMI) exempt from anti-trust laws. In addition, this proposed system excludes many publishers (such as university presses) and many authors (those not in the Authors' Guild). More importantly, this system excludes the other major search engines and the one competitor Google has in the digital book race: the Open Content Alliance. Don't they now have a very strong claim for an anti-trust action? ...

Overall, though, I have to offer my congratulations to both Google and the publishers. They forged a beneficial system that could make a difference to many authors, scholars, and researchers while making both Google and publishers a little money that they might otherwise never see....

My major criticisms of Google Book Search have always concerned the actions of the university libraries that have participated in this program rather than Google itself....[U]niversity libraries have a different, much higher mission [than for-profit corporations]. And they have clear ethical obligations. So I now turn to them.

From the beginning, this has seemed to be a major example of corporate welfare. Libraries at public universities all over this country (including the one that employs me) have spent many billions of dollars collecting these books. Now they are just giving away access to one company that is cornering the market on on-line access. They did this without concern for user confidentiality, preservation, image quality, search prowess, metadata standards, or long-term sustainability. They chose the expedient way rather than the best way to build and extend their collections.

I am sympathetic to the claim that something is better than nothing and sooner is better than later. But sympathy remains mere sympathy. These claims are not convincing when one considers just how great an alternative system could be, if everyone would just mount a long-term, global campaign for it rather than settle for the quick fix....

Ultimately, I have to ask: Is this really the best possible system for the universal spread of knowledge? I think we can do better....

Harvard doesn't like the Google settlement

Laura G. Mirviss, Harvard-Google Online Book Deal at Risk, Harvard Crimson, October 30, 2008.  Excerpt:

Harvard University Library will not take part in Google’s book scanning project for in-copyright works after finding the terms of its landmark $125 million settlement regarding copyrighted materials unsatisfactory, University officials said yesterday.

Harvard had been one of five academic libraries—along with Stanford, Oxford, Michigan, and the New York Public Library—to partner with Google when the book scanning initiative was announced in October 2004. University officials said that Harvard would continue its policy of only allowing Google to scan books whose copyrights have expired....

University spokesman John D. Longbrake said that HUL’s participation in the scanning of copyright materials was contingent on the outcome of the settlement between Google and the publishers.

Harvard might still take part in the project, Longbrake said, if the settlement between Google and publishers contains more “reasonable terms” for the University.

In a letter released to library staff, University Library Director Robert C. Darnton ’60 said that uncertainties in the settlement made it impossible for HUL to participate.

“As we understand it, the settlement contains too many potential limitations on access to and use of the books by members of the higher education community and by patrons of public libraries,” Darnton wrote.

“The settlement provides no assurance that the prices charged for access will be reasonable,” Darnton added, “especially since the subscription services will have no real competitors [and] the scope of access to the digitized books is in various ways both limited and uncertain.”

He also said that the quality of the books may be a cause for concern, as “in many cases will be missing photographs, illustrations and other pictorial works, which will reduce their utility for research and education.” ...

“We have said that we believe that Google’s treatment of in-copyright works is consistent with copyright law,” Longbrake said in 2005 after the lawsuit against Google was filed.

Comment.  This is not a comment so much as a careful paraphrase, if only for myself, to get clear on what happened.  Harvard is not refusing to take part in the settlement.  It's not a party to the lawsuit and couldn't be a party to the settlement.  Nor is it terminating its agreement to let Google scan books from the Harvard library.  Harvard never allowed Google to scan copyrighted books from its library, as (say) Michigan did.  Instead it limited Google-scanning to public-domain books.  Today it announces that it will continue to limit Google to public-domain books.  Google just arranged for publishers to drop their objections to the scanning of copyrighted books, provided the scans meet certain terms, and expected that libraries would leap to participate.  But Harvard doesn't like the terms, either for unpaid access and use or for paid access.  Apparently Harvard is also saying, like many others, that Google could have prevailed on its original fair-use claim and should have litigated it to the end.

More on the Google-Publisher settlement

Here are some comments on the settlement from the press and blogosphere.

From Andrew Albanese at Library Journal:

...On a conference call this morning [10/28/08], the parties said that there remained a strong difference of opinion over the copyright principles at the core of the case. “We had a major disagreement with Google, and we still do,” said Paul Aiken, executive director of the Authors Guild. “We also don’t see eye-to-eye with publishers on book contract law,” he added, before calling the settlement “the biggest book deal” in U.S. publishing history. Taylor said two “guideposts” helped lead his organization through a thicket of issues in the suit. “Authors like their books to be read,” he noted, “and they like a nice royalty check.” ...

$45 million of [Google's $125 million] will be used to resolve claims for those whose books have been digitized—roughly $60 a book to authors....

[T]he “snippet”—the short glimpses of in-copyright book content initially offered by Google—will be replaced by a “preview” function, offering up to 20 percent of the book, including entire pages....

From Andrew Albanese in a second article for Library Journal:

...As with any class action suit, don’t expect the final results to come quickly. The settlement must still be approved by a federal judge—and as the recent Tasini settlement shows, that may be no slam dunk. Further, as one attorney told LJ on background, executing the nuts and bolts of the deal—creating the registry, setting up the subscription plan, and especially disbursing Google’s payment to authors and publishers—will provide no shortage of challenges to all parties. “There will be objectors, there always are,” the attorney stated. “This is going to be incredibly complex.” The settlement could even see another lawsuit filed seeking to stop it....

From Kirk Biglione at Medialoper:

The Winners

  • Google: It’s hard to overstate how important this agreement is for Google. Google has essentially acquired the digital rights to the long tail. At least the portion of the long tail that’s locked up in out of print books. That’s a VERY long tail. Google has mastered the art of turning arcane search phrases into money. In the future they’ll have a lot more content to monetize. Content that no other search engine will have access to. That’s a huge competitive advantage.
  • The Rightsholders: Authors and publishers will benefit immediately as they allocate the funds from the initial settlement, and over time as they collect revenue generated from out of print works. In the vast majority of cases, these out of print works would have never generated any additional income. I’ve already heard some grumbling that publishers gave too much away in this deal, but it’s hard to see how that can be the case. Google has basically created an entirely new revenue stream that publishers can use to profit on books that would otherwise not have generated a cent.
  • Libraries: The libraries that participate in the digitization program will get to keep control over their archives. Equally important, libraries will have digital access to the archives of other libraries. The academic community as a whole will benefit in ways that we can’t yet imagine.
  • The Public: The public gets easy access to millions of rare and out of print works.
The Losers
  • Amazon: Amazon’s 190,000 Kindle titles look puny compared to the millions of books Google now has access to. Granted, many of those Kindle titles make up the big head of consumer demand, as opposed to the long tail. Still, Google now has the ability to monetize millions of books Amazon can’t, if for no other reason than that they’re out of print. What’s more, under the new agreement Google has the right to sell printed copies of those books via print on demand. And I have a sneaking suspicion that Google still has a few more surprises in store for us. Android may turn out to be more than just a mobile phone platform....
  • Fair Use Advocates: There are many (myself included) who believed Google had a strong fair use argument to support their scanning efforts. It was hoped that a Google court victory would reaffirm those rights. By settling out of court Google avoided the issue entirely. Clearly Google has some long term goals for this content that would not have fallen under Fair Use. In the end Google was better off striking a deal with the rightsholders. Also, it’s been noted that by avoiding this issue entirely Google may have effectively locked out any future competition.

From Dan Cohen at DanCohen.org:

Finally, and perhaps most interesting and surprising to those of us in the digital humanities, is an all-too-brief mention of computational access to these millions of books:

In addition to the institutional subscriptions and the free public access terminals, the agreement also creates opportunities for researchers to study the millions of volumes in the Book Search index. Academics will be able to apply through an institution to run computational queries through the index without actually reading individual books.

From Paul Courant at Au Courant:

...First, and foremost, the settlement continues to allow the libraries to retain control of digital copies of works that Google has scanned in connection with the digitization projects....Moreover, we will be able to make research uses of our own collections....

Second, the settlement provides a mechanism that will make these collections widely available. Many, including me, would have been delighted if the outcome of the lawsuit had been a ringing affirmation of the fair use rights that Google had asserted as a defense. (My inexpert opinion is that Google’s position would and should have prevailed.) But even a win for Google would have left the libraries unable to have full use of their digitized collections of in-copyright materials on behalf of their own campuses or the broader public....

The settlement is not perfect, of course. It is reminiscent, however, of the original promise of the Google Book project: what once looked impossible or impossibly distant now looks possible in a relatively short period of time. Faculty, students, and other readers will be able to browse the collections of the world’s great libraries from their desks and from their breakfast tables. That’s pretty cool.

From James Grimmelmann at The Laboratorium:

...The result of the settlement will be to give Google a license to keep on doing what it’s doing, while allowing the authors to use their now-sharpened knives to sue anyone else who tries to do the same. At that point, of course, Google would be delighted for the authors to succeed, since it keeps the competition at bay. The settlement may also be bad for other search engines in another respect: the authors will claim that it undermines any claim of fair use in indexing books and making them searchable. Look, they’ll say, Google struck a deal to pay for its uses. That proves there’s a functioning market for these rights, and you should have to pay up, too. I happen to disagree, and this brings me to my second reaction:

You can’t strike a deal like this without court approval. That matters, because even if this settlement is approved, there is still no functioning “market” for these uses of copyrighted works. The issue is that this is a class-action settlement requiring judicial approval to bind all authors. It’s practically impossible for anyone else to take advantage of Google’s terms without filing suit to obtain a similar class-binding order. Individual license negotiation — the route that Google considered and rejected when it started the project — is utterly infeasible. Since voluntary negotiation can’t produce the result one needs to do comprehensive indexing, there’s still no market for it, and this settlement therefore shouldn’t prejudice future fair use claims by search engines....

In addition, there’s an antitrust issue with the proposed settlement....

It’s urgent that these concerns be placed in front of the court. I would argue that a necessary first step would be modifying the proposed settlement to offer any search engine equal ability to participate on the same terms as Google, with no prejudice to their ability to negotiate better terms if they can. Other modifications to prevent adverse fair use and antitrust consequences may also be necessary.

From Adam Hodgkin at Exact Editions:

...I wonder whether there is not an element of a 'winner's curse' about to descend on Google. Some parts of the settlement outline a fantastically complicated and ingenious business model for our future access to digital books. Very specific mechanisms for the pricing of books and the regulation of access, access to content within books, and access from within institutions to digital resources. If you read the stuff about 'Pricing Bins' and 'Pricing Algorithms' (pp. 49-50) you will get a good flavour of the extraordinarily detailed prescriptions.

A lot of this setup and this detail really needs to be established by innovation, by experiment and by markets, not by a court-approved Settlement to a private dispute....

From Carolyn Kellogg in the Los Angeles Times:

...Other questions: Google will provide some full-access terminals at public libraries for free, but is it incentivizing multiple terminals as a paid service? If publishers opt out of the entire service, will they be protecting their intellectual property, or making a grave mistake? Will this kind of electronic book replace previous ebooks? Will the Google process of scanning and keywording existing bound books (which makes them look nice and bookish on the screen, with visible page textures and the occasional slightly sideways scan) be used for new books, or will they get digital files from publishers? And if that happens, will the electronic versions begin to look less like books and more like text on a screen, changing the way books are designed? Will small presses have a voice in the shaping of the registry, or will it be dominated by corporate players? And, with Google centralizing and, as they said on the call, "tracking" so much of this information, should we be thinking about privacy -- about who knows what about what we read? ...

From Lawrence Lessig at Lessig.org:

IMHO, this is a good deal that could be the basis for something really fantastic. The Authors Guild and the American Association of Publishers have settled for terms that will assure greater access to these materials than would have been the case had Google prevailed. Under the agreement, 20% of any work not opting out will be available freely; full access can be purchased for a fee. That secures more access for this class of out-of-print but presumptively-under-copyright works than Google was initially proposing. And as this constitutes up to 75% of the books in the libraries to be scanned, that is hugely important and good....

It is also good news that the settlement does not presume to answer the question about what "fair use" would have allowed....That leaves "fair use" as it is, and gives the spread of knowledge more than it would have had....

The hard question for the registry is how far they will go to support the range of business models that authors and publishers might have. E.g., Yale Press "Books Unbound" and Bloomsbury Academic both have Creative Commons licensed authors. Will the registry enable that fact to be recognized? Indeed, though the comment was made by someone from the plaintiffs' side that it would be "perverse" for authors to choose free licensing, it is perfectly plausible that an author would choose to make his or her work available freely electronically, but contract with one commercial publisher to deal with selling the physical book, or licensing rights commercially. That, again, is the Bloomsbury Academic business model....

But key to the good in the agreement is that we don't have to trust the nonprofit [registry] to do good here. Google has committed both to making the data it can control (not private data about telephone numbers and contact info, but public data about copyright registration, terms, etc.) nonexclusively available, and more importantly, downloadable by anyone who wants to build a competing and complementary database....

The biggest loser in this whole battle is the Orphan Works legislation....

From Wendy Seltzer at Seltzer.org:

...I worry about the effects on competition — Google’s high settlement payments are barriers to entry by anyone else. Though it’s plausible no one had the resources or spine to compete with Google regardless, a judicial determination that the use was fair would have enabled more competition in parallel and distinct library offerings. Now, Google cements its advantage in yet another field. (And of course, with the circularity of “effect on the market” testing, makes it harder for someone else to claim fair use.)

From Sherwin Siy at Public Knowledge:

...Depending on how you saw the merits of the case, and how confident you were in the court reaching the right decision, that can be good or bad. On the one hand, we don’t have a federal court saying that scanning books is a per se fair use; on the other hand, we don’t have a court saying that scanning is per se infringement, either.

This does mean that the financial and legal might of Google is no longer going to be aligned with libraries and archives that may wish to provide digital services that are technologically similar to Google’s efforts. This will mean that further fair use fights for digital libraries start closer to square one than they would have otherwise....

One of the interesting things about the settlement is how it draws the distinction between books that are in-print and out-of-print....This is an important distinction for practical matters of accessing works, but one not so explicitly present in copyright statutes....As a practical matter, it seems much more reasonable to make a copy of a work if there’s no way for me to obtain it from a bookstore. Yet this might not save me from being found an infringer under fair use, given a sufficiently litigious plaintiff and a sufficiently unsympathetic court. After all, even if there are no other copies of the book available, there’s a potential market in licensing the right to make a copy of the book.

Which is why it’s refreshing that this distinction is drawn at all in the agreement, and in what will be available to users. This sort of arrangement can be cited as a positive feature of licensing and the power of contract—the ability to draw distinctions that matter to the parties that the law doesn’t recognize.  Of course, there are distinct drawbacks to contract, too. Contract is a two-way street, where each party gives up something of value to the other. But that means that contract isn’t a town square or a commons; the interests of those not party to the contract are often ignored....

From David Sohn at the Center for Democracy and Technology:

If there’s a downside here, it is that the path Google has pursued here will not be easy for others to follow. Google’s scanning and display of excerpts of out-of-print books will rely not on fair use, but rather on what amounts to a broadly binding license derived from the class action settlement. The settlement states clearly that authors and publishers are free to strike deals with other companies that may want to offer some kind of search tool or online access capability for their books. But any would-be new entrant in this market would either have to seek out its own set of licenses with a vast number of authors and publishers, or proceed based on fair use and expose itself to the same kind of lawsuit that Google faced. Because Google did not litigate the fair use question through to the end, it didn’t blaze a fair use trail that others could follow. Naturally, that’s just fine with the publishing industry, which rejects the idea that what Google was doing could have qualified as fair use. But it creates a considerable challenge for anyone eyeing the creation of some type of new indexing, search, or analogous service.

Indeed, there is an argument that the settlement actually may increase the legal danger of relying on fair use in this kind of context. One of the key factors in any fair use determination is what impact the use in question may have on the rights holder’s potential market. With this settlement, the parties aim to create a market in which book searching generates various types of revenues. That is not a bad thing. But it could make it harder for a newcomer to argue that a feature or service that delivers similar functionality would not affect the rights holder’s opportunities for commercially exploiting their works....

From Jack Stripling at Inside Higher Ed:

...Patricia Schroeder, president and chief operating officer of the Association of American Publishers, one of the plaintiffs in the suit, said both parties thought that resolving the litigation was more important than fighting out some of the larger — and lingering — legal questions about copyright in the digital age.  “We could have all fallen on our swords dueling to the last drop of blood over what is fair use,” said Schroeder....

Siva Vaidhyanathan, an associate professor of media studies and law at the University of Virginia, said the book registry will improve scholarship by clarifying who owns the rights to works. That said, Vaidhyanathan suggested that the settlement fell short of what many saw as the promise of the legal challenge.

“When this whole project started four years ago, there were a lot of people declaring Google was striking a major blow for fair use and freer content, and this settlement I think shows there was a bit of hyperbole attached to those claims. Clearly neither Google nor the publishers wanted to roll the dice on that question,” said Vaidhyanathan, author of the forthcoming book The Googlization of Everything....

Peter Petre, an author, said the compensation arrangement outlined in the agreement is similar to the arrangement that the American Society of Composers, Authors and Publishers offers to the music industry. ASCAP distributes royalties to musicians when their works are broadcast or performed.

“What makes me most excited about this deal is not the $60 — it will buy a round of drinks. [But] this agreement creates the writers’ equivalent of ASCAP; that gives me hope, and it makes me feel secure about online displays of my work,” said Petre, treasurer of the Authors Guild, a copyright advocacy group for authors that joined the suit....

Laine Farley, interim executive director of the California Digital Library at the University of California System, said it’s still unclear whether universities that supplied books for digitization will be given free or reduced cost subscriptions....

From TechDirt:

...Authors and publishers will allow books to go online, but it locks Google into a specific business model that might not be the most reasonable and, most importantly, it does not answer the legal question concerning the overall legality of book scanning. Pretty much any way you look at it, Google caved here -- and this is unfortunate for a variety of reasons.

Two years ago, there was a story in the NY Times about how Google's legal department saw all of these lawsuits against the company as a way to stand up on principle and make better law. Specifically, the company positioned itself as being willing to fight certain lawsuits on principle in order to get precedent setting rulings on the books in support of openness, fair use, safe harbors and many other important issues. The company suggested that, rather than settle, it would fight these lawsuits knowing that it alone, with its big war chest of money, could fight some of these battles that tiny startups could never afford.

It may not be surprising, but it's safe to say those days are long gone....

Not surprisingly, authors and publishers sued Google over this, and went around claiming how awful it was -- even though it was really not all that different than creating a much better card catalog for books. The purpose was to help people find more books that were useful, rather than to break any sort of copyright. And, in fact, studies showed that books that showed up in Google's search improved sales. In other words, it should have been a win-win situation all around. But, like so many content providers, authors and publishers falsely overvalue the content and undervalue services that make that content more valuable....

So, it's quite upsetting to see Google cave on this. The settlement does not establish any sort of precedent on the legality of creating such an index of books, and, if anything, pushes things in the other direction, saying that authors and publishers now have the right to determine what innovations there can be when it comes to archiving and indexing works of content....From a short-term business perspective this might make sense, but from a long-term business perspective (and wider cultural perspective) it's terrible.

It will only encourage more lawsuits against Google for trying to innovate, as more and more people hope that Google will settle and throw some cash their way. Furthermore, it greatly diminishes the incentives for making books more useful, and that's damaging to our cultural heritage....

From the University of California, University of Michigan, and Stanford University in a joint statement:

The University of California, University of Michigan, and Stanford University announce today their joint support for the outstanding public benefits made possible through the proposed settlement agreement...by Google...and plaintiffs....

"It will now be possible, even easy, for anyone to access these great collections from anywhere in the United States," said University of Michigan’s Paul N. Courant, University Librarian and Harold T. Shapiro Collegiate Professor of Public Policy. “This is an extraordinary accomplishment.” ...

"Millions of books are held in our libraries as a public trust," said Daniel Greenstein, Vice Provost at the University of California. "This settlement will help provide broad access to them as well as other public benefits, and it also promises to promote innovation in scholarship. For these reasons, UC is pleased to have given input along with Universities of Michigan and Stanford in support of the public good, and we look forward to playing a continuing role by contributing UC library volumes to the development of this rich online resource." ...

“The settlement promises to change profoundly the level of access that may be afforded to the printed cultural record, so much of which is presently available to those who are able to visit one of the world’s great libraries,” Michael Keller continued. “The democratic impulses – the access to knowledge – are simply too compelling to ignore....

Among the important benefits to higher education are:

  • Free full-text access at public libraries around the country;
  • Free preview and the ability to find the book either at a local library or through a consumer purchase;
  • A first-ever database of both in-copyright and out-of-copyright (public domain) works on which scholars can conduct advanced research (known as the “research corpus”). For example, a corpus of this sort will allow scholars in the field of comparative linguistics to conduct specialized large-scale analysis of language, looking for trends over time and expanding our understanding of language and culture;
  • Enabling the sharing of public domain works among scholars, students and institutions. Not only will scholars and students at other universities be able to read these online, but this will make it possible to provide large numbers of texts to individuals wishing to perform research;
  • Institutional subscriptions providing access to in-copyright, out-of-print books;
  • Working copies of partner libraries' contributed works for searching and web services complementary to Google's;
  • Accommodated services for persons with print disabilities – making it possible for persons with print disabilities to view or have text read with the use of reader technology;
  • Digital copies of works digitized by Google provided to the partner libraries for long-term preservation purposes. This is important because, as university libraries, we are tasked by the public to be repositories of human knowledge and information....

IR plans at the U of Kashmir

Ishfaq Mir, An Interview With the Vice Chancellor of the University of Kashmir, KashmirForum.org, October 29, 2008.  Excerpt:

Professor Riyaz Punjabi...is a distinguished academician and an expert on International Peace and Conflict Studies. He started his career at Kashmir University and holds a Doctorate in Law....[He took] over as Vice Chancellor of the University seven months back....

IM: After assuming charge as VC, did you try to know about the research conducted so far in the Kashmir University and how it has benefited the State in socio-economic aspects?

RP: We are developing a website named ‘e-repository’ in which the abstracts of all the research done so far would be kept. Afterwards the whole research would be publicized as such. The Department of Library and Information Science is likely to make available the facility of the Open Access (OA) movement, a world-wide effort to provide free online access to the scholarly literature, especially peer-reviewed journal articles and other preprints. The Department will make the facility available not only to affiliated colleges but will formulate a committee to draw members from SKUAST, SKIMS and the University of Kashmir to develop strategies for making Open Access operational for scientists and scholars....


Wednesday, October 29, 2008

Software for managing IR ingest

The University of Utah has released the University Scholarly Knowledge Inventory System (U-SKIS). (Thanks to Charles Bailey.) From the announcement:
... U-SKIS tracks items or citations prior to ingest to CONTENTdm. This provides a workspace for staff to determine what can be added to the repository based on publishers’ archiving policies and to efficiently manage every stage of this process.

The software tracks files, attaches metadata, tracks communications and publisher policies, and deposits the files and metadata into CONTENTdm. U-SKIS uses the Dublin Core standard to apply metadata to documents, which are then re-used once the item is ready to be added to the repository. ...

New release of Fedora

Version 3.1 of the Fedora repository software was released on October 28, 2008. See the announcement or the release notes. (Thanks to Charles Bailey.)

More on privately held patents on publicly-funded research

Anthony D. So and six co-authors, Is Bayh-Dole Good for Developing Countries? Lessons from the US Experience, PLoS Biology, October 28, 2008.  Excerpt:

Recently, countries from China and Brazil to Malaysia and South Africa have passed laws promoting the patenting of publicly funded research, and a similar proposal is under legislative consideration in India. These initiatives are modeled in part on the United States Bayh-Dole Act of 1980. Bayh-Dole (BD) encouraged American universities to acquire patents on inventions resulting from government-funded research and to issue exclusive licenses to private firms, on the assumption that exclusive licensing creates incentives to commercialize these inventions. A broader hope of BD, and the initiatives emulating it, was that patenting and licensing of public sector research would spur science-based economic growth as well as national competitiveness. And while it was not an explicit goal of BD, some of the emulation initiatives also aim to generate revenues for public sector research institutions.

We believe government-supported research should be managed in the public interest. We also believe that some of the claims favoring BD-type initiatives overstate the Act's contributions to growth in US innovation. Important concerns and safeguards —learned from nearly 30 years of experience in the US— have been largely overlooked. Furthermore, both patent law and science have changed considerably since BD was adopted in 1980. Other countries seeking to emulate that legislation need to consider this new context....

Student support for OA at Uppsala

The student union at the University of Uppsala has drafted a statement calling for OA to publicly-funded research and OA to the university's research.  Read it in Swedish or Google's English.

Another OA repository for research on digital preservation

Jason Kucsma, Preserving the Digital Preservation Conversation, Jason Kucsma, October 29, 2008.  An article-length proposal of an OA repository for literature on digital preservation:  the Digital Preservation Resource Repository (DiPRR).  Excerpt:

The rapidly changing landscape and relative “newness” of the digital preservation field...affords scholars, practitioners, and students the opportunity to swap roles more fluidly than virtually any other profession. Such role-swapping, however, demands a thorough, centralized repository of published literature on digital preservation theory and best practices....

Adopting a documentation strategy introduced by Helen Samuels (1986), this repository would serve as a historical record of where the field has been, a current record of where the field stands, and a projection of where the digital preservation movement is heading....

Documenting an entire field is not without its challenges....However, now is an appropriate time to cast a wide net on published work on digital preservation literature while the scope of work is relatively manageable....

A centralized open access repository would aid in eliminating geographic and institutional barriers that may be seen as impeding the progress of the digital preservation movement as a whole.

In an attempt to rein in such a seemingly large body of knowledge, the Digital Preservation Resource Repository (DiPRR, pronounced “dipper”) will focus entirely on scholarly works published in online and print journals and those additional works published independently by organizations that self-identify digital preservation as their primary concern....

Comment.  It's a great idea.  But there already is an OA repository for literature on digital preservation, ERPAePRINTS.  From the ERPAePRINTS front page: 

The Electronic Resource Preservation and Access Network (ERPANET) and the Digital Curation Centre (DCC) have established this Open Archives ePrint service in conjunction with DAEDALUS to make international digital curation and preservation research outputs visible, accessible and usable over time.

OA is changing the balance of rights between authors and publishers

Andrea Rinaldi, Access evolved?  EMBO Reports, vol. 9, no. 4 (2008) pp. 317-321 (accessible only to subscribers, at least so far).  A recap of the rise of OA journals, with some attention to the NIH policy and objections to it from the publishing lobby.  Excerpt:

Versatile open access policies are evolving together with scholarly information, but copyright issues remain unsettled....

In practice, OA journals seek to cover their editorial and production costs by charging authors to publish and thus make the final article freely available on the internet. In addition, OA journals require authors to sign a copyright licence that fulfils at least the Budapest definition of OA....

OA has begun to transform the copyright model used by traditional publishers. Historically, the author(s) of an article —while retaining the right to be acknowledged as the creator(s) of the work— usually transferred all other rights to the publisher. In practice, this meant that the publisher had full control over the distribution, use and re-use of scholarly material. Access to and the republication of a paper —even for educational purposes or by the author himself— thus depended on permission from the publisher. OA, instead, limits copyright and licencing restrictions to enable the right for re-use for any responsible purpose (Fig 1)....

Comment.  Unfortunately, Rinaldi assumes that all OA journals charge author-side publication fees, when most do not.  (More evidence here and here.)  She also assumes that all of them are libre OA, under open licenses, when many, perhaps most, are merely gratis OA, limiting users to fair use.


Tuesday, October 28, 2008

Enhancement of OA microarray database

Jeremy Hubble, et al., Implementation of GenePattern within the Stanford Microarray Database, Nucleic Acids Research, October 25, 2008. Abstract:
Hundreds of researchers across the world use the Stanford Microarray Database (SMD) to store, annotate, view, analyze and share microarray data. In addition to providing registered users at Stanford access to their own data, SMD also provides access to public data, and tools with which to analyze those data, to any public user anywhere in the world. ... [W]e have incorporated the GenePattern software package directly into SMD, providing access to many new analysis tools, as well as a plug-in architecture that allows users to directly integrate and share additional tools through SMD. ... This extension is available with the SMD source code that is fully and freely available to others under an Open Source license, enabling other groups to create a local installation of SMD with an enriched data analysis capability.

OER project wins JISC/THE award

Teacher Training Videos, a collection of OA videos created to help teachers incorporate technology into their teaching, was named "Outstanding ICT initiative of the year" by JISC and Times Higher Education on October 23, 2008. See the announcement. The site was created by Russell Stannard, lecturer at the University of Westminster. (Thanks to Open Education News.)

Call for participation on science and Web 2.0 research

The Research Information Network has issued a call for expressions of interest on research about the effect of Web 2.0 tools on scientific practice. Researchers who express interest will be invited to submit a full proposal; £90,000 in funding will be available. Expressions of interest are due on November 3, 2008. (Thanks to Cameron Neylon.)

Google and publishers settle

Google and the book publishers who sued to stop the Google library project have reached a settlement.  See the AAP's settlement page and press release, as well as Google's settlement page, press release, and blog post.  The two press releases use the same text.

From the common press release  (October 28, 2008):

The Authors Guild, the Association of American Publishers (AAP), and Google today announced a groundbreaking settlement agreement on behalf of a broad class of authors and publishers worldwide that would expand online access to millions of in-copyright books and other written materials in the U.S. from the collections of a number of major U.S. libraries participating in Google Book Search.  The agreement, reached after two years of negotiations, would resolve a class-action lawsuit brought by book authors and the Authors Guild, as well as a separate lawsuit filed by five large publishers as representatives of the AAP’s membership.  The class action is subject to approval by the U.S. District Court for the Southern District of New York....

The agreement acknowledges the rights and interests of copyright owners, provides an efficient means for them to control how their intellectual property is accessed online and enables them to receive compensation for online access to their works.

If approved by the court, the agreement would provide:

  • More Access to Out-of-Print Books -- Generating greater exposure for millions of in-copyright works, including hard-to-find out-of-print books, by enabling readers in the U.S. to search these works and preview them online;
  • Additional Ways to Purchase Copyrighted Books -- Building off publishers’ and authors’ current efforts and further expanding the electronic market for copyrighted books in the U.S., by offering users the ability to purchase online access to many in-copyright books;
  • Institutional Subscriptions to Millions of Books Online -- Offering a means for U.S. colleges, universities and other organizations to obtain subscriptions for online access to collections from some of the world’s most renowned libraries;
  • Free Access From U.S. Libraries -- Providing free, full-text, online viewing of millions of out-of-print books at designated computers in U.S. public and university libraries; and
  • Compensation to Authors and Publishers and Control Over Access to Their Works -- Distributing payments earned from online access provided by Google and, prospectively, from similar programs that may be established by other providers, through a newly created independent, not-for-profit Book Rights Registry that will also locate rightsholders, collect and maintain accurate rightsholder information, and provide a way for rightsholders to request inclusion in or exclusion from the project.

Under the agreement, Google will make payments totaling $125 million. The money will be used to establish the Book Rights Registry, to resolve existing claims by authors and publishers and to cover legal fees....

Holders worldwide of U.S. copyrights can register their works with the Book Rights Registry and receive compensation from institutional subscriptions, book sales, ad revenues and other possible revenue models, as well as a cash payment if their works have already been digitized.

Libraries at the Universities of California, Michigan, Wisconsin, and Stanford have provided input into the settlement and expect to participate in the project, including by making their collections available....

It is expected that additional libraries in the U.S. will participate in this project in the future....

From the parties' joint FAQ:

1. Why did the Class Plaintiffs, the Authors Guild, Association of American Publishers (AAP), and Google come to an agreement?

This agreement will enable us to do more together than copyright owners and Google could have done alone or through a court ruling. Our agreement promises to benefit readers and researchers, and to enhance the ability of authors and publishers to distribute their content in digital form, by significantly expanding online access to works through Google Book Search. It also acknowledges the rights and interests of copyright owners, provides an efficient means for them to control how their intellectual property is accessed online and enables them to receive compensation for online access to their works. The agreement opens new opportunities for everyone - authors, publishers, libraries, Google, and readers....

12. How much will it cost to get full access to a book?

The price of purchasing online access to a book will be set in one of two ways, at the rightsholder’s option.  Google will automatically set and adjust prices through an algorithm designed to maximize revenues for the book. This algorithm will be based on multiple factors; it is not a subjective evaluation of each individual book....For the Institutional Subscription, Google will work with the Book Rights Registry to set the price based on the type of institution and the expected number of users at an institution....

13. Will advertising be shown with the books included in this project?

As with advertising currently offered through Google’s Partner Program, advertising may be displayed on books.google.com webpages.  Advertising will not be overlaid on pages from a book.  Rightsholders will receive the majority of the revenue from the advertising on web pages for specific books....   

From Google's blog post on the settlement:

...[The] Book Rights Registry...will help address the "orphan" works problem for books in the U.S., making it easier for people who want to use older books. Since the Book Rights Registry will also be responsible for distributing the money Google collects to authors and publishers, there will be a strong incentive for rightsholders to come forward and claim their works....

The agreement gives public and university libraries across the U.S. free, full-text viewing of books at a designated computer in each of their facilities. That means local libraries across the U.S. will be able to offer their patrons access to the incredible collections of our library partners -- a huge benefit to the public....

It is important to note that the agreement does not affect users outside the U.S., but it will affect copyright holders worldwide because they can register their works and receive compensation for them. While this agreement only concerns books scanned in the U.S., Google is committed to working with rightsholders, governments, and relevant institutions to bring the same opportunities to users, authors, and publishers in other countries....

Comments.  I'm still digesting this.  But here are some first impressions.

  • What looks good here? Google will continue to scan copyrighted, OP books (as well as public domain books) and make them full-text searchable.  Those searches will continue to be free of charge and may now display much more than short snippets (20% of the text by default, less if publishers individually object).  Publishers are dropping their objection to future scans, which will encourage more libraries to participate in the program and enlarge Google's book index.  Publishers of non-OA books have found a way to enter the 21st century without shunning the internet or losing money.
  • What looks bad here?  Other book scanners may have to pay to play as well, even if Google's original fair-use claim was valid.  The settlement may reduce scanning of copyrighted books by everyone except Google.
  • Some of Google's $125 million will set up the Book Rights Registry and some will be "compensation" to publishers whose books have already been scanned.  Google will also share revenues with publishers going forward.  I can't tell whether Google will "compensate" publishers for future scans or merely share revenue with them.  That may look like a fine point.  But if Google will compensate publishers for future scans, then it has relinquished its fair-use claim:  that the scanning was lawful without permission or payment provided the company displayed only short snippets.  But if Google is merely sharing revenue, then it hasn't necessarily relinquished that claim.  Giving up a valid fair-use claim would be a serious loss and could tie the hands of search engines forever.  Moreover, the claim seemed valid to a gaggle of copyright specialists including Jack Balkin, Susan Crawford, William Fisher, Lawrence Lessig, Jessica Litman, Fred von Lohmann, and William Patry (now also Google's chief copyright counsel and presumably one who signed off on the settlement).
  • See our many past posts on this lawsuit and my article from October 2005, Does Google Library violate copyright?  In that article I called the publisher lawsuit a shakedown, and so far I see no reason to change my mind.

Update. Read the full-length settlement document or Google's three page summary.

Update (10/31/08).  I just heard from Derek Slater, a policy analyst at Google.  (Thanks, Derek.)

You wondered, "I can't tell whether Google will 'compensate' publishers for future scans or merely share revenue with them." As you know, under the settlement we will be compensating rightsholders for past scans with a fixed payment of at least $60 per book (and at least $45 million total). For future scans, we will *not* be paying any such compensation, though we will have a revenue share for all the new access models (Preview, Purchase, Institutional Subscription).  Preview is free to the user, and the revenue share involves advertising on Preview pages.

Update (11/6/08). I add some second thoughts to my first impressions in a new post.

Open data vs. genetic privacy

Brenda Patoine, Speed Bump for Open Access to Genomic Data, Annals of Neurology blog, October 27, 2008.

... Genome-wide association studies have been used to great effect in recent years ...

To facilitate data sharing and accelerate genetic studies, the National Institutes of Health has made a concerted effort to ensure that summary data from genome-wide association studies is freely available to researchers, and to require researchers to bank genetic data from NIH-funded studies in online repositories. But in a policy change announced August 29, the National Human Genome Research Institute (NHGRI) — along with The Wellcome Trust and the Broad Institute — took a cautionary step backward, limiting access to the very same data for which they’ve advocated greater sharing.

The move was prompted by the discovery that, with enough genomic data on an individual, it is possible to determine whether that individual participated in a given genetic study by analyzing pooled summary data such as that readily available on NIH’s dbGaP or CGEMS Web sites until recently. In the August 29 issue of PLoS Genetics, David W. Craig and colleagues at the Translational Genomics Research Institute (TGen) in Phoenix and the University of California, Los Angeles, spelled out a methodology by which an individual genotype could be detected, probabilistically, from a mix of DNA samples or from pooled data sets of aggregate single nucleotide polymorphisms. ...

In a letter to Science magazine published online September 4, NIH Director Elias Zerhouni and National Heart, Lung and Blood Institute Director Elizabeth Nabel said that, in addition to having important implications for forensics and genome-wide studies, the TGen/UCLA research “has also changed our understanding of the risks of making aggregate genomic data publicly available.”

“Sharing genomic data and, particularly, allele frequencies has become common practice, if not an imperative, in science,” Zerhouni and Nabel wrote. “Yet, the protection of participant privacy and the confidentiality of their data are of paramount importance.”

Informed by Craig in advance of the paper’s publication that study participants’ genetic information privacy could be compromised, NIH moved quickly to remove aggregate genomic data from public access. Such data is now sealed off behind a firewall, accessible to researchers only after an application and review process and subject to specific terms and conditions of use. The change essentially treats aggregate data as individual-level genotype/phenotype data, to which access was already controlled because of perceived privacy vulnerabilities. ...

The move has nonetheless caused ripple effects throughout the genetics research community, as universities mull whether to pull data from their own Web sites and grapple with issues of informed consent in the face of the apparent vulnerabilities to participant confidentiality. ...
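Comment.  To make the privacy risk concrete, here is a simplified simulation of the kind of test Craig and colleagues described: comparing an individual's genotypes against pooled allele frequencies from a study versus a reference population, and summing the per-SNP differences. This is my own illustrative sketch, not the authors' code; the SNP counts, pool size, and statistic are assumptions chosen to show the effect, and the real method involves more careful modeling.

```python
import random
import statistics

def detection_statistic(individual, mixture_freqs, reference_freqs):
    # Per-SNP contrast |y - ref| - |y - mix|.  If the individual is in the
    # pool, the mixture frequencies are pulled slightly toward their
    # genotypes, so these values are positive on average.
    return [abs(y - ref) - abs(y - mix)
            for y, mix, ref in zip(individual, mixture_freqs, reference_freqs)]

def z_score(values):
    # One-sample z statistic for "mean > 0" over many SNPs
    return statistics.mean(values) / (statistics.stdev(values) / len(values) ** 0.5)

random.seed(0)
n_snps, pool_size = 10_000, 100
ref = [random.uniform(0.1, 0.9) for _ in range(n_snps)]  # population allele freqs

def genotype(p):
    # Diploid genotype coded as an allele frequency: 0, 0.5 or 1
    return (int(random.random() < p) + int(random.random() < p)) / 2

# A study pool of 100 participants, and its published aggregate frequencies
pool = [[genotype(p) for p in ref] for _ in range(pool_size)]
mix = [sum(person[j] for person in pool) / pool_size for j in range(n_snps)]

participant = pool[0]                      # someone who is in the study
outsider = [genotype(p) for p in ref]      # an unrelated person

z_in = z_score(detection_statistic(participant, mix, ref))
z_out = z_score(detection_statistic(outsider, mix, ref))
print(f"participant z = {z_in:.1f}, outsider z = {z_out:.1f}")
```

Even with only aggregate frequencies published, the participant's z statistic stands out clearly from the outsider's, which is why NIH concluded that pooled summary data could compromise participant privacy.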

Review of OJS

PKP Project, Open-Source Software Helping Journals Around the World, Science Editor, September-October, 2008.  Not even an abstract is free online, at least so far.

Happy birthday, PLoS Biology

Theodora Bloom and eight co-authors, PLoS Biology at 5: The Future Is Open Access, PLoS Biology, October 28, 2008.  Excerpt:

On the 13th of October in 2003, with the first issue of PLoS Biology, the Public Library of Science realized its transformation from a grassroots organization of scientists to a publisher. Our fledgling website received over a million hits within its first hour, and major international newspapers and news outlets ran stories about the journal, about science communication in general, and about our founders —working scientists who had the temerity to take on the traditional publishing world and who pledged to lead a revolution in scholarly communication....Not all of the reactions were positive, of course, especially from those in the scientific publishing sector with a vested interest in maintaining the subscription-based system of journal publishing. But thanks in no small part to the efforts of the founders —Pat Brown, Mike Eisen, and Harold Varmus— and an editorial team that included a former editor of Cell and several from Nature, our call for scientists to join the open-access revolution...did not go unheeded. Five years on, the publishing landscape has changed radically. How much have PLoS Biology and PLoS contributed to that change and what might the future hold for us and for publishing?

PLoS Biology is the flagship journal that gave PLoS its initial credibility as a publisher....

The past five years have seen fundamental changes in the publishing infrastructure....

It is not possible to measure PLoS Biology's or even PLoS's contribution to all this change. We are now a small part of a much larger movement....

The next challenge—for PLoS Biology, for PLoS and for all open-access publishers—is to demonstrate the utility of open access in advancing science beyond what can be gained from just making the information publicly available to read. The biggest misconception about open access is that it's only about putting online what was in print and removing any toll for access. It's not: it's about having the freedom to reuse that material without restriction....Open-access publishing is therefore a crucial catalyst for a genuine shift in the way we use and mine the literature and integrate it with databases and other means of scientific communication....

As for the journal itself, PLoS Biology's key goal remains essentially the same as it was for our first issue: to attract and publish outstanding papers in the broad field of biology. Our founders laid it out in their 2003 editorial. “With all that is at stake in the choice of a journal in which to publish—career advancement, grant support, attracting good students and fellows—scientists who believe in the principle of open access and wish to support it are confronted with a difficult dilemma.” This challenge remains the case today because most open-access journals —even PLoS Biology— are still new and lack the prestige of established toll-access journals...And, as Peter Suber notes, “it will take time for OA journals to earn prestige in proportion to their quality” ....

Those of us who have taken part in the open-access scientific revolution can feel proud: open access has come far. But we must not be complacent. Most scientific publications still remain behind a subscription or other access barrier. For those who have not yet taken part, there is still time to help change the system. Commit to making your research-related publications open access by publishing in open-access journals and archiving your existing papers in publicly available digital repositories. It is not just the future —but your future— that is open access.

Brief introduction to authors' rights

Charles Bailey released his Author's Rights, Tout de Suite on October 27, 2008. From the announcement:

... [The publication] is designed to give journal article authors a quick introduction to key aspects of author's rights and to foster further exploration of this topic though liberal use of relevant references to online documents and links to pertinent Web sites.

It is under a Creative Commons Attribution-Noncommercial 3.0 United States License ...

The prior publication in the Tout de Suite series, Institutional Repositories, Tout de Suite, is also available.

See also our past post about the earlier publication in the series.

Columbia joins Nereus consortium

Columbia University has joined Nereus, the mostly-OA repository of economic research. Columbia is the first American member of Nereus.

See also our past posts on Nereus.

Presentations on the visibility of research

Videos of presentations from the CEA conference, Visibilité de la recherche: de la publication aux partenariat [Visibility of research: from publication to partnerships] (Paris, September 29-30, 2008), are now online.  (Thanks to the INIST blog.)


Monday, October 27, 2008

Beta testers needed for Canadian public domain registry

The Canadian Public Domain Registry is looking for librarian beta testers. The registry will collect information about the copyright status of Canadian literary works. See the announcement, dated October 20, 2008. (Thanks to Michael Geist.)

The registry is a project of Access Copyright and Creative Commons Canada, in partnership with Creative Commons and the Wikimedia Foundation. See also the March 2006 project announcement.

See also the WorldCat Copyright Evidence Registry, a similar project.

On keeping OA and OER content under the same roof

Lorna Campbell, Exclude teaching and learning materials from the open access repositories debate. Discuss., Lorna’s JISC CETIS blog, October 27, 2008.

... Andy Powell put forward the suggestion that teaching and learning materials should no longer be included in the same discussions as open access scholarly works as the issues relating to their use and management are just so different.

As one of the small quota of “teaching and learning” type folk on [the JISC Repositories and Preservation Advisory Group] I was inclined to cautiously agree with Andy. Many of us who have an interest in the management of teaching and learning materials have been frustrated for some time that repository discussions, debates and developments often focus too much on scholarly communications and research papers while neglecting other resource types such as teaching and learning materials and data sets. ... There has in the past been a tendency to assume that Institutional Repositories set up to accommodate scholarly works could also provide a home for teaching and learning materials in their spare time. ...

So what’s the answer? I’d suggest that we need to begin by asking a lot more questions before we can start coming up with answers. Questions such as:

What [do] teachers actually do with their materials? Where do they currently store them? How do they manage them? How do they use them? Are there things teachers can’t do now that they would like to? How do learners interact with teaching materials? Are there personnal [sic], domain and institutional perspectives to consider? And how do they relate to each other?

We need a discussion that is focused squarely on the requirements and objectives of teachers and learners not one that is an addendum to the, admittedly worthy, open access debate. ...

Blog notes on OA panel at FSOSS

Andrea Kosavic, FSOSS 2008, the relog experiment, October 26, 2008.

... I was happy to see that [Free Software and Open Source Symposium 2008 (Toronto, October 23-24, 2008)] featured a session on open access.  Leslie Chan discussed the convergence of open access with open source. His session reminded us of the significance of the open source contribution to the open access revolution.  John Willinsky was visionary in realizing that a major barrier to publishing journals on-line barrier-free was the cost of creating journal publishing software.  His Open Journal Systems project has enabled over 2000 journals worldwide to make journal content available on-line, most of it without barriers to access. Open source projects like his are contributing to the steady increase of peer-reviewed scholarship freely available on-line. ...

OA and heterodox economics

The latest issue of On the Horizon, a theme issue on publishing, refereeing, and rankings, is now available. See these relevant articles:

Nominations for best OA content in anthropology

The blog Savage Minds is running a contest for the best OA content in anthropology. A list of nominees in categories including best article and best journal is available, and the contest is still accepting nominations.

Declarations in support of OA

The Open Access Directory (OAD) just opened a list of Declarations in support of OA.

Remember that OAD is a wiki and counts on its users to keep its lists comprehensive, accurate, and up to date.


Sunday, October 26, 2008

More on OA to cultural heritage

Nicholas Crofts, Digital Assets and Digital Burdens:  Obstacles to the Dream of Universal Access, text of a presentation at the 2008 Annual Conference of CIDOC (Athens, September 15 – 18, 2008).  (Thanks to FGI.)

Abstract:  Over recent years, a number of high-profile projects have promoted the dream of universal access to cultural heritage through the integration and dissemination of the digital assets held by ‘memory institutions’: museums, libraries and archives. We argue that this vision is based on a number of questionable assumptions about the nature of the obstacles involved, the quality of the digital assets held by these institutions, their objectives, and the imperatives they face.  The paper concludes that meaningful and sustainable universal access to cultural heritage is unlikely to be achieved through such broad-scale projects, but that other trends can already be detected that point towards a different future, one which challenges the traditional role of museum documentation.

From the body of the paper:

What the foregoing examples seem to suggest is that museums and other cultural heritage institutions may be caught in a Catch 22 situation with respect to universal access to cultural heritage. While making cultural material freely available is part of their mission, and therefore a goal that they are obliged to support, it may still come into conflict with other factors, notably commercial interests: the need to maintain a high-profile and to protect an effective brand image. If museums are to cooperate successfully and make digital resources widely available on collaborative platforms, they will either need to find ways of avoiding institutional anonymity, or agree to put their institutional identity to one side. While cultural institutions are wrangling with these problems, other organisations and individuals are actively engaged in producing attractive digital content and making it widely available. Universal access to cultural heritage will likely soon become a reality, but museums may be losing their role as key players.

Toward a World Data System

International science community agrees on first steps to establish a global virtual library for scientific data, a press release from the International Council for Science (ICSU), October 23, 2008.  Excerpt:

The existing networks for collecting, storing and distributing data in many areas of science are inadequate and not designed to enable the inter-disciplinary research that is necessary to meet major global challenges. These networks must be transformed into a new inter-operable data system and extended around the world and across all areas of science. The General Assembly of the International Council for Science (ICSU) agreed today to take the first strategic steps to establish such a system....

[A] large amount of valuable scientific data remains inaccessible. Over 50 years ago, ICSU established networks of data centres and services to provide full and open access to scientific data and products for the global community. But the world has changed enormously in 50 years, most notably with advances in technology, and it is time for the existing structures to be integrated into a new expanded system —a World Data System....

Ray Harris, chair of the expert Committee that produced the report said, ‘Data is the lifeblood of science and there are many exciting developments, which mean that access to scientific data both for science and for policy making should be much easier....’

The report and more information on the General Assembly are available [here].

PS:  There are many links on the page to which the press release refers us.  I can find the info on the general assembly but not the report.  If anyone has a deep link to the report, please drop me a line.

Update (10/27/08).  Andrew Treloar believes this is the report ICSU had in mind, even though it's dated June 2008.  Sections 3.2 and 5.3 cover the world data system, although the document covers many other topics as well.

Update (10/27/08).  Also see the article in Research Information.

Sunlight Foundation launches Open Senate Project

On October 21, 2008, the Sunlight Foundation announced it had launched the Open Senate Project; see the press release or blog post.
Building on the achievements of the Open House Project, the Open Senate project is a bipartisan, collaborative initiative to study the Senate's current information-sharing practices to recommend how to improve public access to the Senate's work on the Web. ...

On openness at the CBC

Joe Clark, Dry post about desirable technical feature, The Tea Makers, October 15, 2008. (Thanks to Michael Geist.)

... Geistards™ are always calling for CBC to open up to the Canadian public all the archives it owns (or has the rights to; these are viewed as equivalent). Just like the BBC did! And they want the archives opened up under Creative Commons licensing. ...

But proponents are making a Jack Valenti–style error here – assuming that video and audio content can be easily made available in bulk. It can’t. Implicit in the proposition is full-on digitization of content, which is inordinately expensive in bulk, takes acres of disc space (however inexpensive per gigabyte that might be now), has to be provided in multiple formats, has to be backed up and transcoded into the indefinite future, and – wait for it! – has to be accessible.

It would be cheaper if we just made you a VHS.

The whole thing is too big for the CBC to handle. ...

So if we’re going to clone something the BBC did in the name of public access to our “content,” can we start small instead? Like with program (not “programme”) guides? ...

On openness at the BBC

Jemima Kiss, The BBC can be an open source for all of UK plc, The Guardian, October 6, 2008.

... The [British Broadcasting Corporation's] director general Mark Thompson has directed the corporation to think beyond proprietary rights management to a new era of interoperability that offers consumers wider choice, control and benefits from "network effects" - the virality and interconnectedness of the web.

In post-Hutton 2004, startup investor and former BBC strategy manager Azeem Azhar proposed a "BBC Public Licence" that would allow both the public and business to use BBC content and code to build on, play with and share. It seems his vision is finally coming to life. ...

Steve Bowbrick, recently commissioned to initiate a public debate about openness at the corporation, thinks empowerment could be as important as the traditional Reithian mantra, "Educate, inform and entertain." ...

What could the BBC create? It sits on a vast content resource, much of which is already being digitised under the BBC Archive scheme. It will take until 2022 to digitise material around each programme, from transcripts, audio, D notices and expenses to letters of complaint. The most significant part of the archive - 900,000 hours of TV and radio programmes - is likely to be the last thing to be digitised because of the complex rights issues.

BBC internet controller Tony Ageh says the notion of a dusty archive is now redundant; in the web era, everything is permanent and everything is, or should be, accessible. ...

In regional news, the BBC could make all its video reports, audio, text and comments available to commercial rivals and trigger a renaissance in local journalism. And it could allow people to remix BBC news footage for themselves, perhaps for a "day you were born" birthday present or a significant football match. ...
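Comment.  Joe Clark's cost objection (previous post) and the BBC's 900,000-hour figure can be put in rough perspective with some back-of-envelope arithmetic. The bitrates below are my own illustrative assumptions, not BBC figures:

```python
# Back-of-envelope storage estimate for a 900,000-hour audio/video archive.
# Bitrates are illustrative assumptions, not BBC figures.
HOURS = 900_000
SECONDS = HOURS * 3600

for label, mbit_per_s in [("web video, ~2 Mbit/s", 2),
                          ("broadcast quality, ~8 Mbit/s", 8),
                          ("archival master, ~50 Mbit/s", 50)]:
    terabytes = SECONDS * mbit_per_s / 8 / 1_000_000  # Mbit -> MB -> TB
    print(f"{label}: ~{terabytes:,.0f} TB")
```

Even at modest web bitrates the collection runs to hundreds of terabytes, and at preservation quality to tens of petabytes, before counting multiple delivery formats and backups. That scale supports Clark's point that "full-on digitization of content" is a major undertaking, though it also suggests the storage itself is within reach of a large broadcaster; the rights clearances may be the harder problem.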

See also our past posts on the BBC Creative Archive.

OA to win the "nerd vote"

Matt Haughey, How to get my nerd vote, A Whole Lotta Nothing, October 21, 2008. (Thanks to Boing Boing.)

... Regardless of party affiliation, if you're running for an office from as small as city council all the way up to president, if you hit on any/all of these things, you just might get my vote. ...

... Public Domain dumps of every photograph, recording, film, and publication commissioned by the government in an easy to retrieve place. ...

No open licensing for Canadian PM debates

Michael Geist, Canadian Political Parties Practice Politics 1.0 in a Web 2.0 World, Toronto Star, October 20, 2008.

... [B]oth [Democrat Barack] Obama and Republican candidate John McCain have supported open licensing for the presidential debates so the public can use the footage to create their own videos and engage more actively in the political process.

No similar initiatives occurred in Canada. ...

See also our past posts about open licensing for the presidential debates.

More on data sharing among patients

Barrett Sheridan, 'Open Wide...', Newsweek, October 16, 2008.

... Now that the health sector is slowly but surely beginning to embrace Web 2.0 tactics like social networking, sharing your health information with friends, family and even strangers may become an everyday occurrence. ...

That's why 19,000 people—the number of users on PatientsLikeMe.com—have agreed to put intimate details, like whether a certain drug causes constipation, on a social-networking site. Collective knowledge—something that the Web, and in particular the social Web, is very good at enabling—allows them to put their disease in context. Am I taking a lower dose than other ALS sufferers? How normal is this side effect? Bringing health histories out into the open can provide answers to those questions, something that even doctors can't do.

It's also about gathering the collective wisdom, and making it available to researchers. "In the end, it's the same as open-source software," says Heywood. "If you can see all the information, you can correct the errors." Drug companies and doctors are far from infallible, and in this way the PatientsLikeMe community serves as a useful check. The site is, in effect, building an enormous database of patient data that can determine whether drugs and treatments are having the desired effect. ...

See also our past posts about PatientsLikeMe.

On a bill for OA to taxpayer-funded educational resources

David Wiley, How about a Utah bill?, iterating toward openness, October 17, 2008.

... As we drafted the language for the Cape Town Declaration’s Strategy 3 on Open Education Policy, I worked to champion the idea that ‘taxpayer-funded educational resources should be open educational resources.’ ...

So now what is obviously needed is some legislation that makes these policies real! ...

Interview with Bora Zivkovic

Brandon, An Interview with Bora Zivkovic, Organizer of Science Online ‘09, Extreme Biology, October 25, 2008.
... I think that my research and publication contributions are dwarfed by the influence I have had as a biology teacher for 16 years, as a science blogger for the past four or so years, as the organizer of three science blogging conferences and editor of two (and the third is coming out soon) science blogging anthologies, as a community manager for PLoS ONE, and as a vocal proponent of the Open Access model of publishing. With those activities, I think I have reached more people in a positive way than with my scientific papers, I have changed more minds, made more people think, spread more good information around, and done more good for the entire enterprise of science than with my research ...

Medical resources online

Elena Giglia, The library without walls: images, medical dictionaries, atlases, medical encyclopedias free on web, European Journal of Physical and Rehabilitation Medicine, September 2008; self-archived October 25, 2008. (Thanks to ResourceShelf.) Abstract:
The aim of this article was to present the "reference room" of the Internet, a real library without walls. The reader will find medical encyclopedias, dictionaries, atlases, e-books, images, and will also learn something useful about the use and reuse of images in a text and in a web site, according to the copyright law.

OA publishing from Argentina and Brazil in physics and chemistry

Alberto Gustavo Albesa and Gabriela Prêtre, Estudio comparativo de las publicaciones en revistas de acceso abierto de los investigadores de Argentina y Brasil. (Física) [Comparative study of the publications in OA journals by researchers in Argentina and Brazil (physics)], presented at the 93 Reunión Nacional de Física Argentina / XI Reunión de la Sociedad Uruguaya de Física (Buenos Aires, September 15-19, 2008); self-archived October 24, 2008.

Alberto Gustavo Albesa and Gabriela Prêtre, Estudio comparativo de las publicaciones en revistas de acceso abierto de los investigadores de Argentina y Brasil. (Química) [Comparative study of the publications in OA journals by researchers in Argentina and Brazil (chemistry)], presented at the XXVII Congreso Argentino de Química (San Miguel de Tucumán, Argentina, September 17-19, 2008); self-archived October 24, 2008. (Thanks to Alberto Albesa.)

The papers analyze the publications by authors from Argentina and Brazil in OA journals in physics and chemistry, respectively. In both cases, the papers found that the tendency to publish in these journals is increasing year by year, though it is still at an early stage of development.

Who cares?

Lorenz Khazaleh, George Marcus: "Journals? Who cares?", anthropologi.info, October 25, 2008.  Excerpt:

When George Marcus, one of the most influential anthropologists, was in Oslo recently, I asked him what he thinks about Open access. His answer surprised me. He said: “Journals? Who cares?” There is little original thinking in journals, no longer exciting debates, he told me. “Maybe it’s because I’m getting older. I don’t care.” He explained that “journals are meant to establish people”. They are more important for one’s career.

George Marcus offered similar pessimistic views in an interview he gave for the journal Cultural Anthropology (subscription needed) in the spring. Among other things, he said that there are no new ideas in anthropology....

Comment.  Did this transcript miss something or did George Marcus miss something?  Even if we concede for the sake of argument that there are no new ideas in the field of anthropology, and that journals are more about advancing careers than advancing research, Marcus' answer was not responsive.  Apparently he thinks OA is all about journals, which it isn't.  It's all about access, which may be through journals or repositories or many other vehicles (like wikis, ebooks, multimedia webcasts, P2P networks, RSS feeds...).  It's as if someone had asked, "What do you think about freedom of speech?" and he answered, "Public speaking?  Who cares?  It's all grandstanding and vanity." 

Update (10/27/08).  Lorenz has blogged a response to my comment.

Good point! I have to admit that Marcus was very busy and did not have much time for this interview and I had lots of questions. We talked just a few minutes about Open Access while we took the subway from the city to the university. He admires Chris Kelty’s work on open source and open access but he does not seem to be up to date in regard to blogging, Web 2.0, etc. (few anthropologists are actually, and most anthropologists have never heard of the Open Access movement).

Update. Also see Dorothea Salo's comments.

Flexible access control for repositories

Chi Nguyen, Flexible Access Control, Federated Identity and Heterogeneous Metadata Supports for Repositories, a presentation at eResearch Australasia 2008 (Melbourne, September 29 - October 3, 2008). 

Abstract: In this paper, we present a new framework complete with implementation, for a digital repository that will address some of the most difficult issues facing repository managers today: how to enable federated identity access, rapidly changing access control requirements, and the management of multiple metadata standards for different types of digital objects.  Our work draws together leading industry standards in the area of authentication, authorization, and metadata management, and applies them in a new and innovative way to the repository landscape. As a demonstration, we apply our work to a speech annotation research project which makes use of a repository to manage its culturally sensitive data.

The rest of the presentations from eResearch Australasia 2008 are now online as well.  (Thanks to Charles Bailey.)