Open Access News

News from the open access movement

Saturday, July 21, 2007

Two videos on OA at CERN

PhysMath Central is hosting two videos, one of them new, on CERN’s project to convert particle physics journals to OA:

We have a new video on open access advocacy for physics from CERN's Director General, Robert Aymar. We have also put up another interview, with librarian Jens Vigen, which featured on our beta-release site. Both videos are illuminating and, whilst focussing on particle physics, put OA in a wider context.

OA resources on infectious diseases

Matthew E. Falagas, Efthymia A. Karveli, and George Panos, Infectious Disease Cases for Educational Purposes: Open-Access Resources on the Internet, Clinical Infectious Diseases, August 15, 2007.  Only this abstract is free online, at least so far:

The use of medical cases in the problem-based learning approach has been increasing in medical schools worldwide. We collected information regarding the relevant World Wide Web resources with infectious diseases cases for educational purposes by searching frequently used Internet search engines (i.e., Google, AltaVista, and Yahoo), the PubMed database, and Current Contents, as well as the Web sites of each of the US medical schools and other major institutions and organizations. We compiled a list of Internet links of 25 English-language, open-access (free) World Wide Web resources of educational cases in the field of infectious diseases. We collected information regarding the case developers, the number of the presented cases, the types and modes of the case presentations, and the target groups of the Web pages. Although we did not aim to generate an exhaustive list of relevant Web sites, we believe that a readily available list of electronic resources of medical cases with a focus on infectious diseases will be useful to medical students and physicians in training and practice.

More on WiderNet and eGranary

Jeffrey Thomas, Africa: 'Internet in a Box' Brings Information to Developing World, July 17, 2007.  Excerpt:

Since the year 2000, the University of Iowa has been trying to bring the benefits of the Internet to parts of the world where access is minimal and/or expensive.

The university's WiderNet Project manages the eGranary Digital Library, which places Web resources on a server on university campuses in developing countries that have little or no Internet connectivity. Based at the University of Iowa in Iowa City, the eGranary Digital Library manually updates its library at least twice each year on campus intranets in Africa, India, Bangladesh, Azerbaijan and Haiti....

The eGranary Digital Library is often called "The Internet in a Box" because it offers offline approximately 10 million educational resources from more than 1,000 Web sites, including OpenCourseWare from course offerings by the Massachusetts Institute of Technology (MIT), Project Gutenberg's complete collection of classic literature and the entire Wikipedia Web site. The eGranary's interfaces are streamlined for easy navigation and offer a comprehensive search engine. All materials - including 40,000 books in their entirety and 150 to 200 full text journals with their archives - are included with the author or publisher's permission. Many would be prohibitively expensive for a library in the developing world to own.

The eGranary's 750 gigabytes of storage space hold the largest collection of informational materials available on a server that can be accessed without an Internet connection....

More on the NIH policy

NIH Policy Spurring Discussion of How Best to Ensure Public Access, Library Journal Academic Newswire, July 19, 2007.  Excerpt:

Three years and many battles after it was first proposed, the National Institutes of Health's (NIH) mandatory public access policy has gotten Congressional backing. But if increasing public access to research is the goal of the NIH, will the proposed policy be effective? Martin Frank, executive director of the American Physiological Society (APS), told the LJ Academic Newswire the 56 publishers and authors who signed on to the 2004 DC Principles for Free Access already make their articles freely available within 12 months of publication, so the net gain in access could be rather limited....

Individual authors, meanwhile, could have the greatest impact on public access, maintains the University of Southampton's Stevan Harnad, a researcher and pioneering advocate of author self-archiving...."An article whose final draft has been self-archived in the author's own institutional repository or an article that has paid for open access on the publisher's web site would be OA immediately," he observed, "It would only be the PubMed version that was embargoed."

Although critical of the NIH plan [for relying on central rather than distributed OA archiving], Harnad said he was not opposed to it and appreciates the public access principles behind it. "I still prefer it over the old policy or over no mandate at all," he told the LJ Academic Newswire....


  • The opening sentences suggest that this is the first time Congress has called for an OA mandate at the NIH.  But it's the second or third, depending on what you count. Congress asked for a mandate in July 2004, although the NIH didn't adopt one.  And last year the House adopted the same kind of mandate language it adopted this year, but the Senate never voted on it and the proposal died without a vote when the Democrats took over Congress and improvised a new one-time appropriations process.
  • If Martin Frank and the DC Principles Coalition believe that an OA mandate at the NIH will largely duplicate what publishers are already doing voluntarily, then is the Coalition willing to say in public that (apart from other issues) an OA mandate would not kill their revenues, not kill their journals, and not kill peer review?

More evidence that OA helps (and TA hurts) journal impact factors

Shu-Kun Lin, Non-Open Access and Its Adverse Impact on Molecules, Molecules, July 16, 2007.  An editorial.  (Thanks to Stevan Harnad.)  Excerpt:

In 2005 and 2006 papers were published in MDPI journals in both non-Open Access (non-OA) and Open Access (OA) form. In 2005, 70 papers (just under 50% of the total) were published in the password protected (non-Open Access) area of Molecules. Not unexpectedly, this reduced number of Open Access papers had a significant influence on the impact factor of Molecules, which was reduced from 1.113 in 2005 to 0.841 in 2006....

Like Molecules, in 2005 another MDPI journal, International Journal of Molecular Sciences, also published a significant percentage of papers in its password protected (non-Open Access) area. The impact factor of International Journal of Molecular Sciences was also reduced from 1.467 in 2005 to 0.679 in 2006....

As shown in Tables 1 and 2 [showing article, citation, and OA status data for the two journals], the non-Open Access papers and their obviously lower citation numbers appear to be the main reason behind the reduced impact factors of these two journals, where almost half of the papers published were in non-OA form....

As a contrast, the MDPI journal Sensors published only a very small number of non-Open Access papers in 2005, and its impact factor increased slightly from 1.208 in 2005 to 1.373 in 2006.

Since April 2007, we have granted full Open Access to all the papers published in all MDPI journals in 2005 and 2006, and in 2007, we have published papers exclusively in Open Access form. We expect this will be reflected in an increase in impact factors come the next evaluation.

Now that there are more than a million OA books online...

David Adams, Once upon a time ... e-book revolution, The Age, July 19, 2007.  (Thanks to Charles Bailey.)  Excerpt:

…The father of all these e-book projects, Project Gutenberg, was founded in 1971 by American Michael Hart. The project...originally keyed in books by hand so that people could read them online for free.  The project recently celebrated the creation of its 100,000th e-book....

Mr Hart, often described as the "father" of the e-book, says Project Gutenberg is all about sharing knowledge....

Estimating that there are a million free e-books available for download these days, Mr Hart believes the personal computer is rapidly becoming the personal library.

"Just look at the new terabyte drives for under $US400 ($A459)," he says. "You could store a million e-books in one uncompressed, and perhaps 2+ million in zip drives." ...

[Bruce Preston is a former Sydney librarian who started a website on e-books] hopes the book industry is taking note of moves by Apple and music company EMI to remove copyright protection software known as Digital Rights Management from downloadable songs, a move that allows the songs to be played on a range of devices....
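PS: Hart's storage estimate holds up as back-of-the-envelope arithmetic. A quick sketch (the ~1 MB average size for an uncompressed plain-text e-book and the ~3:1 zip ratio are my assumptions, not figures from the article):

```python
# Rough check of the "million e-books on a terabyte drive" claim.
avg_book_mb = 1          # assumed average uncompressed plain-text e-book, ~1 MB
drive_mb = 1_000_000     # one terabyte, expressed in megabytes
zip_ratio = 3            # assumed ~3:1 compression for plain text

books_uncompressed = drive_mb // avg_book_mb
books_compressed = books_uncompressed * zip_ratio
print(books_uncompressed, books_compressed)  # 1000000 3000000
```

On these assumptions a single terabyte drive does hold a million uncompressed e-books, and zipping them clears Hart's "2+ million" figure comfortably.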

Friday, July 20, 2007

Repository deposits via APP and SWORD

APP and Repositories, Inkdroid, July 16, 2007.  (Thanks to Charles Bailey.)  Excerpt:

Pete Johnston blogged recently about a very nice use of the Atom Publishing Protocol (APP) to provide digital library repository functionality. The project is supported by UKOLN at the University of Bath and is called Simple Web-service Offering Repository Deposit (SWORD).

If you are interested in digital repositories and web services take a look at their APP profile. It’s a great example of how APP encourages the use of the Atom XML format and RESTful practices, which can then be extended to suit the particular needs of a community of practice....

The full details are available in the latest draft of the RFC–and also in a wide variety of articles including this one....

An atom feed for a collection would essentially enable harvesting of a repository, much like ListRecords in OAI-PMH....
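PS: To make the APP mechanics concrete, here is a minimal sketch in Python of the kind of Atom entry document a SWORD-style client would POST to a repository's collection URI. The metadata values and the endpoint shown in the comment are hypothetical illustrations, not taken from the SWORD profile itself:

```python
# Build a minimal Atom entry of the sort an APP/SWORD client deposits.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def make_atom_entry(title, author, summary):
    """Return a minimal Atom entry document as a string."""
    ET.register_namespace("", ATOM)  # serialize Atom as the default namespace
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = title
    auth = ET.SubElement(entry, f"{{{ATOM}}}author")
    ET.SubElement(auth, f"{{{ATOM}}}name").text = author
    ET.SubElement(entry, f"{{{ATOM}}}summary").text = summary
    return ET.tostring(entry, encoding="unicode")

xml_doc = make_atom_entry(
    "Sample preprint", "A. Researcher", "Deposited via a SWORD-style APP POST")

# A client would POST this document (or a packaged file) to the repository's
# collection URI, along the lines of:
#   POST /sword/deposit/collection  HTTP/1.1
#   Content-Type: application/atom+xml;type=entry
print(xml_doc)
```

The RESTful part is exactly this: deposit is an HTTP POST of an Atom entry (or package) to a collection, and the collection's own Atom feed lists what has been deposited, which is what makes the OAI-PMH ListRecords comparison apt.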

More on the House support for an OA mandate for the NIH

House Backs Taxpayer-Funded Research Access, a press release from the Alliance for Taxpayer Access, July 20, 2007.  Excerpt:

In what advocates hailed as a major advance for scientific communication, the U.S. House of Representatives yesterday approved a measure directing the National Institutes of Health (NIH) to provide free public online access to agency-funded research findings within 12 months of their publication in a peer-reviewed journal. With broad bipartisan support, the House passed the provision as part of the FY2008 Labor, HHS, and Education Appropriations Bill.

"The House has affirmed the principle that broad sharing of publicly funded research findings on the Internet is an essential component of our nation's investment in science," said Heather Joseph, Executive Director of SPARC (the Scholarly Publishing and Academic Resources Coalition), and a leader of the Alliance for Taxpayer Access (ATA). "This action paves the way for all scientists and citizens to access, use, and benefit from the results of publicly funded biomedical research."

"We're pleased by Congress's recognition of the fundamental rationale for public access ­ that better-informed patients, clinicians, and researchers will mean better health outcomes," said Sharon Terry, President of the Genetic Alliance and an ATA activist. "The time has come to sweep away unnecessary barriers to understanding and treating disease. The Genetic Alliance thanks and congratulates the House of Representatives for taking this vital step."

The current NIH Public Access Policy, implemented in 2005 as a voluntary measure, has resulted in the deposit of less than 5% of eligible research by individual investigators.

In a recent letter to Congress, 26 Nobel Laureates called for enactment of mandatory NIH public access...

"The coalition of support for the NIH policy is extremely broad," added Joseph. "This critical step was achieved as a result of the vision and collective effort of patient groups, scientists, researchers, publishers, students, and consumers who registered their support."

A similar measure has been approved by the Senate Appropriations Committee and will be considered by the full Senate later this summer.

Keen v. Weinberger

Full Text: Keen vs. Weinberger, Wall Street Journal, July 18, 2007.  An excellent back-and-forth between Andrew Keen and David Weinberger on the value of the web and Web 2.0.  Is the web a low-culture nightmare of amateurs and narcissism or a messy, open opportunity for every kind of culture?

PS:  You know where I come down:

Scholars who expect to find the very best [scholarly journal] literature online, harmlessly cohabiting with crap, are replacing scholars who, despite themselves perhaps, still associate everything online with crap.... 

Reviewing the literature on wikis in education

Paul Kawachi, Critique of the Literature on Wikis, Open Education Network Blog, July 20, 2007.

Here I would like to offer a critique of the literature on using wikis in education – noting that there is not much available yet....

PS:  Kawachi cites and discusses nine articles, including two of his own.

Save the UK Select Committee on Science and Technology

38 notable UK scientists and science funders have published a letter to the editor in today's issue of The Guardian calling for the survival of the  House of Commons Select Committee on Science and Technology.  Among other things, the Select Committee provides oversight of the UK Office of Science & Innovation, which has already disappeared in Gordon Brown's reorganization of the government.  As a consequence, Parliament may lay down the Select Committee as well.  Excerpt:

The recent changes to the organisation of government departments involved moving the work of the Office of Science & Innovation into the Department of Innovation, Universities and Skills, (DIUS) which will be scrutinised by a departmental select committee covering all those areas and matters to do with expenditure. However, as a result, there is the prospect of the abolition of the science and technology select committee along with its important functions.

This committee does a great deal of vital work scrutinising scientific matters and the use of evidence across government departments and agencies. Recent important inquiries include hybrid/chimera embryos, nanotechnology, the future of health research, the impact of EU legislation on MRI, [and] open access publishing....

Just as peer review is important in science, so is adequate oversight of the use of science in policy-making....

The lead signatory is Prof Sir Martin Rees, President of the Royal Society, and among the others are Mark Walport, Director of the Wellcome Trust, Colin Blakemore, Chief Executive of the Medical Research Council, and four Nobel laureates.


Update.  Also see brief notice from CORDIS News and the Daniel Clery story in Science Magazine.

Engineering publisher launches a hybrid journal program

Professional Engineering Publishing (PEP) has launched a hybrid OA program for all 19 of its journals.  From yesterday's announcement:

Professional Engineering Publishing, publishers to the Institution of Mechanical Engineers, have launched a new open access service for the engineering community called Engineering Open Choice.

Engineering Open Choice gives authors the option of having their published research made available free to anyone throughout the world, on an open access basis. The service is available to authors of all papers accepted for publication, upon payment of a standard fee. Engineering Open Choice is designed to satisfy the needs of the community - authors, funding bodies, readers - who want research material to be made available on an open access basis.

Engineering Open Choice will be available in all Journals published by Professional Engineering Publishing, most notably the Proceedings of the Institution of Mechanical Engineers.

Also see PEP's Overview of Engineering Open Choice.  I'd post an excerpt here but the PDF is locked and doesn't permit cutting/pasting.  (Why?)  A few details:

  • The option will be available before the end of 2007.
  • The Engineering Open Choice (EOC) articles will be included in the same abstracting and indexing services as PEP's ordinary articles.
  • The publication fee is £1,700.  Color charges are extra.  EU authors must also pay VAT.
  • The EOC edition is the final published edition of the article.
  • EOC articles are released under a Creative Commons BY-NC 3.0 license.
  • PEP guarantees that EOC articles will be free online at the PEP site "for a minimum of 10 years from the date of publication" but authors are also allowed to deposit EOC articles in the repositories of their choice from the moment of publication.


  • This policy meets many of my criteria for a hybrid journal program.  In particular, I'm impressed that EOC articles are the published editions, are released under CC licenses, and may be deposited in repositories independent of the publisher.
  • PEP leaves three important questions unanswered.  (1) Will it reduce its subscription prices in proportion to author uptake?  (2) Will authors have to pay the EOC publication fee in order to comply with a prior funding contract?  (3) Will PEP retreat from its current green self-archiving policy by requiring either a fee for no-embargo self-archiving or an embargo for no-fee self-archiving?

More on the HHMI-Elsevier deal

Stevan Harnad, Think Twice Instead of Double-Paying for Open Access, Open Access Archivangelism, July 19, 2007.  Excerpt:

Howard Hughes Medical Institute (HHMI) did not "sell out" to Elsevier in agreeing to pay for Open Access publication charges in exchange for compliance with their (very welcome and timely) Open Access mandate. They (and the Wellcome Trust) simply made a strategic mistake -- but a mistake that no one at HHMI (or Wellcome) as yet seems ready to re-think and remedy:

What HHMI should have done was to mandate that all HHMI fundees must deposit the final, accepted, peer-reviewed drafts ("postprints") of all their published articles in their own institution's Institutional Repository (IR) immediately upon acceptance for publication. Instead, they uncritically followed the (somewhat incoherent) "e-biomed" model, and mandated that it must be deposited directly in PubMed Central, a central, 3rd-party repository, within 6 months of publication.

The reason this was a mistake (and the reason it is silly to keep harping on HHMI's "selling out") is that all Elsevier journals, including Cell Press, are already "Green" on immediate Open Access self-archiving in the author's own IR: It is only 3rd-party archiving that they object to (as rival publication).

But there is no reason whatsoever to hold out (or pay) for direct deposit in a central repository: All IRs are OAI-compliant and interoperable....

So in exchange for their unnecessary and arbitrary insistence on having the full-text deposited directly in PubMed Central within six months of publication, HHMI (and Wellcome, and other followers of this flawed model) have agreed instead to pay arbitrary, inflated, and unnecessary "Gold" OA publication charges. That would in itself be fine, and simply a waste of money, if it did not set an extremely bad example for other research funders and institutions, who are also looking to mandate OA self-archiving, but do not have the spare change to pay for such extravagant and gratuitous expenses....

Comment.  My similar but not identical take from the April issue of SOAN:

Elsevier already gives blanket permission for self-archiving in institutional repositories and could easily have added PubMed Central to the list of eligible destinations.  For this purpose, the distinction between institutional repositories (IRs) and PubMed Central (PMC) is arbitrary.  In openness, interoperability, and visibility to search engines, they are equivalent.  If a PMC deposit hurts Elsevier, so does an IR deposit.  If an IR deposit is harmless for Elsevier, so is a PMC deposit.  Elsevier could have adapted to HHMI without incurring any new costs or risks.  Instead, HHMI paid it to adapt....

[I]t seems that HHMI could just as easily have required its grantees to deposit their postprints in their own IRs, rather than PMC.  The snag there, unfortunately, is that not all HHMI grantees work at institutions with IRs.  Hence, HHMI had good reason to call Elsevier's bluff....

[The Wellcome Trust] and Elsevier struck a deal last September for the same reasons that HHMI and Elsevier struck one now.  WT agreed to pay Elsevier higher fees than HHMI is paying ($3,000 per Elsevier article and $5,000 per Cell Press article), but it got more for its money.  WT got immediate OA, while HHMI is getting embargoed OA.  WT got OA to the published edition, while HHMI is getting OA to an unedited edition.  WT got a Creative Commons license or equivalent; while HHMI could use CC licenses on deposited, unedited manuscripts, the published editions will remain under Elsevier's copyright with no significant reuse rights....

I have to conclude that HHMI was ripped off.  Or if that's too negative, Elsevier got a fantastic deal....

HHMI is paying a fee for green OA....

Interview with John Wilbanks

Abby Seiff, Will John Wilbanks Launch the Next Scientific Revolution?  Popular Science, July 2007.  Excerpt:

…As scientific goals grow more multifaceted, the challenges for research and development lie not only in the experiments themselves, but also in the transfer of information among peers.

Enter John Wilbanks, executive director of the Science Commons initiative, and the six-year-old innovation of its parent organization, Creative Commons —an intelligent, understandable copyright that's revolutionizing how everything from photos to publications are shared. Wilbanks and his team (which includes Nobel Prize winners Joshua Lederberg and John Sulston) are focused on three areas where roadblocks to scientific discovery are most common: in accessing literature, obtaining materials, and sharing data….

How will an open-access system improve scientific research?

The question is, have we now hit a point where scientific problems are so complex that one person alone can't solve them? It would certainly seem that way. The problems science is pursuing today —issues like global warming and genomic mapping— demand a distributed approach across disciplines. But currently, journal articles, data, research, materials and so on are stopped by contracts and copyrights at such a rate that it's become nearly impossible to pull them together. The estimated utility half-life of a scientific paper is 15 years, but the copyright lasts until 70 years after the author's death. It's hard to get data sets shared, and the basic elements of the commercial Web (like eBay, Amazon and Google) function poorly, if at all, inside the sciences. The knowledge simply isn't moving as easily as it should, and transactions are slow on a good day, non-existent on a bad one….

In the past, you've noted that improving data-sharing among scientists may take more of a cultural shift than a legal one. What do you think it will take to achieve this? 

Right now, it's still in scientists' interest to follow the classical model of one scientist working alone. In today's system, you don't get rewarded for sharing—no one gets tenure for choosing to publish preprints of their papers in molecular biology, or for spending weeks making cells for other labs to do research. And you sometimes get ahead by deliberately withholding. If you think you can squeeze more papers out of your data, you might not share it even if it takes years for someone else to replicate the research you've written about. Even if it's not a matter of deliberately withholding, it takes a great deal of effort to share information with others once you're through with it. It takes common standards to annotate data and databases to hold the data. It takes infrastructure to make sharing work. There's no easy system in place.

How do you change things so that it's in scientists' best interest to share rather than withhold?

We can provide the wiring, but only people within the system can make the incentives. That's why we've worked with funders from the beginning: They are the ones who can make an incentive for scientists to share, and they are also the ones for whom sharing is in their best interest….The Gates Foundation, for example, is now offering millions for malaria research, and it's contingent on the researchers making it available to share. Sharing maximizes the return on investment in early-stage research. No pharmaceutical company is making money by selling biological knowledge —they make money by selling chemicals. So getting as much of that knowledge as possible into the efficiency of the Web-commerce world is going to make it faster to find those chemicals.

What about the journals? How can an open-access system be in their interest?

Admittedly, right now the traditional for-profit publishing companies don't have a strong incentive to change. These publishers are making as much as a 35 percent profit, and in the absence of prodding from the scientific and research communities they're not going to change. But over the long term, people will get frustrated that we can easily find everything we need recreationally online but we can't do it for science. Imagine if Google couldn't read pages on the Web —it would be hard for PageRank to do its magic. Well, that's the situation in science. Google doesn't work as well for finding science as it does for finding pizza, and that's a shame. Open access isn't just about getting a scientist access to a file. It's the best thing for science because it allows all the smart people in the world to start hacking on the scientific literature and applying tools like text mining, collaborative filtering and more. Right now, all that content is basically dark to most of the smart people on Earth.

When that happens, journals will be forced to create a new business model to pay for peer review and layout….Nature is at the forefront of this, implementing CC licenses for its revolutionary preprint archive, Nature Proceedings, but there are lots of possibilities out there—think of Amazon's market partners. If a scientist is reading a paper online and clicks through to purchase material, there's value there. It might be a business model; it might be enough to defray the cost of open access. I just want to create the infrastructure that makes movement and sharing easier. If we can build the wiring, everyone involved can experiment with the business models and we can let the markets work it out….

House approves OA mandate for NIH, but Bush may veto

Late yesterday the House of Representatives approved the Labor-HHS-Education appropriations bill creating an OA mandate at the NIH. 

The Associated Press gives both the good news and the bad.  The good:

The House on Thursday passed a spending bill that covers the departments of Labor, Health and Human Services and Education and related agencies, for the budget year beginning Oct. 1.

The bad:

The legislation faces a veto from President Bush, who complains that Democratic add-ons have made it too expensive.


  • More on the good news.  This is important.  The full House has approved an OA mandate for the NIH —we’ve cleared one of the largest hurdles.  Now we only need approval by the Senate and President. 
  • More on the bad news.  Bush’s threatened veto (1) is serious and (2) has nothing to do with the OA provision in this large and complicated appropriations bill.  Last year the House also approved language creating an OA mandate for the NIH, but it died without a vote as a side-effect of the Democratic take-over in Congress.  (The lame-duck Republicans dawdled and then decided to leave appropriations to the Dems; when the Dems took over in January, the country was already three months into an unfunded fiscal year, action was urgent, and the Dems improvised a solution that disregarded nearly all of the carefully crafted appropriations bills.)  This year the language might be collateral damage in another political battle.  Here’s some perspective from Brendan Murray and Brian Faler:

    Bush, who only vetoed one piece of legislation passed by the Republican Congress in his first six years in office, is now threatening to reject almost every spending bill sent to him by the Democratic-controlled Congress unless lawmakers abandon plans to spend $23 billion more than he requested.

    While the amount involved is less than 1 percent of the $2.9 trillion federal budget, the political stakes are greater. A little more than 16 months before the 2008 elections, Democrats and Republicans alike figure a fight may be in their interests.

    “It's a very big fight over a fairly small sum of money,” says Bob Bixby, executive director of the Concord Coalition, an Arlington, Virginia-based nonpartisan group that advocates a balanced budget. “It has a lot of political significance in terms of the signals being sent.”

    Bush and the Republicans, stung by criticism that they presided over a surge in government spending, are looking to rehabilitate themselves among core supporters by holding the line on the budget. Democrats, meanwhile, are trying to show they can deliver on promises to shore up education, health care and a host of other initiatives.

Update.  For more perspective on the prospect of a Bush veto, see this story in today’s National Journal.  Both parties seem to relish the prospect of a battle:

The [Labor-HHS appropriations] bill passed on a 276-140 margin, not enough to demonstrate the two-thirds of those present and voting to override the veto Bush has threatened….

That is the largest difference between Bush and the Democrats among the 12 spending bills…

The Labor-HHS bill has traditionally enjoyed broad bipartisan support in the House, with its funding for biomedical research, low-income heating and cooling subsidies, education for disabled children and community service block grants providing basic services for the poor and elderly appealing to Republicans and Democrats….

But this year's atmosphere is different, and with GOP leaders seeking to draw sharp distinctions between their party and the Democrats on fiscal matters, they were able to largely keep their troops in line.

Minority Leader John Boehner, R-Ohio, was confident that in the end, Republicans could sustain a veto, noting a number of absences on his side on Thursday's vote. "We have other members who while they may have voted 'yes' here, will vote to sustain a veto. I'm not worried about it," he [said].

Appropriations Chairman David Obey, D-Wis., had a different take. "It was a damn good vote," he said. "With all of the Sturm und Drang, they couldn't find anything in the bill that they wanted to change [in the GOP motion to recommit]. I think that demonstrates that they think it's a pretty good doggone bill." …[Obey added:] "There is a reason why there were no votes expressed in opposition in committee: That's because this is the people's bill." …

The bill must go to conference with the Senate, which might not consider it on the floor until October, before being sent to Bush for his expected veto.

At that rate, it increases the likelihood that the Labor-HHS bill will run out of time to move on its own and simply be wrapped into a year-end omnibus package, some Democrats privately acknowledge.

Update. Charles Bailey has listed the names of all House members who voted against the bill.

Thursday, July 19, 2007

Recent posts by Peter Murray-Rust

Peter Murray-Rust has been writing some very good blog posts very fast.  Because they’re directly about OA, and good, I want to blog excerpts here.  But because they’re numerous, and I’m already overloaded, I can’t keep up with them.  So with apologies to all for my lateness and brevity, here are some quick pointers to some of his recent posts on (1) open data and (2) access barriers left in place at “free”, “open”, and “hybrid” journals.

In short, if you follow my blog, you should also follow his.  There are many other blogs in this category but, so far, I’ve been able to keep up with them.

The OA Scientometric Web

Tim Brody, Les Carr, Alma Swan, and Stevan Harnad, Time to Convert to Metrics, Research Fortnight, July 18, 2007 (accessible only to subscribers, at least so far).  Excerpt:

…But, in the online age, citation links are just a special form of web link between a citing article and one or more cited articles. If he had been starting now, [Eugene] Garfield would not have been working on a proprietary database descended from the cut-and-paste paper era; he would have been developing open access scientometrics….

Two very important online developments are currently converging in the UK. First, authors are making their research free for all online (“open access”, OA), to maximise its use and impact. And second, research funders are using metrics to rank and reward research contributions on the basis of online measures of their usefulness and impact….

Citation metrics today are based largely on journal articles citing journal articles —and mostly just those 8,000 journals that are indexed by ISI’s Web of Science. That represents only a third (although probably the top third) of the total number of peer-reviewed journals published today, across all disciplines and all languages. Open access self-archiving can make the other two-thirds of journal articles linkable and countable too….

At Southampton, the world’s first departmental self-archiving mandate helped to demonstrate that OA enhances research impact. We also contributed to the movement to convert the RAE from panels to metrics. If Eugene Garfield had come of age in the online era, he would be at Southampton designing the Open Access Scientometric Web.

Open-source journal publishing software

Roman Chýla, What open source webpublishing software has the scientific community for e-journals? In Proceedings CASLIN 2007, Stupava (Slovak Republic), 2007.  Self-archived July 19, 2007.

Abstract:   Nowadays a scientific community can use different electronic publishing systems for the e-journals (journal management systems). Open-source ones were developed solely for e-publications' management and now, in 2007, we can say three of them are of general use for e-journals: Digital Publishing System (DPubs), ePublishing toolkit (ePubTk), Open Journal System (OJS). There exist also different content management systems (CMS), yet those were not primarily built for e-journals and are usable only with a special publication module. Finally, the third option is to pay for a service of different publishers and providers with their own publication systems. This paper deals with a comparison of the first two options: specialised systems for e-journals on one side and general CMS on the other side. With examples of open-source publication systems we want to compare their advantages and disadvantages, area of application, and their functions for the management of the e-journal publishing process.

The changing publication environment for US research

Robert K. Bell, The Changing Research and Publication Environment in American Research Universities, a working paper from the US National Science Foundation, July 2007.  From the body of the paper:

Although researchers were making continuing, and perhaps increasing, use of library services, they said that they rarely visited libraries any longer. Instead, they were using the electronic search capabilities and database subscriptions that their university libraries provide to find relevant literature. Researchers consequently were doing more targeted searching using key words, which has enabled them to access a broader range of directly relevant literature more efficiently....They also noted that older literature, when it is not covered in the electronic databases, effectively becomes inaccessible. Many researchers noted that the Internet has made the literature more accessible to them and increased their productivity as a result....

Researchers in physics, mathematics, astronomy, computer sciences, and related disciplines reported that for access to the latest research findings, they relied on ArXiv, an unrefereed online compendium of manuscripts in their fields.

Although a journal's prestige is largely a product of the perceived quality and selectivity of its peer review process, accessibility also plays a role. Several researchers said they preferred to submit to journals that published readily accessible electronic versions, since this increased the chance that others would see their work. They viewed electronic versions that were available only long after paper publication or to subscribers who pay high fees as less desirable....

The study team heard numerous complaints about commercial publishers and spoke with some proponents of open-access publishing, in which the author (or the funding agency that sponsored the author's research) pays the cost of publication. Study informants gave considerable evidence that the business model for scientific publication is in flux, but almost no evidence that recent developments had generally changed whether or when researchers chose to publish their work in journals....

Electronically accessible international databases, software, and similar tools have enabled well-trained researchers to do better work even when they were not located in the best universities....

Advances in communication have made the international scientific literature more accessible to researchers in other countries....

Sharing genomic data

Morris W. Foster and Richard R. Sharp, Share and share alike: deciding how to distribute the scientific and social benefits of genomic data, Nature Reviews Genetics, August 2007.  (Thanks to Garrett Eastman.)  Only this abstract is free online, at least so far:

Emerging technologies make genomic analyses more efficient and less expensive, enabling genome-wide association and gene–environment interaction studies. In anticipation of their results, funding agencies such as the US National Institutes of Health and the Wellcome Trust are formulating guidelines for sharing the large amounts of genomic data that are generated by the projects that they sponsor. Data-sharing policies can have varying implications for how disease susceptibility and drug-response research will be pursued by the scientific community, and for who will benefit from the resulting medical discoveries. We suggest that the complex interplay of stakeholders and their interests, rather than single-issue and single-stakeholder perspectives, should be considered when deciding genomic data-sharing policies.

Open-source tool for automated metadata extraction

The National Library of New Zealand has upgraded and opened the source code for its Metadata Extraction Tool.  (Thanks to ResourceShelf.)  From yesterday’s announcement:

The National Library of New Zealand Te Puna Mātauranga o Aotearoa is pleased to announce the open-source release of version 3.2 of its Metadata Extraction Tool.

The Metadata Extraction Tool programmatically extracts preservation metadata from a range of file formats including PDF documents, image files, sound files, office documents, and many others. It automatically extracts preservation-related metadata from digital files, then outputs that metadata in XML. It can be used through a graphical user interface or command-line interface.

The software was created in 2003, and redeveloped this year. It is now available as open-source software under the terms of the Apache Public License.


  • Kudos to the NLNZ.  The more we improve the tools for automated metadata extraction, the more we remove ergonomic barriers to self-archiving.  And by opening the source code to this tool, the National Library has greatly bumped the odds that it will continue to improve.
  • There's another nice consequence of opening the source code. Someone could make it into a module or plug-in for one of the open-source archiving packages, like EPrints, DSpace, or Fedora.  When I self-archive, I’d love to have the archiving software take an automated whack at filling out the metadata fields and only bother me to check its work and supply any missing information.  Even if this tool could only do 50% of the job today, rather than 95%, that’s a big step toward metadata consistency and streamlined self-archiving.  And over time it will only get better.

Update.  Thanks to Dorothea Salo for this splash of cold water: 

New Zealand's gizmo doesn't extract descriptive/bibliographic metadata, which is the sort your comment was about.  It extracts what they're calling preservation metadata and I usually call technical metadata -- technical information about the file itself. So if you feed it an image, it will output file format, size, bit depth, resolution, and so forth.  Still quite cool (though I think PRONOM and DROID are a bit more useful), but not quite what you're hoping for.

Thanks, Dorothea.  Got it now.  But when an open-source tool for extracting descriptive/bibliographic metadata comes along, we’ll know what to do with it.
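To make the distinction concrete: technical (preservation) metadata of the kind the NLNZ tool extracts describes the file itself, not the work it contains. Here is a minimal sketch in Python of that idea — gathering a few file-level facts and emitting them as XML. This is not the NLNZ tool's actual code or output schema; the function name and element names are illustrative only.

```python
import hashlib
import mimetypes
import os
import xml.etree.ElementTree as ET

def extract_technical_metadata(path):
    """Collect basic technical (preservation-style) metadata for one
    file -- name, size, guessed format, checksum -- and return it as
    an XML element, in the spirit of automated extraction tools."""
    stat = os.stat(path)
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()

    root = ET.Element("file")
    ET.SubElement(root, "name").text = os.path.basename(path)
    ET.SubElement(root, "size").text = str(stat.st_size)
    ET.SubElement(root, "mimetype").text = mime or "application/octet-stream"
    ET.SubElement(root, "md5").text = digest
    return root
```

Calling `ET.tostring(extract_technical_metadata("paper.pdf"), encoding="unicode")` would yield a small `<file>` record; descriptive metadata (title, author, subject) is exactly what such a sketch does not capture, which is Dorothea's point.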

More on the HHMI-Elsevier deal

Alex Palazzo, JCB to HHMI: Why did you sell out to Elsevier? The Daily Transcript, July 18, 2007.  Excerpt:

Yesterday…I came across a commentary by Mike Rossner and Ira Mellman, the two big guys at the Journal of Cell Biology. The commentary concerns the resolution of a year long fight between the Howard Hughes Medical Institute and Elsevier. To force the hand of the publishers and to support open access, HHMI instituted a new policy - they would evaluate prospective and continuing HHMI investigators based on published manuscripts that were freely accessible within 6 months of the publication date. In other words, HHMI evaluators could not consider any manuscript that was published in a journal like Cell, whose policy is to allow open access of manuscripts only after 12 months. Since Elsevier is one of the major publishing companies that has a >12month wait period, and since Elsevier owns Cell, one of the premiere journals, this action by HHMI was seen by some as a clash between these two institutions.

Recently HHMI and Elsevier came to a compromise, in that the former would pay the latter $1,500 per manuscript that came from an HHMI investigator. In exchange, Elsevier would allow free access to these publications via PubMed Central within the 6 month waiting period. So is this a victory for open access? Not really….

[Here]…is the editorial from the June 18th edition of JCB:

How the rich get richer. HHMI will bestow monetary rewards on a commercial publisher in return for the type of public access already provided by many nonprofit publishers….

Two problems with this deal immediately come to mind. First, there is a clear potential for conflict of interest when a publisher stands to benefit financially by publishing papers from a particular organization. Second, and even more seriously, this action by HHMI undermines the effort to persuade commercial publishers to make their content public after a short delay, by rewarding them for not doing so….
For many years The Rockefeller University Press and many other nonprofit publishers have released all of their content to the public after only six months, and have proven that such a policy does not reduce subscription revenues. We thus provide all authors with a free service for which HHMI will now pay Elsevier. Commercial publishers should need no financial incentive to provide this service to the scientific research community, on whom they rely for their content, their quality control, their subscribers, and for the patronage of their advertisers. Instead, Elsevier has accepted a deal that does a disservice to that community by increasing publication costs and thus further reducing the funds available for research.

HHMI has rewarded Elsevier for their steadfast refusal to release their content by further enhancing their already highly profitable business model….It is unfortunate that HHMI has forfeited its substantial bargaining power in a deal that represents a setback to the mission of public access….

Comment.  Exactly.  See my similar evaluation in the April issue of SOAN.

OA and the developing world

BioMed Central has launched Open Access and the Developing World, a new information portal that

…aims to provide resources about open access and internet technologies in the developing world. Emphasising the benefits to the developing world of increased internet technologies and open access to research, we hope to encourage projects and initiatives, and to showcase research published in open access journals that are of relevance to emerging countries.

From yesterday’s announcement on the BMC blog:

…This new website calls attention to the benefits of open access to the scientific and medical literature for the developing world.

Open access and the developing world brings together the latest relevant research articles from BioMed Central’s open access journals, newsfeeds, author profiles, resources and a new section of the BioMed Central blog, which will provide a regular round-up of news and resources relating to open access and the developing world.

From the BMC follow-up post to the launch announcement:

If access to the literature is something which affects your work in a low-income country, we'd like to hear from you.

As part of the launch of our Open Access and the developing world portal, BioMed Central is inviting researchers and practitioners working in developing countries to send in a photograph or video relating to their work, along with a story explaining how access to the scientific literature is an important issue for them.

To find out more, visit the Share your story page on the Open access and the developing world site….The sender of the best story received by 30 September 2007 will receive a contribution of $1000 towards computer equipment for the lab or project of their choice.

From BMC’s press release:

"Open access to the scientific and medical literature is a key way in which the developed world can help developing countries," said Matthew Cockerill, Publisher of BioMed Central. "In recent years, the funding for research on global health issues such as AIDS, malaria, and tuberculosis has increased significantly thanks to support from philanthropic foundations. Open access is vital to ensure that the full use is made of the results of this research."

The research articles that will feature on the new portal include publications from Malaria Journal, a leading BioMed Central journal which was recently ranked by Thomson Scientific as number one in the field of Tropical Medicine. Other BioMed Central journals which publish research highly relevant to developing countries include AIDS Research & Therapy, BMC Infectious Diseases, BMC Public Health and the International Journal for Equity in Health.

The portal also offers profiles of BioMed Central authors who work in developing countries, newsfeeds and a blog which will provide a regular round-up of news and resources relating to open access and the developing world.

"Working in a developing country I feel like I need to be one of those to take a lead in publishing much of my work in open-access journals," said Dr. Philip Hill, Clinical Epidemiologist at MRC Laboratories in Banjul, The Gambia. "I am very pleased that BioMed Central has provided this resource, which will be of particular benefit to researchers in developing countries." …

Wednesday, July 18, 2007

OA mandate for NIH clears another hurdle

Over the weekend I and many others spread the word that the House of Representatives would vote Tuesday, July 17, on the appropriations bill establishing an OA mandate at the NIH.  Here's an update.

The bill did move to the House floor on Tuesday (yesterday).  But the bill must be read aloud before the vote and it's a large bill, containing appropriations for all the agencies and programs in three federal departments (Labor, Health and Human Services, and Education).  The reading started yesterday and continues today.  It's proceeding as I write.  It may end later today or it may not end until tomorrow.

The last chance to offer an amendment on a given section is the moment when that section comes up in the reading.  The section creating the OA mandate (§217) was just read a few minutes ago.  The amendment window opened briefly and then closed.  No amendments were offered.

We've cleared a major hurdle.  Publisher organizations have been unusually intense and well-organized in lobbying to amend or strike this language.  They may have had some House members on their side but, if so, they retreated and the strong language survived intact.

This victory reflects the groundswell of public support for OA at the NIH.  House members definitely heard the message.  For all of you who contacted your representative and urged others to do so, thank you.

We still await the House vote, which should come later today or tomorrow, and then the Senate vote, which is still unscheduled.  Then we'll need a Presidential signature.

OA Update updated

Charles Bailey has upgraded his useful service, Open Access Update, which pulls together OA information from many different sources (including OAN).  From his announcement:

I’ve revised Open Access Update: migrating the aggregate RSS feed for the OA-related weblogs to Yahoo Pipes; switching the source feed for the aggregate FeedBurner feed to the new Yahoo Pipes feed; adding more weblogs to the aggregate RSS feed; correcting the URLs for the OA-related mailing lists, e-journals, and wikis; and correcting the URLs in the Google Search Engines for those resources.

Proceedings of the Rome conference on IRs and OA

Paola De Castro and Elisabetta Poltronieri (eds.), Institutional archives for research: experiences and projects in Open Access, the entire proceedings of the conference, Institutional archives for research: experiences and projects in Open Access (Rome, November 30 - December 1, 2006).  Self-archived July 18, 2007.  Abstract:  

The Congress was organised into four sessions: 1) Open Access (OA) and authors: support from the international community; 2) OA in Italy: knowledge and tools to write and search; 3) institutional policies for OA; 4) opportunities and services to develop OA. It was aimed at achieving the following objectives: a) make authors of biomedical publications aware of the benefits of depositing research material in digital open archives and publishing in OA peer-reviewed journals; b) outline the impact of the OA publishing model on the assessment of research output; c) enhance the adoption of policies encouraging the OA paradigm; d) promote cooperation between research institutions in Italy and abroad to share resources and experiences on institutional repositories. A useful introductory bibliography on the OA publishing model in the biomedical field is included in the Appendix.

New OA journal in high energy physics

Advances in High Energy Physics is a new peer-reviewed OA journal from Hindawi.  From today’s announcement:

Hindawi is pleased to announce the launch of "Advances in High Energy Physics." AHEP is one of Hindawi's "Community Journals," which are edited by distributed editorial boards of highly distinguished, international members.

"Community Journals use a very scalable, collaborative editorial model," said Mohamed Hamdy, Hindawi's Editorial Manager. "They enable us to build a strong partnership between Hindawi as a publisher and the communities we serve with our open access journals."

"High Energy Physics is a unique scientific community in their unparalleled support of Green open access via the arXiv," said Paul Peters, Hindawi's Head of Business Development. "We are very pleased to be in a position to launch AHEP and to work with the High Energy Physics community during their transition to the Gold Open Access era."

Hindawi is planning to launch a number of journals in other areas of physics, including Astronomy, Gravitational Physics, Atomic and Molecular Physics, and Condensed Matter Physics in the next few months. Hindawi is also planning on expanding its collection in several other areas, including Chemistry, Materials Science, Ecology, and Engineering.

Update.  I just learned about another new OA journal from Hindawi, Advances in Artificial Intelligence.  (Thanks to MEDAL Blogging.)  I haven’t yet seen an announcement for this one.

For data too, access and quality are independent

Tracey Lauriault, Cost Recovery Policies are NOT Synonymous with Data Quality, DataLibre, July 17, 2007. Excerpt:

One of the great data myths is that cost-recovery policies are synonymous with higher data quality. Often the myth making stems from effective communications from nations with heavy cost-recovery policies, such as the UK, which often argue that their data are of better quality than those of the US, which has open access policies….

I just read an interesting study [Bastiaan van Loenen and Jitske de Jong, Institutions Matter: The impact of institutional choices relative to access policy and data quality on the development of geographic information infrastructures, GSDI-9 Conference Proceedings, November 6-10, 2006] that examined open access versus cost recovery for two framework datasets….

The study’s hypothesis was:

that technically excellent datasets have restrictive-access policies and technically poor datasets have open access policies….

Specific Results:

The case studies yielded conflicting findings. We identified several technically advanced datasets with less advanced non-technical characteristics…We also identified technically insufficient datasets with restrictive-access policies…Thus cost recovery does not necessarily signify excellent quality.

Although the links between access policy and use and between quality and use are apparent, we did not find convincing evidence for a direct relation between the access policy and the quality of a dataset….

Update on the EPA library fiasco

Public Access To EPA Library Holdings In Jeopardy, a press release from the Public Employees for Environmental Responsibility, July 17, 2007.  (Thanks to ResourceShelf.)  Excerpt:

The U.S. Environmental Protection Agency is finalizing procedures that may lock away a large portion of its library collections from access by the public, according to agency documents released today by Public Employees for Environmental Responsibility (PEER). Compounding the inaccessibility of physical collections, the public’s ability to electronically search digitized EPA holdings is problematic as well.

Over the past 18 months, EPA has closed large parts of its library network, including regional libraries serving 23 states, as well as its Headquarters Library and several technical libraries. The holdings from these shuttered facilities have been shipped to one of three “repositories” – located in Cincinnati, North Carolina’s Research Triangle Park and D.C. How the public, and even EPA’s own staff, access these growing repositories has been uncertain.

Even as Congress moves to reverse EPA’s library closures, the agency is now racing to cement new procedures restricting the ability of the public to locate or read technical documents in the agency’s possession. A new proposed policy circulated internally for comment on July 11, 2007 provides:

“Repository libraries are not required to provide public access to their collections…”

…Meanwhile, the remaining libraries are directed to provide public access but may tailor or reduce that access depending upon resource limitations…

“EPA claims that its libraries are designed for the twin purposes of improving the quality of information for agency decision-making as well as raising public environmental awareness, but right now the libraries are not serving either purpose very well,” stated PEER Associate Director Carol Goldberg. “Significantly, EPA is not even bothering to consult the public who paid for these collections.”

In addition to the public, EPA’s own scientists have not been consulted either.  A union grievance filed on August 16, 2006 protested the closure of libraries as making it harder for scientists and other specialists to do their work. EPA ignored the grievance. On Monday, February 5, 2007, the American Federation of Government Employees National Council of EPA Locals filed an unfair labor practice complaint before the Federal Labor Relations Authority (FLRA). On June 26, 2007, the FLRA upheld the complaint and ordered EPA into binding arbitration with a hearing slated for August 14, 2007.

“Not only is the public locked out of the libraries, but the agency’s own scientists are having trouble getting needed information as well,” Goldberg added. “EPA claims that it plans to make more information more readily available, but judging by the results so far it has failed miserably.”


PKP Scholarly Publishing presentations

Abstracts of the presentations from the First International PKP Scholarly Publishing Conference (Vancouver, July 11-13, 2007) are now online.  Also see this list.

The conference blog is now complete as well.  It’s one of the best I’ve seen, with a detailed entry on each presentation.

Tuesday, July 17, 2007

Open Library launches a working demo site

The Open Library (from the Open Content Alliance) now has a working demo site.  From the site:

What if there was a library which held every book? Not every book on sale, or every important book, or even every book in English, but simply every book—a key part of our planet's cultural legacy.

First, the library must be on the Internet. No physical space could be as big or as universally accessible as a public web site. The site would be like Wikipedia—a public resource that anyone in any country could access and that others could rework into different formats.

Second, it must be grandly comprehensive. It would take catalog entries from every library and publisher and random Internet user who is willing to donate them. It would link to places where each book could be bought, borrowed, or downloaded. It would collect reviews and references and discussions and every other piece of data about the book it could get its hands on.

But most importantly, such a library must be fully open. Not simply "free to the people," as the grand banner across the Carnegie Library of Pittsburgh proclaims, but a product of the people: letting them create and curate its catalog, contribute to its content, participate in its governance, and have full, free access to its data. In an era where library data and Internet databases are being run by money-seeking companies behind closed doors, it's more important than ever to be open….

Earlier this year, a small group of people gathered at Internet Archive's San Francisco office to discuss whether this was possible. Could we build something so grand? We concluded that we could. We located a copy of the Library of Congress card catalog, phoned publishers and asked them for their data, created a brand new database infrastructure for handling millions of dynamic records, wrote a new type of wiki that lets users enter structured data, set up a search engine to look through it all, and made the resulting site look good.

We hooked it up to the Internet Archive's book scanning project, so that you can read the full text of all the out-of-copyright books they've made available. And we hope to add a print-on-demand feature, so that you can get nice paper copies of these scanned books, as well as a scan-on-demand feature, so you can fund the scanning of that out-of-copyright book you've always loved.

But we can only do so much on our own. Hopefully we've done enough to make it clear that this project is for real—not simply another pie-in-the-sky idea—but we need your help to make it a reality. So we're opening up the demo we've built so far, opening up the source code, opening up the mailing lists, and hoping you'll join us in building Open Library….

Also see this listserv announcement from Alexis Rossi of the Internet Archive:

This is a technology demo, so it doesn't have all of the bells and whistles just yet.  But we're looking for help!  If you've got data, we want it!  If you're a programmer interested in helping, please let us know!

We have a series of pages describing our project and goals, a marvelous demo site that shows off what we're capable of, and a new series of mailing lists to bring more people into the project.  Please subscribe to the lists that interest you [announcements or discussion], poke around the site, and let us know what you think.

More on the conversion of GIGA's journals to OA

The German Institute of Global and Area Studies (GIGA) is converting six of its journals to OA, thanks to funding from Germany’s DFG.  The bilingual announcement I’m excerpting below is undated, but the DFG grant was awarded on May 21 and a related German-only press release is dated July 9.  Excerpt:

The GIGA Journal Family [of six journals] is a pioneer project for the conversion of established journals in the field of social science into Open Access journals with worldwide coverage. On the 21st of May 2007 the German Research Foundation (DFG) approved GIGA’s application for financial support. The project, which is a cooperation between GIGA and the Hamburg University Press, the online publisher of the Hamburg State and University Library (SUB), will be supported for two years.

All GIGA journals will still be available in the usual printed form. Mechanisms to ensure the quality of the journals’ content by the editors, through peer review and academic advisory boards, will be continually advanced. The journals’ content is expected to go online in the second year via a staged process. The journals will become part of the GIGA Journal Family portal and will be accessible worldwide without any delay. The cooperation with the Hamburg State and University Library guarantees an integration of the GIGA journals into all major international search engines and library services. The use of "OpenURL-Standards" by professional databases and library catalogues facilitates direct access to the complete version of the requested texts. All this will help increase the GIGA journals’ international coverage, thereby enabling a closer interaction and dialogue with academic communities in Africa, Asia, Latin America and the Middle East….

As a signatory of the so-called "Berlin Declaration", the Leibniz Association, of which GIGA is a member institution, supports Open Access as the prime form of publishing of non-commercially driven academic research. With this project, GIGA will be at the cutting-edge of spreading Open Access publishing in the field of social science. The model of step-by-step migration of the established GIGA journals to combine traditional print with online open access formats aims at motivating more academic institutions to move into Open Access publishing.

Comment.  Kudos to GIGA and DFG.  This is notable for several reasons.  First, it’s one of the first strong steps toward OA by the influential Leibniz Gemeinschaft.  Here’s hoping there’s more to come.  Second, it’s six journals at once.  The pace of TA-OA journal conversions has definitely picked up over the past year, but seeing six move at once is still unusual.  Third, it’s publicly-funded.  If it’s not the first journal conversion program in this category, it’s one of the first.

July/August D-Lib

The July/August issue of D-Lib Magazine is now online.  Here are the OA-related articles:

  • Bonnie Wilson, Book Digitization Options for Libraries.  An editorial.
  • Oya Y. Rieger, Select for Success: Key Principles in Assessing Repository Models.  No abstract.  Excerpt:  Many cultural and educational institutions are in the process of selecting or developing repositories to support a wide range of digital curation activities, including content management, submission, ingest, archiving, publishing, discovery, access, and preservation. In addition, there is an increasing emphasis on deploying systems that support content re-purposing and delivery of a wide range of web services. This article offers strategies to match specific institutional requirements with repository system features and functionalities....
  • Leslie Carr and Tim Brody, Size Isn't Everything: Sustainable Repositories as Evidenced by Sustainable Deposit Profiles.  Abstract:   The key to a successful repository is sustained deposits, and the key to sustained deposits is community engagement. This article looks at deposit profiles automatically generated from OAI harvesting information and argues that repositories characterised by occasional large-volume deposits are a sign of a failure to embed in institutional processes. The ideal profile for a successful repository is discussed, and a new service that ranks repositories based on these criteria is implemented.
  • Michael Nelson, OAI-ORE Tackles Problem of Compound Information Objects on the Web.  No abstract.  Excerpt:  The Open Archives Initiative Object Reuse and Exchange (OAI-ORE) project is the latest interoperability project of the Open Archives Initiative....OAI-ORE plans to do [what OAI-PMH did] for compound objects on the web. Examples of compound information objects could include a scholarly eprint with multiple formats, versions, and data types, or a blog entry with comments, or an uploaded video with a corresponding description page....As humans, we intuitively recognize compound information objects on the web when we see them, but this distinction is not readily available to web crawlers and other automated applications. ORE will provide an unambiguous, extensible method for enumerating, and describing the relationships between the web resources that comprise a compound object.
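Carr and Brody's deposit profiles are built from OAI-PMH header datestamps. The following is a minimal sketch of the idea, not the authors' actual method: the data is synthetic and the spike heuristic and its threshold are illustrative assumptions.

```python
from collections import Counter

def deposit_profile(datestamps):
    """Count deposits per month from OAI-PMH header datestamps
    (YYYY-MM-DD strings) -- the raw material for a deposit profile."""
    return Counter(ds[:7] for ds in datestamps)

def looks_embedded(profile, spike_ratio=10):
    """Heuristic: a repository whose busiest month dwarfs its median
    month suggests occasional bulk imports rather than the steady,
    embedded deposit pattern Carr and Brody identify with success."""
    counts = sorted(profile.values())
    median = counts[(len(counts) - 1) // 2]
    return max(counts) < spike_ratio * max(median, 1)

steady = ["2007-01-05", "2007-01-17", "2007-02-03",
          "2007-02-21", "2007-03-02", "2007-03-30"]
bulk = ["2007-01-01"] * 200 + ["2007-02-14"]   # one mass import

print(looks_embedded(deposit_profile(steady)))  # → True
print(looks_embedded(deposit_profile(bulk)))    # → False
```

The point of the sketch is only that the shape of the deposit history, not its total size, is what the ranking service would measure.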

JISC launches RepositoryNet to connect existing projects

JISC has launched RepositoryNet, an umbrella initiative to unify four of its existing repository projects.  From today’s announcement:

Repositories are important for universities and colleges in helping to capture, manage and share institutional assets as a part of their information strategy. JISC is funding JISC RepositoryNet to help universities and colleges build and manage repositories so they are interoperable and research and learning outputs can be accessed and re-used…. 

In 2006 JISC committed £15 million towards Digital Repositories and Preservation activity, to support the UK educational community in realising the benefits of digital repositories, and as part of this work JISC has established JISC RepositoryNet.

JISC RepositoryNet brings together a number of activities that have been funded by JISC including:

JISC is also funding a number of universities and colleges to set up or further develop repositories for their research and learning assets. In order to realise the potential of a distributed network of digital repositories it is important that institutions set up their repositories according to common standards and for them to embed the use of repositories within their institutions and working processes. The aim of JISC RepositoryNet is to help form an interoperable network of repositories. It will do this by providing UK universities and colleges with access to trusted and expert information about repositories and by supporting some key services that form building blocks for a network of repositories. By working together, sharing practice and implementing common standards, UK universities and colleges can help to improve access to research and learning and to manage and curate their output.

All repositories within UK universities and colleges can participate in and contribute to RepositoryNet, for example by implementing common practice….

Making data OA while blocking downstream enclosures

Chris Rusbridge, Open Data... Open Season?  Digital Curation Blog, July 16, 2007.  Excerpt:

Peter Murray-Rust is an enthusiastic advocate of Open Data (the discussion runs right through his blog; this link is just to one of his articles that is close to the subject). I understand him to want to make science data openly accessible for scientific access and re-use. It sounds a pretty good thing! Are there significant downsides?

Mags McGinley recently posted in the DCC Blawg about the report "Building the Infrastructure for Data Access and Reuse in Collaborative Research" from the Australian OAK Law project….[B]uried in the middle of the report is a cautionary tale. Towards the end of chapter 4, there is a section on risks of open data in relation to patents, following on from experiences in the Human Genome and related projects.

"Claire Driscoll of the NIH describes the dilemma as follows:

It would be theoretically possible for an unscrupulous company or entity to add on a trivial amount of information to the published…data and then attempt to secure ‘parasitic’ patent claims such that all others would be prohibited from using the original public data."

(The reference given is Claire T Driscoll, ‘NIH data and resource sharing, data release and intellectual property policies for genomics community resource projects’ Expert Opin. Ther. Patents (2005) 15(1), 4)

The report goes on:

"Consequently, subsequent research projects relied on licensing methods in an attempt to restrict the development of intellectual property in downstream discoveries based on the disclosed data, rather than simply releasing the data into the public domain."
They then discuss the HapMap (International Haplotype Map) project, which attempted to make data available while restricting the possibilities for parasitic patenting [through a GPL-type license]….Checking HapMap, the Project's Data Release Policy describes the process, but the link to the Click-Wrap agreement says that the data is now open. (See also the NIH press release.) There were obvious problems, in that the data could not be incorporated into more open databases. The turning point for them seems to be:
"...advances led the consortium to conclude that the patterns of human genetic variation can readily be determined clearly enough from the primary genotype data to constitute prior art. Thus, in the view of the consortium, derivation of haplotypes and 'haplotype tag SNPs' from HapMap data should be considered obvious and thus not patentable. Therefore, the original reasons for imposing the licensing requirement no longer exist and the requirement can be dropped."
So, they don't say the threat does not exist from all such open data releases, but that it was mitigated in this case.

Are there other examples of these kinds of restrictions being imposed? Or of problems ensuing because they have not been imposed, and the data left open? (Note, I'm not at all advocating closed access!)

More on the DSpace Foundation

HP and MIT Create Non-profit Organization to Support Growing Community of DSpace Users, the HP press release on the launch of the DSpace Foundation, July 17, 2007.  Excerpt:

HP and the MIT Libraries today announced the formation of the DSpace Foundation, a non-profit organization that will provide support to the growing community of institutions that use DSpace, an open source software solution for accessing, managing and preserving scholarly works in a digital archive….

The foundation will assume responsibility for providing leadership and support to the ever-growing DSpace community and promote even wider distribution and use. Michele Kimpton, formerly of the Internet Archive, will serve as executive director of the DSpace Foundation….

More than 200 projects around the world are using DSpace and additional projects are getting underway, including: 

  • 2008 Virtual Olympic Museum/Beihang University: Beihang University in Beijing, one of China’s top universities, will use DSpace to archive the 2008 China Summer Olympics, thus creating a collection of materials about the Beijing games that can be shared over the Internet quickly and easily. It is scheduled to open in March 2008.
  • Texas Digital Library: This project provides a digital infrastructure for the scholarly activities of Texas universities, which possess an enormous amount of intellectual capital that is not readily available to faculty, staff and students. The library’s contents include open access journals, electronic theses and dissertations, faculty datasets, departmental databases, digital archives, course management and learning materials, digital media and special collections….
  • The China Digital Museum: This project includes 18 campus museums, each with 20,000 - 50,000 objects covering geoscience, biology, anthropology, science and technology….
  • Open Repository: A managed service from BioMed Central in the United Kingdom, Open Repository will build, launch, host and maintain institutional repositories for organizations….
  • National Institute for Technology and Liberal Education (NITLE): A non-profit initiative dedicated to promoting liberal education, NITLE offers its participating colleges a pilot for a hosted and shared instance of the DSpace open source software. For a fee, NITLE hosts the pilot….

“The creation of the DSpace Foundation and Michele Kimpton’s appointment are important steps in the evolution of DSpace,” said Ann J. Wolpert, director, MIT Libraries. “Together these actions signal that both the platform and the community have successfully reached the point where an independent organization is needed to direct the project.”

New book on setting up an OAI-compliant repository

Using the Open Archives Initiative Protocol for Metadata Harvesting (Third Millennium Cataloging), a new book from Libraries Unlimited by Timothy W. Cole and Muriel Foulonneau, June 30, 2007.  (Thanks to Muriel Foulonneau.)  From the publisher’s description:

Online scholarly publishing is revolutionizing scholarly communication, and the Open Archives Initiative (OAI) is among those protocols leading the way in the transformation process. OAI enables access to Web-accessible material by harvesting (or collecting) the metadata descriptions of the records in an archive so that services can be built using metadata from many archives. Through a series of case studies, Cole and Foulonneau guide the reader through the process of conceiving, implementing and maintaining an OAI-compliant repository. Its applicability to both institutional archives and discipline-based aggregators is covered, with equal attention paid to the technical and organizational aspects of creating and maintaining such repositories.
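The harvesting workflow the book covers rests on a simple HTTP-plus-XML protocol: a harvester issues a ListRecords request against a repository's base URL and parses Dublin Core metadata out of the response. A minimal sketch in Python follows; the sample response is invented, and a real harvester must also follow resumptionToken paging and handle errors.

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_url(base_url, resumption_token=None):
    """Build a ListRecords request URL; oai_dc (Dublin Core) is the
    metadata format every OAI-PMH repository must support."""
    if resumption_token:
        return f"{base_url}?verb=ListRecords&resumptionToken={resumption_token}"
    return f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"

def parse_records(xml_text):
    """Yield (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    for record in root.iter(f"{OAI}record"):
        header = record.find(f"{OAI}header")
        identifier = header.findtext(f"{OAI}identifier")
        title = record.findtext(f".//{DC}title")
        yield identifier, title

# Invented sample response for illustration.
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An Example Eprint</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(list(parse_records(sample)))  # → [('oai:example.org:1', 'An Example Eprint')]
```

Services built on many archives simply run this loop against each repository's base URL and aggregate the results.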

Launch of DSpace Foundation

Andrea Foster, DSpace Archiving Project for Research Will Get $500,000 Shot in the Arm, Chronicle of Higher Education, July 17, 2007 (accessible only to subscribers).  Excerpt:

A foundering project to promote free digital archives of scholarship is getting new life, as a nonprofit foundation will pump in more dollars and technical expertise.

The DSpace Foundation, drawing on $500,000 from the Massachusetts Institute of Technology and Hewlett-Packard, will help hundreds of colleges, museums, and other nonprofit groups using DSpace, a program that helps institutions establish archives of research papers, data sets, images, and journal articles that the public can access freely. The foundation will offer technical advice to developers who are refining the software and help them integrate it with other applications. The foundation also will promote a wider distribution of the archives. The formation of the foundation is scheduled to be announced today.

MIT's libraries and HP said they had developed DSpace to promote collaboration among researchers, spark new ideas for study, and make scholars' intellectual output freely available to all.

Michele Kimpton, the foundation's executive director, said that at least 230 institutions, half of which are in the United States or Britain, are using the DSpace software.  "The community was getting too large, and the amount of code and diversity of applications was getting too big to have no infrastructure behind it," she said….

The 13-member DSpace Federation Governance Advisory Board…believed that DSpace was hindered by a widely held view that it was "owned" by MIT and HP, and that a nonprofit governing group would -- in the long term -- help eliminate that perception and attract more financial contributions….

Ms. Kimpton, who was a director at the Internet Archive, said part of her role would be to raise money from corporations and other foundations for DSpace. She said she expected to hire a staff of three to five people.

PS:  “Foundering”?

Update. Also see Dorothea Salo, DSpace is Not Foundering.

Publish-or-perish policies entail OA

Jan Velterop, Fly or flounder, The Parachute, July 16, 2007.  Excerpt:

If one looks at scientific information from an economic point of view, and considers supply and demand, it will probably look like this: In an area mainly driven by readers who clamour to see the research (a 'read-or-rot' area), subscriptions make sense; in an area mainly driven by the need to publish (a 'publish-or-perish' area, arguably the most common in science), article processing charges for open access publishing make sense; and in an area mainly driven by political or other overarching societal concerns ('fly-or-flounder'?), direct subsidies make sense.

The question is: can one, or should one, look at scientific information in this way? The answer is, in my view at least, 'yes'….

But are the three scenarios mentioned above of equal importance? Scientific information is to a very large degree a 'product' for which supply and demand are overlapping, suppliers (authors) being 'demanders' (in their role as readers) - and vice versa. With regard to formally publishing scientific findings, the demands placed on the system by 'suppliers' are, in general, much stronger than the demands placed on it by readers. What I've often heard in research circles is that as a scientist, you can mostly get away with reading only a selection of relevant literature (the rest being of a confirmatory nature, so seeing the abstract is enough, or even just knowing that an article exists), or rather, you must, because there's an information overload in most disciplines and you wouldn't be able to read it all anyway. As an author, though, there's no escape: you have to publish.

Of the three scenarios mentioned, the last two are arguably the most important. Yet the overwhelming majority of the economic activity takes place in the framework of scenario 1. That's an 'issue' (euphemism for 'problem') and our challenge is to make the transition to scenarios 2 and 3 while keeping the crucial elements of the system of formal publishing intact and economically viable, especially peer-review.

Monday, July 16, 2007

Access policies at the Leibniz Gemeinschaft

Klaus Graf has written a series of blog posts (one, two, three, four), in German, on the access policies of the Leibniz Gemeinschaft.  Here’s his English-language summary (by email):

The Leibniz Gemeinschaft, which has signed the Berlin Declaration, has to support OA. My series of blog entries asks in which form OA is supported by the member institutes. Generally the Max Planck Gesellschaft, Fraunhofer-Gesellschaft, and Helmholtz Gemeinschaft are more active. There is a working group for OA in the Leibniz Gemeinschaft, which elected Dr. Stempfhuber as speaker in April 2007. Noteworthy is that the GIGA journals are becoming OA.

The first case study was devoted to the Bergbau-Museum Bochum, part of the humanities section of the Leibniz Gemeinschaft (I will review only these institutes). No OA activities could be detected (no mention of OA on the website, no full texts of staff publications, no database of museum items with pictures, etc.).

The second was dedicated to the DIE (institution for the education of adults). Of 989 publications in the publication database, 743 are online free of cost. In part, CC-BY-NC-ND licenses are used. The journals are not OA but a lot of articles are. Thus the DIE is on the right track.

The third was on the DIPF (Deutsches Institut für Internationale Pädagogische Forschung).  It has very few publications online but a lot of heritage items from the library on the history of education (historical journals, pictures).

Before reviewing the individual Leibniz institutions, I reviewed the Dresden State Museums (Staatliche Kunstsammlungen Dresden), which in 2003 signed the Berlin Declaration as the only German museum to do so. There is absolutely no evidence that this museum supports OA in any way: no mention on the website, no full texts of publications, no permission-free use of pictures, and the pictures on the website are too small for scholarly use.

Part 4: There is no evidence on the website that the famous German Museum (Deutsches Museum) supports OA.

French obstacles to the scholarly use of images

Klaus Graf reports (in German) on a Paris conference on the obstacles (from lack of fair use to high museum prices) faced by art historians who wish to use images in their publications.  See the conference announcement, which links to relevant articles from the right sidebar, and a report summarizing the discussion (both in French).

OpenURL link resolvers increase journal usage

Hua Yi and Catherine S. Herlihy, Assessment of the impact of an open-URL link resolver, New Library World, 108, 7/8 (2007) pp. 317–331.  Only this abstract is free online, at least so far:

Purpose – This paper seeks to report a data-driven assessment of student and faculty use of electronic scholarly resources pre- and post-implementation of an open-URL link resolver.

Design/methodology/approach – Usage data were extracted from two multidisciplinary scholarly aggregators pre- and post-implementation of an open-URL link resolver. Open-URL link resolver usage data for both aggregators were also collected and two timelines established. Statistical analysis was performed to assess direct and indirect impact.

Findings – Study results show that the implementation of an open-URL link resolver has directly contributed to usage increase in the short and long periods under study. Usage patterns also indicate the technology has indirect impact.

Research implications/limitations – Limitations include one-semester limits of short-term data. Non-standardized data could be compared only within each aggregator.

Practical implications – Research outcomes provide a tool for the assessment of student/faculty use of electronic scholarly resources and Collections and Catalog librarian participation in teaching and learning. Usage data are increasingly available to librarians, so work based on research findings can be assessed.

Originality/value – This paper reports student/faculty usage data of searching activities, not their perceptions of electronic resources. Usage data demonstrate that librarians who select and provide access to electronic resources positively affect teaching and learning.

Comment.  These results show that reducing access barriers improves usage even for priced or toll-access journals.  Imagine the effect of removing access barriers.  Unfortunately it’s easier to do a controlled before/after test with the former than with the latter.  But does anyone doubt that if we measured the usage of a set of TA journals before and after their conversion to OA, we would see usage go up?  Time and resources permitting, journals planning to convert should consider such a controlled study as part of the process. 

Update. I just learned about a similar study: John D. McDonald, Understanding Online Journal Usage: A Statistical Analysis of Citation and Use, Journal of the American Society for Information Science & Technology, January 1, 2007. Abstract:

This study examined the relationship between print journal use, online journal use, and online journal discovery tools with local journal citations. Local use measures were collected from 1997 to 2004 and negative binomial regression models were designed to test the effect that local use, online availability, and access enhancements have on citation behaviors of academic research authors. Models are proposed and tested to determine whether multiple locally recorded usage measures can predict citations and if locally controlled access enhancements influence citation. The regression results indicated that print journal use was a significant predictor of local journal citations prior to the adoption of online journals. Publisher-provided and locally recorded online journal use measures were also significant predictors of local citations. Online availability of a journal was found to significantly increase local citations and for some disciplines, a new access tool like an OpenURL resolver significantly impacts citations and publisher provided journal usage measures.
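For context, an OpenURL is simply a citation serialized into a query string that the institution's link resolver turns into a link to an appropriate copy of the full text. Here is a sketch of building a Z39.88-2004 OpenURL in the KEV journal format; the resolver address and the ISSN below are hypothetical, and a real resolver accepts many more `rft.*` keys than shown.

```python
from urllib.parse import urlencode

def openurl(resolver_base, **citation):
    """Build an OpenURL 1.0 (Z39.88-2004) query for a journal article
    using the key/encoded-value (KEV) journal format."""
    params = {
        "url_ver": "Z39.88-2004",
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    # Map citation fields to rft.* keys (jtitle, atitle, issn, volume...).
    params.update({f"rft.{k}": v for k, v in citation.items()})
    return resolver_base + "?" + urlencode(params)

link = openurl(
    "https://resolver.example.edu/openurl",   # hypothetical resolver endpoint
    jtitle="New Library World",
    atitle="Assessment of the impact of an open-URL link resolver",
    issn="1234-5679",                          # placeholder ISSN
    volume="108",
    spage="317",
    date="2007",
)
print(link)
```

The aggregators in the study embed links like this one next to each search result, which is why turning the resolver on measurably changed usage patterns.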

Finding online versions of printed texts

Eric Engleman, Startup's aim: Scan a printed page, get a website, Puget Sound Business Journal, July 13, 2007.  (Thanks to ResourceShelf.)

A Seattle startup is working on a novel device that could capture a few words from a book or printed article and quickly find the full text on the web.

The company, Exbiblio, expects to have a prototype ready in the fall that could span the growing divide between the internet and the world of printed material.

That’s all that the newspaper shows to non-subscribers.  But ResourceShelf adds a bit more:

The device Exbiblio is developing will house a small optical reader, which connects [wirelessly] to a computer. Users can scan a snippet of printed text — as few as six words at a time — and use that “identifying barcode” to find the corresponding full text on the web, said [founder Martin King].

A person reading a printed newspaper, for example, could instantly get an online version of an article and e-mail it to friends or colleagues. Such a technology could take the advantages of the web — the interactivity and ability to directly measure traffic — and apply that to the printed word.

PS:  I like this idea.  As far as I can tell, it only works for texts that have been indexed for free online searching, the vast majority of which are OA.
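The note about indexed text is the crux: a six-word phrase is almost always unique to a single document, so a scanned snippet can act as a key into a full-text index. A toy sketch of the idea follows, with an invented two-document corpus; Exbiblio's actual matching technique is not described in the article.

```python
import re
from collections import defaultdict

def shingles(text, n=6):
    """Yield normalized n-word windows ("shingles") from a text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def build_index(docs, n=6):
    """Map every n-word shingle to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for sh in shingles(text, n):
            index[sh].add(doc_id)
    return index

# Invented mini-corpus standing in for the indexed web.
docs = {
    "article-1": "Open access removes price and permission barriers to research.",
    "article-2": "Price barriers limit access to the research literature worldwide.",
}
index = build_index(docs)

# A "scanned" six-word snippet resolves to exactly one source document.
query = next(shingles("Removes price, and permission barriers to"))
print(index[query])  # → {'article-1'}
```

With a corpus the size of the open web the same lookup would be backed by a search engine's phrase index, but the principle is identical: the snippet is the "identifying barcode".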

Jim Till reviews John Willinsky

Jim Till, Review of The Access Principle, Be openly accessible or be obscure, July 15, 2007.  Excerpt:

An invited review of The Access Principle, by John Willinsky (MIT Press, 2006) was submitted on 13 July 2007 for publication (after copyediting) in the University of Toronto Quarterly 77:1 (Winter 2007/2008) — “Letters in Canada 2006.” Publisher: The University of Toronto Press. This version (but not the submitted version) includes links to relevant URLs. I’m the author of the review. (Credit line: James E. Till, Project Open Source|Open Access, University of Toronto).

From the viewpoint of a researcher with a background in the biomedical and health sciences, John Willinsky’s book can be regarded as an experimental intervention designed to stimulate changes in the current system of scholarly publishing.

Willinsky’s book can also be classified as a policy-oriented intervention. Its goal (as stated on page 31) is ‘incremental advances in the circulation of knowledge within the academic community and beyond’. The access principle that underlies the book is the belief (as defined on page 5) that a ‘commitment to the value and quality of research carries with it a responsibility to extend the circulation of this work as far as possible, and ideally to all who are interested in it and all who might profit by it’. Routes to the provision of free access to research articles are described, such as open access repositories and open access journals. However, the emphasis is on opening access, that is, on ways to increase access to the outputs of scholarship and research, rather than on any particular inflexible definition of open access.

Experimental interventions need to be evaluated....

[M]ost evaluations of the book have been quite positive....

There’s another experimental intervention that’s been fostered by John Willinsky and is described in the book. It’s the Open Journal Systems (OJS), open source software for journal management and publishing that is having substantial ongoing impact. As of March 2007, over 900 titles were using OJS, in ten languages. One recent example is a new open access Canadian general medical journal, Open Medicine. It seems likely that the OJS may have much greater impact on scholarly publishing over the longer term than will Willinsky’s book....

Pennsylvania may catch up with the rest of the US

Putting laws online overdue, Altoona Mirror, July 16, 2007.  An editorial.  Excerpt:

There ought to be a law to ensure the public has free online access to the laws of the commonwealth….

Last month, the state House unanimously approved a bill that would require Pennsylvania to put all of its statutes on a free Internet site.

Such an effort is not unusual.  Forty-nine states have made their statutes available on free Internet sites. Pennsylvania is the only exception….

In an era in which online 24/7 access to information is the norm and the state is trying to lure more high-technology businesses and research facilities, Pennsylvania is far behind the curve.

Rep. Lisa Bennington, D-Allegheny, who sponsored House Bill 976, sees free online access to Pennsylvania’s statutes as an important part of government reform. The Associated Press quotes her as saying, ‘‘We need to have greater transparency in government and in our laws.’’

We agree.  Senators should move quickly to consider HB 976.

Your tax dollars paid to create and enforce the laws. You should not have to pay again to view the statutes at your leisure. It’s time for Pennsylvania to catch up to the rest of the nation.

More on OA and plagiarism

Clement Vincent, The Purloined Bibliography, Chronicle of Higher Education, July 16, 2007. 

Vincent compiled a bibliography, made it OA, and later found it plagiarized in a published book.  (The proof was that the published version repeated Vincent’s typos and annotations.)  When he confronted the publisher with the evidence, it questioned whether a web site or bibliography could be copyrighted.  When Vincent wrote to the authors, he received an acknowledgement of his work and an apology for the omission.  The publisher eventually reprinted the plagiarized portion of the book with credit to Vincent.  Excerpt:

What have I learned from the experience? …First, I am not sure I can call my experiment in open-access publishing a success. I have been thinking about starting a bibliography on another topic. Should I also put it online? I don't know.

Comments.


  1. Vincent was treated very badly, but he should blame the publisher, not OA.  OA reduces the likelihood of plagiarism, though without eliminating it.  As I’ve often argued:
    OA deters plagiarism.  In the early days, some authors worried that OA would increase the incentive to plagiarize their work.  But this worry made no sense and has not been borne out.  On the contrary.  OA might make plagiarism easier to commit, for people trolling for text to cut and paste.  But…plagiarism from OA sources is the easiest kind to detect.  Not all plagiarists are smart, of course, but the smart ones are steering clear of OA sources….
  2. Yes, web pages can be copyrighted.  Shame on this unnamed publisher either for not knowing that elementary fact of publishing life or for responding to Vincent in bad faith.
  3. The publisher should also know better than to confuse plagiarism with copyright infringement, which are separate misdeeds (even if they overlap, as here).  By chance I wrote about this in SOAN last month:
    Someone can commit plagiarism without infringing copyright (by copying a fair-use excerpt and claiming it as one's own) and infringe copyright without committing plagiarism (by copying a larger excerpt but with attribution).  One can also commit both together (by copying a large excerpt and claiming it as one's own), but that doesn't collapse the distinction….Plagiarism is typically punished by the plagiarist's institution, not by courts, that is, by social norms, not by law….

Publishing for machine and human readers

Carlos Henrique Marcondes and five co-authors, Representing and coding the knowledge embedded in texts of Health Science Web published articles, in Leslie Chan and Bob Martens (eds.), Proceedings ElPub - International Conference on Electronic Publishing, Vienna, Austria, 2007, pp. 33-42.  Self-archived July 12, 2007.

Abstract:   Despite the fact that electronic publishing is a common activity to scholars, electronic journals are still based on the print model and do not take full advantage of the facilities offered by the Semantic Web environment. This is a report of the results of a research project with the aim of investigating the possibilities of electronically publishing journal articles both as text for human reading and in machine-readable format recording the new knowledge contained in the article. This knowledge is identified with scientific methodology elements such as problem, methodology, hypothesis, results, and conclusions. A model integrating all those elements is proposed which makes explicit and records the knowledge embedded in the text of scientific articles as an ontology. Knowledge thus represented enables its processing by intelligent software agents. The proposed model aims to take advantage of these facilities, enabling semantic retrieval and validation of the knowledge contained in articles. To validate and enhance the model, a set of electronic journal articles was analyzed.

The success of OJS

Heather Morrison, 1,000 journals using Open Journal Systems, Imaginary Journal of Poetic Economics, July 15, 2007.  Excerpt:

More than 1,000 journals are now using Open Journal Systems (OJS).

Of these:
99% are academic
49% are fully open access
40% are delayed open access
11% have yet to publish their first issue
NOT ONE JOURNAL USING OJS was found to be entirely subscription-bound

Disciplinary breakdown:
50% - sciences
23% - social sciences
14% - humanities
12% - interdisciplinary
1% - non academic

This comes 10 years after the beginning of the Public Knowledge Project, and 4 years after the first OJS journal, Post-Colonial Text, at Kwantlen University College in Vancouver, British Columbia.

As announced by John Willinsky at the First International PKP Scholarly Publishing Conference, Vancouver, July 11 - 13, 2007. With thanks to PKP Librarian Kevin Stranack for the research.

Given the success of this conference, which sold out with minimal publicity, and which received a very great many compliments from participants - ongoing, and increasing, growth of the PKP community appears to be a fairly safe prediction….

Sunday, July 15, 2007


A spammer has started using my email address as the faked return address in a locust-storm of spam.  There are three consequences that might affect you:

  1. If you receive spam apparently from me, it’s a frame-up.  I only send one mass mailing (my newsletter) and it’s strictly opt-in.
  2. I’m receiving hundreds of bounce messages a day now, as ISP spam filters detect the spam and tell me that “my” message will not be allowed through.  I actually appreciate this and wish that all the faked messages could be detected and blocked.  But the bounce messages are so numerous that I have to filter them to the trash in order to save a remnant of my workday.  I realize that one in a thousand of them is telling me that one of my legitimate emails didn’t go through.  I’m sorry to lose that information, which would allow me to re-send the message, but I don’t see that I have any choice.  If I owe you an email, this is now one more reason why my response might be delayed or destroyed.
  3. Your ISP might have added my email address to a blacklist.  If you have reason to think that I’m sending you personal (legit, direct, non-spam) emails that you’re not receiving, then check with your ISP to see whether I’ve been blacklisted and ask it to whitelist me.

Items #2 and #3 are serious inconveniences for both of us.  I’d much rather have legit mail go through than to block illegit mail, if I had to choose, and therefore find these consequences more harmful than spam itself. 

Heather Joseph podcast on NIH policy

Hear an 8 minute podcast by Heather Joseph, Executive Director of SPARC, on the importance of strengthening the NIH public access policy from a request to a requirement.  (Thanks to DigitalKoans.)

Action alert for contacting Congress on the NIH policy

To make it easier to contact your Congressional delegation, the American Library Association has created an online action alert.  Just enter your zip code, fill in your contact info, compose your message, and go.  (Thanks, ALA.)

Unlike some other action alerts, unfortunately, this one does not supply a pre-written message.  But the front page of the alert contains some bullet points in support of an OA mandate at the NIH.  If you copy that text before advancing to the next stage, you can paste it into the message field.  You’ll have to reformat it before sending it, since the bullets will disappear in the plain-text message field.  Or you could use some of the language from the open letter to Congress from 26 US Nobel laureates in science.

If you don’t want to use the ALA web form, find your Representative and Senators and use their own preferred web forms, email addresses, or telephone and fax numbers.  But please do it one way or another and do it soon.  The House vote is on Tuesday, July 17, and time is running out.

Update.  Charles Bailey has turned the ALA/SPARC bullet points into plain text ready to cut/paste into the ALA action-alert text box.  This is a great convenience.  Use the language as it is or paste it into the text box and modify it before sending it.  (Thanks, Charles!)

OA journals in archaeology

Mike Smith, Open Access Journals and Archaeology, Publishing Archaeology, July 14, 2007.  Excerpt:

There are a few OA (Open Access) journals in archaeology, and these are listed in the Directory of Open Access Journals. Two good ones that I have used are the Journal of Caribbean Archaeology, and the Bryn Mawr Classical Review….

Journals published by commercial publishers are a real problem. Subscription costs are skyrocketing; to take just one example, the Journal of Archaeological Method and Theory just doubled in price, so I dropped my subscription. Libraries are in crisis mode over commercial journal costs. Given the economics of these journals, coupled with their efforts to restrict access to published articles (C.T. Bergstrom and Bergstrom 2006; T.C. Bergstrom 2001; Fisher 2007), I have almost decided to stop publishing in commercial journals.

Journals published by university presses are better than commercial journals in both their economics and their access policies; in these areas they are similar to journals published by many professional societies. I was initially surprised that the main societies that I belong to, the SAA (Society for American Archaeology) and the AAA (American Anthropological Association), generally oppose OA policies for their journals. For the AAA case, see discussion and links in the Open Access Anthropology Blog….

The fact that finances trump science in the SAA is clear from the Annual Meetings….

There are institutions that help in the production and operation of OA journals for relatively small fees (for example, the Scholarly Exchange), and many university libraries now help faculty who want to start OA journals by providing expertise and resources. But unless one is willing to sacrifice basic quality features of journals (such as copy-editing and professional-looking page layout), financing OA journals remains an obstacle….

Special issue on Creating Commons

The March 2007 issue of SCRIPT-ed is devoted to Creating Commons.  (I missed this at the time.)  All the articles are OA-related, but see especially:

  • Roger Clarke, Business Models to Support Content Commons.  Abstract:  The application of conventional, 'scarce resource' economics to content has been mistaken and harmful. More appropriate forms of economic analysis highlight the critical role that accessibility to information plays in the process of innovation. Meanwhile, down at the micro-economic level, there is an all-too-common perception that open content approaches are unsustainable and bad for business, and reflect naïve idealism on the part of their proponents. This paper identifies a range of suitable business models, and thereby demonstrates that the content commons is sustainable and appropriate for profit-oriented business enterprises.