News from the open access movement
Robin Peek, "Open (?) Debate on RCUK," Information Today, October 2005 (accessible only to subscribers). Excerpt from Robin's preprint:
Following the Open Access (OA) movement has been like watching two tennis players fighting for a point in a high-stakes tournament. Last month I highlighted the Research Councils UK (RCUK) proposal to require that research that they fund be made available through self-archiving....A curiously temporary open letter was published on August 5th at the Association of Learned and Professional Society Publishers (ALPSP) web site....Sally Morris, chief executive of ALPSP and the letter’s signatory, declares that, “We are convinced that RCUK’s proposed policy will inevitably lead to the destruction of journals. A policy of mandated self-archiving of research articles in freely accessible repositories, when combined with the ready retrievability of those articles through search engines (such as Google Scholar) and interoperability (facilitated by standards such as OAI-PMH), will accelerate the move to a disastrous scenario.”...[O]n August 21, eight friends of OA, including Sir Tim Berners-Lee, inventor of the World Wide Web, and Stevan Harnad, long-standing advocate of OA, joined six other computer science professors in an open letter to Professor Ian Diamond, the Chair of the RCUK Executive Group, to refute the claims made by ALPSP....Morris noted member concerns that if the mandate passes, smaller-circulation journals and society journals will suffer, if not cease publishing altogether. 
"Institutional repositories of non-organised papers will draw away subscriptions because the articles are easier to find due to Google Scholar and because budgets are dropping," said Morris....The rebuttal letter observes that, “This hypothesis has already been tested and the actual evidence affords not the slightest hint of any ‘move to a disastrous scenario.’” Further, “when asked, both of the large physics learned societies (the Institute of Physics Publishing in the UK and the American Physical Society) responded very explicitly that they cannot identify any loss of subscriptions to their journals as a result of this critical mass of self-archived and readily retrievable physics articles....Researchers who cannot access the journal version, however – because their institutions ‘lack the funds to purchase all the content their users want’ – should not be denied access to the basic research results, which have always been given away for free by their authors (to their publishers, as well as to all requesters of reprints).”
(PS: Robin would like to make clear that she knows that the ALPSP letter was subsequently made public again, though her article went to press before this was known.)
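(Both the ALPSP letter and the rebuttal turn on how easily self-archived articles can be retrieved via OAI-PMH. For readers unfamiliar with the protocol, here is a minimal sketch of how a harvester builds a ListRecords request and reads the Dublin Core records that come back. The repository URL is hypothetical and the XML is a hand-made sample, not a live response; real endpoints follow the same pattern.)

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical repository endpoint -- real OAI-PMH base URLs look much like this.
BASE_URL = "https://eprints.example.ac.uk/cgi/oai2"

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urlencode(params)

# A hand-made sample of the XML an OAI-PMH repository returns.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Self-Archiving and Journal Subscriptions</dc:title>
          <dc:creator>A. Researcher</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def extract_titles(xml_text):
    """Pull Dublin Core titles out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    dc = "{http://purl.org/dc/elements/1.1/}"
    return [el.text for el in root.iter(dc + "title")]

url = list_records_url(BASE_URL)
titles = extract_titles(SAMPLE_RESPONSE)
```

Because every compliant repository answers the same six verbs in the same XML vocabulary, a single harvester (or a search engine such as Google Scholar) can index many archives at once, which is exactly the "ready retrievability" both letters are arguing about.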
Project Censored has released its 2005 list of the top 25 news stories that didn't make the news. What's #1? Bush Administration Moves to Eliminate Open Government. (Thanks to Garrett Eastman.) Excerpt:
Throughout the 1980s, Project Censored highlighted a number of alarming reductions to government access and accountability....It tracked the small but systematic changes made to existing laws and the executive orders introduced. It now appears that these actions may have been little more than a prelude to the virtual lock box against access that is being constructed around the current administration. “The Bush Administration has an obsession with secrecy,” says Representative Henry Waxman, the Democrat from California who, in September 2004, commissioned a congressional report on secrecy in the Bush Administration. “It has repeatedly rewritten laws and changed practices to reduce public and congressional scrutiny of its activities. The cumulative effect is an unprecedented assault on the laws that make our government open and accountable.”...Under the Bush Administration, agencies make extensive and arbitrary use of FOIA exemptions....Quite commonly, the Bush Administration simply fails to respond to FOIA requests at all....The Bush Administration has dramatically increased the volume of government information concealed from public view....The Bush Administration has also obtained unprecedented authority to conduct government operations in secret, with little or no judicial oversight....Compared to previous administrations, the Bush Administration has operated with remarkably little congressional oversight.
The U.S. National Science Foundation (NSF) has published version 4.0 of its report, NSF’s Cyberinfrastructure Vision For 21st Century Discovery, September 26, 2005. The report outlines the agency's vision for cyberinfrastructure and seeks public comment by email at email@example.com. Excerpt:
While hardware performance has been growing exponentially – with gate density doubling every 18 months, storage capacity every 12 months, and network capability every 9 months – it has become clear that increasingly capable hardware is not the only requirement for computation-enabled discovery. Sophisticated software, visualization tools, middleware and scientific applications created and used by interdisciplinary teams are critical to turning flops, bytes and bits into scientific breakthroughs. It is the combined power of these capabilities that is necessary to advance the frontiers of science and engineering, to make seemingly intractable problems solvable and to pose profound new scientific questions. The comprehensive infrastructure needed to capitalize on dramatic advances in information technology has been termed cyberinfrastructure. Cyberinfrastructure integrates hardware for computing, data and networks, digitally-enabled sensors, observatories and experimental facilities, and an interoperable suite of software and middleware services and tools. Investments in interdisciplinary teams and cyberinfrastructure professionals with expertise in algorithm development, system operations, and applications development are also essential to exploit the full power of cyberinfrastructure to create, disseminate, and preserve scientific data, information, and knowledge....NSF envisions a world in which digital science and engineering data are routinely deposited in convenient repositories, can be readily discovered in well-documented form by specialists and non-specialists alike, are openly accessible, and are reliably preserved.
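(The report's doubling times translate into annual growth factors via the usual rule: a quantity that doubles every m months grows by 2^(12/m) per year. A quick sketch; the 18/12/9-month figures come from the report, the arithmetic is only illustration.)

```python
# Convert the report's doubling times into equivalent annual growth factors:
# a quantity that doubles every m months grows by 2 ** (12 / m) per year.
doubling_months = {
    "gate density": 18,
    "storage capacity": 12,
    "network capability": 9,
}

annual_factor = {k: 2 ** (12 / m) for k, m in doubling_months.items()}
# Gate density grows roughly 1.59x per year, storage 2x, network roughly 2.52x.
```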
Susan Morrissey, Chemical Society Offers Resolution In PubChem Debate, Chemical & Engineering News, October 3, 2005. Excerpt:
The American Chemical Society has offered an olive branch to the National Institutes of Health in the ongoing dispute over PubChem, the agency’s small-molecule database. The three-part proposal focuses on areas of “common ground” and puts aside the two groups’ philosophical differences. (C&EN is published by ACS.) “It’s time to put aside the divisive philosophical differences in favor of reaching an accord on key operational issues,” ACS President William F. Carroll said in a statement to C&EN. “We trust [NIH Director Elias A.] Zerhouni’s assurances that PubChem is not headed toward duplication of the CAS Registry and that PubChem truly will be complementary, not competitive,” he explained. In a Sept. 22 letter to Zerhouni, ACS seeks confirmation that PubChem will not “disseminate information on the commercial availability of compounds.” ACS also expressed concern in the letter over the quality of data being put into PubChem. The letter asks NIH --which has noted that it is open to additional safeguards-- to take steps to ensure the data are original prior to submission. The letter also asks NIH to introduce a predissemination process to “ensure that data are pertinent and derived from established, bona fide sources.” According to an agency spokesman, “NIH leadership is carefully reviewing the letter and is preparing a response. NIH continues to work with ACS on this issue and is hopeful that a resolution will be reached in the near future.”
Update. The October 3 ACS letter is online here.
dLIST is running a survey "to understand the self-archiving behaviors of scholars in Library and Information Science (LIS) and use/non-use of the Digital Library of Information Science and Technology (DLIST). You are eligible to participate because you are either an author or a reader of articles in LIS journals, a faculty in a LIS school/other academic unit or a registered user or potential user of DLIST."
Petroleum Journals Online publishes six peer-reviewed, open-access journals. (Thanks to the Internet Resources Newsletter.) From the web site:
Petroleum Journals Online is concerned with fundamental and applied research, new technology, operations, and case studies in petroleum engineering. It provides an international forum for online publication of high quality peer reviewed papers on the principles and techniques for hydrocarbon exploitation. We publish the following journals: e-journal of drilling engineering, e-journal of production engineering, e-journal of reservoir engineering, e-journal of petrophysics, e-journal of production geology, and e-journal of energy economics.
Proceedings (Baylor University. Medical Center) has been added to the collection of journals available through PubMed Central. Currently the journal archive only covers the last two years, but efforts to prepare additional content continue and v16 (2003) is expected to come online in October.
Proceedings (Baylor University. Medical Center) - Fulltext v17+ (2004+); ISSN: 0899-8280.
PLoS Pathogens is the fifth Open Access journal published by the Public Library of Science and the third title launched in 2005. Extracts from the inaugural editorial:
The journal aims to publish "... breakthrough[s] in understanding the biology of pathogens and pathogen-host interactions." The editorial board is "... committed to publishing groundbreaking findings on bacterial, fungal, parasitic, prionic, and viral pathogens." The journal also speaks to the accessibility concerns raised by some publishers in resisting the RCUK and NIH archiving proposals: "[t]he addition of a short, nontechnical summary accompanying each article in PLoS Pathogens adds another dimension to our accessibility. These synopses represent an important mechanism for sharing scientific advances with nonspecialists, and facilitating communication between scientists and members of the general public."
PLoS Pathogens - Fulltext v1+ (2005+); Print ISSN: 1553-7366 | Online ISSN: 1553-7374.
Outsell has published FutureFacts: Information Industry Outlook 2006, September 19, 2005. It's free for downloading. Excerpt:
Prediction 8: Experimentation will continue in open access, driven by funding shifts, new alliances, and technology innovation à la Google Scholar. Google Scholar is still going strong; Public Library of Science is still adding titles. The shift to open access continues. This year PLoS Biology has been assessed by Thomson ISI to have an impact factor of 13.9, which places it among the most highly cited journals in the life sciences, ahead of several prestigious traditional journals. That’s a solid sign of legitimacy and PLoS’ ability to attract high-caliber editors and authors....Key Trends: Transition from print to online accelerating; open access growing, but with little effect on publishers’ bottom lines; reference content hot in the form of e-books and other electronic products; young students driving new collaborative ways to use information in their studies....[H]igher education trends include open access, new delivery methods, and pressure on textbook pricing....Some of the biggest changes in the market are coming in the academic sphere. Open access is taking root, not just in new forms of scholarly journals but in the many forms of collaboration and knowledge-sharing that are expanding in academia, especially as a new digital-native generation moves in. Alternative publication models such as e-books, electronic reference, and print-on-demand are finding fertile ground here. Pressure on traditional textbook pricing will continue....This segment is heavily weighted toward a few players including giant Reed Elsevier, which continues to innovate, has stellar renewal rates, and publishes must-have content. Library budgets – a big factor in this segment – are loosening up, but pressure on prices of textbooks and journals continues. Open access has yet to make a significant dent in revenues, and next-generation players are small....Open Access: Traction, but Little Effect on Publishers’ Bottom Lines. 
New models have been adopted by American Institute of Physics, Springer, PLoS, and BioMed Central. HighWire Press and the American Chemical Society both freely post articles six months to a year after publication. Open access is here, even as it continues to be debated by the information professional community....Universities and research funding organizations have tired of paying for research and then paying to buy it back in the form of journals; even business and professional publishing will see new forms of “open” creation and distribution including authoring, editing, peer review, and distribution....Content is collaborative and social; publishing models are moving from one-to-many to many-to-many. Scholarly publishing, particularly in the STM sector, is clearly making this shift; other professional sectors will be next.
Andy Powell, Divided by a Common Language? Focus on UKOLN, September 2005.
There is a significant and growing interest in repositories at the moment. (I doubt many readers will faint with shock from reading that.) This is perfectly appropriate since the community is rightly interested in a better understanding of the technical, policy and operational challenges that repositories produce as well as of their benefits, and of other issues raised by the 'open access' movement more generally. It is, however, interesting to note how the community latches on to buzzwords every so often. It happened with 'portals' and, to a lesser extent, 'portlets'...and is happening again now with 'repositories'. This is a good thing in many respects, since it allows the community to focus on a particular set of issues and activities in a way that might otherwise never happen. But there are dangers too. Perhaps most importantly, we need to remember that not all systems capable of delivering repository-like functionality will be called 'repositories'. In particular, a large part of the community will never have heard of repositories or eprint archives but will think instead of 'content management systems'. Our use of terms can sometimes prove problematic.
The EU has launched a consultation on European digital libraries. Excerpt:
The European Commission today [September 30] unveiled its strategy to make Europe’s written and audiovisual heritage available on the Internet. Turning Europe’s historic and cultural heritage into digital content will make it usable for European citizens for their studies, work or leisure and will give innovators, artists and entrepreneurs the raw material that they need. The Commission proposes a concerted drive by EU Member States to digitise, preserve, and make this heritage available to all. It presents a first set of actions at European level and invites comments on a series of issues in an online consultation (deadline for replies 20 January 2006). The replies will feed into a proposal for a Recommendation on digitisation and digital preservation, to be presented in June 2006.
The consultation documents include an FAQ on European digital libraries. Excerpt:
Initiatives that improve the accessibility and flow of information are good for the knowledge economy, and the digital libraries initiative has a considerable economic potential. Once digitised, our cultural and scientific heritage can be used as input for a wide range of information products and services....Google’s [Library] initiative is an example that shows the potential of the online environment for making information more accessible for all. The sheer size of the announced operation – 15 million books – appeals to the imagination. The initiative has certainly triggered a reflection on how to deal with our European cultural heritage in the digital age. It is also interesting in that it highlights the possibilities for public/private initiatives in this area. Public/private partnerships or sponsoring by private companies will accelerate digitisation....Under current legislation only public domain works (where there is no longer copyright) can be made available to the public online. For other works, digital libraries need to get the explicit agreement of rightholders. In practice this means that only works from the 1920s or before will be covered in a digital library, or works for which there is an agreement, on a case by case basis, with the rightholders.
Google On-Line Library Project Challenged, The CalTrade Report, September 30, 2005. Excerpt:
Book publishers in the United Kingdom are seriously considering legal action against Google over the Bay Area-headquartered search engine company's plans to create a virtual library by digitizing millions of books held in public libraries and universities. According to press reports, the London-based Publishers Association has refused to rule out taking legal action over Google's "Print Project," saying that it was holding a "full and frank debate" with the company and other parties.
Julie Hilden, Authors Sue Google Over Its "Print for Libraries" Project: Will the Suit Succeed? Should It? And Why, As An Author, I'm Opting Out of Any Class Action, Writ, September 26, 2005. (Thanks to Law Pundit.) Excerpt:
In this column, I will address four questions: Should this suit be certified as a class action? What should Google's position be on class certification? (We know the plaintiffs' position: they want it.) Who's likely to win this suit? And, assuming the suit is certified as a class action, should individual authors opt in, or opt out?...A class action will mean that if the Authors Guild wins, Google simply can't go forward. The damages would be too great, and an injunction could be very broad. So it would be a complete win for the Guild. Conversely, if Google wins, it won't have to contend with numerous duplicative suits by individual authors across the country - the legal equivalent of death by a thousand cuts. And that would be a complete win for Google....On the whole, I think Google's use of a book should be deemed fair use. And, most likely, it will be....[T]aking a small chunk doesn't usually interfere with the market for the whole. In general, that will be true for Google's project, too: Its search function seems more likely to be used to find books than to moot the need for their purchase....[Even for non-fiction books,] Google, then, is more likely to enable new research, rather than displacing the income stream to nonfiction authors from old research....For all these reasons, I'd deem Google's "fair use" argument the likely winner here. Surely, technical copying is going on here - in the form of the scanning of the book. But the point of copyright law isn't to protect against copying, it's to protect against harm to the value of intellectual property. And it seems that Google's project is likely to inflict little of that latter type of harm....As an author myself (and, incidentally, a one-time Authors Guild member) I feel very conflicted about this lawsuit. But I've decided I'll opt out of the class action, if this suit becomes a class action (or decline to opt in, as the case may be). 
In other words, I won't take Google's money for this use, no matter what. Besides being a writer, I'm also a strong free speech advocate, and Google's project may well help free speech more than it hurts it. I want to see the advance of human knowledge much more than I want to get paid for making my books searchable. The fact that they might become searchable, to me, is a welcome surprise.
Lawrence Solum, Google Print Lawsuit & Class Certification, Legal Theory Blog, September 21, 2005. Solum is a law professor at the U of San Diego. (Thanks to Law Pundit.) Excerpt:
Putting on my proceduralist hat for a moment, there is a very substantial problem with class certification. The complaint defines the class as follows: "The Class is initially defined as all persons or entities that hold the copyright to a literary work that is contained in the library of the University of Michigan." That class includes many authors who would be injured if the plaintiffs were to prevail--including, for example, me! I am a member of the plaintiff class--owning the copyright to at least three or four dozen works in the University of Michigan library. I have a very strong objective interest in Google Print succeeding--because as a scholar, I benefit from the dissemination of my works and because reaching agreement with Google will be costly to me and Google, essentially killing the project. A substantial intraclass conflict of interest destroys "adequacy of representation," making class certification inappropriate, both under the federal rules of civil procedure and under the due process clause of the Fifth Amendment of the U.S. Constitution. Opt out is not a solution--because that would create an affirmative duty to monitor the litigation and opt out (in order to preserve a constitutional right), and the Supreme Court has made it clear that no such duty should be created in a number of cases, including Phillips Petroleum v. Shutts. Pro-bono representation for intervenors opposing certification, anyone?
(PS: I have a book under copyright at the U of Michigan library and would gladly join a group of intervenors in opposing class certification. The message for authors who object to Google Library, translated out of legalese: speak for yourselves.)
Jack Balkin, Search Me. Please. Balkinization, September 28, 2005. Balkin is a law professor at Yale University. (Thanks to Law Pundit.) Excerpt:
As an author who is always trying to get people interested in my books,...I have to agree with Tim O'Reilly's op-ed: the Authors Guild suit against Google is counterproductive and just plain silly....Every author wishes that more people read his or her books. Most of us would happily stand on street corners with sandwich boards if we thought it would help. Anything that brings our work in front of a larger public should be welcomed as a good thing, not something to be feared. The Authors Guild, and indeed all authors, should be working with search engines like Google to come up with new and creative ways to get people to know about and sample what we have often spent many months-- and sometimes many years-- working on. Authors spend their lives putting the best part of themselves into their books. The cruelest fate they can suffer is not criticism and rejection-- it is being forgotten. The digitally networked environment gives them a chance to avoid that fate. All authors who care about their work should embrace it.
The International Association for Media and Communication Research (IAMCR) is proposing an International Researchers' Charter (September 30, 2005) for adoption at the November WSIS meeting in Tunis. Excerpt:
Worldwide, research activity is confronted by diminishing budgets and increasing control of output by a variety of actors including governments, while researchers are being submitted to unprecedented and deleterious changes in their status, salaries and the independence of their investigations. The World Summit on the Information Society (WSIS) has helped to foster discussion worldwide on the need for unhindered and equal access to the means of communication and information content. The importance of information arising from high quality research in the humanities and the sciences has not, however, been sufficiently emphasized during the Summit. It has not emphasized the central role played by researchers in producing information, in promoting a better understanding of media and ICT systems and their content and functions, and in developing culturally relevant content and fostering communication in support of the attainment of inclusive and people-centred Knowledge Societies. Therefore, the International Association for Media and Communications Research (IAMCR) calls upon researchers worldwide to subscribe to the following Researchers’ Charter principles and recommendations for action:
IAMCR invites individual researchers, NGOs, organizations and caucuses to sign the charter.
The presentations from the workshop, Future Needs for Research Infrastructures in Biomedical Sciences (Brussels, March 16, 2005), are now online.
Elizabeth Breakstone, Librarians Can Look Forward to an Exhilarating Future, Chronicle of Higher Education, September 30, 2005 (accessible only to subscribers). Excerpt:
I expect my fellow librarians to be excited by changes that make information more accessible. But when I read articles about the future of the library, I often sense fear and anxiety rather than anticipation and enthusiasm. The panic that permeates public discussions about the future of libraries is absent when I speak with my friends from graduate school and my colleagues. Unfortunately, few people outside the field hear our perspective....I see optimistic conversations in library-related blogs and publications, but when I read articles that reach the general public, I groan. Take Michael Gorman, now president of the American Library Association, commenting in The Chronicle earlier this year about Google's plan to put library books online: "They say they're digitizing books, but they're really not, they're atomizing them. In other words, they're reducing books to a collection of paragraphs and sentences which, taken out of context, have virtually no meaning."...Most of us know that Google's digitization project, the open-access movement, the proliferation of blogs, and other recent developments increase both the availability of information and the challenge of finding what's relevant. The more sources that are available, the more important it is to be able to interpret and evaluate them. In understanding and exploring technological changes, librarians not only participate in the information revolution but help direct its course....Although I don't fear technology and its impact on the library's future, I do have some concerns. I worry about the economics of scholarly communication -- the combination of plummeting library budgets and skyrocketing journal and database prices. 
I fear that leasing digital collections of material, rather than owning them, will leave librarians dependent on the long-term benevolence of corporations....When I think about the library that I'll be working in 30 years from now (right before I retire, if all goes according to plan), I have no idea what my work environment will look like. But when I speak with friends who are also new in the field, I sense excitement and empowerment rather than anxiety. Like me, they find it exhilarating to work in a profession with such an open future -- an open future, mind you, that will be shaped by us.
Open Access a Must for Wellcome Trust Researchers, a Wellcome Trust press release, undated but released today. Excerpt:
Next week (1 October) the Wellcome Trust becomes the first scientific research funder to insist that papers emanating from its grant awards are placed in an open access repository. From the 1st October it will become a condition of funding that papers be posted on PubMed Central (PMC) – the free-to-access, life sciences archive developed by the National Institutes of Health – and made accessible within 6 months of publication. To facilitate this, the Wellcome Trust has – with the help of NIH – established a manuscript submission system, through which papers accepted for publication in a peer-reviewed journal can be deposited in PMC. From the 1st October next year all existing Trust grant holders will have to deposit future papers into PubMed Central. This delay will allow existing grant holders time to adjust to the new policy and let us know what problems – if any – they may experience, affording us time to overcome them. During this time the Trust, working in partnership with other UK life sciences funders, plans to establish a UK version of PubMed Central – UKPMC.
Nick Webb, Digitalization, Google and all that...., EPS Google Debate, September 28, 2005. Webb is the former Marketing Director of Simon & Schuster UK. Excerpt:
If Google succeeds with this project – even if it starts only with public domain titles – it will be able to offer the most seductive information service on the planet....Why will the Google service be unbeatable? Because it will make the greatest archive on Earth searchable. That’s the archive called All the Books in the World....Universal digitization will make it easy to publish, but exceedingly difficult to pay for anything new to be written....Perhaps an author will eventually make as much from downloads as he or she would have earned from an advance – maybe more. But the money will come after the work is published....Come on, I hear you saying, publishers will still pay advances. Yes, but against what – and when? If traditional book sales fall – as surely they must – the paper version of the work will become a bibliophile’s indulgence, generating less income, though possibly at better margins....If GoogleWorld really takes off, authors will also have to ask themselves what publishers bring to the party. It will not be distribution. Individuals and great corporations will have the same access to a global network. Publishers will offer the cachet of their imprint, their editing skills and panache at marketing. Authors are often bitterly cynical about these virtues, especially the last. Will there be enough money in the GoogleWorld environment for proper marketing budgets?...GoogleWorld offers a potentially brilliant resource, one I would use in a heartbeat. But it’s short-term. By plundering history, it undermines the economic basis of a market that in its bumbling and inefficient way has served as a repository of culture and learning (and a deal of crap too, it must be said) for centuries.
(PS: Webb assumes without evidence --in fact, contrary to mounting evidence-- that Google Print will decrease rather than increase the sale of priced, printed books.)
KU Provost to Step Down After 13 Years, Kansas City InfoZine, September 29, 2005. Excerpt:
David E. Shulenburger, who has overseen impressive gains in the academic profile of University of Kansas students and a renewed emphasis on effective teaching, announced today that he will step down as executive vice chancellor and provost of the Lawrence campus at the end of June 2006....Shulenburger, a labor economist, will return to teaching in the School of Business. He leaves a long legacy of achievements as provost. Among them:...Leader of a national dialogue on the economics of scholarly communication in the digital era. For drawing attention to making scholarly publications affordable and his proposal to create a National Electronic Article Repository, Shulenburger received commendation from the Association of Research Libraries.
(PS: David Shulenburger is a first-generation leader of the OA movement, from his October 1998 proposal for a National Electronic Article Repository (NEAR) to the March 2005 University of Kansas resolution on OA and his accompanying memorandum urging Kansas faculty to deposit their research output in the Kansas ScholarWorks repository. Under his leadership, the University of Kansas became the first U.S. university to sign the Registry of Institutional OA Self-Archiving Policies. All the best to him as he returns to teaching.)
There's a new Slashdot thread on repositories for multimedia in the public domain.
India to set up two more science institutes, New Kerala, September 28, 2005. Excerpt:
India will have two more institutes to promote science and technology in Pune and Kolkata on the lines of the Indian Institute of Science at Bangalore, Prime Minister Manmohan Singh announced Wednesday. "I have always felt that it is a pity that a country of a billion people has only one Indian Institute of Science," he said after giving away the CSIR Diamond Jubilee Technology Award 2004 and Shanti Swaroop Bhatnagar prizes for Science and Technology here.
(PS: The Indian Institute of Science is a leader in OA to Indian research, offering OA to its house journal back to its launch in 1914. Let's hope that the Pune and Kolkata institutes will share its commitment to OA.)
From a Cornell University press release (September 28, 2005):
A team of Cornell University researchers has been awarded a $2 million National Science Foundation (NSF) grant to develop advanced Web tools for social sciences research. Ultimately intended to assist in the detailed statistical and observational study of social and information networks, the project will involve a team of computer scientists and social scientists developing the means -- dubbed "cybertools" -- to extract and analyze information from vast collections of data. The project's primary source of data will be the Internet Archive, which is supported by the NSF and the Library of Congress, among others. One of the first steps in the project, which is funded through 2007, will be to transfer 30 percent, or 200 terabytes, of the massive archive to a computer server at Cornell for use by researchers.
Miriam Clinton, The Internet Library: rip, mix or burn? Open Democracy, September 28, 2005. Excerpt:
The powerful distribution mechanisms of the networked world, particularly peer-to-peer file sharing, present a unique challenge to the rule of law. But at present no one will meet that challenge. While filesharers will not compromise on ultimate freedom, corporations cannot see past the bottom line. The result is bad news for posterity....The rise of p2p, and of other systems of distribution between human peers (the peer in p2p refers to machines, not humans, and their position of equality in a network) enabled by the disintermediary powers of the internet, has been matched by an unprecedented strengthening in copyright law. In each phenomenon, there is an accountability deficit....What has gone unconsidered is the educational value of public domain archiving....At present, anyone can legally download works from organisations that work with Free, Open Source or Creative Commons licences and material in the public domain to distribute artistic materials. But even these projects encounter strange barriers. Brewster Kahle is attempting one such project in the Internet Archive. Kahle has fought and won legislative rights to extract software code from dated technology, hardware from perhaps only a decade ago which is quickly becoming unreadable either due to material decay or technological obsolescence. As yet he is not permitted to publish such material, but was granted the rights to archive the software on contemporary hard disks provided that he adhered to conventional copyright law. Distribution may be made possible in the future via a lawsuit (Kahle v. Gonzales) currently in process.
Yet the lengths to which he has had to go to stay within the law, when his desire is to archive a piece of software that would be valued at less than a few dollars in today's money, seem ridiculous....[W]e must go further and tear down the walls of worldwide dictatorship which have led to the restrictive DMCA and EUCD legislation, until research and archiving are once again permitted, until such systems are held up to review or even outlawed due to their clear and direct obstruction of the recording of the events of this century. Projects such as the Internet Archive are essential, but accountability must extend further until a library is permitted to be built and referenced by the general public in order to prevent the coming of a digital dark age.
If you remember, Jimmy Wales' keynote address at the Wikimania conference in August listed 10 things that will be free. The list included encyclopedias, dictionaries, textbooks, music, and art, but not peer-reviewed research literature. However, Ethan Zuckerman reports that Wales presented a revised and expanded list to Harvard's Berkman Center on Tuesday. One addition to the list:
Academic publishing. Jimmy’s slowly but surely coming around to the Open Access model for academic publishing advocated by Peter Suber and others.
(PS: Now I can say that Wales' list captures the most significant targets! His influence and advocacy will be very welcome in the OA movement.)
Heather Morrison, Open Access: Good for Business! Imaginary Journal of Poetic Economics, September 28, 2005. Excerpt:
Providing open access to the scholarly research literature makes it readily available to the whole business community. All have access to the latest knowledge, on which to build new business ideas - based on the soundest knowledge we have. Picture open access to all of the literature relating to environmental sciences, for example. What opportunities will emerge for the creative entrepreneur, to find new ways of producing goods and services that help us to protect and enhance our environment? If any of our research finds new forms of producing energy, that are renewable and non-polluting - why not share them with everyone, so that we can devise means of applying the solutions as rapidly as possible? It makes a great deal of sense that business would have access to research which is funded through taxpayer dollars, and conducted at universities. Businesses, after all, do pay taxes.
John Blossom, Wikibooks Welcome Open-Source Textbooks to the Web, Shore News Commentary, September 28, 2005. Excerpt:
If ever there were some doubt that the open source movement will have an impact on the publishing industry one need only take a look at Wikibooks, a project spun off by the founder of Wikipedia, the open source online encyclopedia. Using the same technology and post-posting jurying of submitted content as Wikipedia, the Wikibooks project intends to build courseware for K-12 curricula in multiple languages over the next several years. As noted in CNET News there are only about 11,000 articles in the database so far, but it's a strong start to what promises to be a significant alternative for school districts trying to stretch their tax dollars effectively. Wikibooks would in effect become a "generic brand" competing against major text publishing houses, much as inexpensive generic drugs compete against major pharmaceutical companies that churn out slightly different and patentable (read: copyrightable) treatments to boost their margins. The difference in publishing, of course, is that there are only so many ways to learn third grade math that would justify all-new textbooks on any regular basis. Does this harbinger the death of profits in textbook publishing? Hardly so, yet the roads to profit may involve new angles. Open source textbooks may offer smart textbook printing services an opportunity to provide packaging for Wikibooks content to supply school systems wanting hard copies. Then again, if Answers.com can leverage Wikipedia content in a stable of reference materials, who's to say that Answers.com cannot become clever repackagers of open source and copyrighted course materials that complement their reference product - a product that is already popular in school systems. There's nothing wrong with healthy margins, but it is going to become increasingly hard for textbook publishers to design those margins focused on older content production, packaging and monetization regimes. 
Yet again, copyright is not going to be a protection for content that's not truly unique or cost-effective.
David Bollier, Greedy Publishers Spur the Open-Source Textbook Movement, On The Commons, September 28, 2005. Excerpt:
Now that my son has started college, I have become more aware of a growing problem for students and their parents: soaring textbook prices. Publishers have become increasingly adept at inventing insidious new ways to wring more money from beleaguered college students, who have little recourse but to buy the assigned textbook -- or enroll in another course. Publishers have two favorite tactics: make frequent and gratuitous updates of editions (requiring students to purchase new books rather than cheaper used ones), and “bundle” CD-ROMs and other instructional supplements with textbooks (forcing students to purchase the whole package of materials rather than make cheaper à la carte purchases). The good news is that documentation of these abuses is now in hand. The U.S. Government Accountability Office issued a revealing report in July 2005. It reflects similar findings by the California Public Interest Research Group, which has tracked this issue for some time....The “value-subtracted” from e-textbooks is even worse than this, however. As Peter Suber of Open Access News reports, some publishers are selling electronic textbooks that “expire.” In essence, students “rent” the textbook rather than own it, thus eliminating the resale market entirely. Books can’t be returned. And they are “locked into” the computers that download them, so that they can’t be copied or distributed. All of this brings me to the heartening rise of open source textbook publishing. As Suber blogged on August 9, 2005, "there are now several full-blown open-access textbook initiatives underway. These include the California Open Source Textbook Project, CommonText, Libertas Academica, the Open Textbook Project, and Wikibooks."
Suber reports that there are also hybrid initiatives like BookPower, whose ebooks are only free to developing countries....It’s hard to tell how open-source textbooks will evolve and go mainstream, but certainly every new ratcheting of the screw by textbook publishers will make open textbooks even more attractive. Yet another instance of the power of the online commons, which can subvert unresponsive markets, construct new communities of practice, and grow with self-reinforcing momentum in highly efficient ways.
Dan Reed, IPod maps draw legal threats, Wired News, September 26, 2005. Excerpt:
Transit officials in New York and San Francisco have launched a copyright crackdown on a website offering free downloadable subway maps designed to be viewed on the iPod. IPodSubwayMaps.com is the home of iPod-sized maps of nearly two dozen different transit systems around the world, from the Paris Metro to the London Underground....More than 9,000 people downloaded the map, which was viewable on either an iPod or an iPod nano, before Bright received a Sept. 14 letter from Lester Freundlich, a senior associate counsel at New York's Metropolitan Transit Authority, saying that Bright had infringed the MTA's copyright and that he needed a license to post the map and to authorize others to download it....Last week Bright received a similar cease-and-desist letter from officials with Bay Area Rapid Transit, or BART, demanding that Bright remove a map of the San Francisco rail system....The New York Times reported in June that the MTA has begun registering its colorful route symbols as trademarks and has sent more than 30 cease-and-desist letters to businesses that had been using the route symbols to sell such items as bagels, perfume, T-shirts and tote bags....BART's letter to Bright read in part, "There is a widespread belief that materials published by public agencies such as BART are in the public domain. This belief is incorrect."
PS: Dan Gillmor's comment is exactly right:
This is an outrageous abuse of governmental power, and it's difficult to believe it's even legal. Federal publications are, by law, not copyrighted -- for the good reason that taxpayers have already paid for the writing. Instead of wasting the public's resources on paying lawyers to go after people who are only helping promote the transit services, the New York and Bay Area governments should grow up and serve the people.
Michael Bugeja and Daniela V. Dimitrova, The Half-Life Phenomenon: Eroding Citations in Journals, The Serials Librarian, 49, 3 (2005). Only this abstract is free online:
The phenomenon of lapsed URLs, otherwise known as “linkrot,” has been acknowledged since the 1990s; however, relatively few studies have addressed the impact of linkrot with respect to the footnote, the foundation upon which scientific documentation is based. In this summary of research to date, the authors focus on three top journals in journalism and communication --Human Communication Research, Journal of Communication, and Journalism & Mass Communication Quarterly-- testing some 416 online citations over four years. Of the total 416 citations, only 61% were still accessible. Additionally, 19% of the online footnotes contained an error in the URL, and 63% did not provide an access date in the published citation. Of those links that were still active, only 58% matched the cited content. The authors also introduce their concept of “the half-life of Internet footnotes,” or the time it takes for one-half of online citations in a journal to go dead, and make recommendations to extend the online life of Internet-based footnotes.
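(PS: The half-life concept lends itself to a quick back-of-the-envelope estimate. If one assumes link decay is roughly exponential -- an assumption of this sketch, not a claim from the study -- then the reported 61% survival rate over four years implies a half-life of about 5.6 years:

```python
import math

def footnote_half_life(surviving_fraction: float, elapsed_years: float) -> float:
    """Estimate the half-life of online citations, assuming exponential decay.

    surviving_fraction: share of cited URLs still accessible after elapsed_years.
    """
    if not 0 < surviving_fraction < 1:
        raise ValueError("surviving fraction must be strictly between 0 and 1")
    decay_rate = -math.log(surviving_fraction) / elapsed_years
    return math.log(2) / decay_rate

# The study found 61% of 416 citations still accessible after four years.
print(round(footnote_half_life(0.61, 4.0), 1))  # -> 5.6
```

The same formula reproduces the definition: a 50% survival rate over four years gives a half-life of exactly four years.)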
Anita Coleman, DLIST and DL-Harvest: Open Access for LIS, a conference presentation, September 26, 2005.
Abstract: This is a 30-slide presentation sponsored by the University of Arizona, School of Information Resources & Library Science, Library Student Organization (LSO) on Sept. 26, 2005 from 6 - 7:30 pm. This is essentially the story of DLIST from inception in 2002 and includes the establishment of an advisory board, the open access aggregator DL-Harvest in 2005, the unfolding of the goals, objectives and vision, and the people who have been involved including internships. The context of the Open Access movement is briefly explored. References and notes help increase understanding of the importance of open access and DLIST to LIS.
Act now to preserve research data for the future, warns head of DCC, a JISC press release, dated today. Excerpt:
Digital information collected by research teams is being lost because contextual data is not being properly recorded, says the head of the UK organisation set up to provide advice on storing information. Unless researchers act now to ensure the data they collect is properly recorded and classified, their work will be lost to future generations, says Chris Rusbridge, Director of the Digital Curation Centre (DCC)....Mr Rusbridge warns that some research data has already been lost. “It is important to act now because losses to this data are occurring already. Curation begins at creation. “This kind of information is valuable in many ways, including access to legally significant records, access to all kinds of cultural and societal information, and continued access to the expensive products of science and other research. “Digital curation is important because without it digital information will be lost or become useless.”
The Distributed Library: OAI for Digital Library Aggregation, Digital Library Federation, September 26, 2005. The final report from the OAI Scholars Advisory Panel Meeting (Washington DC, June 20-21, 2005). Excerpt:
The meeting achieved several interlocking goals: introduced the members of the Panel to each other and to the librarians taking part in the grant activities; allowed the librarians to explain what harvestable metadata is, how OAI works, and what its potentials for digital scholarship are; allowed the scholars to ask questions about their role in the project; and allowed the librarians to test assumptions against a sample audience and to receive feedback fairly early in the grant’s timeline. The bulk of the first half of the meeting was taken up with a series of discussions and demonstrations of existing OAI tools and prototypes, and with an explanation of where OAI has come from and where we hope to take it as a library service. The University of Michigan’s OAIster, Emory's Metadata Migrator, UIUC's Experimental OAI Registry, the CIC OAI collection, and The University of Waikato’s Greenstone were all demonstrated. The second half of the meeting on the following day started with a discussion of the OAI Best Practices work, MODS, and DLF Aquifer. There was early on an observation from a scholar that many of their Web projects are going into their second generations and this may be a good time to influence scholars to add metadata and use services such as OAI harvesting. It was also noted that there is an inherent tension between the individualism that drives much humanities work and the consistency across projects that a service based on harvested metadata requires....There was very high value placed on being able to find archival material on a subject quickly.
This was seen to be the single best potential for OAI services of the sort that we demonstrated....There was a strong feeling that the services we demonstrated were too “library-focused” --they gave too much priority to the name of the institution from which an object came-- this was seen as good PR for the library but not a particularly useful thing for the scholar to know (at least, not of prime value earlier in the searching process). The item and its collection are of prime importance, not the institution that holds the item.
Tim O'Reilly, Search and Rescue, New York Times, September 28, 2005. An op-ed. Excerpt:
Authors struggle, mostly in vain, against their fated obscurity. According to Nielsen Bookscan, which tracks sales from major booksellers, only 2 percent of the 1.2 million unique titles sold in 2004 had sales of more than 5,000 copies. Against this backdrop, the recent Authors Guild suit against the Google Library Project is poignantly wrongheaded. The Authors Guild claims that Google's plan to make the collections of five major libraries searchable online violates copyright law and thus harms authors' interests. As both an author and publisher, I find the Guild's position to be exactly backward. Google Library promises to be a boon to authors, publishers and readers if Google sticks to its stated goal of creating a tool that helps people discover (and potentially pay for) copyrighted works. (Disclosure: I am a member of the publisher advisory board for Google Print. As the name implies, it is simply an advisory group, and Google can take or leave its suggestions.)...I'm with Google on this one. It would certainly be considered fair use, if, for example, I circulated a catalog of my favorite books, including a handful of quotations from each book that helps people to decide whether to buy a copy. In my mind, providing such snippets algorithmically on demand, as Google does, doesn't change that dynamic. Google allows click-through to the entire book only if the book is in the public domain or if publishers have opted in to the program....A search engine for books will be revolutionary in its benefits. Obscurity is a far greater threat to authors than copyright infringement, or even outright piracy. While publishers invest in each of their books, they depend on bestsellers to keep afloat. They typically throw their products into the market to see what sticks and cease supporting what doesn't, so an author has had just one chance to reach readers. Until now. Google promises an alternative to the obscurity imposed on most books. 
It makes that great corpus of less-than-bestsellers accessible to all. By pointing to a huge body of print works online, Google will offer a way to promote books that publishers have thrown away, creating an opportunity for readers to track them down and buy them....I'm sorry to see authors buy into the old-school protectionism of the Authors Guild, not realizing they're acting against their own self-interest. Their resistance can come only from a failure to understand the nature of the program. Google Library is intended to help readers discover copyrighted works, not to give copies away. It's a tremendous service to authors that will help them beat the dismal odds of publishing as usual.
Andhra Pradesh, SVU digital library has Harvesters for easy access to research works, NewIndPress, September 29, 2005. Excerpt:
The newly inaugurated digital library at Sri Venkateswara University has some unique features that distinguish it from any other university library. The home page of SV University Library has two services - HDD and OAIster - listed under a heading - Harvesters. HDD stands for Harvester of Digital Documents and it is a computer programme that retrieves metadata (bibliographic information) from various digital repositories (databases) of a particular subject worldwide. According to A R D Prasad of Indian Statistical Institute in Bangalore, who developed the Harvester for SVU Digital Library, every university and research institute maintains a digital database of dissertations, theses, research papers, scientific papers and journals of a particular subject, which are not [yet] retrieved by ordinary search engines like Google.
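(PS: Harvesters like HDD and OAIster talk to repositories over OAI-PMH, whose requests are plain HTTP GETs and whose responses are XML. Here is a minimal sketch of the two halves -- building a ListRecords request and pulling Dublin Core titles out of the response -- using only the Python standard library. The repository URL and the trimmed sample response below are illustrative, not taken from SVU's actual service:

```python
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core element namespace

def list_records_url(base_url: str, metadata_prefix: str = "oai_dc") -> str:
    """Build an OAI-PMH ListRecords request URL."""
    return f"{base_url}?verb=ListRecords&metadataPrefix={metadata_prefix}"

def extract_titles(oai_xml: str) -> list:
    """Pull Dublin Core titles out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [t.text for t in root.iter(f"{DC_NS}title")]

# Illustrative two-record response (heavily trimmed from a real envelope).
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <dc xmlns="http://purl.org/dc/elements/1.1/"><title>Linkrot in LIS journals</title></dc>
    </metadata></record>
    <record><metadata>
      <dc xmlns="http://purl.org/dc/elements/1.1/"><title>Open access and citation impact</title></dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url("http://repository.example.edu/oai"))
print(extract_titles(sample))
```

Because every repository answers the same handful of verbs with the same XML envelope, a harvester written once can aggregate metadata from any compliant repository worldwide -- which is exactly what makes services like OAIster possible.)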
Jason Mazzone, Copyfraud, Brooklyn Law School, Legal Studies Paper No. 40, August 21, 2005. (Thanks to Klaus Graf.)
Abstract: Copyright in a work now lasts for seventy years after the death of the author. Critics contend that this period is too prolonged, it stifles creativity, and it undermines the existence of a robust public domain. Whatever the merits of this critique of copyright law, it overlooks a more pervasive and serious problem: copyfraud. Copyfraud refers to falsely claiming a copyright to a public domain work. Copyfraud is everywhere. False copyright notices appear on modern reprints of Shakespeare's plays, Beethoven piano scores, greeting card versions of Monet's water lilies, and even the U.S. Constitution. Archives claim blanket copyright to everything in their collections. Vendors of microfilmed versions of historical newspapers assert copyright ownership. These false copyright claims, which are often accompanied by threatened litigation for reproducing a work without the "owner's" permission, result in users seeking licenses and paying fees to reproduce works that are free for everyone to use. Copyfraud also refers to interference with fair uses of copyrighted works. By leveraging the vague fair use standards contained in the Copyright Act and attendant case law, and by threatening litigation, publishers deter legitimate reproduction of copyrighted works, improperly insisting on licenses and payment of fees. Publishers wrongly contend that nobody may reproduce for any reason any portion of a copyrighted work, without the publisher's prior approval. These circumstances have produced fraud on an untold scale, with millions of works in the public domain deemed copyrighted, and countless dollars paid out every year in licensing fees to make copies that could be made for free. Copyfraud stifles valid forms of reproduction and undermines free speech. Copyfraud also weakens legitimate intellectual property rights. 
Congress should amend the Copyright Act to allow private parties to bring civil causes of action for false copyright claims, and to specify as a statutory matter that copying less than five percent of a single copyrighted work is presumptively fair use. In addition, Congress should enhance more generally protection for the public domain, with the creation of a national registry listing public domain works, a symbol to designate those works, and a federal agency charged with securing and promoting the public domain. Failing a congressional response, there may also exist remedies under state law and through the efforts of private parties.
(PS: At a conference last year, I proposed civil damages for infringing the public's right to use the public domain, and I'm very glad to see a law professor take up the idea in all seriousness.)
Daniel Terdiman, Wikibooks takes on textbook industry, News.com, September 28, 2005. Excerpt:
[T]he Wikibooks project...[is the] attempt [by Jimmy Wales and the Wikimedia Foundation] to create a comprehensive, kindergarten-to-college curriculum of textbooks that are free and freely distributable, based on an open-source development model. Created in the same mold as the Wikipedia project --the open-source encyclopedia that lets anyone create or edit an article and that now has nearly 747,000 entries in English alone-- Wikibooks is still in its earliest stages. Yet because of Wikibooks' digital model, in which material written for the project can be as short or as long as needed, and be easily manipulated, read and edited, Wales and others believe it can pose a major challenge to the publishing industry's hold on the world of textbooks. "The purpose is really contained in the word 'freely licensed,' which is to make available to anyone in the world, in any language, a curriculum that they can copy, redistribute and modify, for whatever purpose they may have, for free," Wales said. The publishing industry is "going to have to recognize that there's a fundamental shift in the marketplace," he added. "Some of them will prosper. Some of them will figure out the new regime and find out ways to add value. Others will stick their heads in the sand and get slaughtered."...Today, Wikibooks contains 11,426 submissions. The topics covered range from biology to economics in New Zealand. Because the books are digital and open source, any teacher can decide to assign one and simply point students to PDFs they can print. But Wales is the first to acknowledge that the project is several years away from maturity. "It's still a young project," he said. "I would consider it to be mission accomplished when we could point and say, 'Well, you could teach yourself, or someone could teach you using these materials, (anything) from the kindergarten to the university level.'" Naturally, Wikibooks isn't the only effort to amass a vast collection of digital books. 
Google has been building its library and print projects since last year. But where Google's project is a digital database of often copyrighted works, Wikibooks' material is all work that has been made free to the public.
Invitrogen has announced an "Open Access policy" for its latest software. From today's press release:
Invitrogen Corporation..., a leading life sciences company with a broad portfolio of technologies to improve and accelerate biomedical research, drug discovery and commercial bioproduction, today announced the release of its latest technology and eScience contribution to the life sciences, Vector NTI Advance(TM) 10. Centering on its award-winning Vector NTI(R) sequence analysis software, Vector NTI Advance(TM) 10 is the latest Windows(R) version of this application. To coincide with the release, Invitrogen is also launching the Vector NTI(R) Open Access policy and the online Vector NTI(R) User Community. The Open Access policy allows researchers in not-for-profit laboratories to obtain their own annual, renewable licenses of Vector NTI(R) at no cost. The Vector NTI(R) User Community is designed to be the online meeting place for researchers to acquire the latest versions of the software, obtain technical resources, and ultimately communicate with Invitrogen and other users on all aspects of the software and its uses.
(PS: I don't normally blog news about free software. It's a good neighbor, but I already have more than enough OA news to cover. I blog this news, though, to show how the term "open access" is being used for software when "free" and "open source" don't quite apply.)
UNESCO's International Institute for Educational Planning (IIEP) will host a five-week online discussion of open course content in higher education. From the announcement:
From 24 October to 2 December 2005 the UNESCO International Institute for Educational Planning (IIEP) will hold an internet discussion forum on open course content. The forum will explore the concept of open course content, its context, current initiatives, and issues and implications of its use....The issue that this forum will focus on – open course content – is of great importance to higher education institutions because course elements and materials that are freely available on the Internet for consultation, use and adaptation constitute an important resource. If there is little or no awareness of availability, this resource cannot be exploited, and even with awareness of availability, there are challenges and barriers to its effective use.
EPrints, the open-source archiving software project from Southampton University, has launched EPrints Services. From the web site:
Many institutions do not have the resources necessary to build or maintain an institutional repository. The EPrints Services team offers a complete range of advice and consultancy to support institutions who have adopted, or who are looking to adopt, the EPrints solution. We can provide as much or as little support as you need to create and maintain a professional repository....From training through policy formulation to a complete build-and-host solution, our team of experts will ensure your EPrints repository becomes an effective information resource that reflects positively on your institution. We can help you by: training IT or library personnel to set up and run your repository; advising on policy matters, such as how to develop your institutional archiving policy; importing your legacy archives; customising the repository to your specifications; hosting and maintaining your repository; assisting with advocacy and promotion of the repository within your institution; and providing ongoing technical support.
Quoting EPrints Technical Director Les Carr from today's press release:
The launch of EPrints Services is particularly timely. In the UK, the Research Councils (RCUK) have announced that all research council-funded research must henceforth be placed in an institutional repository. Around the world, the success of the open access movement is ensuring that academics and universities want or, increasingly, are required, to make their research universally accessible to the wider community. EPrints provides the original solution for institutional repositories. Now our wide-ranging experience and expertise gained over the years is being channelled into EPrints Services to ensure that new repositories are constructed to suit their institutions, and with appropriate policy and support systems designed as part of the package. We know that IRs increase citations and impact, and can therefore add powerful weight to research status and grant applications. They also enable data-sharing and enhance research opportunities, as well as accelerating the research cycle. But every institution is unique and EPrints Services will ensure that these special features can be translated into a repository that best mirrors the institution.
NB: The EPrints software is still free and open-source, and will remain so. Only the optional consulting services come with a fee.
Brock Read, A New Report Bemoans the State of Online Research on American Literature, Chronicle of Higher Education, September 28, 2005 (accessible only to subscribers). Excerpt:
Whether digitizing out-of-print novels or publishing their own criticism, a growing number of scholars are putting their research on American literature on the Web. But they're not getting much support from their colleges, according to a new report that also serves as a catalog of online literary research. The report, "A Kaleidoscope of Digital American Literature," was released on Tuesday by the Digital Library Federation and the Council on Library and Information Resources. It draws on interviews and case studies compiled by Martha L. Brogan, a library consultant who was once the director of collection development for libraries at Indiana University at Bloomington. Ms. Brogan's study is, first and foremost, a catalog that includes digital collections, bibliographies, oral histories, and other critical material. According to David Seaman, executive director of the Digital Library Federation, the catalog is unprecedented, chiefly because scholarly projects on the Web pop up in a "disjointed" fashion. The report "provides us, I think for the first time, with a fairly comprehensive, current survey of what's out there," Mr. Seaman said, "and that's half the value of the report." The other half, he said, comes from Ms. Brogan's finding that too many book-digitization projects are maintained by scholars as "a labor of love," without any significant support from their college libraries or English departments. That is a disturbing trend, Mr. Seaman said, because it means that many influential scholarly sites have no agreed-upon standards for presenting material, no consistent source of outside funding, and no plan for what happens if a professor quits or suffers a computer breakdown. Perhaps because of those concerns, many humanities scholars say they have little use for online resources, according to the report.
And many young professors are disinclined to conduct digital research projects because online scholarship, which is not often subject to peer review, is seldom considered in promotion and tenure evaluations. "Until the digital age," Mr. Seaman said, "the model in the humanities was one scholar, one carrel, one book. Compare that to other disciplines, where teams of people have co-authored papers all the time. The rise in a sense of community is still quite new in the humanities, and I think digital scholarship has contributed to that."
Harbor Research has published a new report, Designing the Future of Information: The Internet Beyond the Web, examining the Information Commons from Maya Design and Internet Zero from MIT. I'm having trouble downloading the report, so let me quote from a summary by Chris Jablonski in yesterday's ZDNet.
The Information Commons is a universal database to which anyone can contribute, and which liberates information by abandoning relational databasing and the client-server computing model, according to the white paper. It has been under development at Maya Design for over 15 years as the result of a $50 million research contract from DARPA to pursue "information liquidity," or the flow of information in distributed computing environments. Their goal is to build a scalable information space that can support trillions of devices. I spoke today with Josh Knauer, director of advanced development at MAYA Design, about the Information Commons and how it is progressing. According to Knauer, Maya (which stands for Most Advanced Yet Acceptable) is using P2P technology --in the sense of information sharing and not file sharing-- to link together repositories of public and private datasets in Maya’s information space. These data and data relationships are stored in universal data containers called "u-forms," each of which is coded with a UUID, or universally unique identifier. These are the basic building blocks of the company's Visage Information Architecture (VIA), which allows data repositories to effortlessly link or fuse together to achieve "liquidity" (the paper has more details)....When hurricanes Katrina and Rita devastated the Gulf Coast recently, the system made its first bridge to the Information Commons as military officials needed to incorporate publicly available EPA data on toxic hazards in affected communities, said Knauer.
Update. If you had the same trouble I did downloading the report, then try this link. Thanks to several readers for the tip.
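The u-form idea described by Knauer lends itself to a brief illustration. The Python sketch below is a hypothetical rendering of the concept only --a UUID-keyed attribute container whose links to other containers are themselves UUIDs-- and is not MAYA's actual VIA implementation; the class and attribute names are invented for the example.

```python
import uuid

class UForm:
    """Hypothetical sketch of a 'u-form': a universal data container
    identified by a UUID, holding arbitrary attributes and links
    (recorded as UUIDs) to other u-forms."""

    def __init__(self, **attributes):
        self.uuid = uuid.uuid4()        # universally unique identifier
        self.attributes = dict(attributes)
        self.links = set()              # UUIDs of related u-forms

    def link(self, other):
        """Record a relationship to another u-form by its UUID,
        so repositories can reference data without copying it."""
        self.links.add(other.uuid)

# Two repositories can mint u-forms independently; their identifiers
# will not collide, so the datasets can later be merged.
site = UForm(name="EPA toxic-hazard site", state="LA")
report = UForm(title="Hurricane Katrina assessment")
report.link(site)
```

Because every container mints a globally unique identifier, independently maintained repositories can later fuse their u-forms without key collisions, which is the gist of the "liquidity" the white paper describes.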
T. Scott Plutchak has blogged some personal observations of the meeting that produced the Salvador Declaration on Open Access. Excerpt:
There was a greater sense of urgency to the open access discussions here than I typically find in the US. It was a stark reminder of just how great the gulf in access to information between the developed and developing worlds still is, operations like HINARI notwithstanding. The delegates from Africa and Asia and throughout Latin America perceive themselves to be so far behind the US and Europe, and they see better access to information as a key element in economic and social development. A system of open access seems even more self-evidently the right thing to do for many of them than it does to the PLoS folks. And when you look at it from their perspective, the need is pretty compelling. What I did not see discussed, however, was much in the way of practical approaches to getting there, and of course that's the hang-up. The Salvador declaration calls upon "governments to make Open Access a high priority in science policies" and on "all stakeholders in the international community to work together to ensure that scientific information is openly accessible and freely available to all, forever." The devil is in the details, and there is a far greater sense here that government ought to just "solve" the problem than one sees back home.
Mia Garlick, Getting a Reasonable IP Education, Creative Commons Blog, September 28, 2005. Excerpt:
A lecture on Creative Commons will form part of the induction training programme for incoming graduate research students at Goldsmith's College, University of London, this week. Andrea Rota, who is a member of the Liquid Culture project at Goldsmith's College, will be giving the lecture on "A range of protections and freedoms for researchers, authors and artists" as part of the scheduled activities for new graduate research students in induction week. Of course, the lecture materials themselves are available under a Creative Commons Attribution-ShareAlike 2.5 license. The induction training programme includes sessions dedicated to copyright and IP issues for research students....A welcome initiative bringing balance back into education about intellectual property issues.
(PS: This is a great idea. All graduate departments ought to have a similar workshop on open access, focusing on how OA helps authors and how authors can provide it for their own work.)
Outsell has published its latest MarketView report, Scientific, Technical and Medical Segment 2005 - Global Industry Faces Growth Challenges. Save the $3,000 cost: it does not discuss OA, at least according to the detailed table of contents.
CODATA is proposing a Global Information Commons for Science (September 1, 2005). Excerpt:
The Global Information Commons for Science Initiative would be a multi-stakeholder undertaking that would be launched as an outcome of the second and final phase of WSIS. It would leverage the strengths of a diverse coalition for the purpose of raising awareness on the part of the actors and increasing the effectiveness of the activities directed to facilitating various methods of open access and re-use of publicly-funded scientific data and information, and to promoting cooperative sharing of research tools and materials among researchers. The Initiative would not duplicate existing efforts. Rather, it would provide a shared global platform for members to promote existing initiatives, broker new ones where more effort is needed, build partnerships and share experience, and develop and publicize principles, guidelines and best practices. The Initiative would be open without fee to active participation from government agencies, universities and other institutions of higher education, non-governmental organizations and not-for-profit research institutes, the private sector, international and intergovernmental organizations, and civil society. To become a partner, organizations would need to make a commitment at the CEO/leader level to undertake one or more activities that contribute to the stated goals of the Initiative. Partners would contribute according to their own circumstances and capacities; each partner would maintain equal and independent status, while working toward agreed common objectives. The key elements and actions necessary to establish and implement this initiative will be identified and developed by interested stakeholders between now and the Tunis summit. Separate funding would be sought to support a secretariat and related information dissemination and coordination services. Actual establishment and implementation of the proposed initiative would commence in 2006.
CODATA drafted the proposal in light of responses to the workshop, Creating the Information Commons for e-Science: Toward Institutional Policies and Guidelines for Action (Paris, September 1-2, 2005). The workshop was organized by CODATA with sponsorship from ICSTI, ICSU, INASP, TWAS, and UNESCO, and collaboration of the OECD. Kathleen Cass, CODATA's Executive Director, says that the proposal will be further developed between now and the November WSIS meeting in Tunis, and welcomes comments.
The participants at the 9th World Congress on Health Information and Libraries, Commitment to Equity (Salvador, Bahia, Brazil, September 20-23, 2005) have issued two declarations.
Some of the potential implications of the Open Access movement for the operation of academic libraries are explored in a current article. Nothing is online yet, but ACRL members may retrieve the full text when the issue is posted to the College & Research Libraries website.
Krista D. Schmidt, Pongracz Sennyey, and Timothy V. Carstens, "New roles for a changing environment: implications of open access for libraries," College & Research Libraries 66(5): 407-16, September 2005. Abstract: This article examines the likely implications of open access on library operations. The context of the examination takes place assuming that the traditional model of publication and open access coexist. Open access presents numerous challenges and opportunities, but entrepreneurial libraries will find new ways to serve their patrons in the new mixed open-access-traditional (MOA) environment. In order to do so, these libraries will need to redesign their organization and this can be expected to stretch both monetary and human resources.
Thanks to Ray Corrigan for finding and rekeying this terrific passage from William St. Clair's book, The Reading Nation in the Romantic Period, Cambridge University Press, 2004 (last page of the last chapter):
The argument that intellectual property is a privilege granted for a limited period in order to reward and encourage innovation that is valuable to the society that grants it is as valid today as it was in Adam Smith's time. The conditions within which the privilege should be granted are therefore an issue of public policy, which ought to be decided, not in accordance with dogmas about the rights of property, but with eyes open to the public interest in the likely consequences. When, for the first time in history, copies of texts of all kinds can be reproduced and circulated instantaneously in limitless numbers at infinitesimal cost, it is perverse that much of the technological and business effort of the text copying industries is devoted to preventing copying and to keeping up the price of access.
Gavin Shear and Karim Kassam, Transferring Structures from PubChem to ACD/ChemSketch, Advanced Chemistry Development (ACD), n.d. Detailed instructions for working chemists who need to find or download structures from PubChem, plus this more general observation:
The ability to easily and accurately transfer structures from the PubChem database, as well as other sources into ACD/ChemSketch provides increased access to chemical information. The chances of costly errors due to inaccurate structure translations are greatly reduced. As well, the fact that chemical information can easily be managed, exploited, and distributed via ChemSketch and other ACD/Labs software is conducive to improved productivity. Once structures are in the ACD/ChemSketch interface, they can be utilized readily by the multiple prediction and databasing tools offered by ACD/Labs as well as with other applications such as other cheminformatic systems and common reporting applications like Microsoft Word or Adobe® PDF.
The AZo Journal of Materials Online --AZojomo for short-- is a new peer-reviewed, open-access journal from AZoM.com. For more details, see the press release. What's most notable about AZojomo is its use of the Open Access Rewards System (OARS). Basically, readers pay no subscriptions or access fees and authors pay no processing fees. But the journal content is only free for non-commercial uses. Revenue from advertising, sponsorships, and commercial use will be divided among the authors (50%), referees (20%), and site operators (30%). It's a neat idea. But why did AZoM.com patent it? Is that compatible with the kind of sharing to which AZoM and AZojomo are clearly committed?
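The OARS revenue division reduces to simple arithmetic. The Python sketch below is a hypothetical illustration of the 50/20/30 split described above; the function name and the integer-cents convention are my own, not AZoM.com's.

```python
def oars_split(revenue_cents):
    """Divide commercial-use revenue per the Open Access Rewards System:
    50% to authors, 20% to referees, 30% to site operators.
    Amounts are in integer cents to avoid floating-point rounding."""
    authors = revenue_cents * 50 // 100
    referees = revenue_cents * 20 // 100
    # Give operators the remainder so the three shares always sum exactly.
    operators = revenue_cents - authors - referees
    return {"authors": authors, "referees": referees, "operators": operators}

# $1,000.00 of commercial-use revenue:
shares = oars_split(100000)
```

Assigning the remainder to the last share means the split is exact even when the revenue is not evenly divisible.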
Annaïg Mahé, Libre accès à l'information scientifique : contexte et enjeux ("Open access to scientific information: context and issues"). A detailed, link-filled survey of OA issues and initiatives (in French). Written in June 2005, archived today.
Richard Ackerman at Science Library Pad is blogging summaries of selected presentations at the Info Grid conference (Copenhagen, September 26-27, 2005). He's already done a few that are OA-related, such as Carl Lagoze speaking on the NSDL and the evolution of digital libraries; Sandy Payette speaking on Fedora's service-oriented repository architecture; and Anurag Acharya speaking on searching scholarly literature with Google Scholar.
Yesterday on C-SPAN, Brian Lamb interviewed Jimmy Wales, the founder of Wikipedia. The video and transcript are now online.
The Institute for the Future of the Book has launched next\text, a project "to encourage the creation of born-digital learning materials that enhance, expand, and ultimately replace the printed textbook." The web site doesn't say so, but the focus seems to be on learning materials that are both multimedia and open access. From the about page:
There are two main components of the next\text project, this website being the first: an ongoing showcase of some of the most significant digital learning projects in the field. The experiments represented here are varied. Some do not constitute complete "textbooks" in themselves, but rather, individual strains of development, that, when taken together, begin to create a new idea of what a textbook can be. This idea contains many facets, which include, but are in no way limited to: "expanded" multimedia textbooks; open textbooks continually improved by teachers and students; dynamic, networked textbooks with live or regularly updating components; and multi-user playspaces and games. The curated site will serve as the planning stage for the second component: an invite-only meeting to be held sometime in the coming year where the innovators behind these projects will meet and engage....But first, we'll focus on laying out the pieces here on this site, and for this we need your help. In order to get the fullest sense of what is possible, next\text hopes to draw on the collective intelligence of the community --educators, publishers, designers, students-- to not only identify the most important developments in digital learning, but to grapple with the big questions that must be asked as we make this shift. How will the textbook of the future be owned and distributed? What new mechanisms must be developed for warranting authority in a more fluid matrix? How do we build a critical framework for multimedia scholarship?...Readers are invited to comment on showcased work, to suggest other projects of interest, and to join discussions and introduce topics on our forums. We encourage readers to bring their insight to bear on this process. (Thanks to Academic Commons.)
OECD report identifies policy options to promote digital scientific publishing, CORDIS News, September 26, 2005. An unsigned news story. Excerpt:
A report from the Organisation for Economic Cooperation and Development (OECD) has outlined a number of policies and initiatives which, it says, could enhance the digital delivery of scientific and technical information....While the report notes that the scientific publishing industry has taken a lead in the digital delivery of content (with an estimated 75 per cent of scholarly journals available online in 2003), it also recognises that certain advances in digital technology could conflict with some existing business practices and models. 'The key issue is whether there are new opportunities for science communication systems to better serve researchers [and] communicate and disseminate research findings to users,' say the authors....After analysing the three main emerging digital delivery business models --aggregated journal content paid for by the subscriber (the 'Big Deal'), author-paid open access publishing, and open access institutional archives and repositories-- the report says that in the immediate future there is likely to be 'a period of experimentation around the 'author pays' version of open access publishing, combined with the emergence of a range of hybrids based around mixes of subscription-based and different forms of open access.' However, it goes on to predict that any changes in the current system of research journals and peer review will depend on factors such as the changing needs of researchers and the impacts of eScience, the opportunities offered by rapidly developing ICTs, and the underlying economic characteristics of information. Nonetheless, there may be opportunities to develop new systems that serve researchers, research users and research funders more effectively, and which increase returns on R&D investment and enhance innovation, and the report identifies a number of areas where governments and other stakeholders can help to maximise such opportunities.
For example, it stresses that: 'Access to public and government-funded research content is a crucial issue, and there is considerable potential for governments to provide a lead in enabling digital delivery and enhanced access to publicly funded scientific and technical information.'
Stevan Harnad, From Hypertext to Hyperloquy, a preprint.
Abstract: The human mind is capable of something even better than hypertext: “hyperloquy.” Hypertext was the idea that texts and parts of texts could be linked to one another in the online medium. It led to the miracle of the worldwide web, but there is still something static about a hyperlinked textual world. Something is missing that comes close to the essence of the human capacity for thought, language, and colloquy: Quote/commentary brings the text alive and puts one into public dialogue with the author. Its interactive flavor -- plus the publicly visible and accessible nature of the interaction, with the everpresent possibility for any other skyreader to join in the skywritten quote/commentary – engage the ancient oral interactive powers for which our brains were specifically adapted in ways that neither the oral nor the written medium has ever before been able to engage: skywriting at the speed of thought, with near real-time interactions between not only minds and minds, but minds and texts: hyperloquy.
Xeni Jardin, You authors are saps to resist Googling, Los Angeles Times, September 25, 2005. Excerpt:
Perhaps the Authors Guild members would prefer that search companies pay them for the right to build book search services. If Google has its way, their logic goes, we'll lose control over who can copy our work, and we'll lose sales. But Internet history proves the opposite is true. Any product that is more easily found online can be more easily sold. Amazon.com's "look inside" feature works similarly. And, surprise, the Authors Guild has squabbled with it too. If the paranoid myopia that drives such thinking penetrates too deeply into the law, search engines will eventually shut down. What's the difference, after all, between a copyrighted Web page and a copyrighted book? What if Internet entrepreneurs could sue Google for indexing their websites? What if the law required search engines to get clearance for every Web page? Even a company as large and well-funded as Google couldn't pull that off because what's on the Internet, and who owns that content, changes constantly. As one author told me, "fear of obscurity, not digital indexing, is what keeps most authors awake at night." Technology that makes it easier to find, buy and read books is good for everyone — even the authors suing Google.