News from the open access movement
Astara March, Journals offer NIH wider research access, UPI, October 29, 2005. The gist of the proposal is for the NIH to link to articles at publishers' sites rather than host its own copies. Excerpt:
Martin Frank, DC Principles' coordinator, told United Press International the offer was made because the NIH in February 2005 requested that all scientists supported by the agency's funding send their articles to an NIH archive after the articles were accepted for publication in a scientific journal, but before the journal had a chance to copy-edit the work....Frank said the DC Principles Coalition is concerned that problems will result from edited and unedited versions of the same research existing at the same time; that copyright violations will occur, because the process to ensure articles are not published before their embargo date is not strict enough; and that creating an unnecessary archive will divert funds from needed research. All of the participating non-profit journals offer free access to their contents, from right away to 12 months after publication, and these [timelines] would not change, but if the NIH accepts the organization's offer, only copy-edited articles would be released and there would be no problem with copyright issues....Dr. David Lipman, director of the National Center for Biotechnology Information at the National Library of Medicine, which runs the NIH's PubMed and PubMed Central, said he was thrilled by the DC Principles proposal. "I think it's wonderful," Lipman told UPI. "What could be bad about it? There's no other side. Of course, we'd rather have the copy-edited article. Our only concern is to make our archive as large as possible and crosslink it in every way we can." Lipman called this "such an exciting time in science." He said cross-linking information provides "amazing" capabilities. "We want to make sure that scientists can not only find what they're looking for, but find that extra thing they didn't know about that allows new connections to be forged and new discoveries to take place," he said.
"To make that easier, we are trying to add sidebars to our databases like the ones in Google and Amazon that suggest similar books or products of interest. We hope to suggest related data that might interest scientific investigators." Lipman said PubMed, which contains article abstracts, and PubMed Central, which contains full-text articles from both national and international journals, are currently unique in the world. He added, however, that the NIH has been approached by the governments of Britain, Italy and South Africa, all of whom want to create archives of their own and coordinate them with the one at NIH. "PubMed and PubMed Central host 1.5 million users and distribute 2.25 terabytes of data every day," he said. "DC Principles' offer will increase both those figures considerably. We welcome these new journals."
(PS: If I'm reading the two positions correctly, the DC Principles Coalition wants links instead of copies at the NIH, and David Lipman wants links as well as copies at the NIH. They're not yet on the same side.)
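For scale, Lipman's traffic figures work out to a modest per-user average. A minimal sketch of the arithmetic; the division is mine, not from the article, and assumes decimal (not binary) terabytes:

```python
# Implied average daily transfer per user, from the figures quoted above:
# "1.5 million users and ... 2.25 terabytes of data every day".
users = 1_500_000
tb_per_day = 2.25                # terabytes per day, decimal convention

per_user_mb = tb_per_day * 10**12 / users / 10**6
print(f"{per_user_mb:.1f} MB per user per day")  # 1.5 MB
```

At about 1.5 MB per user per day, the average visit looks like a handful of abstracts or a single full-text article, which fits a service whose larger component (PubMed) serves abstracts.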
Walaika K. Haskins, Microsoft Writes a New Chapter with Online Book Search, Top Tech News, October 28, 2005. Excerpt:
Not to be outdone by its competitors, Microsoft announced plans this week to launch its own book search engine. The company's MSN Search, like Yahoo before it, has joined the Open Content Alliance to make publicly available print materials accessible on the Web....Analysts say that MSN is doing more than just tweaking Google's nose. The ability to archive, search, and retrieve information from offline content is in high demand among businesses and consumers, said Mukul Krishna, program manager for digital media at Frost & Sullivan. "It is a very valuable tool that is moving from a nice-to-have to a must-have for enterprises and [it is] beginning to be a need for consumers." At the same time, however, Krishna suggested that Microsoft stepped into book searching in part because of the legal battle Google faces before it can get its Library Project up and running. "This is a market that [Microsoft] definitely wants to pursue and the cream is that they might get one-upmanship over Google," Krishna said.
The International Publishers Association (IPA) and PEN USA have issued a joint declaration condemning Google Library (October 20, 2005). The online version is an image file and I don't have time to rekey any of it. The basic complaint is similar to that of the Authors Guild and AAP: permission is required for this kind of copying. The joint declaration does not assert that the project will harm sales.
The Soft Skull Press, a small publisher and member of the AAP, has released an open letter to the AAP explaining why it cannot agree with its position on Google Library. (Thanks to Boing Boing via Issues in Scholarly Communication.) Excerpt:
I have a bit of a dilemma here actually as I vehemently disagree with the AAP’s position. This happens to be an issue I’ve studied very closely (we have a number of books under contract on the subject of intellectual property)...and it’s not even, unfortunately, something I can just keep quiet about disagreeing with either. But I don’t want to put you or the SIP Committee in a difficult situation -- in part because I believe the AAP’s position on this is particularly harmful for small and independent publishers....Is it sufficient, when publicly declaiming and doing the things I do, to simply not refer to my membership of the committee? How does the AAP handle this kind of situation (I’m sure I’m hardly the only AAP member who has disagreed with the AAP’s position on a given topic)?...Over the last year, I have made a substantial commitment of my company’s time and energy to fighting what I consider to be an intellectual property land grab more significant than the actual 19th century land grab (recognizing that the expansion of trademark protections is probably more egregious than that of copyright). Fair Use is withering and its defenders are relatively few and dramatically under-resourced -- I am adding Soft Skull to their number, for what that’s worth. I have several books under contract dealing with different aspects of intellectual property rights, and they would largely be aimed at, inter alia, defending fair use, expanding the use of licenses such as the Creative Commons, and returning the copyright term to the original 28 years, a la the Founders’ Copyright movement. Given this, it would be impossible for me to remain silent when my peers are adopting an approach completely contrary to what our books will be advocating. It is incumbent on me to find whatever soapboxes I can, and try to make as strong a case as I can to persuade that majority to change their position.
I’ve inveighed against both the music and film industry for their shortsightedness in interviews and panel discussions in the past—I would be justifiably branded a hypocrite for failing to do so in my very own industry....[T]he long-term future of American publishing depends fundamentally on the quality of the content that we produce and sell. An excessive focus on the ownership of that content, and on restricting how others might make alternative uses of that content will seriously impinge on the value of that content over decades. A hyperbolic but nevertheless accurate example: Shakespeare’s plays would be impossible to publish under the present conditions....The fundamental goal of copyright in the Constitution is not to confer an absolute property right but rather to stimulate cultural production: a limited property right being a means to that end, rather than an end in itself. Thus we are always intrinsically talking about relative values, trade-offs, balancing acts, etc. Having the world’s books available in searchable and granular format online is a tremendous boon to the culture, and will result in more and better books. 
Again and again, in comments issued by publishers and authors, by the AAP and the Authors Guild, there is a profound failure to perceive that such rights are not absolute property rights, but relative property rights, issued provisionally to achieve a larger social purpose....I’m not going to address the business issues here except to note that I would love to see empirical data that suggests that the value of our intellectual property would be diminished by its availability in the proposed Google Print for Libraries format...Amazon's Search Inside and Google Print for Publishers both seem to suggest the opposite....[While] I accept that the AAP has to represent the expressed interest of a majority of its members, I do hope that it will not be represented to the public that the AAP is riding to the rescue of its smallest members -- it would be just a little too over-the-top.
The Research Libraries Group (RLG) has joined the Open Content Alliance (OCA). Excerpt from its press release (October 27):
RLG, a not-for-profit organization of over 150 research libraries, archives, and museums announced today that it will be a contributor to and partner with the Open Content Alliance (OCA) (www.opencontentalliance.org), a consortium that is building a permanent archive of digitized text and multimedia content. Generally, textual material from the OCA will be free to read, and in most cases, available for saving or printing using formats such as PDF....RLG's immediate role in this initiative will be to supply the bibliographic information needed to aid in materials selection and description for these searches. RLG's Union Catalog is the premier source of bibliographic descriptions for use in research and research collections management. With records for over 48 million titles, it provides coverage across subjects and material types in almost 400 languages. Brewster Kahle, digital librarian for the Internet Archive, says, "RLG's help will bring critical expertise and relationships to this ambitious project. Using the RLG Union Catalog to keep track of what has been digitized, cataloging it well and then making it available to all will make sure that the joint efforts of many libraries are widely shared." James Michalko, president of RLG, adds, "We are committed to the OCA vision. RLG's member institutions want to build out the collective digital library to ensure that scholarship and research is innovative and productive in the future. RLG has a long history of working with its own members to further broad-based initiatives."...Says Michalko, "The OCA can be a rallying point and delivery focus for the long-term efforts of cultural institutions to create a resource that reflects the needs of scholars and students and honors the values of research."
ALPSP and the Kaufman-Wills Group have released a post-publication addendum to their October 11 report, The Facts About Open Access. The addendum is devoted to peer review and contains new data and analysis not in the original report. Among other things, it acknowledges that BMC journals use external peer reviewers (pp. 4-5), correcting the original report. Moreover, "Once ISP journals were removed from the calculations, the percentage of Full Open Access journals relying on editorial staff for peer review [rather than external reviewers] dropped from 28% to 4%. Thus, without ISP journals in the mix, the percentage of Full Open Access journals relying on editorial staff for peer review was not significantly different than any other cohort" (p. 2).
Siva Vaidhyanathan, Derek responds on Google: 'not a library, but so what?' October 27, 2005. Excerpt:
First, let me assert once again that the Google of 2025 most certainly will not resemble the Google of 2005. It might not even exist....OK. I will go farther than Derek on one point: copyright holders would suffer absolutely no harm from Google Library. But that's not the point. Courts don't care about real harm. They care about potential harm to potential markets, and they take the word of the plaintiff to be the last word on the question....Saying "where is the harm?" (the Google corporate refrain) reveals an unwillingness to recognize the extent to which real-world effects influence most federal judges in copyright cases: not at all. Congress has made it clear that it does not care about real-world effects. Courts have as well (see the 9th Circuit in Napster or the Supreme Court in Grokster). All Congress cares about is the trump cards it can deal out to copyright holders. And courts tend to respect that even if it makes no sense or harms the public (see Eldred). So just as the widespread worship of Google baffles me, the widespread faith in the reasonableness of courts (especially SDNY and the 2d Circuit) baffles me more. Have we not learned any hard lessons from the last few years? How often since Feist has a federal court shown that it "gets it"? Certainly, in Kelly. But Kelly is no longer settled law. Google Print/Library might kill Kelly....I can't believe I have to remind anyone of this: DRM, nondisclosure, and patents destroy competition. That's why we have them. They are what Google depends on to do its job. These are not trivial problems. These are not neutral technologies. There are great complications and problems here. We should not be blind to them. Again agreeing with Derek -- "a Google loss would choke off competition." Exactly. Before Google loses, there is a crowding-out effect. After it loses, there will be a chilling effect. Meanwhile, publishers fear that a market that Amazon created for them -- "search inside the book" licensing -- will evaporate.
Worse, of course, is possible. A bad loss threatens everything we hold dear about the Internet. And I am still waiting for anyone (Derek, Michael, Larry?) to come to terms with the privacy problems here. As Julie Cohen and Sonia Katyal have shown us, digital copyright and surveillance are intricately linked. What is Google doing to prevent anyone from snooping on our reading habits?...So to review: a Google win (unlikely as it is) would choke off competition. A Google loss would choke off competition. And we are unlikely to get the really cool public library text-search index we deserve in any case. This remains a good dream and a bad deal all around.
Dave, google: there can be only one...? simply blog, October 27, 2005. Excerpt:
to be honest: the open source & open access movements aren't happening because "the Internet wants to be free", nor because of any other flowery notion of transparency or other Web 2.0 buzzwords. rather, openness has power because competition & innovation happen faster & better with open access to structured data, which in turn leads to more desirable web applications & destinations for users to experience.
Ben Charny, Google Won't Shelve Google Print, PC Magazine, October 28, 2005. Excerpt:
Google Inc.'s online book project will take an important step forward next week despite an increasingly nasty legal fight over the company's plans. On Tuesday, the Internet search giant will resume scanning into its database a large number of library books that are subject to copyright laws. It stopped doing so in August, following threats by publishers, to give copyright holders ample time to remove the works if they saw fit. Google has also since changed its own policies to let copyright holders opt out any time....Google is serving notice of its intent to see its Google Print project through despite legal obstacles, which has a profound impact on other online library projects and on the shape of future copyright law. "We're so determined to pursue Google Print, even though it has drawn industry opposition in the form of two lawsuits," Google Vice President David Drummond recently wrote on Google's Weblog. "We're dedicated to helping the world find information, and there's too much information in books that cannot yet be found online. We think you should be able to search through every word of every book ever written, and come away with a list of relevant books to buy or find at your local library," he adds.
Tara Calishain, Microsoft Jumps on the Book Indexing Bandwagon, ResearchBuzz, October 26, 2005. Excerpt:
Microsoft has announced that they're going to launch MSN Book Search. It'll launch next year. In conjunction with that, MSN has announced its intention to join the Open Content Alliance, which would align them with Yahoo, etc. (You know what this reminds me of? The beginning of a Risk game where you're deciding what countries you want and placing all your armies.)...There are so many other types of offline content that would benefit from this. Census and other genealogy-related records. Government publications and records. Out-of-copyright periodicals. Rankle: "MSN Book Search will help address the fact that over 50 percent of people's online queries go unanswered today on search engines, according to internal Microsoft(R) research." Peh. This statement presupposes that the questions that are going unanswered will be found in public domain books, which I'm not sure is true. (I'm not even sure that they're going to "help".) And if that is true, how come there still isn't anybody who's coming forward to work with the masses of old books which have already been digitized and made available online? Cornell alone has done massive amounts with mathematics and home economics books. Why are all the digitization initiatives starting over and reinventing the wheel? WHY? I'm sure that these folks have the best intentions in mind. Even so, these declarations and big plans remind me, uncomfortably, of 1998. And the questions they generate remind me of a soap opera. Can MSN and Yahoo work together within the Open Content Alliance? Will Google be isolated and marginalized over this one issue? (Seems unlikely.) Is Ask Jeeves destined to become the Switzerland of online content generation? Will the smaller search engines be swallowed up or will they take stances? The OCA's open to everybody, isn't it?
Maria Anna Jankowska, A Library's Contribution to Scholarly Communication and Environmental Literacy: The Case of an Open-Access Environmental Journal, The Serials Librarian, 49, 4 (2006). Prepublication. Not even an abstract is free online, at least so far.
BioMed Central has publicly released its October 27 letter to Lord Sainsbury of Turville, Science Minister of the UK. Excerpt:
Last week, when giving testimony to the House of Commons Science & Technology Committee, you were asked for your opinion of the proposed position statement on open access from Research Councils UK, a document that expresses strong support for a move towards open access. In your response, you [suggested] that open access was in decline, saying: "I think we have seen a peak in the enthusiasm for open access publishing and a fall-off in people putting forward proposals for it because some of the difficulties and costs are now becoming clear." This suggestion of a decline in interest in open access publishing is not at all supported by the available evidence, and simply does not reflect what is happening in scientific publishing. BioMed Central Limited is the world's leading open access publisher. In the third quarter of 2005, BioMed Central's manuscript submissions were up 56% compared to the previous year, a growth rate far exceeding that of the science publishing industry as a whole. Public Library of Science, a leading US-based open access publisher, has experienced similarly rapid growth. Every month, new groups of scientists and societies approach BioMed Central to start open access journals, or to convert their existing journals to an open access model....Blackwell Publishing introduced Online Open, an open access experiment for 30 journals, in February 2005. Oxford University Press, which has already converted some journals to open access, launched Oxford Open in May this year. Springer, the world’s second largest STM publisher, has offered an open access option (Springer Open Choice) for its 1,450 journals since May 2004, and just two months ago hired Jan Velterop as its Director of Open Access....
The Bioinformatics Organization "is seeking nominees for the 2006 Benjamin Franklin Award in Bioinformatics." More from today's press release:
This will be the fifth presentation of the bioinformatics humanitarian award, which is given annually by the members of the Organization to an individual who has worked to promote open access to the materials and methods used in the field. The Award is named for Benjamin Franklin (1706-1790), one of the most remarkable men of his time. Scientist, inventor, statesman, Franklin freely and openly shared his ideas and refused to patent his inventions. It is the opinion of the founders of Bioinformatics.Org that he embodied the best traits of a scientist, and we seek to honor those who share these virtues. The ceremony for the presentation of the Award will be held during the Sixth Annual Meeting of Bioinformatics.Org, to be held in conjunction with Bio-IT World's 2006 Life Sciences Conference + Expo in Boston, on April 3-5, 2006. It involves a short introduction, the presentation of the certificate, and the laureate seminar. Past laureates include Ewan Birney, Lincoln Stein, James Kent, and Michael Eisen. Nominations can be made by any member of the Bioinformatics Organization, and membership is free.
Jeffrey Young, Microsoft, Joining Growing Digital-Library Effort, Will Pay for Scanning of 150,000 Books, Chronicle of Higher Education, October 27, 2005 (accessible only to subscribers). Excerpt:
With a $5-million commitment, Microsoft's MSN Search division is joining universities and its online-search rival Yahoo in a consortium dedicated to scanning millions of public-domain books. The company's pledge will pay for the scanning of 150,000 volumes. The consortium, called the Open Content Alliance, was announced three weeks ago....Since then, 14 more universities have also joined, promising to contribute money, books, or services to the project. The original members of the alliance include the University of California, the University of Toronto, and several archives and technology companies....Danielle Tiedt, a general manager at MSN, said the company believed it was a good idea to join with rivals in the alliance to scan books, so that books would not be scanned repeatedly by competing companies and so that Microsoft could focus its energies on improving its search technology. The company also announced that it would create a new service, called MSN Book Search, that is scheduled to begin next year....The company has not yet decided which books it will scan, Ms. Tiedt said, adding that the books might belong to institutions that are not members of the alliance. "We're committed to 150,000, but we're completely free to choose where those books come from," she said. "We're not constricted to just working with people within the Open Content Alliance." That rival companies are joining the alliance is evidence that the project is "fundamentally a mechanism of sharing," said Brewster Kahle, director of the Internet Archive, a nonprofit digital library that is coordinating the book scanning for the alliance. "An open library allows lots of people to participate," he added.
The Welsh Informing Healthcare program has launched an Access to Knowledge (A2K) project. Excerpt:
A2K is a project working to ensure healthcare staff in Wales have easy access to healthcare knowledge and evidence, and have the tools they require to retrieve such evidence. Healthcare staff in hospitals, GP surgeries and NHS Dental practices will be able to electronically access the latest and the best evidence to support them as they care for patients....The project is currently building up a picture of what health professionals need in terms of electronic databases, journals, and guidelines, and of where there are gaps in meeting this need. This will inform the procurement of new resources for a National e-library for Wales, building on the current HOWIS e-library. It has now been agreed that the e-library will be built incrementally in four cycles from Sept 2005 to Sept 2006, and the project approach altered to meet this more immediate delivery cycle.
(PS: Good project, bad name. Informing Healthcare should understand that there is already an Access to Knowledge project and that it too uses the initials A2K. It's part of the Development Agenda at WIPO and includes a draft treaty that would require OA to publicly-funded research. It's facing enough obstacles without the new burden of having to explain just which A2K it is.)
Denise Howell, Search Or Seizure? Bag and Baggage, October 26, 2005. This is a very thorough blog posting, by an IP lawyer, surveying the arguments that Google's opt-out policy for Google Library is protected as fair use. She quotes my analysis but also that of more than a dozen others. Clear, careful, and comprehensive.
Lesley Perkins has blogged some notes on John Willinsky's talk on OA last night at the Who Owns Knowledge? conference (Vancouver, October 24-29, 2005). Excerpt:
UBC Professor and Distinguished Scholar John Willinsky spoke, among other things, about the importance and advantages of open access for the public. An engaging and entertaining speaker, and of course, a well-recognized authority and expert on this topic, John summarized open access as "your right to know, particularly about research." John said he hoped to instill in the audience a "sense of entitlement and expectation." When asked what he recommends as ways to convince authors to publish OA, he said we (librarians) can appeal to authors on these 4 grounds:
Economically -- with OA we can distribute information much more cheaply.
Legally -- copyright protects the author.
Ethically -- OA fits with the human right to know.
Vanity -- "It will make their mother prouder if more people know about their research!"
If you have an opportunity to hear John Willinsky speak about open access, take it!
Bourree Lam, Blogging opens new medium for academics, Chicago Maroon, October 25, 2005. (Thanks to Issues in Scholarly Communication.) Excerpt:
Nobel laureate Gary Becker and federal judge Richard Posner, both law professors, launched the high-profile Becker-Posner blog less than a year ago. With two such distinguished academics aboard the blog bandwagon, the question of whether blogging is legitimate academic work and outreach becomes unclear. Becker, who blogs about five hours a week, feels that if blogging takes a lot of time then it will surely interfere with scholarly academic achievements: “Blogs are at best good op-ed pieces,” Becker said. “They are not substitutes for scholarship and research.” His co-blogger, Posner, had similar sentiments. “I think it’s a professional mistake for an untenured academic to do a blog,” Posner said. “It is not academic writing, and it takes time away from academic writing, which should be the entire focus of the untenured in what is currently a highly competitive academic market.” Perhaps blogging is a luxury only for the tenured. Recently, the [U of Chicago] Law School launched its own faculty blog geared toward high-end legal and intellectual content, which is meant as an interactive page rather than a standard blog. “We are in the business of spreading new ideas, and provoking readers,” Saul Levmore, dean of the Law School, said. “A blog seems like an excellent means of advancing this mission.” The question of academic blogging really depends on what kind of blog you keep. “Our blog is not gossipy or informal,” Becker said. Rather, the Becker-Posner blog is an attempt at serious dialogue on important current issues.
Barbara Quint was only able to participate by phone in Tuesday's Googlebrary panel at the Internet Librarian 2005 conference in Monterey. So she sent Paula Hane a short note sketching her take on the issues:
Way back in 2006, Google -- irritated at those lawsuits from the Authors Guild and the Association of American Publishers -- set up Google Press. It then urged the authors of the world to check their publisher contracts, find the books that had gone out of print, with publishing rights reverted to the authors, and send copies of the books (or the ISBNs to match with Google library holdings) to Google Print. In return, Google would promise to deliver saleable e-books back to the authors and direct all interested users to them for sales -- no royalty percentage, ALL the money from each sale going to the author. Five years later (2011?), Google Press had become Google Full Court Press, with imitator services available from the Open Content Alliance, Amazon, et al. Print-on-demand services and outsourced editorial staffs had made Google a major new avenue for book authors. All libraries received one free access seat for all books in the program, plus one free P-O-D copy on request. Book publishers were scrambling to hang on to their authors and re-negotiating royalty payments as all authors gained leverage from the developments.
Stefanie Olsen, An open-source rival to Google's book project, News.com, October 26, 2005. Excerpt:
When it comes to digitizing books, two stories appear to be unfolding: One is about open source, and the other, Google. Or so it seemed at a party held by the Internet Archive on Tuesday evening, when the nonprofit foundation and a parade of partners, including the Smithsonian Institution, Hewlett-Packard, Yahoo and Microsoft's MSN, rallied around a collective open-source initiative to digitize all the world's books and make them universally available. Google was noticeably absent from the cadre of partners, considering that the search behemoth has a high-profile project of its own to scan library books and add them to its searchable index...."We want to digitize all human knowledge...and we can't risk having it privatized," said Doron Weber, an executive of the Alfred P. Sloan Foundation, a philanthropic organization that has contributed more than $3 million to the Internet Archive since 2003. Citing the importance of an open library for educational purposes, he called on private companies to "rein in their impulses" while urging libraries to "embrace the future." Still, a Google executive in attendance downplayed the perceived rivalry. "I think (the project) is great," said Alexander Macgillivray, Google's senior product counsel, following a presentation on the book-scanning effort. "It's a shame it's being portrayed as a battle between the two projects because the efforts are complementary."...[T]o make the millions of books in the world available online is a Herculean task. Issues of publisher copyrights, data storage and backup, and labor costs must still be hashed out. It would take 6 petabytes to digitally store just 1 million books, according to the Internet Archive. 
By comparison, Google reportedly has stored nearly 10 billion Web documents, requiring between 1.7 and 5 petabytes of storage....Though it has been working on the effort for years, the Internet Archive recently jump-started its effort by introducing the Open Content Alliance....Yahoo and MSN Search are also notable members, given their investments in Web search and in driving traffic to their proprietary services. The two companies touted the openness of the project Tuesday night, but their allegiance to the open-source project surely is a strategic counterbalance to Google's project....Last week, the Internet Archive launched Open Library, a Web site that will eventually house all the world's books, according to the nonprofit. It now demonstrates the project with 15 digitized works....For now, people can download 15 demonstration books from the Open Library site and print them for free at home. Visitors can also purchase bound copies from Lulu.com for $8 each. The service even lets people create their own book covers and art, and then have the books printed with them. Users can search inside the works and see tabs on pages where the terms occurred. With the move of a cursor, visitors can see which page they will turn to before clicking on it. Volunteers from LibriVox, an open-source effort trying to make books freely available in audio, have also made audio recordings of the books so that people can listen to them via the Open Library Web site....[Omitting interesting details on the book-scanning technology.] The Internet Archive currently has 10 scanning machines, but it is ramping up to build 10 more in the next year. "This is one of the greatest things we've ever done," said Kahle. "It's up there with the Library of Alexandria and putting a man on the moon."
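The Internet Archive's estimate of 6 petabytes for 1 million books is worth unpacking. A minimal back-of-the-envelope sketch; the per-book and per-page divisions are mine, and the 300-page average is an assumed figure, not from the article:

```python
# Check of "6 petabytes to digitally store just 1 million books".
PB = 10**15                     # bytes per petabyte (decimal convention)

total_bytes = 6 * PB
books = 1_000_000
per_book_gb = total_bytes / books / 10**9
print(f"{per_book_gb:.0f} GB per book")      # 6 GB per book

pages_per_book = 300            # assumed average length; not from the article
per_page_mb = total_bytes / books / pages_per_book / 10**6
print(f"{per_page_mb:.0f} MB per page")      # 20 MB per page
```

At roughly 20 MB per page, the estimate only makes sense for high-resolution archival page images; the plain text of the same million books would run to terabytes, not petabytes.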
Fred M. Beshears, Viewpoint: The Economic Case for Creative Commons Textbooks, Campus Technology, n.d. (Thanks to Open Business.) Excerpt:
Talk to virtually any student about the cost of textbooks and you will likely hear loud complaints about the expense associated with course texts. According to a recent General Accounting Office report: "... the average estimated cost of books and supplies per first-time, full-time student for academic year 2003-2004 was $898 at 4-year public institutions, or about 26 percent of the cost of tuition and fees. At 2-year public institutions, where low-income students are more likely to pursue a degree program and tuition and fees are lower, the average estimated cost of books and supplies per first-time, full-time student was $886 in academic year 2003-2004, representing almost three-quarters of the cost of tuition and fees." While the report offers explanations that attempt to justify textbook prices, it makes few suggestions to contain or reduce the cost to students....Inspired by [Ira] Fuchs's vision, it is possible to conceive of a similar approach to acquire and distribute high-quality creative commons content that could be used in any of the following combinations: (a) as the basis of an online course, (b) as an electronic textbook, or (c) as a customized printed textbook for use in a traditional college course. We call this approach OpenTextbook. OpenTextbook's business model would be simple: traditional colleges and universities would agree to pay membership dues to purchase content from one or more open universities, such as the British Open University (UKOU). OpenTextbook would not develop the content; it would purchase content in bulk. In this sense, OpenTextbook would be similar to consumer cooperatives and buying clubs that pool member resources to gain purchasing power in the market....If the OpenTextbook coalition distributed this cost evenly to each of its members, the annual membership would come to $75,000 a year per campus.
For a school with an enrollment of 25,000 first-time, full-year students, this comes out to three dollars per student per year: a bargain compared to the current $898 per year cost of textbooks....In addition to saving money, OpenTextbook's objective would also be to give faculty the freedom to customize creative commons content, and use it as a substitute for mass-produced commercial textbooks. It is also possible for campuses to encourage instructors to use open textbook content by providing faculty stipends as well as paid student and staff support to help customize course content. Some schools might support these costs by establishing a course material customization fee that could be far less than the current cost of commercial textbooks. To the extent faculty choose to substitute OpenTextbook content, the cost savings for students could be substantial. Even when allowing for the extra expense of customized content, course materials could be substantially less expensive than the traditional textbook model....OpenTextbook has yet to take hold as a formal consortium. Rather, we are in an exploratory phase, introducing the concept to stimulate discussion of a number of different but interrelated cost savings issues, each representing a different lever that policy makers could move separately or together. Some schools, for example, may want to treat the open textbook content simply as a library resource. Others may want to provide faculty with financial incentives and resources to customize the coalition's content. Also, any specific policy proposal would need to address licensing issues governing how said customized content would be owned. And, finally, different means of distribution (electronic vs. print) would entail different costs that would have to be addressed. The main point, however, is that a creative commons textbook initiative may not only save students money, it could also give faculty more freedom to customize the content of their courses.
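The dues arithmetic in the excerpt checks out; here is the calculation using only the figures quoted above:

```python
annual_dues = 75_000     # proposed OpenTextbook membership per campus, per year
students = 25_000        # first-time, full-year enrollment in the example school
gao_textbook_cost = 898  # GAO average per student at 4-year publics, 2003-04

per_student = annual_dues / students
print(per_student)  # 3.0 dollars per student per year
print(round(gao_textbook_cost / per_student))  # 299
```

So the proposed dues amount to roughly 1/300 of what the average student currently spends on textbooks each year.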
The psychology journal Dissociation was published from 1988 to 1997. Now its entire run is OA, thanks to the University of Oregon Library and Psychology Department, which digitized the issues and deposited the files in Scholars' Bank, the university's OA repository. The project had the full support of the International Society for the Study of Dissociation (ISSD), which published the journal and still holds the copyrights to the issues. For more details see the press release (October 25).
Eric Auchard, Microsoft joins Yahoo on digital library alliance, Reuters, October 26, 2005. Excerpt:
The OCA, unveiled earlier this month by a group of digital archivists and backed by Yahoo, H-P and Adobe, says it has also signed up Microsoft Corp. and more than a dozen major libraries in North America, Britain and Europe. Danielle Tiedt, general manager of Microsoft's MSN Search, said the world's largest software maker would fund the digital duplication of 150,000 old books over the next year. "This is just the start," said Brewster Kahle, founder of the Internet Archive and the organising force behind the OCA. "One hundred and fifty thousand books is just an initial test for Microsoft," he said...."It's interesting to see everyone jumping on the digital library bandwagon," said Doron Weber, a program director at the Sloan Foundation in New York, which provides funding for the Internet Archive, the original organisers of the OCA. Many university libraries have had separate projects to digitise out-of-print works, but progress has been slow. That changed when Web powerhouse Google unveiled plans last year to work with publishers and five major libraries on dual projects to make many of the world's great books searchable on the Web. "Google's push has galvanised everyone else," Weber said. At the OCA's first public meeting, Kahle spelled out his vision for joining libraries, publishers, printers and hi-tech suppliers to create a universally available digital library. "If we go and bring universal access to all human knowledge it will be remembered as one of the great things humankind has ever done," Kahle said, comparing the potential of the project to the Gutenberg printing press or putting a man on the moon...."This is really hard. There are reasons why people have never done it. 
It will take all of the energies of the companies assembled here and many more who have yet to join," said project supporter Gart Davis, president of Lulu Inc., a publisher of out-of-print books that is working with OCA....Backers of the Google Print project have expressed their disappointment that the two groups are not working together. But leaders on both sides say it is only a matter of time before the two library projects find common ground. "I think it's only a matter of time before we reach agreement," said Rick Prelinger, board president of the Internet Archive and the director of the newly formed OCA.
Gary Price, Microsoft Announces MSN Book Search; Joins Open Content Alliance, Search Engine Watch, October 25, 2005. Excerpt:
Here are a few facts that I just learned via a news release and a call with Microsoft's GM of Search Content Acquisitions, Danielle Tiedt....MSN will launch MSN Book Search (MSNBS) sometime in the first half of 2006. In the early stages, MSNBS will be found as a separate vertical on the MSN Search page (just like Image, News, etc.) but eventually MSN hopes to include book results in web results pages. The material in MSNBS will come from the Open Content Alliance (OCA), which Microsoft is formally joining today....According to Tiedt, Microsoft has currently committed to fund the scanning of 150K books. In the case of these books (public domain content), Microsoft is making deals on their own with libraries (we don't know which ones) who will provide the content. Then, some (but not all of this material, depending on the library and the actual content) will be available as part of the OCA database. Every library that provides a copy of the book for scanning will also receive a file for local use....OK, that's the public domain OCA stuff but MSN's plans are wider ranging. As noted earlier, Microsoft also wants to scan the full text of in-copyright books (a list of participating libraries is not available) and make it available online. Sound familiar? Looks like some direct competition for Google Print. Btw, this initiative goes beyond books and MSN also has plans for content from academic publishers, periodicals, aggregators, etc. Of course, getting the right business model in place and getting players to agree will be a challenge. Yes, in this case it looks like some competition for Google Scholar and the Yahoo Subscriptions program. Making everyone happy and then keeping them happy is going to be a very tough job. Tiedt suggested that business models for access to in-copyright content that might be considered include pay-per-page, pay-per-chapter, monthly subscriptions, etc. We'll just have to wait and see. 
One thing I hope MSN does (that Google is already doing and doing well) is working with libraries (of all types) that are already paying (via institutional subscriptions) for access to massive amounts of articles, etc., and then making that material available (for free via MSNBS) to anyone with access to the specific library, usually via a library card.
Elinor Mills, Microsoft to offer book search, News.com, October 25, 2005. Excerpt:
Microsoft has committed to paying for the digitization of 150,000 books [for the OCA] in the first year, which will be about $5 million, assuming costs of about 10 cents a page and 300 pages, on average, per book, said [Danielle Tiedt, general manager of search content acquisition at MSN]. Yahoo has said it will pay for digitization of 18,000 books, according to Tiedt. Internet Archive, a nonprofit formed to offer access to historical collections that exist in digital format, will digitize the material. Microsoft's MSN Web site will launch its MSN Book Search service next year and will experiment with different business models, such as pay per page, monthly subscriptions, selling e-books and advertisements, Tiedt said. "The business model will change, depending on whether (the book) is out of copyright or in copyright," she said. MSN will offer more than the simple search of the books. For instance, the company may offer services such as allowing people to annotate works, create discussion groups and move text into productivity applications, Tiedt said. Microsoft and Yahoo may or may not share books with each other that have been digitized, she said. "We are working on global collections."
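Tiedt's $5 million figure is consistent with the per-page cost she cites; the arithmetic below is my own, not the article's:

```python
books = 150_000
avg_pages = 300      # average pages per book, as assumed in the article
cents_per_page = 10  # about 10 cents per scanned page

total_dollars = books * avg_pages * cents_per_page // 100
print(total_dollars)  # 4500000
```

The raw scanning cost comes to $4.5 million, which rounds to the "about $5 million" quoted once overhead is included.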
Katie Hafner, Microsoft to Offer Online Book-Content Searches, New York Times, October 25, 2005. Excerpt:
Microsoft announced Tuesday that it planned to join the online book-search movement with a new service called MSN Book Search. And in a nod to the growing influence of a recently formed group called the Open Content Alliance, Microsoft announced its plans to join it. The group is working to digitize the contents of millions of books and put them on the Internet, with full text accessible to anyone, while respecting the rights of copyright holders. Microsoft is making the largest contribution to the alliance to date - $5 million - which is enough to scan about 150,000 books. In aligning with the Open Content Alliance, MSN is joining forces with its archrival Yahoo, which announced its support of the project this month. Several universities, including the University of California, Columbia University and Rice University, as well as the Internet Archive and the National Archives of Britain, have joined the alliance. MSN Book Search will go online in test form early next year. Although the content of out-of-copyright books will be accessible at no charge from MSN Book Search, Microsoft is talking with publishers about how it might charge for books under copyright - perhaps per page, perhaps per chapter. "We're thinking through a whole host of business models for the in-copyright stuff," said Danielle Tiedt, general manager of search content acquisition at MSN...."We're rolling now, and very few institutions will say no," said Rick Prelinger, administrator of the Open Content Alliance. The alliance is the brainchild of Brewster Kahle, the founder of the Internet Archive, a nonprofit organization in San Francisco that is building a vast digital library. Mr. Kahle has said repeatedly that one of his greatest hopes is to have Google join the project. Mr. Kahle said Tuesday that talks with Google seemed to be progressing toward an agreement. Nathan Tyler, a Google spokesman, confirmed Tuesday that Google was speaking with Mr. 
Kahle about joining the alliance, but there was nothing yet to announce.
Yesterday the Open Content Alliance marked its official launch with a party in San Francisco. As part of the occasion, it unveiled the Open Library, a working demo of the OCA vision. Click on a book cover to see the full text and options for printing, searching, or hearing an audio version.
When the OCA was first announced three weeks ago, it had 10 institutional members. Since then, 14 others have joined, including the Biodiversity Heritage Library, the Smithsonian Institution Libraries, Columbia University, Emory University, Johns Hopkins University Libraries, McMaster University, Rice University, York University, and the Universities of British Columbia, Ottawa, Pittsburgh, and Virginia.
MSN Search today announced its intention to launch MSN Book Search, which will support MSN Search’s efforts to help people find exactly what they’re looking for on the Web, including the content from books, academic materials, periodicals and other print resources. MSN Search intends to launch an initial beta of this offering next year. MSN also intends to join the Open Content Alliance (OCA) and work with the organization to scan and digitize publicly available print materials, as well as work with copyright owners to legally scan protected materials.
Brewster Kahle announced that MSN is "committed to kick off their support by funding the digitization of 150,000 books in 2006!"
The DC Principles Coalition has issued a press release (October 25) on its latest effort to roll back the NIH public-access policy. Excerpt:
Fifty-seven of the nation’s leading medical and scientific nonprofit publishers today announced they have offered a proposal to Elias Zerhouni, M.D., director of the National Institutes of Health (NIH), that would allow the NIH to bring vast amounts of research findings to the public efficiently and at no cost. In a joint letter to Dr. Zerhouni [PS: probably this letter from October 17], the group detailed a plan that would allow the NIH to provide online access to articles on their journal websites using the existing system of links from abstracts that are indexed on NIH’s Medline. The transparent linking system would make it easier for the public to view more than 1 million research articles and would avert the need to create a new taxpayer-funded publishing infrastructure within the NIH.
(PS: This proposal, which has repeatedly been offered to the NIH and repeatedly rejected, would not let the NIH integrate the article texts with NIH's many OA databases, would undercut the NIH's ability to shorten embargoes and even let publishers lengthen them, and would not guarantee the public continuing free online access to the articles.)
James Campbell, Reactions to the Enclosure of the Information Commons: 2000-2004, Bulletin of the American Society for Information Science and Technology, October/November 2005. Excerpt:
SPARC, as well as many other organizations, encourages the development of open access journals, publications that make their articles available at no cost to users and that typically allow users at the very least to make copies in digital form and often confer a wider set of usage rights. Those efforts have had some notable success. For example, the Directory of Open Access Journals website (www.doaj.org) lists over 1600 open access journals containing over 62,000 searchable and downloadable articles. In addition, funders such as the U.S. National Institutes of Health and the Wellcome Trust in the United Kingdom are calling for research conducted through their funding to be openly available after some limited “hold back” period. There are many important obstacles for open access publishing to overcome to be a full-fledged market alternative to commercial publishing, including building sustainable economic models and changing the culture of academia to value open access and traditional publication credits equally when considering tenure and promotions. Nonetheless, open access scholarly publishing is already having an effect on the marketplace and, through market mechanisms, has already begun to expand the information commons....[T]he information commons movement has yet to take its place in the mainstream of political discourse. Instead, efforts to guarantee access to the information commons in the face of recent changes in law and technology continue to proceed through the different approaches outlined in this report: legislate, litigate, limit, create competing systems, legally reinterpret, and philosophize and mobilize. We still await the unifying framework that could place the intellectual environment of the information commons on a par with the physical environmental movement in matters of public debate and policy.
Bruce Slutsky, Editor's Note, CINF E-News, Fall 2005. (Thanks to Information Overload.) Excerpt:
The CINF E-news is now an open access publication. In August the Executive Committee felt that the E-News would help the Division reach out to other chemical information professionals.
(PS: This is the shortest announcement I've seen by a journal or newsletter converting to OA. Nevertheless, kudos to the ACS, which publishes the newsletter, for recognizing that OA to chemical information is a good way to "reach out to...chemical professionals." That's why the NIH provides OA to PubChem.)
The Sheridan Libraries at Johns Hopkins University are launching A Pilot Program for Electronic Theses and Dissertations. From the web site:
As part of its mission to support the learning, research, dissemination and preservation activities of the University, the Sheridan Libraries proposes to assist with an evaluation of the policies, and lead the development of the processes, software and systems to support electronic theses and dissertations (ETD). The Libraries already provide submission, dissemination and preservation services for the University’s print theses and dissertations. Many institutions have implemented ETD programs and noted that this has increased discovery and use of graduate students’ research. Additionally, ETDs offer an opportunity to include content such as datasets, simulations, hypertext links, audio, animations, and video. With approval from the Graduate Board, the Sheridan Libraries, in collaboration with academic departments, wishes to conduct a pilot program in ETDs in Fall 2005 and Spring 2006. By working with a diverse set of departments, it will be possible to explore a range of issues, including the following: How can such a program support dissemination of and access to students’ theses and dissertations? What types of content, fonts, media, etc. will we need to support for ETDs? What are the training needs for ETDs? What software or systems, including the institutional repository, best support the students’ needs?
JHU faculty who would like to participate in the project should let the libraries know by November 28, 2005.
The National Consumer League has sent letters to the House and Senate condemning the Google Library project. It also issued a press release (October 25). Excerpt:
In a letter to the chairmen of the House and Senate Judiciary subcommittees overseeing intellectual property issues, the nation's oldest consumer advocacy group raised concerns about a forthcoming ambitious effort to catalogue the entire collections of four major American libraries. The letter, signed by National Consumers League President Linda Golodner, acknowledges the tremendous potential value in Google Inc.'s bold vision for the new initiative, in which the complete collection of works at the university libraries of Stanford, Michigan, and Harvard, and of the New York Public Library, would be scanned and made available electronically to the public. The Washington-based advocacy group warned, however, that the project, which will resume scanning on November 1, 2005, poses dramatic threats to the principle of copyrights; fairness to authors; and cultural selectivity, exclusion, and censorship....Because a significant portion of the volumes in the collections remain under copyright, having been written after 1923 and not legally considered a matter of public domain, the advocacy group warned that Google should clearly be required to obtain appropriate rights before reproducing the works of others. Google's current plan would require authors to "opt-out" of its program, which places an inappropriate burden on copyright holders...."We do not doubt Google's good intentions," wrote Golodner. "But any database which represents itself as being a 'full' or 'complete' record of American culture as reflected in the collections of four major research libraries must, in fact, be complete. The sheer scope and cumbersome nature of the project may force Google to cut corners at some point, raising inevitable questions. To the extent that Google finds itself drawing lines for inclusion or exclusion based even indirectly on content --style, political slant, format, author, and so on-- it makes itself a censor of our history and culture."
Comment. Two quick replies. (1) As I argued earlier this month, the simple critique of opt-out "is deceptive because it assumes without proof that the Google copying is not fair use. Hence it begs the question at the heart of the lawsuit. If the Google copying is fair use, then no prior permission is needed and the opt-out policy is justified." (2) Google never said that its book-scanning would be "a 'complete' record of American culture," though I'm sure it would be glad to approach that goal asymptotically. The NCL objection not only starts from a false premise, but would abort any project that cannot reach completeness in one step. Does the NCL object to libraries because they don't carry every book? If the NCL really cared about comprehensive coverage, then it would praise this gigantic step toward wider access and monitor the project to see that it doesn't rest on invidious criteria for inclusion. But by objecting to the project's incompleteness as such, without pointing to any evidence of "cultural selectivity, exclusion, and censorship", the NCL implies that no literature should be easy to access until all of it is.
Editorial: The not-so-Open Content Alliance, Varsity Online, October 24, 2005. Varsity Online is the University of Toronto student newspaper. Excerpt:
[The University of Toronto] is in very good company after recently joining the Open Content Alliance, a consortium of internet and software companies, national archives, and academic institutions dedicated to making digital content available and free online....The Open Content Alliance has positioned itself in contrast to a similar scheme to scan and post books online, the robust but controversial Google Print. Google, the closest thing we have to the collective consciousness of the human race, has caught hell from book publishers for scanning copyrighted material without their explicit permission....Copyright holders have pushed aggressively to extend their control over how their work may be used in the culture, and though it has made a lot of people rich, it has made us all culturally poorer....The Open Content Alliance is supposed to throw open the doors of human knowledge, but it apparently has no critique of how that knowledge is controlled and disseminated in the culture. By meekly agreeing to play by the copyright rules --no matter how idiotic and damaging those rules are-- the Alliance is neglecting an important part of its job as an advocate for, and protector of, the cultural heritage to which everyone is heir. U of T has noble goals in joining the Open Content Alliance, and its participation is a step in the right direction. But if it does not challenge the cultural monopolists who believe that every scrap of human knowledge has a price tag, it is failing in its mission as an educator and as a steward of the intellectual ecosystem on which all learning relies.
Comment. I agree with the praise for OCA, the praise for Toronto for participating in it, and the critique of copyright. So why do I think it's a cheap shot to criticize the OCA for not adding a valuable critique to its valuable action? First, because you don't have to talk the talk when you walk the walk. Second, because this particular critique isn't the only good reason for the OCA to do what it's doing. The OCA can build a larger and more effective coalition by welcoming everyone who agrees on the goal than by laying down a party line on the rationale. It's a strategic mistake to hobble a project that you admit is good just to make its leaders agree with you about why.
New IBM Initiative Advances Open Software Standards In Healthcare and Education, an IBM press release from October 24. Excerpt:
IBM's healthcare and education practices today announced a major initiative to improve interoperability and information-access through the development of open software standards. Under this initiative, IBM is pledging royalty-free access to its patent portfolio for the development and implementation of selected open healthcare and education software standards built around web services, electronic forms and open document formats. Industry growth and service delivery in healthcare and education currently are hampered by the proliferation of incompatible document formats and proprietary technology, making it difficult to find, retrieve and share data such as standardized medical records and educational resources. IBM believes its new initiative can help address the complex ecosystem across which information must be accurately, securely and efficiently shared and assist our clients in these two vital industries as they work to improve the quality and lower the costs of services they deliver to patients, physicians, students and teachers around the world. Standards can foster interoperability and dramatically improve the ability to communicate data and information among and between companies and throughout communities....IBM's work with the healthcare and education industries follows IBM's pledge of 500 software patents to the open source community earlier this year. Since then, other companies and organizations have made similar pledges helping to create an open source "patent commons."
Also see IBM's official statement of the new policy. Excerpt:
Each year, IBM generates more than $1 billion of intellectual property income, and leads the world in U.S. patents issued. This income and pipeline are vitally important to our ability to continue to innovate. At the same time, opening access to our patents allows us to treat IP as intellectual capital that IBM invests in specific industries for them to improve services and reduce costs while helping them innovate and grow.
David H. Holtzman, Share the Knowledge, Expand the Wealth, Business Week, October 25, 2005. Excerpt:
The computer is now the factory of the Information Age -- optimized not for automation but collaboration, and requiring a different legal framework. But key industries, most notably entertainment and software, have bamboozled Congress and much of the public into believing that their wares deserve the same protection that was awarded to, say, a patent for blast furnaces in the mid-20th century. Foragers in the Information Age use computers as their implements. Unlike the Earth, the cyber world has no raw materials waiting to be picked at like veins of ore. The Industrial Age was about discovery. The Information Age is about invention....Digital product development is less about aggregation and ownership of raw materials and more about manipulation of facts and ideas, concepts and values, pictures, sounds, video, and numbers -- much of which must be borrowed from the work of others and transformed in some new and hopefully lucrative way. But without free and easy access to a wide variety of these intellectual resources, our information-based trade goods will be less competitive in the global marketplace because of increased time to market (compliance), fear of innovation (something new like file-sharing networks might be made illegal), and because our regulated goods will cost more -- like Californian cars requiring extra emissions-control gear....Digital factories need to be supplied with large quantities of license-free supplies to grow, just as manufacturing industries needed plentiful and cheap raw material.
John Blossom, Fair Game: German and American Book Publishers Wrestle with Google Print, Shore Communications Commentary, October 24, 2005. Excerpt:
[T]he biggest event for books during the [Frankfurt Book Fair] was the alignment of camps in the fight over Google Print. American publishers are suiting up for a fight on copyright issues, while German publishers seem to be more willing to let Google be Google and to get on with building stronger online presences for searching and consuming books. Given the history of other recent wars on copying premium content, guess who's likely to be the richer of these two camps in a few years' time? It's time for all publishers to embrace fair use of book content for searching and to focus on how they're going to make money in a search-enabled world....In the broader picture the suits against Google are largely moot. Book publishers have tarried far too long with old models and channels while online technology has bloomed all around them. The intelligent response to that technology is to embrace it aggressively and to recognize that both publishers and authors of books will benefit from the widest access possible to the most affluent and content-hungry audience available - the online audience. That access requires a willingness to place content in the best available channels, including search engines, so that it meets an audience's interests and needs when the moment and context are right. Anything less will be commercially self-defeating....Lesson learned [from the file-sharing wars in music]: effective and legal searching is content's best friend. Corollary lesson: enabling users to help other users find content is one of the most effective marketing methods available....[Google's] display of copyrighted materials seems in general to fall squarely into "fair use" territory. As we noted in our earlier news analysis on copyright, the notion of copyright in a digital era has less to do with restricting electronic reproduction than it does with being able to define what's done with a digital object once it's been replicated. 
Focus on the rights, not the copy....There may be some important points to clarify in the application of copyright law in the digital age that will benefit from a judge's input, but if those clarifications stifle effective search engines then it will be to the detriment of all publishers....It was naive for book publishers to think that their "Google moment" would come on their own terms at their own pace. This may be in part because of the success and sophistication of Amazon.com as an online retail presence that had sheltered booksellers from the wider search and content consumption environment. But the days of central online storefronts as the primary vehicles for locating and consuming content are dwindling. In their place is an evolving commercial environment in which monetization follows the distribution of digital objects rather than distribution following monetization.
Rosalio Ahumada, UC Joins Digital Library, Merced Sun-Star, October 22, 2005. (Thanks to ResourceShelf.) Excerpt:
Searching endless shelves for a 19th century classic might be a thing of the past with digital libraries making full-text books accessible online and for free. The University of California libraries, including the Kolligian Library at UC Merced, have joined a partnership to build a freely accessible digital library with materials drawn from around the globe....The more than 100 UC libraries will contribute books and resources to build a digitized collection of out-of-copyright American literature. [UC Merced Librarian Bruce Miller] said the 10 UC campuses have about 32 million books on their shelves. The materials will be available from the Web site of the Open Content Alliance. Full text of literature will be available for free to anyone who visits the Web site. Miller said the UC libraries are currently in the selection process to decide which books are the first to be digitized. With the support of Yahoo! Inc., UC library books will be digitized using technology that scans books at the cost of 10 cents per page. Before, the costs to scan archival photographs and documents typically began at $20 per page....Ann Wolpert, president of the Association of Research Libraries, said that, working with the alliance, academic and research libraries can provide greater access to high-value materials and contribute expertise in developing reliable and authoritative collections. "This is an exciting step in the ongoing development of open access solutions for citizens, students, scholars and researchers worldwide," Wolpert said in a news release. "Libraries, publishers, educational institutions, and others must collaborate around initiatives like the OCA to effectively serve their communities in the 21st century." But digitized books aren't anything new to the University of California. In 1997, the university created the California Digital Library to support researchers and students. 
The difference with Open Content Alliance is that the material will be available free to everyone, not just the university, said Daniel Greenstein, UC associate vice provost and university librarian for the California Digital Library.
Jeffrey Perkel has an article on The Future of Citation Analysis in the October 24 issue of The Scientist. It's not OA, so I can't read or excerpt it. But it quotes Péter Jacsó and Jacsó has posted some background details to explain his quotation in Perkel's article. Excerpt:
This is a background piece for the interview made with Jeff Perkel for the article in The Scientist. Considering the limitations of the print edition, it is understandable that only a small part of my argument could be included. I provide here some background illustrations and comments to my correctly quoted remark that Google Scholar (GS) does a really horrible job matching cited and citing references. The interview started with an innocent question about my opinion of the article Citation Counts [by Kathleen Bauer and Nisa Bakkalbasi] published in D-Lib Magazine. I said that the comparison of the citedness scores for a single year of the Journal of the American Society for Information Science (JASIS), which showed that on average GS detects 4.5 more citing references than Web of Science (WoS), shouldn't serve as proof of GS superiority. The test results for the other sample year (1985) showed that WoS had on average 7.8 more citing items than GS for the selected 1985 JASIS articles. As it is always news when the postman bites the dog, this did not get the same attention as the other sample....It should have been a warning sign. Google has always played fast and loose with its numbers in reporting its hits, and so did its competitors. In the scholarly world this may not fare so well after the honeymoon period with GS is over, and serious users start taking a closer look at the a) hits which appear in the result list, b) the reported citedness scores, and c) the items purportedly citing the ones in the result list. 
I knew that I must use some tailor-made examples to get my message through....Suffice it to repeat here what The Scientist quoted from me: GS often can’t tell apart a page number from a publication year, or part of the title of a book from a journal name, and dumps absurd data on you, such as the record of an article which GS happily serves up when looking for upcoming articles on semiconductors to be published in 2006 (possibly available already in the publisher’s archive to which GS has a free pass)....Mind you, GS has access to the neat and clean metadata of millions of articles, courtesy of the grateful publishers, labeling the data elements as foods are labeled in a senior citizen home, but it does not help.
Charles W. Bailey, Jr., The Google Print Controversy: A Bibliography, DigitalKoans, October 25, 2005. A useful list of about 60 OA articles with links, focusing on legal issues.
Michael J. Kurtz and five coauthors, The Effect of Use and Access on Citation, a preprint, January 2005. (Thanks to Stevan Harnad.)
Abstract: It has been shown (S. Lawrence, 2001, Nature, 411, 521) that journal articles which have been posted without charge on the internet are more heavily cited than those which have not been. Using data from the NASA Astrophysics Data System (ads.harvard.edu) and from the ArXiv e-print archive at Cornell University (arXiv.org) we examine the causes of this effect.
Gary D. Byrd, Shelley A. Bader, and Anthony J. Mazzaschi, The status of open access publishing by academic societies, Journal of the Medical Library Association, October 2005. Excerpt:
[T]he academic societies serving clinicians, faculty, and researchers in the basic and clinical health sciences will play a central role in determining the ultimate success of “open access” alternatives to commercial publishing. Academic societies provide health sciences students, faculty, clinicians, and researchers with their natural international community of peers and collaborators....The following is a brief report on the results of two recent studies conducted in partnership with the Association of Academic Health Sciences Libraries (AAHSL) and designed to look at the changing publishing practices of academic societies. Carried out from July 2003 through December 2004, these studies looked at the characteristics of journals published by academic societies affiliated with the Association of American Medical Colleges (AAMC), the Association of Learned and Professional Society Publishers (ALPSP), and High Wire Press as well as titles listed in the Directory of Open Access Journals (DOAJ). The first study was cosponsored by AAHSL and AAMC through its Council of Academic Societies (CAS), which included some ninety-four member societies representing academic disciplines taught in schools of medicine. The primary goal of this study was to help these societies, as well as AAMC member institutions and their libraries, understand the problems and opportunities faced by the CAS society journals as they shift from paper to electronic publishing. The second study was cosponsored by ALPSP, High Wire Press, the American Association for the Advancement of Science, and AAMC and was conducted by the Kaufman-Wills Group in Baltimore, Maryland. Called “Variations on Open Access,” this study sought to determine the potential impact of open access publishing on the business, editorial, and licensing practices of scholarly society journal publishers.
(PS: For details on the Kaufman-Wills study, see the final version published earlier this month.)
The UK House of Commons Science and Technology Committee held a hearing last Wednesday (October 19) in which Lord Sainsbury of Turville testified on the UK's Office of Science and Technology and the draft RCUK policy. Lord Sainsbury is the UK Science and Innovation Minister. Here's an excerpt from the uncorrected transcript, which is now online:
Comment. Three quick replies. (1) Lord Sainsbury still seems to think that the draft RCUK policy makes OA journals primary and OA repositories secondary, rather than the other way around. (2) His objection that the current draft requires RCUK-funded researchers to negotiate with their publishers may be answered in two very different ways. Publishers may adopt policies for RCUK-funded authors that permit no negotiation, as we've seen them do in response to the NIH policy. The current draft invites publishers to take this approach. Or the RCUK could revise its draft policy to make publisher consent unnecessary, as the Wellcome Trust policy has done. (3) Lord Sainsbury's claim that enthusiasm for OA has peaked is a sign that he's been listening to publisher lobbyists more than to OA proponents themselves.
Remedios Melero, Acceso abierto a las publicaciones científicas: definición, recursos, copyright e impacto, El profesional de la información, July-August, 2005. (Thanks to Heather Morrison.) In Spanish but with this English-language abstract:
Open access to scientific publications: definition, resources, copyright and impact. Abstract: The Open access movement has attracted increased support over the past few years from both institutions and members of the scientific community. There has also been a growing interest in projects linked to open access initiatives. This article analyses the significance of open access --in accordance with the BOAI, Bethesda and Berlin declarations-- to scientific publications on the internet. An overview is provided on issues related to the impact of open access resources and to the implications of copyright concessions in an open access environment. Finally, the future perspective of open access is evaluated from the standpoint of governmental policies.
Wu Chong, Global forum for free sharing of research data planned, SciDev.Net, October 24, 2005. Excerpt:
A forum to promote the free exchange of information in the global scientific community was proposed last week at the annual meeting of the International Council for Science (ICSU). The International Scientific Data and Information Forum — or SciDIF — would help ensure that scientists in poor countries can access information as easily as those in the North, said Roberta Balstad, chair of the ICSU Priority Area Assessment on Data and Information. Balstad stressed, however, that the forum is "only an idea so far". To take things further, ICSU will set up a committee that during the next three years will define how the forum should work. An ICSU report published on 20 October raises some of the issues. It recommends that all scientific data, whether produced commercially or through public-private partnership, should be provided free or at low cost for research and education purposes in both developed and developing countries. One way to make this possible, says Shuichi Iwata of the ICSU Committee on Data for Science and Technology (CODATA), would be through an open-access database. According to the report, the poor access scientists in low-income countries have to scientific publications makes it difficult for them to learn about research in other parts of the world, and to find an outlet for their own research results....Even those who have computers at research institutes face "exorbitant" costs in accessing information through the Internet and must put up with an unstable electric supply, said David Mbah, executive secretary of the Cameroon Academy of Sciences. Balstad said it was also important for researchers in developed countries to be able to access information produced in developing countries. "Scientists in poorer countries can seldom build a strong digital database to facilitate the flow of information," she said. "So we place great emphasis on extending new technology, training and capability building in developing countries."
(PS: I couldn't find this ICSU report when I blogged another note about it on October 21, and I still can't find it. It doesn't seem to be at the ICSU site. If anyone has a pointer, I'd appreciate the help.)
Institutions in Latin America may now have one year of free online access to two journals of Latin American Studies: the Latin American Research Review (LARR) and the Bulletin of Latin American Research (BLAR). Other institutions, and Latin American institutions after the grace year, may subscribe to both journals at a reduced rate. For details, see the LARR Online page. (Thanks to Peter Ward.)
SPARC has launched a discussion list on Open Data. From the press release (October 24):
SPARC (The Scholarly Publishing and Academic Resources Coalition) has launched the new “SPARC-OpenData” e-mail discussion list, which will explore issues of access to digital data associated with peer-reviewed scientific, technical and medical (STM) research. According to the list’s founder and moderator, Peter Murray-Rust of the Unilever Centre for Molecular Sciences Informatics at the University of Cambridge (UK), “The emerging Open Data movement shares many goals with the Open Access and Open Source movements, but encompasses its own distinct issues that are in need of examination by the scientific community. This list is intended to facilitate that important discussion.” Many advocates of Open Data believe that, although there are substantial potential benefits from sharing and reusing digital data upon which scientific advances are built, today much of it is being lost or underutilized because of legal, technological and other barriers. The new discussion list will enable participants to debate issues of access to and re-use of research data that researchers or funders wish to see available for use by others. The list’s emphasis is on defining the scope of Open Data and collecting examples of desirable and undesirable practices. “SPARC’s interest in Open Data is an extension of our interest in open digital archives such as institutional repositories, Open Access publishing of research articles, and library support of science in the digital environment,” said Heather Joseph, Executive Director of SPARC. “We are pleased and honored to be able to work with Dr. Murray-Rust to broaden the discussion of Open Data.”
See the list site for details on subscribing, unsubscribing, posting, and reading the archive.
Tarleton Gillespie, Between What’s Right and What’s Easy, Inside Higher Ed, October 21, 2005. Excerpt:
Sometimes our tools are our politics, and that’s not always a good thing. Last week, the Copyright Clearance Center announced that it would integrate a “Copyright Permissions Building Block” function directly into Blackboard’s course management tools. The service automates the process of clearing copyright for course materials....With the help of new database technologies and the Internet, the CCC has made it much easier for people to clear copyright, solving some of the difficulty of locating owners and negotiating a fair price by doing it for us. The automatic mechanism being built into Blackboard goes one step further, making the process smooth, user-friendly, and automatic. So, if fair use is merely a way to account for how difficult clearing copyright can be, then the protection is growing less and less necessary. Fair use can finally be replaced by what Tom Bell called “fared use” — clear everything easily for a reasonable price. If, on the other hand, fair use is a protection of free speech and academic freedom that deliberately allows certain uses without permission, then the CCC/Blackboard plan raises a significant problem. The fact that the fair use doctrine explicitly refers to criticism and parody suggests that it is not just for when permission is difficult to achieve, but for when we shouldn’t have to ask permission at all....Faculty and their universities should be at the forefront of the push for a more robust fair use, one that affirmatively protects “multiple copies for classroom use” when their distribution is noncommercial, especially as getting electronic readings to students is becoming ever cheaper and more practical. Automating the clearance process undoes the possibility of...challenging this slow disintegration of fair use. 
Even if the Blackboard mechanism allows instructors simply not to send their information to CCC for clearance (and it is unclear if it is, or eventually could become, a compulsory mechanism), the simple fact that clearance is becoming a technical default means that more and more instructors will default to it rather than invoking fair use. The power of defaults is that they demarcate the “norm”; the protection of pedagogy and criticism envisioned in fair use will increasingly deteriorate as automatic clearance is made easier, more obvious, and automatic....Technologies have politics, in that they make certain arrangements easier and more commonplace. But technologies also have the tendency to erase politics, rendering invisible the very interests and efforts currently working to establish “more copyright protection is better” as the accepted truth, when it is far from it....The automation of copyright clearance now being deployed will...shoehorn scholarship into the commercial model of information distribution, and erase the very question of what fair use was for — not by squelching it, but simply by making it easier not to fight for it and harder to even ask if there’s an alternative.
Dan Thies, Danny Sullivan on Google Print, Sitepoint, October 23, 2005. Excerpt:
I used to work at FedEx Kinko’s, a world leader in document management solutions. I know how much businesses are willing to pay to get their legacy documents into a searchable electronic format. What I can’t understand is why publishers aren’t doing cartwheels when they see Google doing the job for them, for free.
Ted Bergstrom and Preston McAfee have created a Journal Cost-Effectiveness calculator. Enter a journal by title or ISSN and get back its price per article, its price per citation, and its rank (on these prices) relative to other journals in the fields of your choice. It does not cover all journals, but it is remarkably useful for those it does cover.
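The two ratios behind the calculator are simple to state. Here is a rough sketch of how they might be computed; the journal figures below are invented for illustration, since the real calculator draws on its own price and citation data:

```python
# Illustrative sketch of the two cost-effectiveness ratios the
# Bergstrom/McAfee calculator reports. All numbers are hypothetical.

def cost_effectiveness(subscription_price, articles_per_year, citations_per_year):
    """Return (price per article, price per citation) for one journal."""
    return (subscription_price / articles_per_year,
            subscription_price / citations_per_year)

# Hypothetical journal: $5,000/year subscription, 200 articles published,
# 1,000 citations received.
price_per_article, price_per_citation = cost_effectiveness(5000.0, 200, 1000)
print(price_per_article)   # 25.0
print(price_per_citation)  # 5.0
```

Ranking a field is then just sorting journals on either ratio, which is essentially what the calculator does for the fields you select.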
Theoretical Economics is a new peer-reviewed, open-access journal published by the Society for Economic Theory. The inaugural issue will appear in March, though the web site already lists six papers that will appear in it. From the site:
Theoretical Economics publishes research in all areas of economic theory. The standard for acceptance is the same as that of the leading field journals in economic theory. The full content of the journal is freely accessible online without payment or password. Authors of accepted papers grant us a nonexclusive right to publish their work, but otherwise retain full control over their work. Open Access enables authors to obtain the maximum possible exposure for their work. Besides the obvious convenience to researchers and teachers, freely available papers are more likely to be linked to and referenced by websites, and as a result are much easier to find. As an experiment, enter a research topic into a search engine like Google and see how many links you obtain to papers published in traditional journals. You will find that most references are to working papers, not to published papers, because working papers are freely available. We believe that with the advent of the web, Open Access is the right way to disseminate scientific information. Existing specialty journals obtain revenues from selling subscriptions, primarily to libraries, and access to the research they publish is consequently limited. The attractive revenue stream that such subscriptions provide makes it unlikely that these journals will convert to Open Access. Thus a need exists for new refereed Open Access journals to replace existing journals.
Can you trust Wikipedia? The Guardian, October 24, 2005.
Reviews of a few Wikipedia entries by subject authorities. Ratings range from 0/10 for Haute couture to 8/10 for Bob Dylan, with most reviews broadly favourable.
Kevin Drum, Google vs. the World, Washington Monthly, October 22, 2005. Excerpt:
For many people, especially writers who benefit from copyright but would also benefit from Google's project, this case is excruciatingly hard to form an opinion about. On the one hand, it's a truly stupendous undertaking, a boon to both popular and scholarly research that's hard to overestimate. What's more, Google's restriction of search results to small snippets demonstrates considerable sensitivity to the rights of the original authors. As a matter of public policy, it seems like a no-brainer that something like this should not only be legal, but positively encouraged. On the other hand, it's true that this isn't a use that authors had in mind when they originally published their books. And as with other database-driven collections, there's a big difference between an author excerpting one book for the purpose of illustration or criticism and a huge corporation excerpting millions — and making money off it. If it were up to me, I'd vote with the public interest. I sometimes feel that if the increasingly expansive view of copyright asserted today had been around a couple of centuries ago, the Supreme Court would have ruled that lending libraries were illegal. But just as circulating libraries have a social value that far outweighs the minimal intrusion they produce in an author's ability to control the distribution of her work, the same is true of Google's project. The technology has changed, but the principle is the same. At the same time, it's too bad this has to be decided by the courts. It's really a job for Congress, after all. Unfortunately, both Republicans and Democrats appear to be so thoroughly bought and paid for by the content industry that it's pretty much inconceivable they'd do the right thing if it were brought to a vote. So it's off to court we go, with the hope that existing law will be enough. I hope Google wins.
MedRounds subsidizes its OA journals and books with revenue from an ecommerce portal. It has now generalized the portal system and is offering it to other non-profit organizations as a fund-raising device. See Andrew Doan, Fund Raising Opportunity for Schools, Churches, and Non-Profit Organizations, MedRounds Blog, October 21, 2005. Excerpt:
MyFundRazor.org will create your own e-commerce portal that sells the exact same items at the exact same prices found on major e-commerce websites like Amazon.com, Yahoo!, Buy.com, Sony.com, and other name brand suppliers. Because Internet commerce is shifting to a horizontal distribution of wealth via affiliate and associates programs, companies like Yahoo! make billions of dollars yearly by directing traffic to major retailers and distributors without packaging or shipping a single product...MyFundRazor.org...is owned and operated by MedRounds Publications, Inc....[which] was created by academic entrepreneurs to use advertising and marketing revenue to support FREE academic publishing.
Sun Microsystems --which is committed to open-source software and open-access educational resources-- is teaming up with an Elsevier subsidiary to "create global digital repositories and preservation technologies which protect critical educational, cultural and historical information." From the press release (October 19):
Sun Microsystems Inc. and Endeavor Information Systems, the leading provider of library management software and a wholly owned subsidiary of Elsevier, today announced an expanded partnership to create global digital repositories and preservation technologies which protect critical educational, cultural and historical information. Institutions such as universities, libraries and museums are tasked with preserving massive, almost incomprehensible amounts of content, as well as ensuring its availability to the appropriate audiences. Without digital preservation, many of the critical materials for research, education and cultural benefit could be lost due to data format changes, media migration, bit loss, application/operating environment shifts and unavailable access....This unique combination of Sun's infrastructure, Endeavor's software and Elsevier's heritage in managing large scale content repositories will enable libraries, universities and museums to more easily and securely convert, store, manage and distribute their informational assets....Representing Endeavor's first step toward achieving its archiving and repository mission is the recent release of ENCompass for Journals Onsite (EJOS), which locally stores and provides access to e-journal content from any publisher in one location. EJOS currently serves many of the largest local repositories of STM journal literature, including Canada's University of Toronto. "EJOS is an essential part of the electronic journal service of the University of Toronto," said Peter Clinton, director, Information Technology Services, University of Toronto. "It offers high-powered search and retrieval capabilities that make accessing full-text content convenient for students, faculty and researchers at all 20 Ontario universities." "The vast majority of our customers --at least 85 percent-- run on Sun operating systems. 
Over the years, the partnership between Endeavor and Sun has delivered proven security and scalability to meet our customers' needs," said Roland Dietz, president and chief executive officer, Endeavor Information Systems. "The next step in our partnership will create secure, easy-to-use solutions to enable federated access to content and digital repositories. And as we develop future architectures, we will continue to embrace open standards to ensure our customers that new solutions can work with legacy systems, as well as emerging technologies."
(PS: This project is clearly more about long-term preservation than open access. So Elsevier won't compete --yet-- with the OA repository outsourcing services from BMC, Bepress/ProQuest, and Eprints. But digital repositories built for long-term preservation could easily be tweaked to support OA. With a little more work they could support OAI-PMH. If Elsevier took steps in that direction in the coming years, to diversify and hedge its bets, would you be surprised? OA momentum is so strong that I wouldn't be surprised to see any kind of company explore the OA infrastructure and enhancement business.)
Tom Turvey, On Google Print: Don't Fear the Web, The Book Standard, October 21, 2005. Turvey is a strategic partner development manager for Google, and part of his work is marketing Google Print to book publishers in the UK. Excerpt:
[A]s the web makes it easier for people to find more diverse content, we believe authors and publishers will be among this program's primary beneficiaries. Google Print, for instance, puts backlist titles --which comprise the majority of books in print but just a fraction of publishers' marketing budgets-- one search away from discovery, perhaps purchase. When Cardinal Ratzinger became Pope, for example, millions of people searching on his name saw the Google Print listing for his book In the Beginning, thousands viewed a few of its pages, and clicks on the "Buy this Book" links increased tenfold. We expect similar benefits from the Library Project. Today, a lack of full-text search makes it difficult to explore the extraordinary collections at Harvard, Michigan, Stanford, Oxford and the New York Public Library. All five institutions, though, have joined our Library Project. Imagine the cultural impact when their millions of volumes exist in a comprehensive card catalogue, every word of which is searchable. This vision will be realisable only if we protect the intellectual investment that lies behind every copyright. That's why users who find in-copyright books from our Library Program see only basic information, and why we'll continue to balance user benefit with author and publisher protections. "The potential book market," writes Wired editor Chris Anderson, "may be twice as big as it appears to be, if only we can get over the economics of scarcity." That's precisely what the web is best at, and precisely what we hope Google Print will accomplish.
Dana Blankenhorn, Economic Lesson of Google Print, Corante, October 21, 2005. Excerpt:
I have been reluctant to dive into the Google Print controversy because all the rhetoric is phony. The rhetoric is about principles, fair use vs. copyright. The reality is this is about money, about monetizing something that had no previous value and the obligation that places on the person doing the monetizing. The plain fact is that everything Google has done, and everything Yahoo did before it, is based on monetizing fair use. The concept of fair use arose based on the idea it had no economic meaning, that it represented a necessary intermediate step on the way to meaning (and money). But now we find, 10 years after the Web was spun, that fair use has enormous economic value. Through the magic of databasing, finding is now more valuable than having. What then is the obligation of those who extracted this value to the holders of the data providing the raw material? The legal question has been answered: there is none. If publishers can stop Google from offering books online without payment, they can stop Google from linking to books without payment, because Google is only going to offer extracts that represent fair use free. It's the physical equivalent of the "deep linking" proposition we dealt with in the 1990s. If a book isn't read because it can't be located it makes no sound. The moral question is something different entirely. If Google extracts a profit from Google Print, I think it does have a moral obligation to spend some of that money on activities that benefit writers and other content creators....Lining up for money before the risk is proven is like the pig, the duck, and the dog lining up at the Little Red Hen's oven, waiting for bread, when she hasn't yet sown the seed. If publishers offered their help to Google, instead of their lawsuits, they might have some moral right to the bread coming out of Google's virtual ovens. And the same attitude is hereby recommended to the rest of the copyright industries.
Alexander Wolfe, Google Not Straight About Book Search, or 44 Pages in 5 Minutes, Wolfe Platform Blog, October 21, 2005. Excerpt:
Google is not being completely forthright in making the argument that it should be allowed to scan and digitize millions of books without permission from publishers. Let me amend that (this is an opinion piece): Google is being downright devious. (As you'll see below, I was able to view 44 pages of a $133 book, all in about five minutes with not much effort.)...The mechanism by which they're allowing users to search through the books they're digitizing, doesn't restrict you to little snippets, unless you're a technological idiot. (CYA disclaimer: I can only claim this to be true for the specific book I searched through, for which I'm about to describe my amazing results. Your results may vary; indeed, for all I know, Google may indeed restrict searches of every other book to "fair use" snippets.)...(Disclaimer number two: For all I know, in this specific case, the publisher, Oxford University Press, may have allowed Google to scan its book. They apparently have given permission to Amazon, which allows users to browse the crystallography text via its "Search Inside" feature. My point is not about this one particular book; it's about Google trying to pull the wool over people's eyes by making it sound like only small portions of any book it scans are accessible.)
Danny Sullivan, Indexing Versus Caching & How Google Print Doesn't Reprint, Search Engine Watch, October 21, 2005. Excerpt:
I've written before that legal concerns about book indexing and Google Print may have repercussions for web indexing. Kevin Werbach and Dave Winer look at this again, afresh. A look at this, plus the crucial difference between indexing (making something searchable) and caching (reprinting content). Google's library scanning program makes things searchable in Google Print but [does not reprint them]....When any search engine visits a web page, it effectively makes a copy of that page which is stored in the index. But the index literally breaks apart the page. It stores where words were located, whether they were in bold, what other words they were near, whether the words were in a hyperlink, and so on. Nothing in the index is anything you as a human being could read....The ability to opt-out of the index is another reason why we really haven't had a major search engine sued over web search indexing. In addition, site owners, as Dave notes, generally want to be indexed, so they can get traffic. In fact, the reason so many are upset over the current indexing update at Google is that they feel changes are causing them to lose traffic. But whether it is LEGAL to do this type of indexing (as opposed to caching) still really hasn't been tested....Here's the thing. Google is NOT, repeat NOT, republishing copies of books that it scans out of libraries. This is a fundamental mistake that many people seem to be making. Google is scanning books into an index, just as it spiders web pages and adds them to its index. It is making the books searchable by doing this, but that process does not republish the books in a way you can read. Think about it in web search terms. You can find a matching book, but there's NO hyperlink to click on that will take you to an online version of the book itself. There's just a snippet -- maybe -- of the text surrounding the words matching what you looked for. Want the actual book? Google Print won't give it to you. 
Instead, you have to go someplace and buy it or find it in a library. Google Print merely tells you the book may be what you're looking for. The only exception to this is if a publisher OPTS-IN. Not opt-out. If a publisher chooses, then -- and only then for books that are in copyright -- will Google display some of the actual book. The exact amount is left up to the publisher....[B]ook search is actually more opt-in than web search is. Books themselves aren't cached or shown. But they are made searchable without permission....Postscript: Ray Gordon writes to say he has filed a complaint arguing that web search on an opt-out basis is in violation of copyright. You can read the filings here. I've skimmed them, and he seems more concerned about usenet material (rather than web material) that can't be removed, apparently because others may have reprinted his own posts.
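(PS: Sullivan's distinction between an index and a cache is easy to see in code. Here is a toy inverted index in Python -- illustrative only, and in no way Google's actual implementation -- which stores, for each word, a list of (document, position) postings rather than a readable copy of the page:

```python
from collections import defaultdict

def build_index(docs):
    """Build a toy inverted index: word -> list of (doc_id, position).

    The stored form is postings data, not prose -- this is the sense
    in which an index 'breaks apart the page' instead of caching it."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            index[word].append((doc_id, pos))
    return index

# Two tiny example "pages" (hypothetical content):
docs = {
    "page1": "Google Print indexes books",
    "page2": "the index is not a reprint",
}
index = build_index(docs)

# Searching means looking up postings, not reading the page back:
print(index["index"])   # [('page2', 1)]
```

A cache, by contrast, would simply store `docs["page2"]` verbatim for redisplay -- which is exactly the step Google Print omits for scanned books.)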
Kevin Werbach, Breaking Apart at the Seams, Werblog, October 20, 2005. (Thanks to Rob Hof.) Excerpt:
The Net works because of a series of informal agreements...[Examples]...And websites allow search engines to copy their content into indexes, even though at some level that action raises copyright concerns. The Google Print lawsuit puts the last of these practices in question. On some level, copying a Web page to facilitate searching isn't all that different from copying a book to facilitate searching. And copying an RSS feed to put content onto another site isn't so different either. Unravel the notion that some content sharing benefits everyone, and therefore should be acceptable despite the nominal boundaries of intellectual property, and the Internet economy, especially the Web 2.0 economy, comes crashing down. What's worrisome to me is that, just as the informal practices for sharing online content are being challenged, the informal practices for sharing Internet traffic and addressing are under stress as well....Years from now, will we look back at this as the period when the Internet came apart at its seams?
Max Chafkin, Google Scrambles to Defend 'Google Print for Libraries' Initiative, The Book Standard, October 21, 2005. Excerpt:
While Google’s biggest search competitor, Yahoo, had declined to comment on the lawsuit or to offer details on the future of the Open Content Alliance, a non-profit book-scanning program to which it has made a small contribution, OCA founder Brewster Kahle, whose Internet Archive is heading up the project, said that the publishers’ lawsuit was counterproductive and could hinder the development of digital protocols and distribution methods. “This horserace is something you might want if you were a lawyer, but it’s not what you’d want if you were a businessperson,” he said. “This could get really messy in a way that will damage progress.” Kahle added that the OCA welcomes Google’s project despite the fact that the AAP lawsuit mentions OCA as one of the various means publishers have developed to make electronic copies “consistent with their exclusive rights under copyright.”
Fedora version 2.1b was released on October 21. From the site:
Fedora 2.1b is initially being released as beta, with a final release soon to follow. This is a very significant release of Fedora since it introduces the new Fedora security architecture (with pluggable authentication and XACML-based policy enforcement), the Fedora Service Framework, and many other new features....Also, coinciding with this release is the publication of the Community-Developed Tools page on the Fedora web site...This is a clearinghouse for community-developed, open-source applications, tools, and services that work with Fedora repositories. There are already many tools available, so check it out!...For version 2.1 we have decided to move from the Mozilla Public License 1.1 (MPL) to the Educational Community License 1.0 (ECL). The main difference between the two licenses is that the ECL imposes no compulsion to contribute changes to source code....Fedora 2.1b introduces a configurable security architecture that provides options for SSL and authentication at the Fedora web service API level....New as part of the Fedora Service Framework is the Fedora OAI Provider Service, a stand-alone web service application that is a highly configurable OAI provider. The PROAI service can be set up to harvest any type of datastream or dissemination of digital objects in a Fedora repository. It also supports OAI sets. (Note: PROAI can be used outside the context of Fedora too, by writing a custom adapter.) Note also that the simple OAI provider interface on the core Fedora repository is still available. The new OAI service provides much more functionality and better performance....The Fedora Rebuild utility is an interactive utility that can be used when the Fedora repository is somehow corrupted. The utility rebuilds the repository by crawling the repository storage directories where the FOXML digital objects reside. The utility can rebuild the SQL database (registries, search tables, and other tables), the Resource Index, or both.
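(PS: Because the new provider service speaks standard OAI-PMH, any harvester can pull records from a Fedora repository. A minimal Python sketch -- the base URL is a placeholder, not a real endpoint, and the sample response is cut down to the essentials of the protocol:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="oai_dc", oai_set=None):
    """Build an OAI-PMH ListRecords request URL for a provider like PROAI."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if oai_set:
        params["set"] = oai_set   # OAI sets are supported by the new service
    return base_url + "?" + urlencode(params)

def record_identifiers(response_xml):
    """Pull record identifiers out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(response_xml)
    return [h.findtext(OAI_NS + "identifier")
            for h in root.iter(OAI_NS + "header")]

# A pared-down sample response (hypothetical identifier):
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header><identifier>oai:example:demo:1</identifier></header></record>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url("http://localhost:8080/oaiprovider", oai_set="demo"))
print(record_identifiers(sample))  # ['oai:example:demo:1']
```

The same request shape works against the simple OAI interface on the core repository; only the base URL changes.)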
Yesterday's Washington Post ran a debate on the Google Library project.
For Google, Mary Sue Coleman, Riches We Must Share.... Coleman is the President of the University of Michigan, which is letting Google scan all seven million volumes in its library, including books under copyright. Excerpt:
Beyond the specific legal challenges emerging in the wake of such a sea change, there are deeply important public policy issues at stake. We must not lose sight of the transformative nature of Google's plan or the public good that can come from it. Throughout history, most of the world's printed knowledge has been created, preserved and used only by society's elites -- those for whom education and power meant access to the great research libraries. Now, groundbreaking tools for mass digitization are poised to change that paradigm. We believe the result can be a widening of human conversation comparable to the emergence of mass literacy itself....For those works that remain in copyright, a search will reveal brief excerpts along with information about how to buy the work or borrow it from a public library. Searches of work in the public domain will yield access to complete texts online. Imagine what this means for scholars and the general public, who, until now, might have discovered only a fraction of the material written on a subject. Or picture a small, impoverished school -- in America or anywhere in the world -- that does not have access to a substantial library but does have an Internet connection....Libraries and educational institutions are the only entities whose mission is to preserve knowledge through the centuries. It is a crucial role, one outside the interest of corporate entities and separate from the whims of the market. If libraries do not archive and curate, there is substantial risk that entire bodies of work will be lost. Universities and the knowledge they offer should be accessible by all....[W]e believe deeply that this endeavor exemplifies the spirit under which our nation's copyright law was developed: to encourage the free exchange of ideas in the service of innovation and societal progress. The protections of copyright are designed to balance the rights of the creator with the rights of the public. 
At its core is the most important principle of all: to facilitate the sharing of knowledge, not to stifle such exchange.
For the Authors Guild, Nick Taylor, ...But Not at Writers' Expense. Taylor is the President of the Authors Guild. Excerpt:
I am a writer....I have invested a small fortune in books chronicling the period and copies of old newspapers, spent countless hours on Internet searches, paid assistants to dig up obscure bits of information, and then sat at my keyboard trying to spin a mountain of facts into a compelling narrative. Money advanced by my publisher has made this possible. Except for a few big-name authors, publishers roll the dice and hope that a book's sales will return their investment. Because of this, readers have a wealth of wonderful books to choose from. Most authors do not live high on their advances; my hourly return at this point is laughable....So my question is this: When did we in this country decide that this kind of work and investment isn't worth paying for? That is what Google, the powerful and extremely wealthy search engine, with co-founders ranking among the 20 richest people in the world, is saying by declining to license in-copyright works in its library scanning program, which has the otherwise admirable aim of making the world's books available for search by anyone with Web access. Google says writers and publishers should be happy about this: It will increase their exposure and maybe lead to more book sales. That's a devil's bargain. We'd all like to have more exposure, obviously. But is that the only form of compensation Google can come up with when it makes huge profits on the ads it sells along the channels its users are compelled to navigate? Now that the Authors Guild has objected, in the form of a lawsuit, to Google's appropriation of our books, we're getting heat for standing in the way of progress, again for thoughtlessly wanting to be paid. It's been tradition in this country to believe in property rights. 
When did we decide that socialism was the way to run the Internet?...The value of Google's project notwithstanding, society has traditionally seen its greatest value in the rights of individuals, and particularly in the dignity of their work and just compensation for it. The people who cry that information wants to be free don't address this dignity or this aspect of justice. They're more interested in ease of assembly. The alphabet ought to be free, most certainly, but the people who painstakingly arrange it into books deserve to be paid for their work. This, at the core, is what copyright is all about. It's about a just return for work and the dignity that goes with it.
Comment. Nick Taylor's piece shows that he's as clueless as I feared. First, he doesn't understand what socialism is. Second and more important, he complains that the Google project will deprive him of revenue but doesn't offer a single reason to think so. On the contrary, he concedes the increase in exposure. Is he saying that the increase in exposure will boost sales but that the boost in sales isn't good enough and that he also wants a cut of Google's "huge profits"? If so, he should ask for a cut and stop contradicting himself with the talk about ceasing to be paid for his hard work. Is he saying that the increase in exposure will decrease sales by satisfying readers who would otherwise have bought his book? If so, he should say so, offer whatever evidence he has, or at least show some willingness to study the evidence, which includes mounting evidence against him. If he thinks that Google's snippets are too large, then either he should scale back his lawsuit to the demand that they fall within fair use or he should forthrightly call for the repeal of fair use. Couldn't a good writer do a better job with Taylor's thesis, whatever it is?
VTLS has released VITAL, another open-source, OAI-compliant institutional repository software package. From the web site:
VITAL is an institutional repository solution designed for universities, libraries, museums, archives and information centers. This software is designed to simplify the development of digital object repositories and to provide seamless online search and retrieval of information for administrative staff, contributing faculty and end-users. VITAL provides all types of institutions a way to broaden access to valuable resources that were once only available at a single location and to a finite number of patrons. By eliminating the traditional limitations information seekers encounter, this technology grants access to materials for all authorized end-users, from professional researchers to recreational learners. VITAL provides every feature (ingesting, storing, indexing, cataloging, searching and retrieving) required for handling large text and rich content collections. VITAL takes advantage of technology standards such as XML, TEI, EAD and Dublin Core to easily describe and index an assortment of electronic resources. VITAL leverages the benefits of open-source solutions such as Apache, MySQL, McKOI and FEDORA. VITAL conforms to common Internet data communications standards such as TCP/IP, HTTP, SOAP and FTP. Additional standards utilized include WSDL Web Services, OAI-PMH, Dublin Core, MARCXML, JHOVE, MIX (Metadata for Images in XML Schema), SRU/SRW and Z39.50.
For more details see the press release (October 21).
(PS: VITAL is a Fedora plug-in. I would have blogged VITAL the other day, but I confused it with VALET, the other open-source Fedora plug-in from VTLS (a submission module for ETDs), which I blogged earlier this month. But don't make my mistake: VITAL is a new and different product.)
Shoshannah Holdom, E-Journal Proliferation in Emerging Economies: The Case of Latin America, Literary & Linguistic Computing, July 29, 2005. Only this abstract is free online, at least so far:
In recent years, Latin America has been one of the world's fastest growing areas for Internet connectivity. While numerous studies have examined the factors contributing to this communications explosion, this article concentrates upon one of its effects --the proliferation of freely available, scholarly, peer-reviewed electronic journals in the fields of literary, cultural and area studies. This article argues that in the field of Latin American studies, the majority of e-journals are being produced in Latin American countries, rather than in the US or the UK for example. It is Latin American academics, rather than their US and UK counterparts, who are embracing new technologies and the opportunities facilitated for effective dissemination of research. In order to understand this marked move towards electronic scholarly journals, this article outlines the state of Internet connectivity in the region, the financial and material constraints and other restrictions placed upon academic publication, and the lack of international visibility of Latin American scholarly print journals. While questions need to be addressed as to the future sustainability and preservation of these free journals, many of them managed by individual academics and funded by their universities, this article argues that electronic publishing offers Latin American academics an unprecedented opportunity to disseminate their research. Furthermore, this model gives international academics immediate, free access to important research that is emerging from the continent, which is the subject of study. Such access has the potential to revolutionize the way that international academics approach Latin American studies and to encourage a greater degree of international academic debate.
Digital Humanities Quarterly is a new peer-reviewed, open-access journal published by the Alliance of Digital Humanities Organizations (ADHO) and currently calling for submissions. From the site:
DHQ is also a community experiment in journal publication, with a commitment to:
- experimenting with publication formats and the rhetoric of digital authoring;
- co-publishing articles with Literary and Linguistic Computing (a well-established print digital humanities journal) in ways that straddle the print/digital divide;
- using open standards to deliver journal content;
- developing translation services and multilingual reviewing in keeping with the strongly international character of ADHO.