Open Access News

News from the open access movement

Tuesday, September 11, 2007

DBpedia upgrade

DBpedia 2.0, Open Knowledge Foundation Weblog, September 10, 2007.  Excerpt:

DBpedia recently released a new version of its dataset. The project aims to extract structured information from Wikipedia so that it can be queried like a database. On their blog they say:

The renewed DBpedia dataset describes 1,950,000 “things”, including at least 80,000 persons, 70,000 places, 35,000 music albums, 12,000 films. It contains 657,000 links to images, 1,600,000 links to relevant external web pages and 440,000 external links into other RDF datasets. Altogether, the DBpedia dataset now consists of around 103 million RDF triples.

As well as improving the quality of the data, the new release includes coordinates for geographical locations and a new classificatory schema based on WordNet synonym sets. It is also extensively linked with many other open datasets, including: “Geonames, Musicbrainz, WordNet, World Factbook, EuroStat, Book Mashup, DBLP Bibliography and Project Gutenberg datasets”.

This is probably one of the largest open data projects currently out there - and it looks like they have done an excellent job of integrating structured data from Wikipedia with data from other sources. (For more on this see the W3C SWEO Linking Open Data project - which exists precisely in order to link more or less open datasets together.)
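To make concrete what "structured information queried like a database" means here, the sketch below models facts as RDF-style (subject, predicate, object) triples with a simple pattern query. The identifiers are invented for illustration and are not drawn from the actual DBpedia dataset, which is queried via SPARQL rather than Python:

```python
# A minimal sketch of RDF-style triples, not DBpedia's actual code.
# Each fact is a (subject, predicate, object) tuple; the sample
# identifiers below are made up for this example.
triples = [
    ("ex:Berlin", "rdf:type", "ex:Place"),
    ("ex:Berlin", "ex:latitude", "52.52"),
    ("ex:Berlin", "owl:sameAs", "geonames:2950159"),
    ("ex:Tim_Berners-Lee", "rdf:type", "foaf:Person"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Every fact asserted about Berlin:
for triple in query(s="ex:Berlin"):
    print(triple)
```

Linking between datasets, as DBpedia does with Geonames and the rest, amounts to triples like the `owl:sameAs` one above, which assert that an identifier in one dataset denotes the same thing as an identifier in another.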

Comment.  DBpedia harvests from Wikipedia because Wikipedia is large and free.  But something similar could be done with unfree databases.  The trick (apart from access) is to extract uncopyrightable facts and paraphrased assertions, not copyrighted expressions.  Wikipedia may be the inexpensive way to prove the concept, but the concept is of much wider application.  See some examples of DBpedia fact and assertion harvesting, and let your imagination run free.