Saturday, December 02, 2006

Open Search and The Nearctic Spider Database - almost there

As announced on TAXACOM, David Shorthouse has added an Open Search interface to his really nice Nearctic Spider Database. As I've noted previously (see the earlier posts Adding sources to iSpecies and OpenSearch and iSpecies), OpenSearch seems an obvious candidate for a simple way to add search functionality to biodiversity web sites.

The search interface is generated by some software called Zoom Search, and is available here. As an example, here is a query for the spider Enoplognatha latimana.

But...

Having an easy way to search a site using a URL API such as Open Search is great, but the feed is RSS 2.0, and as a result has very little information. For example, here's an extract:


<item>
 <title>The Nearctic Spider Database: Enoplognatha latimana Hippa & Oksala, 1982 Description</title>
 <link>http://canadianarachnology.dyndns.org/data/spiders/7561</link>
 <description>THERIDIIDAE: Enoplognatha latimana taxonomic and natural history description in the Nearctic Spider Database.</description>
 <zoom:context> ... Descriptions Home Search: Register Log in Enoplognatha latimana Hippa& Oksala, 1982 Temporary ... 2007 Arachnid Calendar FAMILY: THERIDIIDAE Sundevall, 1833 Genus: Enoplognatha Pavesi, 1880 ...</zoom:context>
 <zoom:termsMatched>2</zoom:termsMatched>
 <zoom:score>1804</zoom:score>
 </item>


This information is intended to be displayed in a feed reader, and hence viewed by a human. But what if I want to put this information in a database, or combine it with other data sources in a mashup, such as iSpecies? Well, I have to scrape information out of free-form text. In other words, I'm no further forward than if I scraped the original web page.

If we want to make the information accessible to a computer, then we need something else. RDF is the obvious way forward.

The difference that RDF makes

To illustrate the difference, let's search for images of the same spider (Enoplognatha latimana) using my Open Search wrapper for Yahoo's image search (described in OpenSearch and iSpecies). Here is the query. This feed is formatted as RSS 1.0, and I can view it in a feed reader, such as NetNewsWire.



But because the feed is RSS 1.0, and therefore RDF, it contains lots of information about the image in a form that a computer can easily consume.


<foaf:Image rdf:about="http://www.spiderling.de/arages/
Fotogalerie/Enoplognatha_latimana_1024.jpg">
 <dc:type>image</dc:type>
 <dc:title>Enoplognatha_latimana_1024.jpg</dc:title>
 <dc:description></dc:description>
 <dc:subject>Enoplognatha latimana</dc:subject>
 <dc:source>http://www.spiderling.de/arages/
Verbreitungskarten/ENO_LAT0.HTM</dc:source>
 <dc:format>image/jpeg</dc:format>
 <foaf:thumbnail rdf:resource=
"http://re3.mm-a1.yimg.com/image/206564554"/>
</foaf:Image>


In this example, I use the FOAF and Dublin Core vocabularies. These are widely used, making it easy to integrate this information into a larger database, such as a triple store. To my mind, this is the way forward. We need to move beyond thinking about making data accessible only to people, and make it accessible to computers as well. Once we do this, then we can start to aggregate and query the huge amounts of data on the web (as exemplified by David's wonderful site on spiders). And once we do that, we may discover all sorts of things that we don't know (see Disconnected databases, and Discovering new things).
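
Because the statements use standard vocabularies, pulling them out of the feed takes only a few lines. Here is a minimal sketch in PHP 5 using SimpleXML (the feed URL is the Yahoo wrapper described in OpenSearch and iSpecies below), which walks every foaf:Image in the result:

<?php
// Sketch: extract the foaf:Image descriptions from an RSS 1.0 feed.
$feed = 'http://darwin.zoology.gla.ac.uk/cgi-bin/yahoo.cgi?q=Enoplognatha+latimana';
$xml  = simplexml_load_file($feed);

// Namespaces used in the feed.
$ns_rdf  = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';
$ns_foaf = 'http://xmlns.com/foaf/0.1/';
$ns_dc   = 'http://purl.org/dc/elements/1.1/';

// Find every foaf:Image, wherever it is nested in the feed.
$xml->registerXPathNamespace('foaf', $ns_foaf);
foreach ($xml->xpath('//foaf:Image') as $image)
{
    // The subject of the description is the image URL itself.
    $attrs = $image->attributes($ns_rdf);
    echo (string) $attrs['about'], "\n";

    // Dublin Core properties hang directly off the foaf:Image element.
    $dc = $image->children($ns_dc);
    echo '  title:   ', (string) $dc->title,   "\n";
    echo '  subject: ', (string) $dc->subject, "\n";
    echo '  source:  ', (string) $dc->source,  "\n";
}
?>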

Monday, November 06, 2006

Identification service

Nick Hobgood emailed me asking whether iSpecies supports requests for identifications. In other words, is this fish Rudarius minutus?


iSpecies doesn't support requests, but it strikes me that it would be useful if there were a place where such requests could be directed. The TAXACOM mailing list is one place I've seen requests made, but a mailing list is probably not the best forum. An interesting idea to pursue...

Tuesday, September 05, 2006

OpenSearch and iSpecies

I've mentioned OpenSearch in an earlier post, in the context of adding additional sources to iSpecies. But it's slowly dawned on me that what I should be doing is wrapping the sources I currently use in OpenSearch as well. Hence, any data source would have a consistent query interface, and a consistent return format. If we ensure the latter is RDF, then we get aggregation "for free".

So, I've made a start. First up is Yahoo's image search, which I've wrapped as http://darwin.zoology.gla.ac.uk/cgi-bin/yahoo.cgi. You just append "q=" and the search terms to get a result. Try an example search for images of the ant Atta mexicana. Note that I currently just support the return format, not the query format (that'll come later). The query result is RSS 1.0 because it contains RDF (RSS 2.0 and Atom don't, and hence for my purposes are beside the point). The upshot is that I can now use this search in other projects, and making a better iSpecies becomes simply a case of adding a bunch of OpenSearch sources together.

Generating the RSS proved "fun", but the feed now validates as RDF, although Feed Validator grumbles slightly. It's all a bit of a black art, but I had to nest the RDF payload in <content:item> tags, like this:
<content:item>
<foaf:Image rdf:about="http://www.par...x.draw.JPEG">
<dc:type>image</dc:type>
<dc:title>Atta.mex.draw.JPEG</dc:title>
<dc:description>Leaf-cutter ants (Atta mexicana ) ... </dc:description>
<dc:subject>Atta mexicana</dc:subject>
<dc:source>http://www.parasiticplants.siu.edu/Viscaceae</dc:source>
<dc:format>image/jpeg</dc:format>
<foaf:thumbnail rdf:resource="http://mud.mm-a5.yimg.com/image/2050519657"/>
</foaf:Image>
</content:item>
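
For what it's worth, here is roughly how an item like that can be assembled with PHP 5's DOM extension. This is only a sketch (the image and thumbnail URLs are placeholders), but it shows the namespace juggling involved:

<?php
// Sketch: build a single content:item/foaf:Image block with the DOM extension.
$RDF     = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';
$FOAF    = 'http://xmlns.com/foaf/0.1/';
$DC      = 'http://purl.org/dc/elements/1.1/';
$CONTENT = 'http://purl.org/rss/1.0/modules/content/';

$doc = new DOMDocument('1.0', 'UTF-8');

$content_item = $doc->createElementNS($CONTENT, 'content:item');
$doc->appendChild($content_item);

// The foaf:Image is "about" the image URL (placeholder here).
$image = $doc->createElementNS($FOAF, 'foaf:Image');
$image->setAttributeNS($RDF, 'rdf:about', 'http://www.example.org/Atta.mex.draw.JPEG');
$content_item->appendChild($image);

// Dublin Core properties describing the image.
$fields = array(
    'type'    => 'image',
    'title'   => 'Atta.mex.draw.JPEG',
    'subject' => 'Atta mexicana',
    'format'  => 'image/jpeg'
);
foreach ($fields as $name => $value)
{
    $image->appendChild($doc->createElementNS($DC, 'dc:' . $name, $value));
}

// foaf:thumbnail points at the thumbnail via rdf:resource.
$thumb = $doc->createElementNS($FOAF, 'foaf:thumbnail');
$thumb->setAttributeNS($RDF, 'rdf:resource', 'http://www.example.org/thumbnail.jpg');
$image->appendChild($thumb);

echo $doc->saveXML();
?>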

Friday, September 01, 2006

More DOIs


Following on from an earlier post, I've now added DOI extraction for SciELO, which hosts Brazilian publications, and Taylor and Francis. This was motivated by searching iSpecies for the ant Trachymyrmex opulentus, for which only papers hosted by these two publishers appear in the search results.


Again, we are reduced to screen scraping (sigh). Why oh why don't the people who design these web sites get their act together and embed useful information in the HTML, rather than assume that only humans will make use of these pages?

One provider that is clued up is Ingenta. For example, take a look at the HTML for the article "Influence of Topography on the Distribution of Ground-Dwelling Ants in an Amazonian Forest" (doi:10.1076/snfe.38.2.115.15923) on the Ingenta site (Firefox and Camino users can see the source here). Embedded in the <meta> tags is all sorts of metadata, including the DOI:

<meta name="DC.identifier" scheme="URI"
content="info:doi/10.1076/snfe.38.2.115.15923"/>

The use of consistently formatted tags makes data extraction much easier. Of course, it's no surprise that Ingenta do this well (check out their blog).
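
As a rough sketch, extracting that DOI needs nothing more than fetching the page and running a regular expression over the <head> (the pattern below assumes the attributes appear in the order shown above):

<?php
// Sketch: pull the DOI out of a page that embeds Dublin Core <meta> tags,
// as Ingenta does. $url is the article page we are looking at.
function get_doi_from_meta($url)
{
    $html = file_get_contents($url);
    if ($html === false) return false;

    // Look for <meta name="DC.identifier" ... content="info:doi/..."/>
    if (preg_match('/<meta\s+name="DC\.identifier"[^>]*content="info:doi\/([^"]+)"/i',
        $html, $matches))
    {
        return $matches[1];   // e.g. 10.1076/snfe.38.2.115.15923
    }
    return false;
}
?>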

Tuesday, August 29, 2006

Extracting DOIs



DOIs are pretty cool, so I spent a little time this evening working out how to extract DOIs from Google Scholar results for journals hosted by Springer, JSTOR, and J-Stage. I've also added code to extract Serial Item and Contribution Identifiers (SICIs) from JSTOR URLs. SICI is NISO standard Z39.56.

The point of this exercise is to try and get DOIs for as many articles as possible, because DOIs are the GUID of choice for publications, and we can extract metadata for a DOI, either directly using CrossRef's OpenURL resolver, or via Connotea. This will make life easier for the next step, namely aggregating literature into a triple store.
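
As an aside, the CrossRef lookup itself is simple: the OpenURL resolver takes a DOI and returns XML metadata. The sketch below assumes you have registered an email address (the pid parameter) with CrossRef, and that I've read their parameter documentation correctly:

<?php
// Sketch: fetch metadata for a DOI from CrossRef's OpenURL resolver.
function crossref_metadata($doi, $pid)
{
    $url = 'http://www.crossref.org/openurl/?pid=' . urlencode($pid)
         . '&id=doi:' . urlencode($doi)
         . '&noredirect=true';

    $response = @file_get_contents($url);
    if ($response === false) return false;

    // The response is XML; hand it back for the caller to pick apart
    // (journal title, article title, authors, etc.).
    return simplexml_load_string($response);
}

// Example: the Ingenta article mentioned in the More DOIs post.
$metadata = crossref_metadata('10.1076/snfe.38.2.115.15923', 'you@example.org');
?>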

Wednesday, August 09, 2006

Add to Connotea

Finally got around to adding an "Add to Connotea" button to the Google Scholar results, based on code from Postgenomic. The code is a simple bit of JavaScript:


<a style="cursor:pointer;" onclick="javascript:
u='http://dx.doi.org/10.1111/j.1744-7429.2005.37_04_01.x';
a=false;
x=window;
e=x.encodeURIComponent;
d=document;
w=open('http://www.connotea.org/addpopup?continue=confirm&uri='+e(u),'add','width=660,height=300,scrollbars,resizable');
void(x.setTimeout('w.focus()',200));">
<img src="images/connotea.png" border="0"
alt="add bookmark to connotea" align="absmiddle"></a>


where http://dx.doi.org/10.1111/j.1744-7429.2005.37_04_01.x is the URI of the article being added.

Now a click brings up Connotea and you can add a paper you've found using iSpecies. At present this only works for papers where I've extracted a DOI.

Saturday, July 08, 2006

Adding sources to iSpecies

One issue which comes up every so often is how to add data sources to iSpecies. At present iSpecies queries NCBI, Yahoo images, and Google Scholar, each source requiring different code to make the query and handle the response. If adding a new source requires writing code specific to that source, then iSpecies would rapidly become a nightmare (leaving aside the issue that until iSpecies is multithreaded, adding additional sources slows everything down -- see my earlier post about the need for speed).

One solution is to develop a standard search interface and ask data sources to adopt it. The obvious candidate is OpenSearch, which I've already touched on over at iPhylo. OpenSearch is appealing because it is no more difficult than serving RSS feeds, and because it is based on RSS it can be integrated into a range of tools, such as Amazon's A9 and Internet Explorer 7.

At a minimum, it would be useful if sources supported OpenSearch. It would also be useful if they supported RSS to serve individual records. This is handy because NCBI links to numerous sources via LinkOut, and hence we could avoid the overhead of doing a search if we can retrieve the record directly (i.e., if NCBI has a link then I already know the information exists).

In saying "RSS", I should stress that I really mean RSS 1.0 (i.e., RDF). RSS 2.0 and Atom are a lot less useful in the long run, because RSS 1.0 can be integrated into a triple store, which opens up a world of cool things (i.e., aggregating data and performing queries on that data).
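
To make the pay-off concrete, here is a rough sketch (the endpoint URLs are entirely hypothetical) of what a consistent interface buys us: every source can be queried, and its RSS 1.0 result parsed, by the same few lines of PHP.

<?php
// Sketch: with a shared convention (append "q=" plus the search term, get
// RSS 1.0 back) every source can be queried by the same code.
function opensearch_query($endpoint, $term)
{
    $url = $endpoint . '?q=' . urlencode($term);

    $xml = simplexml_load_file($url);
    if ($xml === false) return array();

    // RSS 1.0 items and their children live in the RSS 1.0 namespace.
    $ns_rss = 'http://purl.org/rss/1.0/';
    $xml->registerXPathNamespace('rss', $ns_rss);

    $items = array();
    foreach ($xml->xpath('//rss:item') as $item)
    {
        $fields  = $item->children($ns_rss);
        $items[] = array(
            'title' => (string) $fields->title,
            'link'  => (string) $fields->link
        );
    }
    return $items;
}

// Querying two (hypothetical) sources looks exactly the same.
$images  = opensearch_query('http://example.org/cgi-bin/images.cgi', 'Atta mexicana');
$records = opensearch_query('http://example.org/cgi-bin/specimens.cgi', 'Atta mexicana');
?>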

Monday, May 22, 2006

Why Google is good for science...maybe


I just noticed this piece written by Jeff Perkel in January who, after "poking around" the iSpecies blog, wrote Why Google is good for science. Well, yes and no. On the one hand it's fabulous, but on the other hand they can play rough. For example, iSpecies used Google Scholar to find scientific papers for a species name. The traffic was pretty minimal in the scheme of things, but Google have now blocked iSpecies (and as a consequence my whole University - gulp!) from accessing Google Scholar.


Before anybody says, "but you got what you deserved because you broke Google's Terms of Service", I think in this case they are simply being lazy. If Google truly cared about making Google Scholar useful, they'd create an API. Because they haven't, I had to resort to screen scraping their unbelievably awful HTML (and I'm not the only one). The cost of setting up an API along the lines of the one available for the main Google search engine would be trivial.

After venting my spleen, the reason -- as I should have guessed -- is "intellectual property". Google Scholar's agreements with the publishers that it indexes prevent Google from making it available other than through the web site. Thanks to Rebecca Shapley for clarifying this. Once again, scientists are being ill served by our publishers. Perhaps somebody needs to set up an Open Source/Open Access equivalent of Google Scholar.

This is what I originally wrote, which perhaps is another reason publishers don't want Google Scholar having an API:

There would also be a potential market. In the UK we rate our research based on a number of factors, including journal impact factor, as part of the gargantuan Research Assessment Exercise. Impact factors are supplied by ISI, and Google Scholar results compare well with that source. Just think of the possibilities of a service that used Google Scholar to rate scientists' output. It could even be part of a service like LinkedIn, which I stumbled on via Pierre's blog on geotagging RSS feeds (which is a whole separate issue).




Monday, April 24, 2006

iSpecies down

As reported on iPhylo, the machine running iSpecies was hacked last week. It's taking a while to restore things, but iSpecies is now running on another machine until the hacked one can be rebuilt. My apologies for any inconvenience. Apart from issues with backups, things take time to restore because the original machine ran an old version (4.2) of PHP, and the new machine uses PHP 5.

Sunday, March 19, 2006

Building the encyclopedia of life


iSpecies is very limited in the sources it uses, and also in what it extracts from its sources. The sources it does query contain a wealth of information. As an example, GenBank sequence AF131710 from Ligophorus mugilinus has the following information about this animal:


FEATURES             Location/Qualifiers
     source          1..374
                     /organism="Ligophorus mugilinus"
                     /mol_type="genomic DNA"
                     /specific_host="Mugil cephalus"
                     /db_xref="taxon:92200"
                     /country="France"


Note the tags "/specific_host" and "/country". By parsing this record we learn that this organism is found in France, and is hosted by Mugil cephalus.
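
Getting at those qualifiers programmatically is straightforward: NCBI's EFetch service returns the flat file for an accession number, and the qualifiers all follow a simple /key="value" pattern. A rough sketch (the EFetch parameters are my reading of the E-utilities documentation, so treat them as an assumption):

<?php
// Sketch: fetch a GenBank flat file from NCBI EFetch and pull out the
// feature qualifiers (host, country, etc.).
function genbank_qualifiers($accession)
{
    $url = 'http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi'
         . '?db=nucleotide&rettype=gb&retmode=text&id=' . urlencode($accession);

    $record = @file_get_contents($url);
    if ($record === false) return array();

    // Qualifiers look like /specific_host="Mugil cephalus". This grabs them
    // from every feature, which is good enough for a quick look.
    preg_match_all('/\/(\w+)="([^"]*)"/', $record, $matches, PREG_SET_ORDER);

    $qualifiers = array();
    foreach ($matches as $match)
    {
        $qualifiers[$match[1]] = $match[2];
    }
    return $qualifiers;
}

$info = genbank_qualifiers('AF131710');
// $info['specific_host'] => "Mugil cephalus", $info['country'] => "France"
?>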

In the same way, the Google Scholar results could be more effectively used. In many cases we could follow the links to get abstracts of articles, then use literature data mining techniques (e.g., Hirschman et al.) to extract information on the organism's ecology, etc.

Extracting this sort of information would be one way to automate the construction of an encyclopedia of life.

Towards a faster iSpecies: building libxml and libxslt on Mac OS X

iSpecies is written in PHP, and calls a Perl CGI script (to query Google Scholar). This works, but is a bit slow. It also puts limits on what we can do. For example, it would be cool to make the search multithreaded so that the different sources are queried at the same time. This becomes a major issue if we want to "drill down." For example, if a taxon exists in NCBI, it would be useful to visit all the LinkOut resources and collect whatever information they make available. Likewise, Google Scholar results contain links to publishers that could be explored further (such as extracting bibliographic information from RIS files, or RSS feeds such as those available for Ingenta-hosted journals). All of this would delay displaying search results to the user, especially if we have to visit one link after another.

Multithreading would help, but PHP doesn't do this, hence I'm toying with moving to C++ and building a "proper" application (I don't do Java). This means I need to get XML, XPath, and XSLT libraries for C/C++, and this has been, ahem, interesting. I was going to use Sablotron (which I use in my PHP 4 and Perl work), but its documentation is just awful (where are some nice examples?). I'll probably use libxml and libxslt. These come with Mac OS X 10.3.9 (I do my development on a G4 iBook, before moving stuff to a Linux box), but Apple hasn't compiled libxml with XPath support (sigh). I built libxml 2.2.63 OK, but libxslt 1.1.15 needed a little hand holding because of the presence of Apple's libxml. The following does the trick:


./configure --with-libxml-prefix=/usr/local


This tells configure to use the version of libxml I installed in /usr/local. Now, once I get my head around libcurl I'll try and build something and see if we can speed up iSpecies.
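
Whatever the next version ends up being written in, the gain comes from fetching the sources in parallel. Even within PHP there is a stopgap: the curl extension's curl_multi interface can run several HTTP requests concurrently without threads. A rough sketch (the URLs are placeholders for the real NCBI, Yahoo, and Google Scholar queries):

<?php
// Sketch: query several sources at once with curl_multi (no threads needed).
$urls = array(
    'ncbi'   => 'http://example.org/ncbi?q=Atta+mexicana',
    'images' => 'http://example.org/images?q=Atta+mexicana',
    'papers' => 'http://example.org/scholar?q=Atta+mexicana'
);

$multi   = curl_multi_init();
$handles = array();
foreach ($urls as $name => $url)
{
    $handles[$name] = curl_init($url);
    curl_setopt($handles[$name], CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($multi, $handles[$name]);
}

// Run all the requests until every one has finished (a busy loop,
// which is good enough for a sketch).
$running = 0;
do {
    curl_multi_exec($multi, $running);
} while ($running > 0);

// Collect the responses and tidy up.
$results = array();
foreach ($handles as $name => $handle)
{
    $results[$name] = curl_multi_getcontent($handle);
    curl_multi_remove_handle($multi, $handle);
    curl_close($handle);
}
curl_multi_close($multi);
?>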

Monday, February 27, 2006

Silobreaker

Silobreaker looks to be a very cool way of exploring information. Faceted browsing is an old idea, but this looks like it might actually make it fun.



(Via Really Simple Sidi (RSS).)

Tuesday, February 07, 2006

Tag cloud

I've added a simple "tag cloud" showing the frequency of the top 30 searches in iSpecies. It's a bit ugly, but you get the idea. You see the tag cloud if you go to iSpecies directly (i.e., with no search term).

I made use of a nice article on scaling tag clouds by Anders Pearson. He describes a simple function to scale the tags. I put this into an Excel spreadsheet as a quick hack (in other words, the tag cloud isn't dynamic yet).
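
The scaling itself is simple enough to do in PHP when the cloud eventually becomes dynamic. A sketch of the general idea, roughly in the spirit of Anders Pearson's article, though the details (logarithmic scaling between arbitrary minimum and maximum font sizes) are my own:

<?php
// Sketch: scale tag font sizes by the logarithm of their counts, so a few
// very popular searches don't drown out everything else.
// Assumes every count is at least 1.
function tag_cloud($counts, $min_size = 10, $max_size = 32)
{
    $min   = log(min($counts));
    $max   = log(max($counts));
    $range = max($max - $min, 0.0001); // avoid dividing by zero

    $html = '';
    foreach ($counts as $tag => $count)
    {
        $size  = $min_size
               + ($max_size - $min_size) * (log($count) - $min) / $range;
        $html .= sprintf('<span style="font-size:%dpx">%s</span> ',
            round($size), htmlspecialchars($tag));
    }
    return $html;
}

// Example with made-up counts.
echo tag_cloud(array('Atta mexicana' => 42, 'Apis mellifera' => 17, 'Mus musculus' => 3));
?>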

Programmable Web


iSpecies makes it onto Programmable Web. This site has all sorts of useful information on Web 2.0 and mashups; see also Mashup Feed. Spotted by Simon Rycroft.

Sunday, January 29, 2006

Automatic extraction of references from a paper

One goal for iSpecies would be integrating taxonomic literature into the output. This has been motivated by Donat Agosti's efforts to make the taxonomic literature for ants available (see his letter to Nature about copyright and biopiracy doi:10.1038/439392a). For example, we can take a paper marked up in an XML schema such as the TaxonX Treatment Markup, extract the treatments of a name, and insert these into a triple store that iSpecies can query. For a crude example search iSpecies for the "Google ant" Proceratium google.

Now, marking up documents by hand (which is what Donat does) is tedious in the extreme. How can we automate this? In particular, I'd like to automate extracting taxonomic names, and references to other papers. The first can be facilitated by taxonomic name servers, particularly uBio's FindIT SOAP service. Extracting references seems more of a challenge, but tonight I stumbled across ParaCite, which looks like it might do the trick. There is Perl code available from CPAN (although when I tried this on Mac OS X 10.3.9 using cpan it failed to build) and from the downloads section of ParaCite. I grabbed Biblio-Citation-Parser-1.10, installed the dependencies via cpan, then built Biblio::Citation::Parser, and so far it looks promising. If references can be readily extracted from taxonomic markup, then this tool could be used to extract the bibliographic information and hence we could look up the references, both in taxon-specific databases such as AntBase, but also in Google Scholar.

Wednesday, January 25, 2006

Google Maps Mania: North American Bird Watching Google Maps Mashup


Google Maps Mania: North American Bird Watching Google Maps Mashup notes the very slick combination of Google Maps and Flash to display ranges of North American birds at GeoBirds.com. The mashup uses data from the USGS Breeding Bird Survey and the Audubon Society's Christmas Bird Count.

Monday, January 23, 2006

Google, Yahoo, and the death of taxonomy?

I posted this on my iPhylo blog, but since it is more relevant to iSpecies, and indeed the talk is the reason I built iSpecies, maybe it belongs here (see, I'm so self-absorbed I've started to blog my blogs - sad).

On Wednesday December 7th I gave a talk at the Systematics Association's AGM in London, with the slightly tongue-in-cheek title Google, Yahoo, and the end of taxonomy?. It summarises some of the ideas that led me to create iSpecies.org.

For fun I've made a Quicktime movie of the presentation. Sadly there is no sound. Be warned that if you are offended by even mild nudity, this talk is not for you.

The presentation style was inspired by Dick Hardt's wonderful presentation on Identity 2.0.

Friday, January 20, 2006

Identifiers for publications

Despite my enthusiasm for LSIDs, here are some thoughts on identifiers for publications. Say you want to set up a bibliographic database. How do you generate stable identifiers for the contents?

There's an interesting -- if dated -- review by the National Library of Australia.

The Handle System generates Globally Unique Identifiers (GUIDs), such as hdl:2246/3615 (which can be resolved in Firefox if you have the HDL/DOI extension). Handles can also be resolved with URLs, e.g. http://digitallibrary.amnh.org/dspace/handle/2246/3615 and http://hdl.handle.net/2246/3615. DSpace uses handles.

DOIs deserve serious consideration, despite costs, especially if the goal is to make literature more widely available. With DOIs, metadata will go into CrossRef, and publishers will be able to use that to add URLs to their electronic publications. That means people reading papers online will have immediate access to the papers in your database. Apart from cost, copyright is an issue (is the material you are serving copyrighted by somebody else?), and recent papers will already have DOIs. Having more than one DOI is not ideal.

If Handles or DOIs aren't what you want to use, then some sort of persistent URL is an option. The content can be generated dynamically even though the URLs look static. For background see Using ForceType For Nicer Page URLs - Implementing ForceType sensibly and Making "clean" URLs with Apache and PHP. To do this in Apache you need a .htaccess file in the web folder, e.g.:

# AcceptPathInfo On is for Apache 2.x, don't use for Apache 1.x
<Files uri>
# AcceptPathInfo On
ForceType application/x-httpd-php
</Files>

You need to ensure that .htaccess can override FileInfo, e.g. have this in httpd.conf:

<Directory "/Users/rpage/Sites/iphylo">
Options Indexes MultiViews
AllowOverride FileInfo
Order allow,deny
Allow from all
</Directory>

This would mean that http://localhost/~rpage/iphylo/uri/234 would execute the file uri (which does not have a PHP extension). The file would look something like this:

<?php

// Parse URL to extract URI
$uri = $_SERVER["SCRIPT_URL"];
$uri = str_replace ($_SERVER["SCRIPT_NAME"] . '/', '', $uri);

// Check for any prefixes, such as "rdf" or "rss" which will flag the
// format to return
// Check that it is indeed a URI
// Lookup in our database
// Display result
?>
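
To give a flavour of the rest, the lookup-and-display part might go something along these lines. This is only a sketch; the prefix convention, the in-memory "database", and the record fields are all invented for illustration:

<?php
// Continuation of the sketch above. An optional prefix (e.g. uri/rdf/234)
// selects the return format; $uri comes from the code in the previous block.
$format = 'html';
if (preg_match('/^(rdf|rss)\//', $uri, $matches))
{
    $format = $matches[1];
    $uri    = substr($uri, strlen($matches[1]) + 1);
}

// Stand-in for the real database lookup.
$publications = array(
    '234' => array('title' => 'An example paper', 'year' => 2005)
);

if (isset($publications[$uri]))
{
    $record = $publications[$uri];
    if ($format == 'rdf')
    {
        header('Content-type: application/rdf+xml');
        // ... serialise $record as RDF here ...
    }
    else
    {
        echo '<h1>' . htmlspecialchars($record['title']) . '</h1>';
    }
}
else
{
    header('HTTP/1.0 404 Not Found');
    echo 'Unknown identifier';
}
?>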

Lastly, ARK is another option, which is essentially a URL but it copes with the potential loss of a server. It comes from the California Digital Library. I'm not sure how widely this has been adopted. My sense is it hasn't been, although the Northwest Digital Archives is looking at it.

If cost and hassle are a consideration, I'd go for clean URLs. If you wanted a proper bibliographic archive system I'd consider setting up a DSpace installation. One argument I found interesting in the Australian review is that Handles and DOIs resolve to a URL that may be very different to the identifier, and if people copy the URL in the location bar they won't have copied the GUID, which somewhat defeats the point. In other words, if they are going to store the identifiers, say in a database, they need to get the identifier, not the URL.

Creative Commons Welcome Pack


Creative Commons Welcome Pack
Originally uploaded by Carlfish.
I want one!

Using RSS feeds to notify clients when data changes

I was iChatting with David Remsen earlier this morning, and he suggested using RSS feeds as a way of data providers "notifying" clients that their data has changed (e.g., if they've added some new names). Nice idea, and it frees the client (such as a triple store) from having to download the entire data set every time, or to compute the difference between the data held locally and the data held by the remote source. It turns out that HTTP conditional GET can be used to tell if something has changed.

So, the idea is that a data source timestamps its data, and when data is modified it adds the modified records to its RSS feed. The data consumer periodically checks the RSS feed, and if it has changed it grabs the feed and stores the new data (which, in the case of a triple store, can be easily parsed into a suitable form).
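
A minimal sketch of the consumer side (the feed URL is hypothetical): keep the Last-Modified value the server sent after the previous fetch, send it back in an If-Modified-Since header, and only bother parsing the feed when the answer isn't 304 Not Modified.

<?php
// Sketch: conditional GET of a data provider's RSS feed. $last_modified is
// whatever Last-Modified value we stored after the previous successful fetch.
function fetch_if_changed($url, $last_modified)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if ($last_modified != '')
    {
        curl_setopt($ch, CURLOPT_HTTPHEADER,
            array('If-Modified-Since: ' . $last_modified));
    }
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($code == 304)
    {
        return false;   // nothing new since last time
    }
    return $body;       // new or changed feed to parse and store
}

$feed = fetch_if_changed('http://example.org/names/rss.xml',
    'Sat, 02 Dec 2006 10:00:00 GMT');
?>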

Tuesday, January 17, 2006

Says it all...


Why I avoid discussions of standards like the plague...

Monday, January 16, 2006

EXIF tags


Some images come with embedded metadata, such as EXIF tags or XMP. Images from AntWeb are a good example. These tags can be viewed by various programs, such as Adobe Photoshop, or utilities such as EXIF Viewer, seen here.

So, an obvious step (assuming we start using a triple store as a backend for iSpecies, and/or provide the results of a query in RDF) would be to extract metadata from EXIF tags. For example, the image http://www.antweb.org/images/casent0100367/casent0100367_p_1_low.jpg of Proceratium google in AntWeb has the following metadata:


File name: casent0100367_p_1_low.jpg
File size: 17811 bytes (0x0, infbpp, 0x)
EXIF Summary:

Camera-Specific Properties:

Camera Software: EXIFutils V2.5.7
Photographer: April Nobile

Image-Specific Properties:

Image Created: 2005:09:27 09:54:34
Comment: Attribution-NonCommercial-ShareAlike Creative Commons License

Other Properties:

Exif IFD Pointer: 196
Exif Version: 2.20


Hence, we could extract the relevant bits (author, date, copyright) and store those. This could be done in bulk using a tool such as ExifTool.
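
PHP's exif extension can do the same thing on a per-image basis. A sketch (which tag names actually appear depends on the software that wrote them, so the keys below are guesses that would need checking against real AntWeb images):

<?php
// Sketch: read EXIF metadata from a local JPEG with PHP's exif extension
// (requires the extension to be enabled).
$exif = @exif_read_data('casent0100367_p_1_low.jpg', 0, true);

if ($exif !== false)
{
    // Likely places for the fields we care about; all of these are
    // assumptions about where EXIFutils puts them.
    $photographer = isset($exif['IFD0']['Artist'])
        ? $exif['IFD0']['Artist'] : '';
    $created = isset($exif['EXIF']['DateTimeOriginal'])
        ? $exif['EXIF']['DateTimeOriginal'] : '';
    $licence = isset($exif['COMMENT'])
        ? implode(' ', $exif['COMMENT']) : '';

    echo "Photographer: $photographer\n";
    echo "Created:      $created\n";
    echo "Licence:      $licence\n";
}
?>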

The example of AntWeb does show one weakness of free-text metadata. The image is licensed under "Attribution-NonCommercial-ShareAlike Creative Commons License". I'm assuming this is Attribution-NonCommercial-ShareAlike 2.0, but without a URL it is a faff to work this out. Ah, looking at the AntBase pages for individual specimens, it's actually 1.0. Yes, it's pretty obvious, but it still requires string matching. These things need to be computer readable as well, and versioned (for example, which version of this license was intended?).



For the photographer (such as AntWeb's April Nobile - seen here), it might be useful to create a FOAF file to link to, so that we have metadata about the creator of the images.

Monday, January 09, 2006

From the Blogosphere

Some interesting comments on iSpecies and related matters:

Hip Hop Offers Lessons on Life Science Data Integration
(Great title)

Ontogeny: iSpecies.org and Census of Marine Life Update
I searched on Solenopsis invicta and thought the return information was great.


open data and open APIs enable scientific mashups

The biodiversity community is one group working to develop such services. To demonstrate the principle, Roderic Page of the University of Glasgow, UK, built what he describes as a "toy" — a mashup called Ispecies.org (http://darwin.zoology.gla.ac.uk/~rpage/ispecies). If you type in a species name it builds a web page for it showing sequence data from GenBank, literature from Google Scholar and photos from a Yahoo image search. If you could pool data from every museum or lab in the world, "you could do amazing things", says Page.

Loomware

This is too cool - some interesting examples of "Scientific mashups" from Richard Akerman. I followed the iSpecies link and was blown away by the data that was returned. I studied a damselfly called Argia vivida for my graduate degree and way back then finding data on the bug was not always easy. Searching the species in iSpecies bring up a TaxID number to the NCBI Taxonomy Browser, a list of papers from Google Scholar and images from Yahoo Images. This is an excellent example of what we have all been waiting for with the promise of the Web and web services in particular. The concept of a page for every species known is a dream come true for science, so this one is worth watching.

Friday, January 06, 2006

AntWeb-Google Earth Map


AntWeb- Google Earth Map
Originally uploaded by GISuser.com.
This is a nice example of the kind of thing that can be done when georeferenced specimen data are readily available. Need to think about doing this for iSpecies.

For more on AntWeb and Google Earth visit here.

Wednesday, January 04, 2006

Nature on mashups

Nature has an article by Declan Butler on "mashups", which mentions iSpecies, and also Donat Agosti's work on AntWeb and Antbase.