Posts tagged SHARE

OWL Domain Models as Abstract Workflows

The publication from the Wilkinson lab, OWL Domain Models as Abstract Workflows, describes how we use SADI and SHARE to generate a workflow based on a biological domain model. The upshot is that, by creating a biological model of a piece of data of interest, SHARE can cobble together a series of SADI services that will find/generate data that matches that model.

SPARQL Assist 0.1.3

SPARQL Assist 0.1.3 is up on Google Code. The code is still in SVN. This release fixes a few small bugs, but the big change is that there’s now a WAR distribution that includes a servlet that will load any ontology referenced in a FROM clause and make those terms available for auto-complete. This functionality was previously only available in the SADI extension to SPARQL Assist used by the CardioSHARE query client. Download the SPARQL Assist 0.1.3 servlet distribution.

As usual, if you find a bug, please report it at the SADI Google Code site or the SADI Google Group.

SPARQL Assist 0.1.2

New version of SPARQL Assist. See SPARQL Assist 0.1.3 for details.

SPARQL Assist 0.1.2 is up on Google Code. The code is in SVN now, too. This release fixes a bug where namespaces would not be suggested in the first position of triples in the WHERE clause. If you find a bug, please report it at the SADI Google Code site or the SADI Google Group.

Paper: SADI, SHARE, and the in silico scientific method

The SADI, SHARE, and the in silico scientific method paper by Mark D Wilkinson, Luke McCarthy, Benjamin Vandervalk, David Withers, Edward Kawas and Soroush Samadian is available online.

Leveraging of Semantic Web Services for Practical Application: Creating Transferable Methodology with SADI Case Studies

A presentation of several use cases for the SADI framework and the SHARE client by Alexandre Riazanov at AWOSS 2010.

SPARQL Assist Language-Neutral Query Composer

I’ll be presenting a demo of our SPARQL Assist Language-Neutral Query Composer at SWAT4LS 2010 in Berlin next week. The slides are available on SlideShare: SPARQL Assist Language-Neutral Query Composer.

Download the SPARQL Assist distribution. You can also explore the customized CardioSHARE-specific demo.

New version of SPARQL Assist.  See SPARQL Assist 0.1.2 for details.

SHARE & The Semantic Web - This Time it’s Personal!

Luke’s publication at OWLED 2010:

OWL: Experiences and Directions
Seventh International Workshop
San Francisco, California, USA
21-22 June 2010

The slides are available on SlideShare: SHARE & the Semantic Web — This Time it’s Personal.

CardioSHARE walkthrough

Take a look at this query, which can be executed in the experimental OWL 2 CardioSHARE client.

PREFIX rdf: <>
PREFIX patients: <>
PREFIX bmi: <>
SELECT ?patient ?bmi
WHERE {
  ?patient rdf:type patients:AtRiskPatient .
  ?patient bmi:BMI ?bmi
}

We’re going to walk through what the CardioSHARE client does when this query is executed. Apologies if the anthropomorphic, intentional phrasing bothers you, but it simplifies the language considerably.

  1. The client is initialized with an empty knowledge base backed by an OWL reasoner. In this case, we’re using Pellet because the query refers to a class that uses an OWL 2 construct that isn’t supported by the other reasoners available to us.
  2. The client examines the FROM clause and notices that the named graph is a URL. It fetches the URL — using content negotiation to request RDF/XML — and stores the result in its knowledge base.
  3. The client attempts to order the query clauses so as to minimize the number of service calls and the amount of data that must be transferred over the network. For more detail on the query optimization process, consult Ben Vandervalk’s Master’s thesis.
  4. In this particular case, the client processes the ?patient rdf:type patients:AtRiskPatient clause first. This is an rdf:type clause, so the client assumes the object is the URI of an OWL class. There is no information in the client’s knowledge base about the AtRiskPatient class, so the client fetches the class URI using content negotiation as above. If the class URI was not also a URL (and so couldn’t be fetched) it would have to be defined in a document specified in a FROM clause.
  5. The client decomposes the AtRiskPatient class into its component restrictions. In this case, there is only one restriction: that some values of the property BMI are greater than 25.
  6. The client queries the SADI registry for services that can attach the BMI property. It finds one service, calculateBMI.
  7. The client would like to use the candidates for the ?patient variable as input to the calculateBMI service, but this is the first time it has encountered that variable and there are no candidates. That being the case, the client examines its knowledge base for instances of the service’s input class, loading the class definition if necessary as above. SADI requires that input and output classes are identified by URLs that resolve to the appropriate definition, so we know this will work. The client adds the instances it finds to the candidates for the ?patient variable.
  8. The client invokes the calculateBMI service, using the candidates of the ?patient variable as input. It assembles the minimal RDF needed to satisfy the service’s input class definition for each input and POSTs that RDF to the service URL. The RDF that the service returns is added to the client’s knowledge base.
  9. The client moves on to process the ?patient bmi:BMI ?bmi clause. It queries the SADI registry for services that can attach the BMI property and finds the same service calculateBMI as above.
  10. The client invokes the calculateBMI service, using the candidates of the ?patient variable as input. Actually, no it doesn’t, because I glossed over a part of the procedure the last time the client did this: when it invokes a service, the client tracks which individuals it sent to that service; before it assembles the RDF that it’s going to send to a service, the client excludes any individuals it has already sent. So what actually happens here is that the client excludes all of the individuals it was going to send and just moves on.
  11. At this point, the client has run out of query clauses to process, so it turns the populated knowledge base over to a conventional SPARQL query engine that executes the original query.
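Two of the mechanics above — fetching RDF with content negotiation (steps 2 and 4) and excluding individuals that were already sent to a service (steps 8 and 10) — can be sketched in a few lines. This is a minimal illustration, not the real client (which is a Java application); the names `fetch_rdf`, `filter_new_inputs`, `invoke_service`, and `assemble_rdf` are hypothetical, and a caller would supply `assemble_rdf` to build the minimal RDF/XML satisfying the service’s input class.

```python
import urllib.request

def fetch_rdf(url):
    """Resolve a URL with content negotiation, asking for RDF/XML (steps 2 and 4)."""
    req = urllib.request.Request(url, headers={"Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as response:
        return response.read()

def filter_new_inputs(sent_registry, service_url, individuals):
    """Exclude individuals already sent to this service (the bookkeeping in step 10).

    sent_registry maps a service URL to the set of individual URIs already POSTed.
    Returns only the individuals this service hasn't seen, recording them as sent."""
    sent = sent_registry.setdefault(service_url, set())
    new = [i for i in individuals if i not in sent]
    sent.update(new)
    return new

def invoke_service(sent_registry, service_url, individuals, assemble_rdf):
    """POST minimal input RDF to a SADI service and return the RDF it responds
    with (steps 8 and 10)."""
    new = filter_new_inputs(sent_registry, service_url, individuals)
    if not new:
        return None  # every candidate was already sent, so skip the call entirely
    req = urllib.request.Request(
        service_url,
        data=assemble_rdf(new),
        headers={"Content-Type": "application/rdf+xml"},
    )
    with urllib.request.urlopen(req) as response:
        return response.read()  # RDF to merge into the client's knowledge base
```

With this bookkeeping, the second call in step 10 naturally becomes a no-op: every candidate for `?patient` was already sent to calculateBMI in step 8, so `filter_new_inputs` returns an empty list and no request is made.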

Questions? Comments? Pop over to the CardioSHARE Google Group.