
Thursday, May 10, 2012




Congrats Eric, I've been finding the LDBP and related OSLC work most interesting of late. IMHO it would definitely be worth keeping in touch with the RWW CG during this effort, since they are covering things like ACLs and patching RDF data - and they also have several projects using these approaches, including ones which are tied in with OSLC work via Sebastian.




False dichotomies avail nothing.

BI data integration is a point along a spectrum that also includes AI-like ontology inference.

In fact, in our work for customers and in our software product development in this area going back to 2004, we repeatedly observe and exploit a virtuous feedback cycle between using the RDF-SPARQL-OWL family of technologies to perform what you call "BI-like data integration" and then to perform "AI-like ontology/inference"...which invariably drives a new requirement for *more* BI-like data integration. And so on.
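The feedback cycle described above can be sketched in a few lines of plain Python. This is a toy, not a real triple store, and every identifier in it (`crm:`, `erp:`, `acme:alice`, `sku:42`) is invented for illustration: merge triples from two sources (the "BI-like integration" step), apply a single rdfs:subClassOf-style inference rule (the "AI-like" step), then ask a question over the enriched graph whose answer tends to demand yet more source data.

```python
# Toy illustration of the integrate -> infer -> integrate cycle.
# Triples are plain (subject, predicate, object) tuples; all names
# here are hypothetical, not from any real vocabulary.

crm_triples = {
    ("acme:alice", "rdf:type", "crm:Customer"),
    ("acme:alice", "crm:boughtProduct", "sku:42"),
}
erp_triples = {
    ("sku:42", "rdf:type", "erp:DiscontinuedProduct"),
}

# Step 1: BI-like integration -- merge the two sources into one graph.
graph = crm_triples | erp_triples

# Step 2: AI-like inference -- one rdfs:subClassOf-style rule:
# every erp:DiscontinuedProduct is also a prod:Product.
subclass_of = {"erp:DiscontinuedProduct": "prod:Product"}
inferred = {
    (s, p, subclass_of[o])
    for (s, p, o) in graph
    if p == "rdf:type" and o in subclass_of
}
graph |= inferred

# Step 3: a query over the enriched graph -- which customers bought
# a discontinued product? Answering it well typically surfaces the
# need to pull in (integrate) still more data.
affected = {
    s
    for (s, p, o) in graph
    if p == "crm:boughtProduct"
    and (o, "rdf:type", "erp:DiscontinuedProduct") in graph
}
print(sorted(affected))  # ['acme:alice']
```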

In other words, semtech is badass because it powers integration *and* analytics. It will be less badass if people continue to insist upon rhetorical strategies and standardization moves that lop off half of the value proposition. Integration *and* analytics is better than integration alone.

In my view, overplaying and over-emphasizing this distinction is a common mistake and a bad approach. It's especially troubling if the distinction is improperly reflected in the charter of this new WG.

We're a vendor that has product offerings along many points of this spectrum and our ability to implement the standards that come out of this WG is bounded by the degree to which the WG's work enables *all* points along the spectrum, not just some of them.


i guess you were misreading my post a little bit, @kendall. i was trying to be clear (apparently not clear enough) that i am simply talking about how to design and implement interactions, not about the data model itself (quote: "while it will be important to make sure that the data model itself is maintained at the payload level, many of the issues listed in the charter can be readily addressed by choosing existing and well-established standardized components").

linked data is (as the name implies, i'd argue) a data model, whereas REST is all about driving interactions with a certain style of exchanging representations. look at atom and atompub and you can see how there are control structures (the syntax and semantics of atom's data model and the associated interactions in atompub), but the data model is mostly a question of the application itself. i would bet that a tiny fraction of atom/atompub services actually use atom's "data model" (which is very simple) internally. instead, they combine their (often very complex and sophisticated) back-end data model and services with an established and well-supported interaction model, thereby getting the most out of their sophisticated data handling, and atom's availability as a standard.

trying to reinvent these mechanisms in a triple-only world not only would be a waste of time and effort, it also would disconnect that recreated ecosystem from the vast ecosystem of services and tools that are out there. linked data and all the wonderful things that can be done with it need to get more connected to those 99% of the world that are not linked data, and i think we have an excellent opportunity to move in that direction, should we choose to take the path of pragmatism over purism.
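The payload-vs-interaction split this comment describes can be sketched in a few lines of stdlib Python. The record fields and the `urn:example:` identifier below are invented for illustration: the application's internal record can be any shape it likes, and only at the edge is it serialized into an Atom entry, which the standard AtomPub interaction (POST to a collection, 201 Created plus a Location header) then carries unchanged.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def to_atom_entry(record: dict) -> bytes:
    """Serialize an application-internal record (whatever its shape)
    into a minimal Atom entry -- the payload AtomPub interactions carry."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    ET.SubElement(entry, f"{{{ATOM_NS}}}id").text = record["id"]
    ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = record["title"]
    ET.SubElement(entry, f"{{{ATOM_NS}}}updated").text = record["updated"]
    content = ET.SubElement(entry, f"{{{ATOM_NS}}}content", type="text")
    content.text = record["body"]
    return ET.tostring(entry, encoding="utf-8", xml_declaration=True)

# The interaction itself is the standard AtomPub one, independent of
# how the back end models its data -- roughly (not executed here):
#
#   POST /collection HTTP/1.1
#   Content-Type: application/atom+xml;type=entry
#   ...entry as serialized above...
#
record = {
    "id": "urn:example:report-17",        # hypothetical identifiers
    "title": "Quarterly report",
    "updated": "2012-05-10T00:00:00Z",
    "body": "internal model, serialized only at the edge",
}
print(to_atom_entry(record).decode("utf-8"))
```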
