The new W3C Linked Data Platform (LDP) Working Group had its first informal gathering this week at the SemTech 2012 conference in San Francisco, and as I wrote when the group was officially announced, I think we have a great opportunity ahead of us. Personally, I think the most beneficial goal we can target is to open up the goodness of Linked Data to those who are not using it. This is why I wrote about making Linked Data services available to the 99% of users and applications that do not use, or even know about, Linked Data. The most important point is that they should not need to know Linked Data to benefit from what it can do. How can that possibly work?
It's simple: suppose I am interested in a service that does not just hand me raw data that has been collected somehow, but adds value through sensemaking machinery, for example by telling me that one datapoint appears to be correlated with another. As a user, the value lies in consuming the service that reports the correlation. Whether that service uses Linked Data, having aggregated all kinds of datasets and applied advanced reasoning, or whether it is based on Big Data and MapReduce or similar machinery, is irrelevant to me. I am not interested in how the service helps me with sensemaking; I am only interested in what it can do for me (i.e., find correlations).
In his WWW2012 keynote, Chris Welty highlighted (based on Watson) that semantic technology plays an important role in sensemaking and advanced information processing today. But it is not the only component, and thus has to be combined with other components from the growing arsenal of data processing tools. This point has come up consistently in practical case studies in recent years: Semantic Web technologies are relevant pieces in information processing ecosystems, but they need to be easy to combine with other pieces, so that solutions can be engineered as simply as possible.
Coming back to the LDP topic from the beginning (and I am making the following terms up): the question we will have to answer at some point is whether we are interested in building a platform of Linked Data Services, or one of Linked Data as a Service. The former are services that are easily consumable by the 99% and do not require the consumer to be part of the Linked Data world: they try to expose the value of Linked Data without exposing any implementation details. The latter are services that are interesting for the 1% engineering in the Linked Data world, who can benefit from having access to services firmly placed in that world: by building on top of shared design patterns, it becomes easier for them to compose and connect Linked Data scenarios.
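To make the distinction concrete, here is a minimal sketch (all names, URLs, and data are made up for illustration) of how one and the same resource could serve both audiences: plain JSON for consumers who never want to see RDF, and Turtle for those who want to work in the Linked Data world, selected by the representation the client asks for.

```python
# Hypothetical sketch: one resource, two audiences. The same data is
# served either as plain JSON (a "Linked Data Service" for the 99%)
# or as Turtle (part of "Linked Data as a Service" for the 1%).
import json

# Toy dataset: a correlation finding, stored in a format-neutral way.
FINDING = {
    "subject": "ice-cream-sales",
    "object": "sunburn-cases",
    "correlation": 0.87,
}

def render(finding, accept):
    """Pick a representation based on the media type the client asked for."""
    if accept == "text/turtle":
        # Linked Data as a Service: expose the data as RDF triples.
        return (
            "@prefix ex: <http://example.org/vocab#> .\n"
            f"ex:{finding['subject']} ex:correlatesWith ex:{finding['object']} ;\n"
            f"    ex:strength {finding['correlation']} ."
        )
    # Linked Data Service: plain JSON, no RDF knowledge required.
    return json.dumps(finding)

print(render(FINDING, "application/json"))
print(render(FINDING, "text/turtle"))
```

The point of the sketch is that the choice between the two is a matter of the service's surface, not of how the data was produced: the sensemaking behind `FINDING` stays hidden either way.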
I think that both scenarios make sense, and that both will be of value to the Linked Data community. But where REST shines is in describing the surface models of services: the what of a service, expressed as state-transferring interactions, without the how. As I wrote in my initial post, my understanding of the LDP effort is that this is what we are attempting: using REST, and thus loose coupling, to allow any client to tap into the value that can be delivered by Linked Data Services. I am sure that within the working group we will have diverging views on how to build the Linked Data Platform we are chartered to build. My guess is that the most value can be created by going the REST path of loose coupling and hiding implementation details, but I am also sure that we will have some interesting discussions when it comes to deciding which way to go. I am looking forward to these discussions, and the good news is that in both cases we will be doing work that is useful to a certain set of consumers.
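What loose coupling buys a client can be sketched in a few lines (again with made-up URLs and link names): instead of hardcoding server paths, the client starts at an entry point and follows links advertised in each response, so the server is free to change its implementation and URL layout without breaking anyone.

```python
# Hypothetical sketch of loose coupling via hypermedia: the client knows
# only the entry point and the meaning of link relations, never the URL
# layout. All names and URLs here are invented for illustration.

# Stand-in for a server: a map from URL to response, where each response
# carries data plus links the client may follow.
RESPONSES = {
    "/": {"links": {"findings": "/v2/findings"}},
    "/v2/findings": {"data": ["correlation-1", "correlation-2"], "links": {}},
}

def get(url):
    """Stand-in for an HTTP GET against the toy server above."""
    return RESPONSES[url]

def fetch_findings(entry_point="/"):
    """Follow the 'findings' link from the entry point; the client never
    needs to know that the server currently mounts it at /v2/findings."""
    root = get(entry_point)
    findings_url = root["links"]["findings"]
    return get(findings_url)["data"]

print(fetch_findings())
```

If the server later moves the collection to `/v3/findings`, only the entry-point response changes; the client code stays untouched, which is exactly the kind of implementation-hiding the REST path is meant to deliver.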