
Monday, March 12, 2012


Ferenc Mihaly

I'm wondering what you mean by "layering"? Obviously, saying that HTTP is "layered" on top of the REST architectural style has a very different meaning than saying that payload semantics is "layered" on top of a document format like XML. Perhaps because each of your layers describes a very different thing: architectural style, web protocol, data format, semantics, and application. This is very different from layered software architectures, where each layer is software, or a layered protocol stack, where each layer is a protocol.


@ferenc, you're right that maybe "layer" is too strong a term to use here, because it is often used in fairly well-defined and structured ways. feel free to propose better terminology, but regardless of the term, my main goal was to show that when you look at the design, implementation, and deployment of RESTful architectures, there are different activities at different points in time, and it might make sense to try to clearly separate these activities into different, well, "steps", "levels", "aspects", "issues"? it's hard to think of a term that does not have some specific meaning somewhere, but like i said, i am definitely open to suggestions.

Ferenc Mihaly

I think "aspects" or "concerns" would be better. We can talk about application architecture, core technologies, or application semantics independently from each other. I can say that I'll build a client-server application using a request-reply style protocol without mentioning what the application will do or what technology I'll use to build it. I can also describe message semantics before deciding whether to encode it as XML, JSON, or some other format.


@ferenc, i am fine with "aspects" or "concerns"; my main issue was not to claim that there is a clear layering structure as there is in other areas of IT architecture, but to point out that while at some initial design stage it makes sense to focus on the most fundamental aspects such as media types and link relation types, once you get closer to application and deployment issues/aspects/concerns, it actually may make sense to focus on aspects such as exposed URIs, and to document a service this way so that "follow your nose" style explorations can start from that starting point. if such a description were machine-readable, it would in essence just be another resource linked from a service's entry point, in the same way as a sitemap is (implicitly, because by URI-based convention) linked from a server's resources. at that point, the description/documentation format becomes just another media type.
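to make that idea concrete, here is a minimal sketch of what such a machine-readable description resource might look like; the JSON structure, field names, and URI templates are all invented for illustration and are not an actual ReLL or sitemap format:

```python
import json

# Hypothetical machine-readable service description, served as just
# another media type from the service's entry point. All names below
# are invented for illustration.
description = {
    "entry": "/",
    "resources": [
        {"type": "quote", "uri-template": "/quotes/{ticker}",
         "media-type": "application/json"},
        {"type": "exchange", "uri-template": "/exchanges/{code}",
         "media-type": "application/json"},
    ],
}

# A client can discover exposed URIs from this document instead of
# relying on out-of-band documentation; "follow your nose" exploration
# starts here.
doc = json.dumps(description, indent=2)
print(doc)
```

the point is only that once the description is itself a resource with a defined format, clients can start from it like they start from any other linked representation.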

Ferenc Mihaly

I read your 2010 paper "From RESTful Services to RDF" over the weekend. I think I better understand now what you are proposing. I think something like it could be very useful.

The hypermedia thinking about REST reminds me a bit of the early model of the Web, which expected users to browse from page to page using links. Today we have plenty of evidence that this does not work. The link click rates on the average web page are well below 1%, which of course means that without search engines, social media sites, and the like to keep generating traffic, the web would get a whole lot quieter within days. Even assuming that machines are much better at following links than humans, the browse model is questionable.

The analogy comparing your proposal to sitemaps confused me. Your article clearly talks about metamodels (Fig. 1) and seems to describe resource types, not resources; link relations, not individual links. To me, this means a higher level of abstraction than sitemaps, which still contain links to pages, not page types. Do we agree here?

I still think you should find a more suitable analogy, because the best way to send cold shivers down someone's spine is to talk to him about metamodels. I just don't think that a sitemap is a very good analogy for this.

While I was reading your article I kept thinking of search engines. Search engines are essentially an optimization for link navigation. Instead of following link after link trying to find information about Tutankhamun, there is a virtual link google.com/q=tutankhamun from which all the interesting pages are guaranteed to be only one click away.

Something like this will also be important for REST, because most REST resources are virtual, computed only on demand. It doesn't always make sense to list the links to all these resources in some response. I can see the ReLL service used in a similar way to a search engine. Say I want stock quotes for AAPL. Instead of trying to browse to it, I go one level of abstraction up, find the resource type for stock quotes, the list of services that offer this resource type, build a request for the resource with the ticker symbol AAPL, and get the result directly.

The alternative, by browsing, would be listing all the exchanges of the world, picking the browse-by-industry option, going down to technology, then to computer/software. Or, in AAPL's case, perhaps finding the Fortune 100 list and getting AAPL from there. Either way, browsing requires at least as much client intelligence as using something like ReLL, if not more.
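To illustrate the contrast, here is a rough sketch of the direct, search-engine-style lookup described above; the description structure, service URL, and template syntax are all invented for illustration and are not actual ReLL syntax:

```python
# Hypothetical catalog mapping resource types to the services that
# offer them, plus a URI template for building requests. All names
# and URLs below are invented for illustration.
descriptions = {
    "stock-quote": {
        "services": ["https://quotes.example.com"],
        "uri-template": "/quotes/{ticker}",
    },
}

def build_request_uri(resource_type, **params):
    """Go one level of abstraction up: look up the resource type,
    then expand its URI template with the given parameters."""
    entry = descriptions[resource_type]
    path = entry["uri-template"].format(**params)
    return entry["services"][0] + path

# Instead of browsing exchange -> industry -> company, the client
# jumps straight to the resource it wants.
print(build_request_uri("stock-quote", ticker="AAPL"))
# https://quotes.example.com/quotes/AAPL
```

The client still needs intelligence, but it is spent on understanding resource types and templates rather than on navigating a chain of intermediate pages.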

So am I anywhere near what ReLL is supposed to be? This is a long response, but your article wasn't a quick read either.
