one of the important value propositions of REST is loose coupling, which is a notoriously fuzzy term that means different things to different people, even though one can at least try to look at it in a slightly disciplined way. a simple definition of loose coupling may be that it allows independent evolution of service providers and consumers: services can evolve without breaking clients, and evolved clients can use earlier versions of services without breaking.
this of course is easier said than done, and it is one of the central challenges of any SOA approach: how to make a service well-defined, useful, and easy to use, and yet loosely coupled enough to reap the benefits of an evolving ecosystem instead of building a rigid system. there are various aspects to this, including topics such as how to do versioning.
assembling and combining robust design patterns for this in general would be a bit ambitious for a blog post... but we are following certain patterns internally, and while going through the exercise of willingly breaking stuff internally so that it doesn't break once we ship it as a product, it seemed to me that there might be quite a bit of value in trying hard to break REST clients.
the scenario is as follows: we design and expose services that we think follow good design patterns (such as having well-documented extension points in representations). now customers come along and ignore these extension points, because they are not actually used (yet), right? then a year or two later we start using them, customer code breaks, and guess who's taking the blame and the support calls?
so the idea would be to provide stress testing REST as a service (STRaaS?), so that customers would have an easier way to make sure their clients will probably not break. we cannot make these tests mandatory (well, we actually could by randomly switching STRaaS mode on in deployed services, but that may not make customers very happy), but we can make them compelling and beneficial to use. for example, using them may simply make the life of client developers easier, because they can be embedded in other server-side tooling that helps with client development.
when thinking about this idea, i was wondering what would be the things you would want to throw at those to-be-tested clients? here's an incomplete list, and i would be very interested to hear more opinions on what would be useful things to test for such an "i like the smell of broken clients in the morning" approach...
- break URI patterns: we're trying to serve pretty URIs, but of course that makes it tempting to assemble them client-side. by serving different patterns, clients doing that will directly go to 404 hell.
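to make the contrast concrete, here's a minimal sketch (the representation shape and names are hypothetical) of a brittle client that assembles URIs from a hard-coded pattern versus one that follows the link the service actually serves:

```python
def fragile_order_uri(base, order_id):
    # assembles the URI client-side from a pattern the server never promised;
    # lands in 404 hell as soon as the pattern changes
    return f"{base}/orders/{order_id}"

def robust_order_uri(representation, order_id):
    # follows the link (here: a simple URI template) served by the service,
    # so it survives a server-side change of URI patterns
    template = representation["links"]["order"]["href"]
    return template.replace("{order_id}", str(order_id))

# the service switched its pattern from /orders/{order_id} to /o/{order_id};
# only the link-following client still ends up at the right resource
home = {"links": {"order": {"href": "https://api.example.com/o/{order_id}"}}}
```

a STRaaS mode serving rotated URI patterns would immediately separate the two kinds of clients.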
- send representations that use all possible media type extension points to the fullest extent: send random link relations in generic linking constructs; send content in places where extensions are allowed; send new URI parameters in URI templates.
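the client-side counterpart of this test is tolerant reading: process the parts of a representation you understand and silently skip extension content. a small sketch (link relations and document shape are made up for illustration):

```python
KNOWN_RELS = {"self", "next"}

def extract_known_links(representation):
    # a tolerant reader: keep the link relations the client understands,
    # and ignore any relations the service added later via extension points
    return {rel: link
            for rel, link in representation.get("links", {}).items()
            if rel in KNOWN_RELS}

# "x-audit-trail" is a relation the service started serving a year later;
# a tolerant client simply doesn't see it, instead of choking on it
doc = {"links": {"self": "/things/1",
                 "next": "/things/2",
                 "x-audit-trail": "/audit/1"}}
```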
- every now and then, serve 5xx errors and see whether clients are still able to do at least something useful. do they crash and burn, or recover gracefully? do they have reasonable "let's try again a little later" strategies?
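one shape such a "try again a little later" strategy could take is a simple retry with exponential backoff; this sketch keeps the transport pluggable (any callable returning a status/body pair), so the strategy itself can be tested offline:

```python
import time

def get_with_retry(fetch, uri, attempts=3, backoff=0.1):
    # retry on 5xx responses with simple exponential backoff;
    # `fetch` is any callable returning (status, body)
    delay = backoff
    for attempt in range(attempts):
        status, body = fetch(uri)
        if status < 500:
            return status, body
        if attempt < attempts - 1:
            time.sleep(delay)
            delay *= 2
    # out of attempts: hand the last 5xx back to the caller
    return status, body
```

a real client would also want a cap on the total delay and some jitter, but even this much would pass the "serve 5xx every now and then" test.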
- serve new media types and check for the same ability to handle this as gracefully as the 5xx errors.
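handling this gracefully mostly means dispatching on the media type the service actually serves instead of blindly assuming one; a minimal sketch (the set of supported types is of course an assumption):

```python
import json

def parse_response(content_type, body):
    # dispatch on the served media type instead of assuming JSON;
    # unknown types degrade to an explicit "unsupported" result (None)
    media_type = content_type.split(";")[0].strip().lower()
    if media_type in ("application/json", "application/hal+json"):
        return json.loads(body)
    return None  # signal "i can't handle this", rather than crashing
```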
- terminate the TCP connection occasionally and see how clients handle that. it always amazes me to see smartphone apps that generate all kinds of wrong error messages, when clearly the problem is just connectivity.
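the fix for those wrong error messages is mostly just distinguishing transport-level failures from everything else before deciding what to tell the user; a tiny sketch (the message wording is obviously made up):

```python
def describe_failure(exc):
    # map transport-level failures to an honest "connectivity" message
    # instead of blaming the server or the user for something else
    if isinstance(exc, (ConnectionError, TimeoutError)):
        return "network problem, please check your connection and retry"
    return "unexpected error, please try again later"
```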
like i said, this list very likely is incomplete. it might be interesting to make it a little more complete, and then think about how much of it could actually be added relatively easily to server-side frameworks, so that they would have a test mode switch to make them do all these nasty things...
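for one of the nasty things, here's what such a test mode switch could look like in a server-side framework: a hypothetical WSGI-style wrapper that, when enabled, replaces a fraction of responses with a 500, exercising the client's recovery path (names and the failure rate are assumptions, and the other list items would be wrappers of the same shape):

```python
import random

def chaos_middleware(app, chance=0.1, seed=None):
    # wrap a WSGI app; with the test mode switch on, roughly `chance`
    # of all responses are replaced by a deliberate 500
    rng = random.Random(seed)
    def wrapped(environ, start_response):
        if rng.random() < chance:
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/plain")])
            return [b"chaos mode: deliberate failure"]
        return app(environ, start_response)
    return wrapped
```

the same wrapping idea extends to rotating URI patterns, injecting extension content, switching media types, or dropping connections.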
Very good and valid points. Breaking URI schemes is probably almost sufficient to break most clients. Reminds me of the Chaos Monkey and the whole simian army of Netflix. :-)
Posted by: graste | Thursday, September 12, 2013 at 12:42
yup, i guess breaking URIs on purpose already will wreak havoc on many clients. an ideal client might bookmark a URI for more efficient operations, but it should still be able to re-navigate to that resource from whatever the service says its home document location is, and then refresh the bookmark. it would be interesting to know how many clients would be able to do just that. probably not all that many...
Posted by: dret | Thursday, September 12, 2013 at 15:49