Links are for Humans Only? I Don't Think So.
After referencing Paul Downey’s great EPR-on-a-bus picture, Jonathan Marsh writes:
For human interaction, for searching and cataloging based on simulating human navigation patterns, pages full of links are good. Especially when the content of that page isn’t terribly useful it’s nice to be able to click virtually at random and escape into the soothing world of advertising pitches. But for a particular user, the majority of links in a page never get used. What is a machine going to do with a bunch of random links? In machine-to-machine communication, links are undoubtedly going to be much fewer in quantity, but much higher in quality. And if the service is doing something useful on behalf of a user that doesn’t require the transmission of lots of links, is that therefore a bad service? A service should return precisely the number of links necessary for it to do useful work. No more, and no less.
This is in response to Nick Gall’s claim:
Nowhere in the vast multitude of WS-* specifications or the articles or papers describing them is there any imperative or even any emphasis that a Web Service should return an XML document that is populated with references to other Web resources, i.e. URIs. But it is a fundamental principle of the Web that good Web resources don’t “dead end” the Web; instead, they return representations filled with URIs that link to other Web resources.
I totally fail to see how Jonathan brings up any argument against this — as long as Web services don’t actually add something more to the Web than a single endpoint per service, they are not “on the Web”, but are indeed dead ends.
For a good example of links being used in a machine-processable way, see Atom Service documents.
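A minimal sketch of such a service document (the workspace, titles and URIs below are made up) shows that it is essentially nothing but typed links to collections a client can go on to work with:

```xml
<?xml version="1.0" encoding="utf-8"?>
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Example Blog</atom:title>
    <!-- A client discovers these collections by following the href
         links, not by constructing URIs out of band. -->
    <collection href="http://example.org/blog/entries">
      <atom:title>Entries</atom:title>
    </collection>
    <collection href="http://example.org/blog/media">
      <atom:title>Media</atom:title>
      <accept>image/png</accept>
    </collection>
  </workspace>
</service>
```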
“Semantics are a byproduct of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI—they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI.”
Web applications can also try to understand the links, although they do not need to; this requires that the meaning of those links be agreed upon in advance. As you pointed out, this can also be done in XML for machine-to-machine interaction, e.g. Atom publishing; the condition is that the application accessing the XML knows the Atom protocol. Similarly, web services that support WS-Addressing are not dead ends: they can reply with ‘links’ to other services. Again, the condition is that the service consumers know WS-Addressing.
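A WS-Addressing endpoint reference is that kind of ‘link’; a minimal sketch (the address and reference parameter below are invented):

```xml
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <!-- The “link”: where the next interaction should be directed. -->
  <wsa:Address>http://example.org/orders</wsa:Address>
  <!-- Opaque data the consumer echoes back; meaningful only to the
       service that issued the EPR. -->
  <wsa:ReferenceParameters>
    <o:OrderId xmlns:o="http://example.org/orders">12345</o:OrderId>
  </wsa:ReferenceParameters>
</wsa:EndpointReference>
```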
I’m not against “good examples” like Atom Service documents. And don’t get me wrong, links are links.
My point is that Nick seems to be conflating pages with data blobs. A page without links is indeed a dead end. I don’t think that same metric applies to data blobs.
For example, if I build a stock quote service which takes a stock symbol (a string) and returns a price (a decimal), there are no URIs exchanged. Doesn’t Nick imply that this service is bad because it’s a dead-end? What if I implemented the service RESTfully? How would this be any different? Maybe I’d use a URI instead of a token (stock symbol) to identify the stock. But that is equally true in both SOAP and HTTP.
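For concreteness, the whole exchange might look roughly like this (the element names and namespace are invented); there is nothing in it for a consumer to follow:

```xml
<!-- Request: a symbol goes in... -->
<getQuote xmlns="http://example.org/quotes">
  <symbol>MSFT</symbol>
</getQuote>

<!-- ...and a number comes out. No URI travels in either direction. -->
<getQuoteResponse xmlns="http://example.org/quotes">
  <price>27.35</price>
</getQuoteResponse>
```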
The number of links in a resource representation is indicative of whether the resource is designed for human or machine consumption, not of whether it is RESTful.
I think I understand your point, I just happen to disagree with it :-) Embedding links in representations enables consuming applications to follow them instead of constructing URIs or relying on out-of-band metadata. This means that servers can change, that redirects work, and, most importantly, that the resources referenced by these links can be reused in different scenarios. An application that “contributes” a link for each of its major resources adds a lot more value to the Web than one that hides them behind its own, service-specific interface.
I don’t think this is related to whether it’s a human or a machine that consumes them; I agree that this plays a role where “meaningful URIs” are concerned.
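To pick up the stock quote example: a representation along the following lines (all URIs and link relations are invented for illustration) gives a consuming application something to follow, and lets the referenced resources be reused in other scenarios:

```xml
<quote xmlns="http://example.org/quotes"
       xmlns:atom="http://www.w3.org/2005/Atom">
  <symbol>MSFT</symbol>
  <price currency="USD">27.35</price>
  <!-- Typed links: consumers follow these instead of constructing URIs,
       so the server is free to move or redirect the resources. -->
  <atom:link rel="self" href="http://example.org/quotes/MSFT"/>
  <atom:link rel="history" href="http://example.org/quotes/MSFT/history"/>
  <atom:link rel="company" href="http://example.org/companies/MSFT"/>
</quote>
```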
Jonathan wrote:
The number of links in a resource representation is indicative of whether the resource is designed for human or machine consumption, not of whether it is RESTful.
This is an extremely sophomoric understanding of the Web, Jonathan. At the very core of RESTful design is the constraint that hypermedia be the engine of application state. That means links drive interaction, be it human or machine. I consider this the absolute most interesting aspect of REST.
In other words, WTF are you talking about?! It’s baffling to me that someone could have done any research at all on REST and then make the statement that “the number of links in a resource representation is indicative of whether the resource is designed for human consumption.” Uggh. It hurts my heart. These are fundamental, first principles!