09/20/08 :: [REST] Stu's Response [permalink]


Stu responded to my posts, and I just want to publish his response here as it is easier to read. So the text below is Stu's text:

Response to the post on RESTpolitik:

JJ,

First, I will repeat a couple of points I've made before to you:

1. You opened your first paragraph with an insult, and continue to pepper your entries with ad hominem insults. JJ, I'm sorry if the politics is getting to you, but if you care at all about getting your point across, you've been going about it in all the wrong ways.

Skimming over the past few months of posts you've made (I haven't read them all in detail, it would take a long time), you almost remind me of Nietzsche, screaming into the abyss, waiting for it to scream back at you.

2. No one of repute is suggesting that "REST is all you need". The suggestion is that REST provides a better foundation for many enterprise information system use cases than does one based on basic message exchange or RPC.

There continues to be much work to provide industry infrastructure and tooling to make this palatable to many.

Onto your specific points:

1. I think you are misreading Pete Lacey's intentions. He was not "hired" to make REST happen: he's not even an analyst! He was a consultant. Further, he made a number of very valid arguments. His most famous post, "The S Stands for Simple", actually had nothing to do with REST, but described the crazed politics of the SOAP and WSDL world that led to the flawed technology we have today.

2. Tim Bray speaks very plainly about topics he believes in. He has been at the beginning of many technologies we now rely on, and thus is an important voice. That said, he doesn't necessarily have an opinion or understanding of the business side (some might say, "enterprise architecture") of IT. He speaks as a senior technical engineer. Yes, his voice has political impact, and you're right to challenge him on it if you don't agree. I do think, however, you've gone about it in a self-destructive manner.

3. Regarding all the stuff about "realpolitik" in standards bodies. Much of what you say resonates as true. But I'd say that it's pretty much human nature: standards wind up often being proxy battles for larger power struggles. Sometimes they can be about "rough consensus and running code", but especially in areas where money is in the air, it's hard.

4. The big difference with the REST community is that I believe you have the power struggle backwards. REST was the incumbent player, being the architecture of the web. The Web Services push was a big vendor power play trying to co-opt what already existed on the Web.

If you want to talk politics, I would argue that SOAP was pushed in the 1999-2000 timeframe to give Microsoft a way to change the techno-political debate away from EJB vs. COM+ (because EJB, and Java, was winning). Instead of 'portability' it was now all about 'interoperability'. There was a kernel of truth to all of this -- would the world prefer one language and virtual machine, or still have many languages and runtimes? But it was in their interests to push the alternative while they finished the COM+ Runtime 1.0 (which became known as the .NET common language runtime).

As for .NET vs. Java interoperability? .NET didn't even exist when SOAP was introduced, let's talk VB6 vs. Java! One could do that with just an XML serialization format and HTTP. I built financial systems back in 1999-2000 in this manner; all that SOAP and WSDL gave you were an IDL and generated stubs that made you feel like you were back in COM or CORBA.

Caught in the middle was the Web architecture, which was deemed irrelevant by those fighting the battles, because they were fighting over component software standards. That hypermedia might be a new way of thinking wasn't even considered (until Google became a threat).

REST was a bottom-up change that took nearly 5 years on blogs, mailing lists, and standards bodies to get traction, and nearly 7 years before it became accepted, mostly at the expense of Mark Baker's health and bank account. Through this time, REST was usually sidelined in the standards bodies (other than, perhaps, the W3C TAG). It was repeatedly misunderstood, misrepresented, and ridiculed by the powers pushing their CORBA-with-angle-brackets approach.

If you perceive some members of the REST community playing hardball technology politics, it was because they were taught by the best of them.

On your last paragraph, here's my opinion: I continue to think that most of your arguments against REST are a failure of creativity and imagination on your part, and not because REST is fundamentally flawed for enterprise systems. Instead of saying "REST can't do X", why don't you look around and ask yourself "how would I do X with hypermedia"? If you have real concerns about REST (and I think there are a couple that warrant deeper exploration, such as modeling business interactions with hypermedia), you would be well suited to stop fighting the politics and just try to improve the technology. It plays more to your strengths. Continuing down this course, you're just screaming at the abyss.

Response to the post on "Wag the Dog"

1. Tim Berners-Lee's requirement was that the web be a global information space. He didn't design all of the aspects of enabling easy machine-processability up front, but for the past 10 years (i.e. since before Roy published his thesis) he has been promoting the Semantic Web as the extensions to make this work.

http://en.wikipedia.org/wiki/Semantic_Web

"I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize."

2. The REST community is not claiming that HTTP solves all of your problems. The argument is that distributed hypermedia is a general purpose architecture and that REST provides a style that enables its global scale.

3. Regarding boundaries. Your evaluation is not correct.

Firstly, arbitrary boundaries absolutely are described all the time -- through hypermedia links. The OpenID specification, for example, creates a fairly useful approach to exchanging authenticated identity.

Secondly, submitting machine-readable POs across network authorities requires sharing a hypermedia format for doing just that sort of thing. For example, there's nothing inherently wrong with something like EDI or OAGIS as a standardized data format. Even EDI-INT AS2 leverages fairly RESTful approaches to message confidentiality and integrity (i.e. S/MIME). The problems occur when, for example, EDI-INT AS2 tunnels the old EDI exchange message types over HTTP and URIs instead of actually USING the web protocols the way they were intended to be used. They could loosen their coupling if they better leveraged URIs and specified a hypermedia format to describe the partner connections instead of requiring out-of-band agreement.
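To make that concrete, here is a minimal Python sketch of discovering a partner connection from hypermedia rather than out-of-band agreement. The connection document, its JSON shape, and the link relation name are all invented for illustration; only the EDI media type is a registered one.

    # Hypothetical sketch: discover a trading partner's PO endpoint from a
    # hypermedia "connection document" instead of out-of-band configuration.
    # The document shape and the link relation name are invented.
    import json
    import urllib.request

    def submit_purchase_order(partner_entry_uri, po_document):
        # Fetch the partner's connection document (assumed JSON with links)
        with urllib.request.urlopen(partner_entry_uri) as resp:
            links = json.load(resp)["links"]
        # Follow the advertised relation instead of hard-coding an endpoint
        po_uri = next(link["href"] for link in links
                      if link["rel"] == "purchase-order-submission")
        req = urllib.request.Request(
            po_uri, data=po_document,
            headers={"Content-Type": "application/edi-x12"},  # or OAGIS XML
            method="POST")
        return urllib.request.urlopen(req)

If the partner moves or re-partitions its endpoints, only the connection document changes; the client code above stays the same.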

4. Regarding identity.

Firstly, once you move into autonomous systems, there is no ability, in any technology, to have globally guaranteed identity. That's the tradeoff of autonomy vs. something that's inherently federated (i.e. under the control of a central authority).

Any system that 'pretends' that identity is guaranteed is (a) being naive and will have to incorporate tremendous data matching facilities to actually maintain the quality of that identifier, or (b) under the control of a single authority with the power and will to enforce that identity.

Secondly, your example misunderstands the relationship between URI and resources. It is true that two URIs can point to the same resource, and there isn't a guaranteed way to tell.

It is not true, however, that a URI can point to two separate resources, except in cases where the authority over that URI has been deliberately negligent in maintaining its quality.

Your example, /quote/{symbol}/last, is actually a consistent resource: "the last quote for this symbol". The resulting representation may be more useful if it contained a link to the "resource for this quote as of a specific time", which might be a redirect (e.g. a 303 See Other), or embedded in the media type of the quote itself, if that media type had a well-defined context for such a link. But that's not necessary if all that consumers require is "the last quote for a symbol".
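Here's a minimal sketch of that design in Python, using Flask purely for illustration. The URI layout follows your example; the quote data and the link relation name are invented.

    # Sketch of the /quote/{symbol}/last resource discussed above. The
    # in-memory store is a stand-in for a real quote source.
    from flask import Flask, jsonify, url_for

    app = Flask(__name__)
    quotes = {"JAVA": {"time": "2008-09-19T16:00:00Z", "price": 22.04}}

    @app.route("/quote/<symbol>/last")
    def last_quote(symbol):
        q = quotes[symbol]
        # "The last quote for this symbol" is itself a consistent resource;
        # its representation links to the time-qualified quote it denotes.
        return jsonify(price=q["price"],
                       links=[{"rel": "as-of",
                               "href": url_for("quote_at", symbol=symbol,
                                               time=q["time"])}])

    @app.route("/quote/<symbol>/<time>")
    def quote_at(symbol, time):
        return jsonify(quotes[symbol])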

4.3 Query languages.

Firstly, it's complete nonsense that "REST itself cannot return collections of objects".

What could you possibly mean by this? Perhaps you mean "HTTP itself cannot return collections", which would make more sense, but that's not HTTP's job, it's a media type's job. Any number of media types could contain collections of links. There are plenty of ways of doing so, depending on your intended consumer: HTML lists, Atom feeds, etc. It's the hypermedia type that determines how state is described. Atom feeds happen to be the growing approach for doing this generically. It's not a flaw.
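For instance, here is a hedged sketch using the Python feedparser library; the feed URL is made up, but it shows that a collection is just a property of the media type, not of HTTP.

    # An Atom feed is a media type that *is* a collection of links;
    # HTTP itself needs no notion of collections. The feed URL is fake.
    import feedparser

    feed = feedparser.parse("http://example.com/orders/recent.atom")
    for entry in feed.entries:
        # Each entry carries its own link to the member resource
        print(entry.title, entry.link)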

Secondly, "the truth is you need REST-*", again is nonsensical. The truth is that you need many hypermedia types for a variety of intended uses. Again, I question why you would have a problem with this -- of course you would need many specs for orthogonal concerns. This is no different than defining a standardized format for SOAP/WSDL in XSD, except that it leverages URIs.

If your worry is "but we've spent all of this time on the WS-* specs!", that's not my problem. The industry decided to build a tower of babel based on a shaky foundation, and deserves scorn for wasting everybody's time. It will recover, eventually (it always does).

Thirdly, regarding unidirectional links. There are many references as to why bi-directional links do not scale. Many hypertext theorists, in fact, have a bone to pick with the WWW because it broke from tradition and chose unidirectional links. Yet, this approach arguably was one of the biggest architectural wins, in that it enabled decentralized evolution of links. And there are plenty of examples of localized bi-directionality being layered on top via hypermedia (e.g. RDFa, search engines, trackbacks, etc.)
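As a toy illustration of that layering (the crawled pages here are invented), bi-directionality can be recovered as an index built over unidirectional links, which is essentially what search engines and trackbacks do:

    # Bi-directionality layered over unidirectional links, the way search
    # engines and trackbacks do it. The "crawl" data is a stand-in for
    # pages that have been fetched and parsed.
    from collections import defaultdict

    crawl = {  # page -> outbound links found in its hypermedia
        "http://a.example/post": ["http://b.example/spec"],
        "http://c.example/review": ["http://b.example/spec"],
    }

    backlinks = defaultdict(set)
    for page, outbound in crawl.items():
        for target in outbound:
            backlinks[target].add(page)  # reverse index: "who links here?"

    print(backlinks["http://b.example/spec"])

No page had to declare its inbound links; a layer above recovered them, and the links themselves stayed decentralized.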

Links should not be interpreted as logical relationships unless they're described by a framework that denotes them as such. For what it's worth, RDF and OWL do a reasonable job of this.
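A small example with the Python rdflib library (the vocabulary is invented): in RDF the link itself carries a logical relationship, instead of leaving that to convention.

    # With RDF, the link itself is typed with a logical relationship.
    # The ex: vocabulary is invented for illustration.
    from rdflib import Graph, Namespace, URIRef

    EX = Namespace("http://example.com/vocab#")
    g = Graph()
    g.add((URIRef("http://a.example/acme"),
           EX.supplierOf,
           URIRef("http://b.example/widgets-inc")))

    for s, p, o in g.triples((None, EX.supplierOf, None)):
        print(s, "is a supplier of", o)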

4.4 The uniform interface

Again, you seem to miss that it's up to the hypermedia format to denote the networked state machine and potential sorts of state transitions that are available for use.

Describing actions and interactions in a "connected systems" world really would just require a general hypermedia format for doing so. I'd note that (for example) XForms are a simple way of doing such a thing in both a machine-readable AND human-readable way -- they're not about CRUD; they're about describing the input for an action.
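To illustrate what "describing the input for an action" might look like to a machine, here is a Python sketch. The form representation is an invented stand-in for something in the spirit of XForms, not any real specification.

    # Invented, XForms-like description of an action's input, as a client
    # might find it inside a hypermedia representation.
    import urllib.parse
    import urllib.request

    form = {
        "action": "http://example.com/quotes/{symbol}/orders",  # URI template
        "method": "POST",
        "fields": {"quantity": "integer", "limit_price": "decimal"},
    }

    def submit(form, uri_vars, values):
        # A machine could validate 'values' against form["fields"] here
        uri = form["action"]
        for name, val in uri_vars.items():
            uri = uri.replace("{%s}" % name, urllib.parse.quote(val))
        data = urllib.parse.urlencode(values).encode()
        req = urllib.request.Request(uri, data=data, method=form["method"])
        return urllib.request.urlopen(req)

    # e.g. submit(form, {"symbol": "JAVA"}, {"quantity": 100, "limit_price": "9.25"})

Nothing here is CRUD: the form names an action and its expected input, and the client needs no out-of-band knowledge of URI structure.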

As for "too many ways to encode an action semantic, no clear way to get the state a resource is in, most people will choose a CRUD approach", you have a point. Much REST advocacy has centred around convention and XML over HTTP. Machine-readable forms or URI templates are vastly preferable. Beyond this, the fact that there is a lack of hypermedia format to describe "business interactions" as you've described them will prevent people from designing connected systems in the way you've envisioned.

But I do not think this is a fundamental flaw in REST so much as that you haven't yet found a way to map your vision of connected systems into a hypermedia format.

Regarding versioning, I struggle to understand your point here, as this is where you get very imprecise and wordy. What does an envelope have to do with anything? Since when would I put versioning information into a SOAP header (for example)?

Regarding hypermedia, JJ, you're really missing a big piece here. Hypermedia does not require a human to interpret it. Period. It is a complete fallacy. You keep repeating this, but there are dozens of counter examples: Google processes loads of links all day long without a single human in the loop. Sure, it only has the ability to associate text patterns with links, so it's not appropriate for enterprise data management. But then we have Bloglines, which very precisely processes loads of links all day long without a human, and knows what categories to put the results into. Then, let's also look at the Semantic Web, which enables you to associate logical semantics with links. Do services like Twine require human intervention? Did Microsoft buy Powerset because their semantic web search engine secretly required an army of outsourced labour?

Regarding synchronicity and directionality, I think you completely miss the point that the _data model_ of hypermedia inherently enables asynchronous communication, regardless of the specific pattern of communication between agents and servers. When you stare at one layer of the network stack to the exclusion of all else, you'll see whatever you want to see; it's like arguing that any messaging system that uses TCP is inherently synchronous (never mind that many messaging systems choose TCP over UDP). There already is support for "asynchronous" interaction on the web (see, for example, 202 Accepted).
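A minimal sketch of that interaction with Python's standard library; the endpoint, the use of a Location header on a 202, and the polling loop are assumptions about one particular server design, not something HTTP mandates.

    # "Asynchronous" interaction over HTTP: submit work, get 202 Accepted
    # plus a status URI, poll until done. The endpoint and response shape
    # are hypothetical; the status codes are plain HTTP.
    import time
    import urllib.request

    req = urllib.request.Request("http://example.com/orders",
                                 data=b"<order>...</order>", method="POST")
    resp = urllib.request.urlopen(req)
    assert resp.status == 202                 # accepted, not yet processed
    status_uri = resp.headers["Location"]     # where to watch for progress

    while True:
        poll = urllib.request.urlopen(status_uri)
        if poll.status != 202:                # e.g. 200 with the final result
            break
        time.sleep(5)                         # or honour a Retry-After header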

There could be more done here, in terms of enabling event notification of resources, which has been described in the ARRESTED thesis. But I think we can do quite a lot with what we have.

Regarding legacy enterprise systems support for resource orientation, this is amusing. None of the legacy systems support your notion of "connected systems", so it's really silly that you would say it's a bad thing that they don't support "resource orientation". I don't think these approaches are mutually exclusive, and in either case, there's a lot of retrofitting required.

As for the remainder of your post, there's no point in debunking it, as it's mostly repetition of claims that I've debunked before.

Basically your points come down to two concerns:

a) you think REST is trampling the WS-* technologies, instead of them failing on their own;
b) you think REST-* is a waste of time because we already have WS-*.


Firstly, given the market clout of most enterprise vendors, you give the REST community way too much credit. Enterprises are having problems with WS-*, period, and it has nothing to do with REST.

Secondly, whether REST-* gets built or not is unknown! Certainly parts of it are being built, but for the consumer web. REST really doesn't have that much enterprise penetration yet for machine-to-machine interaction.

IMO, for the enterprise, REST is just a portent. The WS-*, BPEL, and SCA technologies basically don't do the things you're saying they can do (even with WSPER). It's certainly not the REST community that's caused this mess, we're just offering an alternative foundation. The technology you've bought into is failing -- most enterprises aren't implementing them well, and I can guarantee that almost no one has a clue what you mean by "connected systems". So, you're trying to find a scapegoat. I get that. But, I think you're just trying to shoot the messenger.

I'll suggest again that you have some good ideas with the importance of managing interactions as first class instead of jumping straight into CRUD. You would be well suited either to keep working in the WS-* community to flesh this idea out (if they'll hear you), or to try to map your notion of "interaction" onto hypermedia. In my opinion, there's a lot more opportunity to try out new ideas with the latter approach. As part of our ongoing debates, I've done some work on this, but have had to put it on hold due to other commitments.