03/18/10 :: [REST] REST Versioning [permalink]

I was disappointed -to say the least- by Stu's response on REST versioning. I was expecting a lot more. I am not sure what to say about that:

What is missing, on the other hand, is a general purpose agent programming model that allows the easy expression and construction of hypermedia-consuming, goal-driven applications.

Let's be polite and argue that there are many simpler problems we need to solve before we tackle that one. So, allow me to write the response I expected from him.

First, I would have liked to see Stu's model of REST. I am not sure you can just stumble upon the kind of programming model he is talking about. It has to be "designed". So, since Stu understands actions and resource lifecycles, he could have at least articulated how they fit in REST. Since no one is talking about the "uniform interface" any longer, it would have been timely to establish that once and for all. It would for sure limit the amount of CRUD being generated.

So, allow me to offer what I think REST could have been (actually is):

There are 3 RESTs out there. First, Roy's REST, or "REST Core". That's not very useful by itself as a programming model. Then there is the practical REST, which adds queries (with properties) and collections. That's the CRUD of REST. The "Full REST" adds actions and events. How do actions relate to a resource? Their invocation triggers a transition from one state of the resource to the next. These concepts have been on the table since the 60s. But in 2010, we are still debating them. WOA ! That is called progress.

How are actions expressed in the full REST? Quite simply, and RESTafarians do it every day: you just transform a verb into POST + noun. For instance, if you want to pay a bill, you POST a payment to the Bill resource. Yeap, that easy. In French this is called "encoding". I am glad ThoughtWorks charges you $300 an hour to teach you this kind of thing; it is as close to printing money as it gets.
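If you want to see what that encoding looks like in code, here is a minimal sketch (my own illustration, not anyone's actual API; the Bill/payment names and the handlePost routing are assumptions): the verb "pay" disappears, a payment noun is POSTed to a sub-resource of the bill, and the POST triggers the state transition.

```java
import java.util.*;

// Sketch: encoding the action "pay" as POST of a noun ("payment")
// to a sub-resource of the bill. All names are illustrative.
public class ActionAsNoun {
    enum BillState { OPEN, PAID }

    static class Bill {
        final String id;
        BillState state = BillState.OPEN;
        final List<Map<String, String>> payments = new ArrayList<>();
        Bill(String id) { this.id = id; }
    }

    static final Map<String, Bill> bills = new HashMap<>();

    // "POST /bills/{id}/payments" -- the verb "pay" encoded as a noun.
    static int handlePost(String path, Map<String, String> payment) {
        String[] parts = path.split("/");          // ["", "bills", id, "payments"]
        if (parts.length != 4 || !parts[3].equals("payments")) return 404;
        Bill bill = bills.get(parts[2]);
        if (bill == null) return 404;
        if (bill.state == BillState.PAID) return 409; // transition no longer allowed
        bill.payments.add(payment);
        bill.state = BillState.PAID;               // the POST triggers the state transition
        return 201;
    }

    public static void main(String[] args) {
        bills.put("42", new Bill("42"));
        int status = handlePost("/bills/42/payments", Map.of("amount", "100", "currency", "GBP"));
        System.out.println(status + " " + bills.get("42").state); // 201 PAID
    }
}
```

Note that the 409 on a second payment is exactly the kind of transition logic that a CRUD model would push onto every client.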

So, in case you still wonder: no, an interface to a resource is not uniform, far from it -even Jim and Savas finally got that after 3 painful years-; no, CRUD is not the way to go; and no, the people who tell you "Data Services work" have no idea what they are talking about. Sorry. If it is not clear to everyone, a CRUD-based programming model forces you to push "reusable" business logic onto the clients of the resource, creating large levels of redundancy, not to mention that hardly anyone will let the state of their resources be decided on the client. For those of you who still don't know what REST means, it stands for REpresentational State Transfer, not for REsource Data Access. A representation of the state of the resource is passed to the client, and it indicates the actions he or she can take. Brilliant, Roy's REST is brilliant; he never claimed that what you needed was a bunch of CRUD.

That's why actions work so well with hypermedia: actions are embedded in resource representations. This actually happens billions of times a day on traditional web pages. But mysteriously, the RESTafarians did not notice it, unless they don't want to "talk" about it (Stu?). I mean, in the Cloud even Tim Bray finally figured out that indeed there were actions like "start a server", "stop a server", or how about "reboot a server". He even figured out, by himself, that you can't CRUD those actions.

This little action "secret" is an inconvenient truth, because the RESTafarians precisely sold you on the idea that "contracts" did not exist: all this XML Schema and WSDL crap was just there to jack up the price of consulting engagements, and they would save the world by ridding it of these pesky contracts. If that is not boloney, what is, Stu? But I digress. Let's go back to the main problem: REST versioning.

Before I continue, I want to reiterate why versioning is so important in connected systems. It is really important because you can't predict the future. You can't build an asset today with the expectation that in 3 years someone will come around and consume that asset as is. It might happen less than 1% of the time. Don't waste your time designing for that, or Dino Buzzati will end up writing a book about it. So, how do you go about versioning in a connected system? It's quite simple. It is not the new consumers who "reuse" the old service. It is the old consumers who can still operate with the new version of the service. Reuse happens the other way around. This is what is called "Forwards Compatible Versioning", because it is different from "Backwards Compatible": backwards compatibility happens when you change the consumer and it can still talk to the old service. Versioning is where RPC failed, where CORBA failed, where JEE failed. This is where Spring fails. Versioning is actually such an acute problem, even in monolithic programming models, that people had to invent new technologies like OSGi to deal with it in OO. I am glad that people like Dave Chappell are paid thousands of dollars per hour to claim that "SOA is a failure", that "Data Services are the only things that work", and all that because you can't predict the future. What a gig.
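Here is a minimal sketch of what forwards compatible versioning means in code (illustrative; the order representation and field names are my own assumptions): the v2 service adds a field without touching the v1 fields, and the unchanged v1 consumer keeps working.

```java
import java.util.*;

// Sketch of forwards-compatible versioning: the v2 service adds an
// optional field; the old v1 consumer ignores what it does not
// understand, so it keeps working unchanged.
public class ForwardsCompat {
    // v1 consumer: only reads the fields it was built against.
    static String v1Consumer(Map<String, String> order) {
        return order.get("id") + ":" + order.get("total");
    }

    // v1 service representation.
    static Map<String, String> serviceV1() {
        return Map.of("id", "o-1", "total", "10.00");
    }

    // v2 service: adds "currency" without removing or renaming v1 fields.
    static Map<String, String> serviceV2() {
        return Map.of("id", "o-1", "total", "10.00", "currency", "EUR");
    }

    public static void main(String[] args) {
        // The old consumer "reuses" the new service, not the other way around.
        System.out.println(v1Consumer(serviceV1())); // o-1:10.00
        System.out.println(v1Consumer(serviceV2())); // o-1:10.00 -- unchanged consumer still works
    }
}
```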

Kjell and I wrote an article on how "Forwards Compatible Versioning" works in Web Services. I recommend you take a look at it if you are not familiar with the concept.

So, now that we have a full model of REST, what can be versioned and how does it work? How different is it from Web Services?

Well, there is a lot that can change in a REST-based information system. Let's review the use cases one by one.

Use case: Network authority. Variants: the network authority is gone; the resource has moved to a new network authority.
Solution: If the network authority is gone, you are out of luck. However, if it is still there, it can tell resource consumers that the resource can now be found somewhere else (HTTP code 301 Moved Permanently). This is a bit more work than changing the endpoint of a Web Service, but it works. In particular, the client can adapt to the change without reconfiguration, unlike in Web Services.

Use case: Resource. Variants: a new resource is added to the network authority; the resource structure has changed.
Solution: Since REST encourages the use of resource representations, there is no consequence as long as the changes do not impact the resource representations. When a new resource is added to the network authority, there is no interference with existing ones.

Use case: Query. Variants: add new queries; change existing queries.
Solution: REST makes it easy to add new queries to an existing resource. There is no penalty for "adding" a query: existing consumers are not impacted. Operations can be added just as easily in WSDL, without communicating the changes to existing clients. When you change an existing query, you want to make sure that your changes remain compatible. That's fairly easy to do, both in REST and Web Services.

Use case: New state. Variant: a new state implies a new transition.
Solution: That's a change in the resource structure, so the same comments apply. Removing states is trickier, because it generally implies that some actions can no longer be invoked.

Use case: New transition.
Solution: Well, that's a new action, so a new POST+noun: a new sub-resource needs to be added to the main resource. Just like in Web Services, you can add more operations without impacting existing consumers, as long as the new version of the state machine is compatible with the old one. Old consumers will simply be unaware of the new transitions and states.

Use case: New media type.
Solution: Changes in media types can be made forwards compatible courtesy of XML Schema, just like for their WS-* counterparts.
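To make the new-transition case concrete, here is a small sketch (my own, with an illustrative bill lifecycle): a new version of a resource lifecycle is forwards compatible when it still contains every transition the old consumers may invoke.

```java
import java.util.*;

// Sketch: forwards compatibility of a resource lifecycle. The state
// machines below are illustrative, not from any real API.
public class LifecycleCompat {
    // Transitions encoded as "fromState->action" entries.
    static boolean forwardsCompatible(Set<String> oldTransitions, Set<String> newTransitions) {
        return newTransitions.containsAll(oldTransitions);
    }

    public static void main(String[] args) {
        Set<String> v1 = Set.of("open->pay", "open->cancel");

        Set<String> v2 = new HashSet<>(v1);
        v2.add("paid->refund");                    // new transition: old consumers simply never invoke it

        Set<String> broken = Set.of("open->pay");  // dropped "cancel": would break old consumers

        System.out.println(forwardsCompatible(v1, v2));     // true
        System.out.println(forwardsCompatible(v1, broken)); // false
    }
}
```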

Check, check, check: just like Stu predicted, the foundation of Web Services forwards compatible versioning came from the W3C (XML Schema), so it works for REST as well. It actually works even better than in Web Services, because you don't even waste time updating a contract. You just create a new URI syntax, you email it to the new consumers, and voila.

Did I forget something? What did I forget? Ah, a small detail. You remember, I keep saying "REST couples access and identity". Such a small little detail. It means that the business logic behind a GET of a resource representation or a POST of a noun is bolted to the URI. So when I say "GET /customers/123", I am bolted to a piece of code annotated with the innovative JAX-RS annotations. That's a problem because /customers/123, as Stu explains it, is also an identity, a foreign key to somebody else. So I can't change it (remember, cool URIs don't change). How do I get around that? Pete Williams provided a workaround using media types. Usually media types are used for content negotiation; he suggests creating a media type per content type, per version. It works. We know how well people are going to keep track of their media types and versions. Just try to submit these types to the IANA ...
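A minimal sketch of that media-type workaround as I understand it (the vnd.example.* type names are made up for illustration): one media type per representation version, selected through content negotiation, so the /customers/123 URI itself never has to change.

```java
// Sketch: version selection via the Accept header instead of the URI.
// The media type names are illustrative placeholders.
public class MediaTypeVersioning {
    static String representationFor(String acceptHeader) {
        if (acceptHeader.contains("application/vnd.example.customer-v2+xml"))
            return "customer-v2";
        if (acceptHeader.contains("application/vnd.example.customer-v1+xml"))
            return "customer-v1";
        return "customer-v1"; // default for clients that do not negotiate
    }

    public static void main(String[] args) {
        // Same URI, different versions of the representation:
        System.out.println(representationFor("application/vnd.example.customer-v1+xml")); // customer-v1
        System.out.println(representationFor("application/vnd.example.customer-v2+xml")); // customer-v2
    }
}
```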

Does that work for POST + noun? It could. But it is also safe to use a URI syntax, because a POST of a resource to /customers/123/payments is actually an operation of a service endpoint. REST allows the server to append the payments in different locations, so it would be quite ok to POST at /customers/123/payments/v1 and at /customers/123/payments/v2. The payments themselves, though, would all be under /customers/123/payments/.
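A small sketch of that URI-based variant (my own illustration; the paths and the version-suffix convention are assumptions): POSTs arrive at /payments/v1 or /payments/v2, but the server files everything under the same /customers/123/payments/ collection.

```java
import java.util.*;

// Sketch: versioned POST endpoints feeding one shared collection.
public class UriVersioning {
    static final List<Map<String, String>> payments = new ArrayList<>();

    static String handlePost(String path, Map<String, String> body) {
        Map<String, String> stored = new HashMap<>(body);
        if (path.endsWith("/payments/v2")) {
            stored.put("schema", "v2");
        } else if (path.endsWith("/payments/v1")) {
            stored.put("schema", "v1");
        } else {
            return "404";
        }
        payments.add(stored); // both versions land in the same collection
        return "/customers/123/payments/" + payments.size();
    }

    public static void main(String[] args) {
        System.out.println(handlePost("/customers/123/payments/v1", Map.of("amount", "10")));
        System.out.println(handlePost("/customers/123/payments/v2", Map.of("amount", "20", "currency", "EUR")));
        // Both created under /customers/123/payments/, regardless of the POST URI.
    }
}
```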

So yes, versioning kind of works. Are we done? No, there is still another detail missing: the unit of versioning. Yes, "real-world" resources have complex lifecycles and often composite states. Each composite state/lifecycle corresponds to a unit of versioning. Unfortunately, in REST there is no "unit of versioning". As you have seen, you add stuff here and there, and before you know it you have completely lost track of what exactly you are versioning. The key problem is that your business logic for a single resource could potentially be dangling off dozens of endpoints. So you have to painfully manage all these annotations in a forwards compatible way. I am not sure that is less work than creating a WSDL. In REST, there is not even a trace of the beginning of a "unit of versioning" beyond the URI itself. I will let you compare that with the elegance of versioning with Web Services contracts, where you can replace the business logic layer at the change of an endpoint, and assemble different WSDLs for the same service and different consumers based on what you want them to understand. As a matter of fact, the key design pattern in SOA is the separation between the interface and the implementation. So what do the RESTafarians argue for? They want you to bolt the interface to the implementation. They even create annotations to make sure you'll never be set free from this architectural atrocity. It's like designing a house with the bathroom behind the front door. RPC, CORBA, and JEE all bolted the interface to the implementation. Somehow, Web Services escaped that fate. But the old CORBA guys couldn't live with it. They actually want the interface to be directly generated from the code.

Did I talk about bi-directional interfaces? Ah yes, REST can't do bi-directional interfaces either. HTTP can, but not REST.

So Stu, I am sorry, a million times sorry. You'll retire before you can build a decent programming model on top of REST. In the meantime, you are creating a spaghetti plate at the scale of the Web. The RESTafarians simply have no idea what it means to build an information system, and I am sorry, but I think you belong to that category. You have given no evidence of a basic programming model, and please, let's not even talk about CRUD.

So I am sorry, but REST as a programming model, REST as middleware, the (other) REST as I call it (to make it clear that nothing I say here applies to Roy's REST), that REST is a fraud, a massive fraud, imposing billions and billions of dollars of lost productivity on IT and setting us back 10-15 years. And I wish, Stu, that you would either tell people to hold on until you figure it out, or, if you have already come to the same conclusion, that you would be open to speaking about it.

03/10/10 :: [MDE] From DSL to MOP [permalink]

Johan published a great summary of a somewhat old article from Markus Voelter: "Best Practices in Model Driven Development". I say somewhat old because the article was written in 2005, and Markus's position may have evolved since then. I'd like to take a moment to explain how Metamodel Oriented Programming redefines the foundation of DSLs.

Sources for the language

I no longer think that domain DSLs make much sense. Domain experts rarely have the semantic precision that would allow supporting their work with a DSL. Note that technical people don't exhibit much more precision: just look at the REST space and see how people "interpret" the semantics of a resource, four verbs, and a URI. Ok, maybe the problem is that REST is so poor semantically that they "have to" add their own semantics, but still.

Recommendation 1: build your language for the solution space, leave the problem space to creative expression.

Limit Expressiveness

On the contrary, I would argue that the problem with limited expressiveness is precisely the explosion of "interpretations" of semantics. Again, take REST as an example. People needed a query language. What happened? Everyone created their own. It may not happen on the same scale in your organization, but it will happen enough to be a real pain.

Recommendation 2: Make your language as expressive as needed, but not more

Notation, Notation, Notation

Text is beautiful. Do not confuse the need for graphical visualization with the need for graphical design. Most often, textual definitions can be converted (automatically) into graphics.

Recommendation 3: use graphical editors as rarely as you can. Prefer textual notations with graphical renditions
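As a small illustration of this recommendation (the mini-notation "from -> to : label" is made up; it is not an actual MOP syntax): keep the definition textual and generate the graphical rendition, here Graphviz DOT, automatically.

```java
// Sketch: a textual notation with an automatic graphical rendition.
// The "from -> to : label" syntax is an illustrative assumption.
public class TextToDot {
    static String toDot(String textual) {
        StringBuilder dot = new StringBuilder("digraph G {\n");
        for (String line : textual.split("\n")) {
            String[] arrow = line.split("->");          // "open " / " paid : pay"
            String[] toAndLabel = arrow[1].split(":");  // " paid " / " pay"
            dot.append(String.format("  %s -> %s [label=\"%s\"];%n",
                    arrow[0].trim(), toAndLabel[0].trim(), toAndLabel[1].trim()));
        }
        return dot.append("}\n").toString();
    }

    public static void main(String[] args) {
        String model = "open -> paid : pay\nopen -> cancelled : cancel";
        System.out.print(toDot(model)); // feed the output to `dot -Tpng` for the picture
    }
}
```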


The goal is not to have everyone express something; the goal is to build solutions fast. The design of the language must be driven by how fast you can translate requirements into a working solution. Take the example of BPMN. BPMN was designed to let a certain category of people express their viewpoint. That's great. When it comes to creating "executable BPMN definitions", well, the little known secret is that developers stuff the definitions with tons of arcane, hard-to-debug scripts. This is highly inefficient. From my own experience, it can take 3-5 times as long to write a solution with an executable BPMN engine as to write it in a 3GL.

Recommendation 4: design the language for rapidly building solutions (from as few viewpoints as possible) 



In MOP, this best practice translates into rules that enable execution elements to control the lifecycle of other elements. The language needs to be one; however, you should restrict with precision the lifecycle operations a model element can invoke on another. I would not call that partitioning.

Recommendation 5: understand with precision which model elements control the lifecycle of other model elements


This is a difficult topic, and MOP doesn't necessarily help here. However, because MOP forces you to think through the entire lifecycle of the model elements, and the relationships between the model elements at the implementation level, you may end up needing far fewer evolutions. DSLs focus only on the associations between model elements, not their interrelated lifecycles.

Recommendation 6: Think through the entire design of the language, especially the lifecycles

The fallacy of generic languages

We agree. Yet, as you focus on the solution side, you should remain as generic as you can. What is wrong is the monadism of general purpose languages, not genericity per se.

Recommendation 7: Think polyadic programming languages

Learn from 3GLs

I would say, learn from both 3GLs and DSLs. This is what MOP is about: bringing cogency and polyadism together in one efficient formalism.

Recommendation 7: Design Cogent Polyadic languages

Who are the first class citizens?

You don't have to choose; this is the wrong question. DSLs force you into that choice because second-class citizens are created to implement the cogent aspects of the model. MOP focuses on first-class citizens, without the artificial model elements needed to express execution elements.

Recommendation 8: Avoid creating model elements to express execution semantics

Libraries (and Patterns)

I have mixed feelings about that one. Why would you create libraries and patterns in your DSL? They exist only because of the monadism of general purpose languages. Programming patterns would never have existed if programming languages were polyadic.

Recommendation 9: Rely on as few libraries and patterns as possible on top of your language, prefer a higher degree of polyadism

Teamwork support

Go Textual !!!!!!!

Recommendation 10: MOP is textual in nature, structure your language to facilitate teamwork and leverage all the source-based tools

Models Interpretation vs. Code Generation

The bad reputation of code generation is due to the lack of execution semantics in DSLs, i.e. DSLs are not cogent. MOP languages facilitate the transformation (not generation) into general purpose languages because the execution semantics are halfway there.

Recommendation 11: Prefer "transformation" to model interpretation or code generation

Don't modify generated code

Yes, but you can modify "transformed" code. Round-trip engineering in code generation can be very tricky. However, using a transformation paradigm in lieu of code generation makes the round-tripping a lot easier.

Recommendation 12: You may use round-tripping safely when it makes sense, but avoid it if you can

I am not quite sure, today, why the DSL community does not see how much simpler their life would be by making DSLs cogent. I also don't see why the general purpose language guys keep annotating their languages to death. They annotate them so much that they can't see the semantics behind their annotations. But, hey, what do I know? I am just a MOPer.

03/05/10 :: [REST] RESTless in Seattle [permalink]

Lori MacVittie wrote a follow-up post to my post on REST that is both refreshing and terrifying. Refreshing, because it kind of puts together a lot of the untold truths of our industry:

1) Standards suck and are ever changing, without fitting with each other or offering any kind of upward compatibility.

2) REST is not as "easy" as some people would like others to believe it is

3) Innovation in 2010 is no free lunch; lots of people who don't understand a thing about innovation get in the way for various reasons (greed, FUD, ego...), and the innovators have to deal day in and day out with their stupid goals and constraints, not to mention ROI.

Thank you Lori. It's refreshing that someone is not talking about the Lalaland of XYZ, is not afraid of using "real" words, and avoids bolonizing her readers. There are so few like you that it's worth mentioning.

Your post is also terrifying because it leaves little hope of building the right thing. Actually, the probability of getting the right thing accomplished is about as high as winning the lottery. In many ways, I agree with you. When you see products like the iPhone, and you think of the alignment of technology that needed to happen to achieve such a device, you realize that only a strong culture and "proprietarism" can deliver innovation today. Standards are the wrong approach to innovation. They kill innovation. Pretending innovation is easy is the biggest lie of the 21st century. No, innovation is complex, and it requires a bunch of smart people working together without just ROI, greed, or fear as a driver. I think the next decade will see the emergence of new winners who understand innovation at that level. Of course, there are counterexamples to my argument; companies like WSO2 have innovated on top of less than optimal standards, but I think they remain the exception, and it is probably due to their strong culture and sense of values.

Stu also provided a follow-up to my challenge to define a versioning strategy. Don't get me wrong, I like Stu; he is incredibly competent and he backs his work with long experience. But Stu failed to provide a compelling strategy for versioning. He actually admits it:

In a RESTful approach, URIs are your "foreign keys", and if you embed a version identifier in them, they need to change when you upgrade to the next version if you embed those versions in the URI. Assuming you can't convince your resource owners to use languages with version identifiers as a MIME parameter or inside the language itself, how is that done? 

This is what I mean when I say that REST couples access and identity.

Stu also conveniently forgets to speak about the "unit of versioning". Every resource usually participates in different lifecycles. A resource has different states, composite states. Each lifecycle has a set of actions (which are always encoded as POST+noun in REST). This is the unit of versioning. A comment from Mike actually speaks to that very problem:

there are many times when an application-flow update requires support for side-by-side versioning

Yeah, they finally touch on the real problems of building real systems, not just blog posts, plain vanilla articles, or useless annotations. I actually argue that without visibility into the lifecycle, these problems are impossible to solve. You can't version an action API if you don't understand the states and transitions behind it.

Of course, people like Stu or Mike will never admit such a thing publicly. They will always say, "but... if you look here... there is a promising solution to that problem". Boloney, guys. Pure and complete boloney.

I have ordered Subbu's book (hopefully he will give me an autograph), so I'll wait to make my final comments on versioning, but so far I think (reluctantly) that you belong on the REST wall of shame. It is time to get real and provide real recommendations. Recommendations that don't leave people lost, like the one you just made.

It has been 3 painful years to get to where we are today, i.e. nowhere. I see no success in sight and no sign of possible success either. REST is not a programming model. The (other) REST is a fraud, nothing less, nothing more.

RESTfully yours,


01/26/10 :: [REST] 2010 - Where are We? [permalink]

A post by William Vambenepe, and the comments that followed, prompted me to do a reality check: here we are, 3 years after the 2nd invasion of the RESTafarians in our industry. That wave has kind of succeeded. "REST" APIs are everywhere. I put REST in quotes because the truly RESTful APIs are far fewer than the self-proclaimed ones. Even the hard-core RESTafarians do not bother policing the ugly world they have created any longer.

Take this payment API. Is that RESTful?

POST /<path>/charge?version=1.0&endUserId=tel:+447990123456&currency=GBP&amount=1&referenceCode=ABC

Yes, you have seen it: you POST a "verb" (the remainder of the API is all verbs too).

But that's not it; check this one. No, you are not dreaming. You want to cancel a "reservation", you -of course- use DELETE, and just to be consistent, you use a verb (reserve) and not a noun.

DELETE /<path>/reserve/1234/release?version=1.0

But who cares, right? The coders have already hidden all that CRUD behind code generators. Check the kind of code that Bill Burke wants you to write to deal with hypermedia. While you are at it, I would also look in Bill's post at what the JAX-RS code looks like:

public interface CustomerClient {
   public MyResponse getCustomer(@PathParam("id") int custId) throws NotFoundException;
}

Yeap, this looks so much better than, and so different from, JAX-WS:

@WebService(targetNamespace = "", name="AddNumbers")
public interface AddNumbersIF extends Remote {
    @WebMethod(operationName="add", action="urn:addNumbers")
    public void addNumbers(
        @WebParam(name="num1") int number1,
        @WebParam(name="num2") int number2,
        @WebParam(name="result", mode=WebParam.Mode.OUT) Holder<Integer> result)
        throws RemoteException, AddNumbersException;
}


Outstanding job guys ! All this brouhaha to get there. I am in awe. I am glad people get paid for that kind of "work". Some even call themselves "successful entrepreneurs" after producing a few annotations and a couple of variations (no kidding).

Yes, REST has won. I am not sure what we have won in return, but REST has. So, it is time to share some of the highlights of the Pyrrhic victory of the RESTafarians:

1) Dave Chappell - "SOA is a failure": I am glad Dave gets paid to travel the world and propagate his boloney - what a job ! Dave (who doesn't write a single line of code) explains that, for him, "reuse" doesn't work and the only services worth building are "Data Services". Why doesn't reuse work? Because:

Creating services that can be reused requires predicting the future… 

Dave, do you understand how reuse works in a distributed system? It is quite easy, actually. You need a Forwards Compatible Versioning strategy. You can't reuse what you built 3 years ago; I actually agree with you on that. But you can evolve services in a forwards compatible way, such that the new version of a service (which was developed to meet the needs of a new consumer) works with all previous consumers without breaking them, and hence without requiring any changes from them. Something that your typical OO library can't do. Reuse in SOA happens the other way around: the old consumers reuse the new version of a service. That way you don't have to predict the future.

2) Stefan Tilkov - "Code first does not work": yeap, another rock-solid argument. Stefan complains that his code-first approach creates "very large WSDL-files". First, REST does not change the footprint of service invocations: if your DTO (aka resource representation) is large in WSDL, it will be just as large in REST. REST is just a different encoding of the operations (using the predefined verbs GET, POST, PUT). Second, you can't do SOA in a "code-first" fashion. I know, even Microsoft does that. I have explained many times that OO is the problem across our industry; any approach that tries to express new semantics on the foundation of OO is bound to fail. Annotating OO or wiring remote calls into an OO runtime is the wrong thing to do. OO is just a particular case of Metamodel Oriented Programming. OO is not the foundation of programming, let alone of software architecture. Semantics need to be expressed independently of the OO metamodel, in particular the semantics of the execution elements.

3) Steve Vinoski, Bill DeHora - "The interface to a resource is uniform": you don't hear that argument very often nowadays, especially after RESTfulie was published, yet it was the core argument when the 2nd invasion started.

... and the list could go on and on. Stu, let's see what you come up with for versioning. I do not wish to add you to this wall of shame.

I can demonstrate that our industry has lost tens of billions of dollars in productivity because of these three flawed arguments. In one big swoop, the RESTafarians have prevented reuse from happening, and crippled both model-driven engineering and the emergence of a distributed programming model which could have been the foundation for building composite applications.

In the end, you guys can claim all you want, but REST is just a "NO WS-*" movement. I am not here to defend WS-* or SCA; I don't work for a vendor. However, as a user of these technologies, I constantly have to talk to people who are often completely confused about all these approaches (I mean really bright, experienced people who now think that CRUD is a good way to build distributed systems). I sure wish our industry had produced by now a nice distributed programming model and made all these discussions pointless. Everyone critically needs it. Unfortunately, as I explained before, there was not a chance, because everything is done from a monolithic programming model point of view, and somehow the gurus like Bill or Steve, who had their chance at producing that programming model, project the semantics of a monolithic programming model (OO) onto the distributed world.

When these people looked at REST, they saw a distributed object paradigm that seemed to work. They looked at all the problems of CORBA (granularity of the calls, brittle interfaces, naming service...) and they felt REST solved all these questions elegantly. REST offered a universal naming service, uniform interfaces, and DTO-size granularity. They said Bingo ! And here we are; only ashes are left, they burnt down everything. They have destroyed all the advances that were painfully conquered amidst stupid vendor politics and exacerbated egos. All gone: contracts, forwards compatible versioning, bi-directional interfaces, eventing mechanisms, advanced coordination mechanisms, assemblies, orchestrations... you name it. REST doesn't offer any of them. REST brought our industry back to a pre-Neolithic age and has enslaved everyone to CRUD. The RESTafarians made us lose another 10 years in our quest to build a true distributed programming model.

Congratulations on a job well done ! Mission accomplished !

As for myself, I am not in the business of creating "pretty stories" or attracting "followers". I am just too old-fashioned for that. 

01/26/10 :: [REST] PUT vs POST [permalink]

My post on REST, Processes and Resources is the most read on ebpml, month after month (I am not sure who linked to it). I stand by every word of it, but I would like to reiterate a truism that apparently even some of the most senior architects and developers seem to ignore.

Lots of people who have read about REST will tell you that the "resource representation" pattern is a great advance in information system construction, in particular because you can "PUT" the representation back. That's in line with the DTO pattern that CORBA or JEE aficionados are/were so accustomed to. Of course, they often pass over the fact that having a standard "change summary" definition for the resource representation would be a terrific feature to have. One day the RESTafarians will look around and discover (Stefan?) that the industry had already solved all these problems well before they even started to understand them: Microsoft came out with the DataSet concept around 2003, and later, in 2005, SDO generalized that concept in both the Java and .Net worlds. But it is so easy to ignore all the work that has been done and start over. Right.

So here the argument goes: REST is great because I can PUT stuff back. Complete freedom, they argue. This is what Bill DeHora stated a while back:

So, in a business process where GET and PUT (and friends) apply to *all* business entities and are not just per process defined methods, why can't I GET the state and have a well-understood formal document returned citing the state of that entity? Or for that matter PUT the updated state to that entity? What's the actual  limitation induced by applying REST?

For those of you who still believe that PUT is all you'll ever need, let's look at the physical world: everything is in a "given" state, from the smallest particle to the heaviest piece of equipment. Each state has well-defined transitions to other states. I can't "PUT" a particle in any state I want; I can't PUT an elevator or a can of soda in any state I want. Actually, state is such a profound foundation of our universe that it defines "time". Time only exists because the universe can never return arbitrarily to a given state. If that were possible (who would decide which state to go to?), time would simply not exist.

Before you get too bored with metaphysical considerations, let's go back to information system construction. Information entities are like physical objects. They often represent physical objects and model their primary "states"; if they are a more abstract concept, say a contract, they nearly always have distinct states which control their lifecycles.

The question becomes: how do you express the intent of transitioning from one state to another? I say intent because, just like in the physical world, THOU SHALT NOT PUT STATE directly into the information entity. Yes, there are attributes, like the color of a soda can, that can change idempotently, and then there are attributes which represent the states of the information entity and which can only be changed by the entity itself. The business logic that transitions from one state to another must be owned by the entity. If not? Terrible things happen when you have more than one consumer of that entity: you start duplicating the state/transition logic in the consumers of that entity. You get the picture.

In the days when we built monolithic systems, there was little value in correctly factoring that kind of business logic. In the SOA days - and I would argue this is the principal reason people fail at SOA - THOU SHALT LET THE ENTITY DECIDE FOR ITSELF whether it can transition from one state to another. Most people do SOA by actually exposing a Data Access Layer as a bunch of services. They encourage people to CRUD. Worse, people like Dave (Microsoft) Chappell will tell you that the only thing that works is a "data service", that SOA is a failure. I can safely say that he doesn't understand a thing about SOA. Now, when RESTafarians like Stefan, Bill DeHora, Bill Burke, Jim Webber, ... come to you and encourage you to PUT up with CRUD as a key success factor for your "SOA", I smile loudly.
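To make the point concrete, here is a minimal sketch of an entity that owns its own transition logic. The `Bill` class, its states and the transition table are all hypothetical illustrations, not anything from a real system:

```python
# Hypothetical sketch: the entity owns its state/transition logic, so no
# consumer can PUT an arbitrary state and bypass the business rules.
class Bill:
    # Allowed transitions: each state lists the states reachable from it.
    TRANSITIONS = {
        "open":      {"paid", "cancelled"},
        "paid":      set(),        # terminal state
        "cancelled": set(),        # terminal state
    }

    def __init__(self):
        self.state = "open"

    def transition_to(self, target):
        # The entity, not the consumer, decides whether the move is legal.
        if target not in self.TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {target}")
        self.state = target

bill = Bill()
bill.transition_to("paid")      # legal: open -> paid
# bill.transition_to("open")    # would raise: paid is terminal
```

With two consumers of `Bill`, neither needs (or is able) to duplicate the transition table; a raw PUT of `state` would have no such guard.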

Now, people might tell you that PUT can express an intent, why not (Roy would disagree), or they can tell you that they use POST to encode all intents. I say why not, as long as the logic to transition from one state to another resides on the resource side. But what have we gained? Nothing: we have just found another encoding (actually two, PUT or POST) and we have lost so much (bi-directional interfaces, events, orchestration; we have coupled access and identity...). So what is the point? What is the point of yet another encoding? Browser access? OK, so what, do you need to displace entire technologies for that? Is that a game? I am amazed, in awe actually, at how such bogus arguments took hold in our industry, at how little, nice-to-hear stories ended up where they are today. Yet REST is nowhere: no proof of any massive and successful use outside the browser.

So if you want to use PUT for the attributes that can change idempotently, great; if you want to use POST for invoking actions on a resource, even better. But don't tell me you invented anything. Information systems have been working on these principles for 8,000 years. They didn't need computers, the Web, Stefan, Steve or Bill to figure that out.
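That division of labor can be sketched in a few lines. The routes, the `bills` store and the handler names below are hypothetical stand-ins for the HTTP layer, kept as plain functions so the contrast stays visible:

```python
# Hypothetical contrast between the two verbs, per the text:
# PUT for attributes that change idempotently, POST to invoke an
# action whose transition logic lives on the resource side.
bills = {"42": {"label": "utilities", "state": "open"}}

def put_label(bill_id, label):
    # Idempotent attribute update: repeating it yields the same result.
    bills[bill_id]["label"] = label

def post_payment(bill_id):
    # The action encodes the intent (pay); the resource enforces the rule.
    bill = bills[bill_id]
    if bill["state"] != "open":
        raise ValueError("only an open bill can be paid")
    bill["state"] = "paid"

put_label("42", "electricity")   # PUT  /bills/42      {"label": ...}
post_payment("42")               # POST /bills/42/payments
```

Replaying `put_label` changes nothing; replaying `post_payment` is rejected by the resource itself, which is exactly the asymmetry the post is arguing for.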

The (other) REST is a fraud, and there is nothing clearer today.

01/16/10 :: [MDE] Solution vs Problem Abstractions, does it matter? [permalink]

As Google just launched a "new" programming language (what for?), more and more people are asking about, and somewhat demanding, better abstractions. The question is: should they be problem-side or solution-side abstractions, or possibly both?

Udi recently complained about solution side abstractions:

If we want our architecture to be stable, we need to base it on stable abstractions. The only thing is that there aren’t any inherently stable abstractions in the solution domain (as we’ve had the chance to witness). That really only leaves one other place to look for them – in the problem domain, also known as the functional requirements.

Udi believes that the solution is on the problem side :-):

If we could find a way to capture those stable elements and represent them as core elements in our architectural structure, and then balance the non-functional requirements within those functional contexts, maybe, just maybe, our architecture will stand the test of time.

His position is quite ironic, as the BPM punditocracy interprets the current wave of acquisitions in the BPM space as "the end of BPM" as we know it. This particular rash of BPM products built their business on the fallacy that you could somehow build solutions directly from "problem-side abstractions", i.e. BPMN. Ten years later, we are still waiting for large enterprise deployments where all business processes are somehow implemented from problem-side definitions. These vendors have long claimed victory; it is somewhat of a Pyrrhic victory for our industry. Sincerely, I am glad they are going away. They have consumed tremendous resources, delivered hardly anything, and prevented the right solution-side abstractions from emerging. I have had some discussions with Keith and Scott on that topic after Keith detailed the "process trends" he saw unfolding over the last 20 years.

These "process trends" are precisely the problem you are going to face if you go looking for your abstractions in the problem space. First, you will never succeed at turning problem-side analysts into people who can achieve the level of rigor necessary to build a solution, and because you will adopt their language, you will have less-than-optimal abstractions to build the solutions with. The little-known secret of "BPM" is that once you get past the pretty (process) pictures and look under the hood, you see all kinds of ugly scripting language dropped wherever possible. Scott thinks that JavaScript is the "ideal" complement to BPMN. Anyone who has written more than 10 lines of JavaScript understands what I mean, and JavaScript is possibly one of the better ones I have seen over there.

So Udi, I am sorry, but starting on the problem side has been tried and it's a mess. Developers will never be able to design abstractions that make business analysts comfortable; they need freedom and fuzziness. They want the solution side to build the solution, not them. Hence, the problem side needs a) as few abstractions as possible and b) (most important) whatever runs in production (once the problem has been solved) must be "visualizable" by the business analysts. That is the direction that matters: solution->problem, not the other way around. So far, problem->solution is just a pipe dream, an immense distraction for our industry and a general failure.

There is another reason, more fundamental, why abstractions (problem- or solution-side) cannot emerge. Spend some time exploring Ecore in EMF (or MOF in UML) and look at EMF's M3 layer (i.e. Ecore):


You can see that the center is the "EClass". Just like in MOF, the M3 layer is OO-based. OO is the enemy; sorry, I can't find another word. There is nothing abstract about OO and there is nothing architectural about OO. OO is a tiny little pattern whose success is out of control. Actually, I am a bit unfair: I teach mathematics to my kids using UML. So yes, OO provides a generic modeling capability to describe systems statically, i.e. static abstractions. At a certain level, everything is a bag of attributes with relations to other things. But you can't efficiently describe dynamic systems in OO; behavior is an afterthought in Ecore and MOF, well beyond the OO cave.
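The asymmetry is easy to see in a toy version of such a metamodel. The miniature `EClass`/`EAttribute` classes below are a hypothetical caricature of an Ecore-like M3 layer, not the real EMF API: structure gets first-class constructs, behavior gets a string:

```python
# Hypothetical miniature of an Ecore-like metamodel: the M3 layer can
# describe structure (classes, attributes) precisely...
class EAttribute:
    def __init__(self, name, etype):
        self.name = name
        self.etype = etype

class EClass:
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = attributes
        # ...but behavior has no first-class construct of its own; in
        # practice it ends up as an opaque script bolted onto the model.
        self.behavior_script = None

soda_can = EClass("SodaCan", [EAttribute("color", "EString"),
                              EAttribute("state", "EString")])
# The transition logic the previous posts argued about is just text here:
soda_can.behavior_script = "if (state == 'sealed') state = 'open';"
```

The modeler can query, validate and transform the attributes, but the `behavior_script` is invisible to the metamodel, which is exactly the "scripting language thrown into the mix" problem described next.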

Udi, look no further for the problem you see in creating solution-side abstractions. Static solution-side abstractions are not very common. Once the modeler realizes that he or she needs behavior, that's when he or she starts throwing a scripting language into the mix and everything becomes ugly, impossible to dissociate from the underlying architecture (the script has to run somewhere, call some APIs...). Once your abstraction is tied to an architecture, you know what happens next. I have raised this concern with some of the fathers of MDA, but I always got a distant glare and no response. I may be wrong, but IMHO, MDA is built on the wrong foundation: OO. It is going to be hard for them or for the OMG to change course, but there have been enough surges in that domain to keep people believing that the solution to MDE is more OO.

I like textual DSLs because they are conducive to modeling behavior in addition to the abstractions: you tend to create your own programming language alongside the abstractions. This is why I was generally excited about SSM and things like MService. Note that people understand the problem; some have shown how to extend Ecore to model "code" as well. But I think OO is the problem: abstractions need to exist completely outside the OO cave. This is why I suggest adopting a Metamodel Oriented Programming approach.
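A tiny example of what a textual DSL buys you: the grammar below ("from -> to : action") is an invented illustration, not SSM or MService syntax, but it shows abstraction and behavior living in the same plain-text artifact, with no OO metamodel or embedded scripting language in sight:

```python
# Hypothetical textual DSL for state machines: each line reads
# "<from> -> <to> : <action>", so the states (the abstraction) and the
# transitions (the behavior) are declared in one artifact.
DSL = """
open -> paid      : pay
open -> cancelled : cancel
"""

def parse(dsl):
    """Build {source_state: {action: target_state}} from the DSL text."""
    machine = {}
    for line in dsl.strip().splitlines():
        left, action = line.split(":")
        src, dst = (part.strip() for part in left.split("->"))
        machine.setdefault(src, {})[action.strip()] = dst
    return machine

machine = parse(DSL)
# machine["open"]["pay"] yields "paid"
```

Because the artifact is just text, it can be versioned, diffed and rendered back for business analysts, which is the solution->problem direction argued for above.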

I am certain that this problem will be solved in the next 10 years and that the abstraction will be more on the solution side, completely outside the OO space.

01/03/10 :: [Cloud] The "Techtonic" Shift of 2010: the iTablet [permalink]

I don't know about you, but I feel ripped off. I got the incredible opportunity to live to 2010 and yet, except for a couple of things, I can only buy technologies that were mostly available 10 years ago. I don't know what Apple will release this month. Some talk about the iTablet, but it will certainly look like an iKindle, a larger iPhone. Whether the geeks like it or not, the computer has become a consumer product, driven by consumers who want to do what consumers expect to do, i.e. they never dream of becoming a sysadmin. The devices of this decade will be designed for humans by humans, not for geeks by geeks.

So what will be the characteristics of the devices (not computers) of this decade?  That's quite easy:

  • touch, touch, touch
  • apps, apps, apps... huh?... I mean connected apps
  • location, location, location
  • no sysadmin required

Many of us would feel that this is back to the future, but let's face it: just compare using a browser-based app and an iPhone app. Not convinced? Do you use Netflix? Just compare browsing Netflix with a... web browser and with Microsoft Media Center. You get the picture, right?

In 2005, I was making fun of Microsoft's "SaaS" strategy. Shortly thereafter they changed course and came out with a "Software+Services" strategy. Yes, that was the right move, but we all know how well Microsoft executes strategic moves.

In this new landscape devices will be easy to use, loaded with "apps+services". The browser is out; whether you like it or not, we are back to the wonderful idea of multiple platforms. Did I mention the form factor of an app? Cheap, small, downloadable. The "price" of an app: $1 or $2 sounds reasonable, well suited to a mass market of hundreds of millions of people. Who's going to buy a $60 box+DVDs running on an operating system that requires a Geek Squad just to keep it running?

So here we are, at the onset of a tectonic shift (it has already happened; it will just become visible). Who's going to win? We know it won't be Microsoft, unfortunately for Seattle. Successful people at Microsoft are too busy boating and driving their fancy cars; they never use their own products, let alone watch how people use them. Google may have bet on the wrong horse: the browser may be laid to REST in this decade. That prediction is easy to make: finger strokes are not RESTful and JavaScript can't access "local" information (accelerometers, location...).

So Steve Jobs may have done it again: after the Apple II, the Mac, the laser printer and NeXT, the iFamily is breaking our industry's mold. Well done, Steve.

If you want to see the future, just look at this app, that one too. Now, look at your browser...