
09/27/09 :: [SOA] Boris on Service, Web and REST [permalink]

It is no secret that I really like the kind of things Boris writes about SOA. His book is a masterpiece: complete, accessible, grounded in deep experience, and containing zero fluff. And of course, someone of his caliber could only take a measured position in the REST vs. WS-* debate:

Both REST and WS have their place in real life implementations. The issue should be not which one is better, but rather how they can coexist and when is one more appropriate. The other important thing is to make sure that comparisons are based on merits, not beliefs, no matter how strong they are. There are many useful standards and patterns created by WS* and the issue is whether it makes sense to start over or to see how these patterns can be applied to REST.

Dilip Krishnan, another InfoQ editor and RESTafarian at large, not surprisingly responded:

Can we finally agree that the word "service" is as, if not more important, then "Web"

What does this even mean? Important for what, and for whom?

I say not surprisingly because RESTafarians have no clear position on "service": they just say REST is the right way to build a Service Oriented Architecture. Yet REST has no concept of "service" anywhere, just resources and their shiny uniform interface, links and bookmarks. Indeed, there are no services in REST. Just read the thesis.

But RESTafarians couldn't care less: if they don't understand or can't explain something, the solution must be in REST somewhere, and they look for it, one hack at a time. It is interesting to see how the mind works; it makes you wonder how mankind has made any progress at all since we set foot on this planet, and what progress we could have made if we hadn't gotten stuck in the mode of "the solution must be in this book/thesis/theory somehow, I just have to find it". The reality is that the solution is often, if not always, outside the box. Just ask Newton, Einstein and the many lesser-known scientists. In computer "science" we have a very small box; actually, we don't even have a box, we just have a square: one axis is Turing-complete and the other one is OO, and everything a developer can be given to do his or her job has to fit on that square. You would surely fall off a cliff if you dared go outside.

But I digress; let's go back to "services". Even Bill, in his REST-* proposal, is talking about creating a RESTful interface to non-RESTful services. That certainly begs the question: how can a service be non-RESTful if REST is all about SOA and replaces WS-* in its entirety?

Ganesh, in all his wisdom, has become a RESTafarian. It is interesting to see someone who understands SOA become a RESTafarian; at least you can have a much deeper discussion. He came up with the concept of REST as Polymorphic CRUD:

IT folk in the enterprise understand both polymorphism and CRUD, so the combined term should make sense. I want to drive home the point that a verb itself is neither coarse- nor fine-grained, it's how each resource interprets it. Fine-grained resources will interpret the REST verbs as CRUD operations. But more coarse-grained resources can interpret the verbs as any arbitrary business operation.

I find his use of polymorphism interesting, because for me REST is just a better CORBA, i.e. object oriented, but it is not service oriented. No post explains better how small a square we are left to play in than Pete Lacey's post from 2006.

They want transactions, and reliability, [bidirectional interfaces, assemblies] and asynchronous messaging, and orchestration, and everything else.

So Boris committed heresy (defined as proposing some unorthodox change to an established system of belief, especially a religion, that conflicts with the previously established opinion of scholars of that belief, such as canon). He dared to say that we should focus on Service, not just the Web. Unfortunately, Dilip, Ganesh, Bill and so many others, I would like to repeat that there is no evidence today of any application being built in a RESTful way. There are APIs to CRUD data here and there, but the day you show me an ERP system built in a RESTful way, we'll talk again.

For me, a Service is a software agent which:

  1) performs a well defined unit of work, invoked by expressing intent, with minimal or no knowledge of the context in which this intent is expressed
  2) is readily accessible by an arbitrary number of other software agents implemented in arbitrary technologies
  3) can change the way it performs its unit of work or specify its intent without necessarily breaking the software agents that consume it
  4) allows the resources involved in the performance of the unit of work to participate in more than one service

This is what happens in life every day. We consume countless services by simply expressing intent (e.g. calling someone); these services can scale without impacting me (a wireless carrier can add a subscriber without me noticing), and these services can change without impacting me either (a wireless carrier can add a "favorite callers" list without me noticing). An airplane can be involved in performing several services simultaneously (transporting people, parcels and letters).

Service orientation is about creating solutions from this type of software agent instead of tier-ing and integrating, to achieve the same benefits we realize in a service oriented society. Service orientation is way outside the square. It's not that hard, but it definitely requires some unorthodox change to an established system of belief. The irony is that the RESTafarians, including Roy, are representatives of this square-based system of belief. They want no progress to be made whatsoever. On the surface, REST could easily be mistaken for a Service Oriented technology. After all, it supports 2) well, and has some aspect of 4) taken care of (it doesn't break anything because there is nothing to break). But that's the problem: there is nothing to break because there is no intent in REST, just a million RPC-like conventions, mostly at the CRUD level. When Ganesh says that you just have to POST something to /applications, and that this replaces submitApplication, where is the intent expressed? Is POST an intent? Can this application participate in different "services"? No, it is not an intent, because you have to file it yourself in the appropriate hierarchy, /applications. This is the tight coupling SOA has worked so hard to remove. Service orientation is about expressing an intent and having no particular reason to know what happens next.
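The coupling argument above can be sketched in a few lines of Java. This is only an illustration with hypothetical names (the store, the routes, the operations are all mine): the CRUD consumer hard-codes the resource layout itself, while the intent-style consumer only says "submit this application" and lets the provider decide what happens next.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of CRUD-style vs intent-style coupling. All names are
// illustrative, not part of any real API.
public class IntentVsCrud {

    // Provider internals: where things end up should be a provider decision.
    static final Map<String, String> store = new HashMap<>();

    // CRUD style: the consumer picks the collection and files the data itself,
    // so the /applications layout leaks into every consumer.
    static String crudConsumer(String applicant) {
        store.put("/applications/" + applicant, applicant);
        return "filed under /applications/" + applicant;
    }

    // Intent style: the consumer expresses intent; the provider routes it and
    // can reorganize its resources without breaking any consumer.
    static String submitApplication(String applicant) {
        store.put(route(applicant), applicant);
        return "application received for " + applicant;
    }

    // Provider-side routing: free to change at any time.
    private static String route(String applicant) {
        return "/intake/2009/" + applicant;
    }

    public static void main(String[] args) {
        System.out.println(crudConsumer("alice"));
        System.out.println(submitApplication("bob"));
    }
}
```

The point is not the three lines of code but who owns the "what happens next": in the first case the consumer, in the second case the service.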

The CORBA guys see all this as the promised land; they just pushed the problem of brittle interfaces onto developers. Interfaces are the problem, so just tell people they don't exist anymore, they have been CRUDed away. All you do in REST is "encode" all your application semantics. They can expand without breaking, but they can't change. REST is CRUD, and CRUD is as tight a coupling as you can possibly imagine.

So yes, Dilip, Service is almost as important as the "Web".

09/12/09 :: [MDE] Metamodel Oriented Programming (RE)Explained [permalink]

I have tried to explain Metamodel Oriented Programming to a few people over the last few months and I felt that I was just getting some polite interest at best. So I'd like to try to explain it a bit more.

I have created this diagram to classify executable artifacts (interpreted or compiled):

The Anemic - Cogent axis could have just been called Declarative - Imperative. It refers to whether an artifact contains "implementation" elements. An example of an implementation element is a method in an Object Oriented class.

The Monadic - Polyadic axis simply reflects the number of concepts that make up the structure of the artifact. OO languages typically have one (major) concept, the class, so they are monadic. DSLs like HTML or SCA have several; for instance, SCA has Service, Component, Composite, Domain, Assembly, Wire, Event,...

HTML is interesting because it is probably the most successful DSL on earth. Lots of people are surprised when I call HTML a DSL, but I am not sure I am wrong in doing so. Yet HTML without JavaScript (i.e. an anemic HTML) would be quite a bit less useful. I am not sure the Web would be what it is today without the cogency that JavaScript brings to HTML.

SCA is also interesting because it shows how you can augment all sorts of general purpose languages with a DSL: SCA + Java + BPEL starts looking a lot like a Polyadic Cogent programming model. SCA alone, though, is completely anemic.

This figure calls for a missing category of languages: Polyadic Cogent programming languages. Which raises two questions:

1) what do they look like?
2) what are they good for?

What does a Polyadic Cogent Programming Language Look like?

First, it is polyadic, so pick your favorite (anemic) DSL. Here is one with 3 concepts (the more the merrier):

Now add some implementation elements to your DSL, just like in OO, a class has a method:

You can of course choose the multiplicity of the implementation element (0, 1 or more). A (SOA) Service is interesting because it can have 0 or 1 implementation, depending on whether the Service Implementation is based on a BPEL orchestration or each operation is implemented individually. Yet the implementation elements (BPEL or otherwise) are all completely separate from the Service Definition (WSDL). If it is a good thing to have a separate contract definition, it also completely hides the programming model behind SOA, and that's bad. It has led to all kinds of interpretations and, frankly, to poor implementations, frustrations and clueless analysts talking about the death of SOA.

So at the metamodel level (M2), you just add an Implementation element, but how do you write it at the metadata level (M1)?

First you need some processing instructions and basic types. There are two types here: your favorite Turing-complete set and an orchestrated set. We should probably combine them both, add events to the mix, and have just one standard processing set which could be personalized with your favorite syntax.

The second type of instruction is related to the lifecycle of the elements of the DSL: A, B, C. When you say:

A a = new A();

you are simply starting the lifecycle of an instance. Similarly, a Class in an OO runtime has a simple lifecycle: Loaded, Unloaded. A service, in a service container has a couple more states: Loaded -> Started -> Stopped -> Unloaded. Please note that some of the states and transitions may be implicit (e.g. delete() or gc() ).

So just define the lifecycle of each of the elements of the DSL, assign a catchy name to each transition between states, and voila, you have your Polyadic Cogent programming language.
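The recipe above (states plus named transitions per DSL element) can be sketched as a small state machine. This is only an illustration using the service container states mentioned above; the class and method names are mine, not part of any MOP specification.

```java
// Sketch: a DSL element whose lifecycle is Loaded -> Started -> Stopped ->
// Unloaded, with one named transition per arc. Illegal transitions fail fast.
public class ServiceLifecycle {

    enum State { LOADED, STARTED, STOPPED, UNLOADED }

    private State state = State.LOADED;   // "new" starts the lifecycle

    void start()  { move(State.LOADED,  State.STARTED);  }
    void stop()   { move(State.STARTED, State.STOPPED);  }
    void unload() { move(State.STOPPED, State.UNLOADED); }

    // Only the declared transitions are allowed.
    private void move(State from, State to) {
        if (state != from)
            throw new IllegalStateException(state + " -> " + to + " not allowed");
        state = to;
    }

    State state() { return state; }

    public static void main(String[] args) {
        ServiceLifecycle s = new ServiceLifecycle();
        s.start();
        s.stop();
        s.unload();
        System.out.println(s.state());
    }
}
```

In a real Polyadic Cogent language this table of states and transitions would be declared per metamodel element, not hand-written per class.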

I don't know why the DSL people hate code (i.e. implementation elements), and I don't know why the GPL people keep adding annotations or stitching code behind your DSL. These approaches simply don't make any sense.

Nothing is simpler and cleaner. You have complete control over what the language can do; you can even add constraints about which transitions a particular implementation element is allowed to call. I can create very robust Polyadic Cogent languages in a couple of days using tools like xtext from openArchitectureWare.

What are these languages good for?

Well, if you see the number of annotations floating around these days, Polyadic Cogent Languages should be able to clean this up in no time. At the end of the day, an annotation is just a DSL element, and people want to use the syntax of their favorite language to add some implementation element to that particular DSL element. The problem with this approach is that we put a class wrapper around the DSL concept and we let developers loose to do whatever they want with the class. This approach is wrong: we are introducing vast amounts of inefficiency for absolutely no gain at all. SCA's annotations in Java are a perfect example of this problem. Don't get me wrong, I really like SCA, but wrapping services (with bi-directional interfaces), wires, composites and what not behind Java classes introduces levels of complexity that can easily explain why SCA is not used more. The more polyadic your DSL is, the messier it will be to use annotations.

The same is true of code behind (not to mention annotated code behind). Code behind is a bit more aligned with MOP, as it is based on events (the occurrence of a state), but there is no clear separation between the code associated with each metadata element (like a class-method relationship). The code generally sits behind a metadata set and generally exhibits poor reusability and factoring.

Annotations and Code Behind approaches introduce two dire side effects:

1) there is no clean separation between the metamodel level and the underlying container of the business logic (expressed solely from Metadata and Implementation elements)

2) they prevent the metamodel from spanning multiple tiers of the architecture

I cannot emphasize enough how bad this coupling between metamodel and container is. Architecture has been widely successful over the last 15-20 years. We have been able to create sophisticated distributed systems that bring tremendous value to their users; no question about that. The problem, though, has been stitching all these tiers together with lots of boilerplate code, often hand coded, always hard to change.

Today there is a great need to create architecture independent programming models:

This is exactly what a Polyadic-Cogent Programming Language can do, spanning n-tiers, from data to presentation.

Too many companies are stuck today with what they built over the last 10 years. The containers are going out of support and yet they can't change them because they wrote all the business logic within the containers, using their APIs.

In a Cloud world, it is even more critical to be able to create architecture independent programming models.


08/22/09 :: [SOA] SOA is Failing [permalink]

Apologies for not blogging, I am (really) swamped working on a SOA project...

As I was letting my main computer install some of the latest Vista security updates, I looked for the first time in 2 months at my Google Reader, and as you could guess, (Microsoft's) Dave Chappell's latest post caught my eye. I am not sure why I missed his interview in 2008, but he basically explained then, and now, that "SOA is failing".

Dave and I have a long history. I have related this incident several times in previous posts, but for the readers who don't know about it: I was sitting next to Dave at an Indigo (i.e. WCF) SDR (Software Design Review) back in 2004 or thereabouts. At some point we struck up a conversation and I explained to Dave some of my views on how "orchestration" relates to Service Orientation (i.e. as a Service Implementation technology and not (really) as a Service Orchestration technology). The difference is subtle, and I have explained my views on that question many times. At that point, Dave told me that I was wasting my career trying to say things that differed from what Microsoft was saying. He also spent a fair bit of time in the 2006-2007 timeframe explaining to the Microsoft world why they should not look at SCA, in particular by rejecting the idea that an interoperable assembly mechanism had any value (thank you Dave, what an accomplishment!).

He did use these words "wasting your career". I guess he did not waste his.

Watching Dave's presentation is worth its dose of humor: "Object works as a design paradigm", "Reuse of business services does not work..., but... technical reuse, i.e. .Net/JEE, works", "ESB is just a bunch of vendors trying to sell you their good old integration gear", "Data access reuse works"... and Dave goes on and on and on.

The astute reader has probably made the connection that, indeed, the Connected Systems Division at Microsoft was failing so badly that it had to be put under the SQL Server division. What a demotion! I bet there are some partner architects who must feel a bit bitter about that. Ah... unless it is because "Data Access reuse works"? What do you think, Dave?

How could a company that is unable to deliver a WSDL-first capability in its SOA tool set understand anything about Service Orientation? I feel sad that Microsoft customers have to go through this kind of flawed argument simply because Microsoft can't deliver something that would make them successful with Service Orientation:

Object works as a design paradigm: but of course, Dave, this is why all enterprise application software is made up of Customer, Account, Product... classes. We all know that. This is also why you add a couple of annotations to a "class" and voila, WCF spits out a service, and what a "service" that is.

ESB is just about integration: but of course, Dave, and do you have the slightest idea of what loose coupling is and how WCF lets you achieve it? WCF is so loosely coupled that it can't even do WSDL-first, and it requires shared "stuff" between the consumers and the providers. Well done, guys.

Reuse of business services does not work: but Dave, how can you implement a Business Service with WCF? Have you even tried? Can you show any example? Have you looked at what they are doing with SCA? They even have events nowadays.

Data access reuse works: but of course, Dave, CRUD is a well-known pattern that provides an infinite amount of reuse. Actually, you can reuse CRUD operations so much that every single consumer is going to implement the same logic, CRUDing around, and then when the data changes... ah, badaboom, all the clients break.

What an accomplished career, Dave! I am jealous.

I can never repeat it enough: SOA is unfamiliar. It does require that you scratch your head a bit; oh, not that much. What does not work is trying to do what you have been doing for 20 or 30 years with SOA technologies. It is not the SOA technologies that do not work, it is what you have been doing. Object orientation does not work (with information), but loose coupling works, provided that you understand what it means. The reuse of business services works, provided that you understand how to build, version, govern and fund business services. Ah, and data access? No, it does not work: exposing data access services to consumers is like pouring concrete over your enterprise applications; it looks smooth and slick until the day you need to change a few things.

Lots of things change when you want to build a service oriented architecture:

  • Governance not just Project Management
  • Strategy and Goals not just Requirements
  • Selection not just Specification
  • Contract and Quality of Service not just Functionality
  • Policies not just Rules
  • Resources not just Data
  • Lifecycles not just Implementations
  • Events not just Messages
  • Inter-actions not just Invocations
  • Assembly not just Composition
  • Federative not just Monolithic
  • Forward Versioning not just Versioning
  • Certification not just Testing
  • Publication not just Documentation
  • Provisioning not just Deployment
  • Threats not just Security
  • Accountability not just Organization
  • ...

The key to reuse, Dave (because you seem to have no idea whatsoever about reuse, loose coupling and SOA): the key to reuse is forward versioning. What you have to understand, Dave (sorry, that's something Microsoft is not saying), is that in SOA, what you reuse is not an asset that was built last year; no, the old consumers of a service reuse a new version that was just built for a new consumer. Reuse is "forward", not "backward". As a consumer (and service provider) I prepare myself to allow the service to evolve without disruption. What a lack of imagination! This is what happens in real life every day: as a consumer of physical services, I rarely get impacted by changes in the services I consume (education, health, banking, insurance, groceries...). Life would be a nightmare if we had to be notified of everything that changes, let alone do something about it. Reuse happens when versioning becomes seamless.
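One way to see what forward versioning asks of a consumer is a sketch like the following. The message format and field names are hypothetical: the old consumer reads only what it needs and tolerates what it does not know, so a new version of the service, built for someone else, does not break it.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a v1 consumer surviving a v2 message that carries fields
// added for some other, newer consumer. Names are illustrative only.
public class ForwardVersioning {

    // The v1 consumer extracts the one field it cares about and
    // ignores everything else.
    static String consumerV1(Map<String, String> message) {
        return message.getOrDefault("status", "unknown");
    }

    public static void main(String[] args) {
        Map<String, String> v1 = new HashMap<>();
        v1.put("status", "approved");

        Map<String, String> v2 = new HashMap<>(v1);
        v2.put("favoriteCallers", "a,b,c"); // added in v2 for a new consumer

        // The old consumer keeps working against both versions.
        System.out.println(consumerV1(v1));
        System.out.println(consumerV1(v2));
    }
}
```

The discipline is on both sides: the provider only adds, and the consumer never assumes it has seen the whole message.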

05/16/09 :: [MOP] The MoDisco Project (ModelPlex) [permalink]

This weekend I discovered the MoDisco project (Eclipse).

MoDisco (for Model Discovery) is an Eclipse-GMT project for model-driven reverse engineering. The objective is to allow practical extractions of models from legacy systems. Because of the widely different nature and technological heterogeneity of legacy systems, there are several different ways to extract models from such systems. MoDisco proposes a generic and extensible metamodel-driven approach to model discovery.

MoDisco is part of the ModelPlex European project (MODELling solution for comPLEX software systems) and not surprisingly, Jean Bezivin is participating.

In many ways, I wonder how Microsoft can compete against Eclipse. It should have joined the Eclipse organization years ago; it could benefit today from all kinds of modeling technologies that it will never be in a position to develop. Microsoft has Software Factories and DSL Tools, and nothing else (Oslo will be coming some day). It has not built any significant community around these assets and most likely will not be able to. Once it falls behind in MDE, it will not be able to catch up, and this will have dire consequences for its overall strategy. What the folks in Redmond don't seem to realize is that modeling is technology agnostic. It is no longer about Java vs .Net; once you understand what Metamodel Oriented Programming is, you realize that there won't be any room for a Microsoft version of MOP. Ironically, Microsoft was one of the first to MOP around, way back in the early 2000s. But it got out of control, because they never understood they were MOPing one attribute at a time. Microsoft could have positioned itself to compete advantageously on creating interpreters and compilers for MOP and delivering the best engines in the industry (and it certainly could do that if it wanted to), but if they don't even understand what Metamodel Oriented Programming is, how could they build an MDE community and a customer base? The amount of IP that Microsoft is leaving for others to land-grab is just staggering.

This paragraph, of course, begs the question: what are Oracle, Amazon and Google doing about MOP? Oracle seems to be in better shape than the others: Fusion is polyadic and they are rewriting all their apps with it. Amazon, I think, does not realize how critical MOP can be to the Cloud and probably thinks it can afford to play below the MOP layer. Google, well, I am sure they think that Protocol Buffers can solve any problem known to man. Google seems to be deeply rooted in the monadic tribe, and everyone there seems ready to reify everything and anything behind a class, and occasionally a resource or an (Atom) feed. You could even argue that Google drives towards an anemic monadic programming paradigm. When all you do is "search", you probably don't care much about cogency and polyadism. I did not mention SAP, but you guessed it: I am sure someone there is going to conclude that ABAP was MOP-ready before anyone else and therefore they should stay the course and ABAP happily ever after.

What I find interesting too is that Europe seems to be way ahead of the US in MDE.

Now, I must admit that I had something like MoDisco in mind for once MOP would be a bit more formalized. It seems like a natural progression: once you understand that when we write code we have in mind a cogent DSL as part of a polyadic programming model, even though we only have the constructs of a monadic programming model to express it, it means that most code is full of patterns (explicit or not) that can be de-reified into a cogent DSL. De-reify, ah! What a word. A couple of weeks ago I started to look at how traditional patterns play into Metamodel Oriented Programming. If all goes well, and MOP really can do what I claim it can do, these patterns should disappear from the cogent-DSLs (in general). Some new patterns may appear, but the goal of MOP is really to program without the need for patterns; this is the role of the DSL and the resulting polyadic programming model.

For instance, if we take a look at creational patterns, e.g. the Abstract Factory pattern, we see that a cogent-DSL should abstract away the need to use this pattern in the cogent-DSL itself (not in the interpreter or the code generation tool that processes the c-DSL into an executable). If we go back to my 3-element DSL example, we can see that an Element1 instance has the ability to manipulate Element3 instances. Element3 is stereotyped with a resource lifecycle.

This means that an Element1 operation can do this kind of thing (but remember, it cannot manipulate Element2 instances):

void myOperation() {
    Element3 r = new Element3();
    // do some stuff with it
    r.&Element3();  // move r into the archive state
    // we can't do much now that it is in the archive state
    r.~Element3();  // delete the resource
}

How does the Abstract Factory pattern play here? Well, remember that the goal of MOP is to enable Architecture Refactoring, so when you say new Element3(); you actually specify that you need a new element, you just don't know how this will happen. Some code might be written in the Element3() constructor, or it might be entirely virtual and resolved at "Architecture Factoring" time, using in effect an abstract factory pattern (or not). What MOP gives you is the opportunity to completely separate the business logic that is specific to the solution (and written in the constructor implementation) from what is specific to the architecture (which is using the Abstract Factory pattern, for instance).
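The separation described above can be sketched in plain Java, with hypothetical names throughout: the solution code just says "I need an Element3", while the architecture decides, via a swappable factory, how that instance actually comes to exist (local, pooled, remote...).

```java
// Sketch: the c-DSL's "new Element3()" resolved at architecture-factoring
// time through an abstract factory. All names are illustrative.
public class ArchitectureFactoring {

    interface Element3 { String where(); }
    interface Element3Factory { Element3 create(); }

    // Architecture choice #1: a plain local instance.
    static final Element3Factory LOCAL = () -> () -> "local";

    // Architecture choice #2: e.g. a pooled (or remote) instance.
    static final Element3Factory POOLED = () -> () -> "pooled";

    // Solution-level logic: expresses "I need an Element3" and nothing more.
    static String businessLogic(Element3Factory factory) {
        Element3 r = factory.create();   // the c-DSL's "new Element3()"
        return "did work with a " + r.where() + " Element3";
    }

    public static void main(String[] args) {
        System.out.println(businessLogic(LOCAL));
        System.out.println(businessLogic(POOLED));
    }
}
```

In MOP the factory choice would not even appear in the solution code; it would be a property of the architecture binding, which is the whole point.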

It works the other way around too: you can easily detect Abstract Factory patterns in a given code base and de-reify them into a c-DSL following MOP principles. One of the most important things to realize in this de-reification process is that DSL elements have a lifecycle, and that lifecycle must be de-reified from the code. If you don't do that, the de-reification process might not succeed, because some intermediary actions (such as "archive" in the example above, noted &Element3()) might not easily fit in the cogent-DSL or even be detectable by the de-reificator.

Today I remain convinced that the next decade will be the Model-Driven Engineering decade, or more exactly the MOP decade. As I said, everyone is MOPing today, and has been for quite some time, like Mr. Jourdain, unbeknownst to them. So I don't feel any kind of ownership of MOP; I am just trying to formalize what everybody is doing, in an architecture independent way. Sure, there are still a few die-hard monadics who think they can build any kind of solution with just X or no more than Y. These guys will spend the rest of their lives deriving all kinds of patterns and coupling or cohesion rules. They'll write countless books and articles describing their witchcraft while trying to sell their "knowledge" at a hefty premium. So what? I don't think it will be really hard to convince the 99.99% of us -the polyadics- who just want to do their job in a sensible way that they need to bring some structure to what they have been doing for well over a decade now. Just tell me which company in the world would not want to express the solutions it is using (business, industrial,...) in an architecture independent way. Just give me one... Can you imagine? There is even the prospect of de-reifying billions of lines of code into cogent-DSLs. Who would say no to that?

05/09/09 :: [MOP] More on Implementation Elements [permalink]

Implementation elements are the core of MOP: they are the main difference between anemic and cogent DSLs, and they represent a key enabler of polyadic programming models (PPMs).

The question becomes: how do you specify these elements in a DSL? In traditional textual-DSL approaches, there is no real difference between them and the DSL element definitions. For instance, here is a textual-DSL that specifies an orchestration language in xtext (openArchitectureWare).

As you can see, the "syntax" of the implementation elements (here, Orchestration) has no real boundary with respect to the other elements of the metamodel (Message,...). You kind of guess that you are dropping into an implementation element because you are opening a curly bracket (which is not always true). Worse, textual-DSL frameworks force me to (re-)define the implementation syntax (AlgebraOperators, LogicOperator, Connector...).

The corresponding Abstract Syntax Tree associated with an instance document (i.e., in this case, an orchestration definition) again makes no difference between the metadata and the implementation; they all look the same in the AST (the AST is on the right-hand side):

As cogent DSLs (c-DSLs) are being defined with traditional textual-DSL frameworks (since textual-DSLs are conducive to defining implementation elements), c-DSL designers will design ad hoc syntaxes with little or no verification possible. This will create a significant impediment to the development of c-DSLs, as we can expect a "babelization" of syntax definitions while making runtimes and interpreters harder to build.

The question is: how can we provide reusable implementation element specifications? It is actually quite trivial, if only you care to understand that this is a problem. The key is to define an M3 layer (if you don't have one) or add a dimension to more traditional M3 layers (Ecore, KM3...). How would that M3 layer work? Let's take a 3-element DSL (i.e. metamodel):

I defined the M3 types as "C", "D" and "S". They are abstract types and carry some specific properties common to all metamodel elements that are "stereotyped" with that abstract type. What the metamodel says here is that Element1 has two or more operations, and these operation elements can manipulate Element3 instances in addition to Element1 instances. However, these operations cannot manipulate Element2. Note that this is independent of any relationship that may or may not exist between these elements at the metamodel level.
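One way such a manipulation constraint could surface in a generated API is sketched below (names are hypothetical, not part of any MOP tooling): an Element1 operation is handed a context that exposes Element1 and Element3 instances only, so touching Element2 is simply not expressible.

```java
// Sketch: enforcing "Element1 operations may manipulate Element1 and
// Element3, but not Element2" by construction. Illustrative names only.
public class ManipulationScope {

    static class Element1 {}
    static class Element2 {}
    static class Element3 {}

    // The operation context generated for Element1 operations.
    interface Element1Context {
        Element1 self();
        Element3 newElement3();   // allowed
        // no accessor for Element2: the constraint is enforced by omission
    }

    static String someOperation(Element1Context ctx) {
        Element3 r = ctx.newElement3();  // legal; Element2 is unreachable
        return "created " + r.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        Element1Context ctx = new Element1Context() {
            public Element1 self() { return new Element1(); }
            public Element3 newElement3() { return new Element3(); }
        };
        System.out.println(someOperation(ctx));
    }
}
```

The interesting part is that the constraint lives at the M3 level and is merely projected into the generated context; the operation author never sees a rule, only an API that cannot express the violation.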

What do I mean by manipulate? Well, this is where the metadata specified at the M3 level comes into play. For instance, one of the key elements is the lifecycle of a particular abstract type. For instance, type "C" has this kind of lifecycle:

That's why it has to have at least two operations (one constructor and one destructor).

"D" elements have this type of lifecycle:

And you guessed it, the "S" element has this type of lifecycle:

There are a few other things that need to be defined to fully specify these implementation elements, for instance the type: procedural, orchestration-based, template-based... You could even declare your favorite "syntax": Java, C#, Objective-C, C++, APL ...

As far as I can see, nobody has spent the time to define that level in the textual-DSL world: Intentional Software, openArchitectureWare, JetBrains... Even KM3, though extremely compact and well designed, does not offer the possibility to differentiate between implementation elements and the other elements of the metamodel.

As you can see, this is not a question of textual vs visual; it has nothing to do with that. The question is: Cogent vs Anemic DSLs and Monadic vs Polyadic Programming Models. So far, Software Engineering has always been based on monadic programming models and anemic DSLs:

Polyadic programming models and cogent DSLs, such as ASP.Net for instance, were only achieved by combining an anemic DSL with a monadic programming model (via code behind). You will notice that at this point I don't make any difference between a general purpose language like Java and a DSL: Java is just a cogent DSL that defines a monadic programming model. Similarly, WSDL is a (totally) anemic DSL that "participates" in monadic programming models; it does not, of course, define a programming model itself.

Creating programming models was hard to do in the visual-DSL era, but a lot more people are going to be tempted to create cogent DSLs using current textual DSL frameworks. These new cogent DSLs will inevitably lead to polyadic programming models. Polyadic programming models already exist: HTML+JavaScript or ASP.Net + code behind are good approximations of PPMs. However, these programming models were developed without a strong modeling foundation.

We still have a choice today to innovate and understand how cogent DSLs can help us create Polyadic Programming Models, or we can go the "classical" route and treat everything and everyone equally in the AST with an implied MOF-like M3 layer. We have the opportunity to open the door to architecture refactoring and architecture-independent solution models. We have the opportunity to advance the state of Model Driven Engineering, or remain classical; we have the opportunity to dramatically improve the productivity of our industry in an increasingly complex architecture landscape, or we can define corny rules around coupling and cohesion, invent new monadic programming models (such as the (other) REST), or return to Functional programming.

We are at a point where solution models and architecture need to part. General Purpose Languages will stay on the architecture side while cogent DSLs will take over the solution space. There is simply no other path of evolution.

05/08/09 :: [MOP] MOP and Modularity [permalink]

Sanjiva pointed to a moderately interesting post on OSGi. In this post, Patrick Paulin argues that modularity and visibility represent major Software Engineering advances. He illustrates his argument with a quote from Steve McConnell:

Software development has advanced in large part by increasing the granularity of the aggregations that we have to work with.

As you know, I despise monadic programming models (MPMs). They are the root of all pains and aches in Software Engineering. They are the very reason why you need to painfully define artificial boundaries and coercion (not cohesion). What's interesting is that as the granularity increases, you unwittingly enter the MOP space.

The key problem that Software Hobbyists are simply not getting is that ANY monadic programming model is bound to fail, be it based on Classes, Modules, Services, Resources, Processes, Functions or whatever you think is an appropriate abstraction. Some domains might give you the illusion that such programming models work because they indeed map well to a single concept, but the reality is that as you expand your solution's footprint, monadic programming models will increasingly struggle to solve the problem. How many rules and programming guidelines can you define for coupling or cohesion before you realize you are on the wrong path? How many four-square diagrams do you have to draw to understand that (physical) solutions are diverse? How many engineered systems are built with one concept (e.g. Legos)?

MOP has three key principles:

  • Cogent DSLs define a polyadic programming model (PPM)
  • Every implementation element in the PPM is constrained by the elements of the metamodel it is allowed to manipulate, unlike MPMs which let you manipulate anything anywhere.
  • The rules of the implementation element programming models are defined in an M3 layer and can therefore be reused across different metamodel elements, but more importantly can differ from one metamodel element to the next
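The second principle can be sketched in a few lines. The class and element names below are hypothetical, not a published MOP API; the point is only that the constraint lives with the implementation element, not in the code that uses it.

```python
# Sketch of MOP's second principle: an implementation element declares which
# metamodel elements it may manipulate, and anything else is rejected,
# unlike an MPM where any code can touch anything anywhere.

class ImplementationElement:
    def __init__(self, name, may_manipulate):
        self.name = name
        self.may_manipulate = set(may_manipulate)

    def manipulate(self, element_kind):
        if element_kind not in self.may_manipulate:
            raise TypeError(
                f"{self.name} may not manipulate {element_kind} elements")
        return f"{self.name} manipulated a {element_kind}"

# Mirroring the Element1/Element2/Element3 example: Element1's operations may
# touch Element1 and Element3 instances, but never Element2.
op = ImplementationElement("Element1.operation", {"Element1", "Element3"})
op.manipulate("Element3")        # allowed
try:
    op.manipulate("Element2")    # rejected by the M3-level constraint
except TypeError:
    pass
```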

When will the Software Hobbyists understand that both anemic DSLs and monadic programming models are dead? They don't exist; only the hobbyists still see them, creating billions of worthless LOCs and googabytes of dirty data.

05/06/09 :: [MOP] Implementation Elements [permalink]

The part that I like most about InfoQ is that there is always some great content or a pointer to some great content pretty much every day. Abel Avram wrote this great summary today on "Language Workbenches". He provides a pointer to the first public preview of the Intentional Software Domain Workbench (IDW).

Like Markus, I would advise anyone interested in DSLs to take a look at it. There is serious thinking behind this new product. Ironically, they only show examples that I would qualify as part of Solution domains, not Problem domains. You know my position on that topic: it is more important to develop highly productive solution models than to try to involve the people who can define the problem in the solution construction. So I have yet to see Intentional Software show a problem domain.

I was both impressed and disappointed by the demo. Impressed because they seem to have built a robust MDE tool with lots of great ideas, and disappointed because, like every other MDE framework, it is based on an AST. An abstract syntax assumes a MOF-like M3 layer. This is true of openArchitectureWare or JetBrains MPS as well. In other words, an AST-based approach simply ignores the M3 level.

For me, implementation elements are not just another part of the AST; they have rules and constraints that are somewhat harder and less efficient to define at the M2 level. This is where an M3 level comes into play. Ignoring M3 prevents the definition and reuse of standard implementation semantics such as an implementation type (procedural, orchestration-based, template-based...) and lifecycle definitions. The lack of an M3 layer basically treats implementation elements syntactically. It forces everyone to reinvent their own little syntax and rules. This is a degree of freedom that is not needed and that may endanger the very foundation of Model Driven Engineering, assuming that people finally understand that anemic DSLs are just toys and start taking advantage of implementation elements (of which a method is a particular case). In other words, freedom must be given at the metamodel definition level, but implementation element semantics must be constrained to facilitate the creation of standard interpreters and compilers and to contribute to the convergence of programming skills. What is needed today is not a syntax parsing framework but a framework that supports the design of "cogent" DSLs (as opposed to anemic ones). Cogent DSLs are the reason why we need textual DSLs; an (anemic) textual DSL has hardly any benefit on its own, it brings nothing new.

I said before that UML has lost its original purpose and has become an M3 layer, and I stand by that statement. Unfortunately, UML is no MAF. However, we can illustrate how a Meta-Architecture-Framework could be used to enable cogent DSLs: as an M3 layer, UML is somehow expressed backwards (UML was never designed to sit at the M3 level). When someone defines a stereotype <<foo>> on an element of the UML metamodel (say a class), it really means that the metamodel element foo is of stereotype class (since foo itself is a type). This is how an M3 layer should be used, for example: I should be able to say that this particular metamodel element (M2) behaves like a class (M3), a service, a resource, a message, an event... At that point, it becomes possible to infer the semantics of the implementation elements, which become uniform across all DSLs. Having the ability to design syntax freely for a given implementation element will drive us into a wall. It will create a universe of micro-languages and drive a high level of inefficiency.
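The "behaves like" direction can be sketched as follows. The M3 catalogue and the lifecycle values are illustrative assumptions, not a defined MAF; the point is that an M2 element declares its M3 type and inherits uniform semantics from it, rather than the M3 type being pasted onto it as a stereotype.

```python
# Sketch of the "backwards UML" point: instead of stereotyping a UML class
# with <<foo>>, the M2 element foo declares which M3 type it behaves like,
# and its implementation semantics are inferred from that M3 type.

MAF_M3 = {
    "class":    {"lifecycle": ["created", "released"]},
    "service":  {"lifecycle": ["deployed", "started", "stopped",
                               "failed", "undeployed"]},
    "resource": {"lifecycle": ["created", "archived", "deleted"]},
}

def define_m2_element(name, behaves_like):
    """An M2 element inherits uniform semantics from its M3 type."""
    return {"name": name, "m3": behaves_like,
            "lifecycle": MAF_M3[behaves_like]["lifecycle"]}

# A hypothetical metamodel element that behaves like a resource (M3):
order = define_m2_element("PurchaseOrder", behaves_like="resource")
assert order["lifecycle"] == ["created", "archived", "deleted"]
```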

I am certain that the MDE pundits who marvel at textual DSLs will happily either ignore (as non-classical) or reify implementation elements behind the basic principles, sorry, the classical principles of MDE, i.e. implementation elements are just a bit more metadata and constraints. When I look at the market today, it's clear that few vendors, if any, will take the time to develop a Meta-Architecture-Framework, but hey, we can always dream.

05/05/09 :: [MOP] Who moved my Coupling? [permalink]

I read some of the responses Ian came up with, as well as this comment from Saul Caganoff. It seems that we are at an interesting point in software history. Lots of old ideas keep being recycled by an old guard of developers and architects, applying over and over the things they often learned in school and have applied religiously since then.

If you are not American, you have probably never heard of the book "Who Moved My Cheese?" (and boy aren't you lucky; the CEO of one company I worked for asked every employee to read it before announcing significant layoffs, and I cannot tell you how embarrassed I was to be working for this company after I read the book). The book tries to drive you along a path:

Change Happens
Anticipate Change
Monitor Change
Adapt To Change Quickly
Enjoy Change!
Be Ready To Change Quickly And Enjoy It Again & Again

As you can see, that's not really rocket science. When you read these few lines you realize how retrograde the vast majority of the software industry is. Every so often a spike happens, for instance Object Orientation, Extensible Data Structures, REST (Roy's REST) or Message Oriented Middleware, and how does the industry react to "change"? It springs back to where it was as fast as it can. The pundits reify every new idea behind corny textbook ideas while self-proclaiming their understanding of the innovation in play. They make a living out of it and write more books to make sure every new idea is properly hashed down into these "timeless" concepts. I was wondering if some of the folks at ThoughtWorks would consider writing a book on Who Moved my Coupling? The path to illumination would be a tiny bit different:

Change Happens
Control Change
Reify Change
Eliminate Change Quickly
Don't Change
Enjoy Status Quo!
Be Ready To Kill Change Quickly And Enjoy It Again & Again

I bet this book would be even more successful than the Cheesy one...

Unlike other industries, software engineering has no (apparent) gravity or any other annoying physical constraints, let alone measurable metrics. Once in a while a company dies, and people never look back at "reification" as the major cause of death; it was probably sales and marketing, or the stupidity of the customers, that was at play.

So next time you design a system, just ask yourself how much reification you baked into your design: the degree of reification, i.e. the number of concepts you used to construct the solution divided by the number of concepts available in your programming model. I bet you'll be surprised how well this number correlates with level of effort, risks, maintainability...
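The ratio is ambiguous about exactly which concepts are counted; one possible reading is the number of concepts the solution had to express divided by the number of concepts the programming model natively offers, so that a higher number means more reification. The function and the numbers below are purely illustrative under that reading.

```python
# One illustrative reading of the "degree of reification" metric:
# concepts the solution needed / concepts the programming model offers.
# A ratio above 1.0 means concepts had to be reified behind others.

def degree_of_reification(solution_concepts, model_concepts):
    return len(set(solution_concepts)) / len(set(model_concepts))

# A connected system needing services, resources, processes and events,
# expressed in a class-only programming model: heavy reification.
assert degree_of_reification(
    {"service", "resource", "process", "event"}, {"class"}) == 4.0

# The same solution in a model offering dedicated concepts: no reification.
assert degree_of_reification(
    {"service", "resource", "process", "event"},
    {"class", "service", "resource", "process", "event"}) < 1.0
```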

The question is: will we continue on the path of reification or will we for once "change"?

05/01/09 :: [MOP] Meta-Architecture-Framework II [permalink]

I re-read the 2006 paper from Wil van der Aalst et al. on a "SOA based Architecture Framework". This is a good paper: it creates a metamodel of SCA, a posteriori, and defines SOA that way. This time around, my read was completely different from the first time. It is really interesting to look at all their assumptions in the light of MOP and MAF.

One very successful approach for handling complexity is modularization 

I would argue that this is especially true in monolithic programming models: when everything IS-A xxx, you have no other choice than taming the complexity of the system through a systematic partitioning of the concepts reified behind xxx. They continue:

A collection of modules that are properly connected to each other, should behave as one module itself...In object-oriented programming modules, called classes or objects, are first class citizens. During the last decade modularization is considered as the most important feature of a design of a system. In the rest of this paper we will use the term component for a module

From this concept they propose the definition of an architecture:

An architecture of a system is a set of descriptions that present different views of the system. These views should be consistent and complete. Each view models a set of components of the system, one or more functions of each component and the relationships between these components.

I don't necessarily dispute the definition; an architecture will always be composed of at least a technical view and a physical view, and possibly a "deployment" view that expresses how a solution is deployed on a given architecture. But you realize that, because people have not been able to define a unified programming model abstracted from a particular architecture, they have been forced to create "logical views" based on the programming models of each element of the architecture, stitching these programming models together on an ad hoc basis. Sometimes giving up altogether (do you see a lot of people modeling JavaScript?).

For example, a view could show a data model of some components and the inheritance relationship between the components.

Ah... inheritance... I am wondering how many components in my car inherit from each other.

The meat of the paper is Section 4. They analyze the requirements of a SOA-based architecture framework. Not surprisingly, but in the most ironic fashion, they start with:

The basic concept of an architecture framework should be a component.

I don't know what it is with Software Engineers, but everything has to be of "one kind". How about everything is a "thing" (after all, Reenskaug, the inventor of the MVC pattern, had originally called it the Thing-Model-View-Editor)? Would that work for everyone?

And of course, the programming model is just as monolithic:

A component should also have an internal structure that consists of a partially ordered set of activities. Activities describe the component's behavior. Zero or more data elements, which are global to the component. Data elements can be used to configure the component.

Interestingly enough, the authors realize that this monotheism is impractical and suggest leaving the model open with three plugins:

  • A process formalism describes the ordering of the activities in a component.
  • A data model defines the data elements, their types and their methods
  • A language defines the operations of activities

I did not speak much about "implementation" elements as part of MOP. As Wil et al. point out, several formalisms should be allowed: procedural, orchestration-based or template-based. Defining specific "things" which all have "implementation" elements that can manipulate some other "things" (not all of them) is what's different in MOP. I think Wil et al. intuitively understand that "implementations" need to be added at the right level in the metamodel, but they fail to create a general formalism to do so. For instance, we could define a "transformation" thing which can be applied to other "things" (e.g. types) with a template-based implementation language (say XSLT). I apologize for using "thing"; I just want to emphasize that it is incorrect to try to find a generic type that all "things" inherit from or behave like.
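The "transformation" example can be sketched as follows. The registry layout and names are illustrative assumptions; the point is that each "thing" carries both its implementation formalism and the (restricted) set of other "things" it may be applied to.

```python
# Sketch of a metamodel where each "thing" declares its implementation
# formalism and which other "things" it can be applied to. A "transformation"
# has a template-based implementation (think XSLT) and applies only to types.

METAMODEL = {
    "type":           {"formalism": "procedural",     "applies_to": set()},
    "transformation": {"formalism": "template-based", "applies_to": {"type"}},
}

def apply_element(element, target_kind):
    allowed = METAMODEL[element]["applies_to"]
    if target_kind not in allowed:
        raise TypeError(f"{element} cannot be applied to {target_kind}")
    return (f"{element} applied to a {target_kind} via "
            f"{METAMODEL[element]['formalism']} implementation")

assert "template-based" in apply_element("transformation", "type")
```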

Even though I don't like J.D. Meyer's Application Model (which is the same one Microsoft had in 2002), and the fact that he is OK with publishing bogus REST definitions, I do think his work on "Language for Architecture" is quite useful, as he set out to "Map Out the Architecture Space" and came out with this:


The picture does not do justice to his work, but it provides a description of where a MOP model can be deployed and some of the aspects that need to be injected in the process. It also provides a foundation for studying how architecture refactoring can work.

In many respects, it does not matter what programming model vendor I, M or O came out with. The premise behind MOP is that there is no unique programming model, at least today. I actually don't think anyone can design a ubiquitous programming model. Thinking that Java or C# would ever produce such a programming model was quickly invalidated by JEE or all the code behind that Microsoft added here and there. I understand that MOP is the Tower of Babel of Software Engineering; I understand that everyone coming into MOP will come from an OOP/MOF perspective and leave with a dedicated programming model, often incompatible with those of others. But who could argue today that OOP/MOF is sustainable, even with the crutches of anemic DSLs?

At the end of the day, the problem is not to (painfully) model corny programming models; the problem is to find programming models that don't require modeling to understand what a particular element of a solution does.

04/30/09 :: [MOP] Meta-Architecture-Framework [permalink]

The main goal of MOP is to help create meaningful programming models adapted to the class of problems they are trying to solve. MOP is on the solution side; in other words, it is designed to be used by developers. The second main goal is to help make these programming models (technical-)architecture independent. Enterprises of all sizes are suffering tremendously as they create business logic in proprietary programming models. Vendors suffer too: once this business logic is written, customers can't afford to upgrade from one version to another, because they can't afford to migrate their business logic to the new library, component model, super-duper client-side rendering engine...

I am not claiming that MOP will immediately yield tremendous productivity gains, as people will need to create the engines behind the metamodels they create. All I am claiming is that OO has run its course: in many domains, especially in a connected system, nothing can be modeled by a Class, yet developers and vendors alike end up reifying many useful concepts into a class with a bunch of methods. I bet most people today think of a Service as a Singleton and of a SOA as a JEE App Server full of Stateless Session Beans.

If OO will get us nowhere, I have also explained here that anemic DSLs are a dead end. They will simply never yield a programming model sophisticated enough to build complex solutions.

The goal of this post is to talk about the M3 layer of MOP, dubbed MAF, the Meta-Architecture-Framework (not Facility). Before I talk about MAF, I'd like to express what I think is wrong with all the Model Driven Engineering approaches I have seen so far. As I said in my post on anemic DSLs, using an OO-based M3 layer is the key problem. I did some research on ArchiMate, which seems to be gaining momentum as it is associated with TOGAF. ArchiMate is an Enterprise Architecture metamodel. Sure enough, its M3 layer is OO-based:

This is what the authors argue:

The enterprise architecture concepts themselves can be defined as specializations or compositions of the generic concepts at the top of the triangle. Another way to look at this is to view the generic concepts as a general means to define the enterprise architecture concepts: they can be considered the concepts to describe the metamodel...

This is quite unfortunate. The second major mistake people make in MDE is to try to find an articulation between the different layers of abstraction; again, this is ArchiMate's overview (M2):

This kind of approach is hopeless. The whole idea behind MOP is to create a logical view of the solution which is independent of the technical and physical architecture: the connection is done at the engine level, not at the modeling level. I also want to re-emphasize that MOP is not about bringing business people into the development process but rather about making developers so productive that they can build what the business needs in a very short amount of time, and if it is not what they wanted or if their needs have changed, it can be trashed without second thoughts. Throwing away 3 days of work to better meet the needs of the business is a no-brainer. Throwing away 3 years of work is another question.

I understand that ArchiMate is not trying to create programming models but rather a model-driven enterprise framework that could be used to model complex business systems. The value of such an approach is IMHO negative. The models are hard to come by, hard to read, totally out of sync with reality and don't have any practical use. I'd be interested to hear Nick Malik's opinion on this, since he gave us hints he was working on such frameworks in Microsoft IT. Unless you can create a connection with a runtime, there is little value in creating (and maintaining) a long-term model. MOP's goal is to create readable programming models (which could also be transformed into a graphical representation), based on concepts adapted to the solution that you are building. When people assemble a car or a house, they never look at the molecular structure of the parts; they don't build parts molecule by molecule (ok, some electronic devices are built that way, one monolayer at a time).

So what is MAF? MAF is still work in progress, but at the core, MAF is a classification of the types of architecture elements. One of the key dimensions of this classification is the lifecycle of these elements. I am not completely sure yet that the lifecycle lives in MAF; I am just trying to figure out which M3 layer we need to infer the programming model of a particular (vibrant) DSL. Ultimately, I am conscious that people may need to extend MAF or override the lifecycle definition at the M2 level, but that's not too bad.

Let me illustrate why a lifecycle is important to the programming model. Let's take some architecture elements of a connected system:

  • Type: created/released
  • Service: deployed/started/stopped/failed/undeployed
  • Resource: created/archived/deleted
  • Assembly: deployed/started/stopped/failed/undeployed
  • User: created/authorized/suspended/deleted
  • Role: <<uses>> Type

If I want to create an OO metamodel, I can design an OO DSL which has the following elements: Class, Attribute, Method and Instance. A method is an "implementation" element. We would define in the DSL that this implementation element can only manipulate instances, and that the lifecycle of an instance is the one of "Type" (above).

This means that the programming model can be inferred to be:

void method(A a) {
    B b = new B();
    // do something with b and a
}

Now, if I want to express that a destructor cannot be called directly, I could specify that the lifecycle of a type is created/~release; in other words, the destructor cannot be called directly from the programming model, but it must be defined explicitly. If you want to create a programming model that uses Garbage Collection, then your lifecycle is just "created"; there is no explicit "released".

Note that when I say that a method can manipulate instances, it also means that it can invoke their methods and access their attributes. This is the default behavior. I think the key thing to understand is that the major driver of the programming model is the lifecycle.
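The way the lifecycle drives the inferred programming model can be sketched as follows. The "~" prefix convention follows the created/~release example above; the helper function and list encoding are illustrative assumptions, not a defined MAF API.

```python
# Sketch of "the lifecycle drives the programming model": a "~" prefix marks
# a lifecycle state whose operation exists but cannot be invoked directly
# from user code, and a GC-style type simply has no release state at all.

def callable_operations(lifecycle):
    """Operations the inferred programming model exposes to user code."""
    return [state for state in lifecycle if not state.startswith("~")]

explicit = ["created", "released"]   # new B(); and an explicit destructor call
guarded  = ["created", "~release"]   # destructor defined, never called directly
gc_style = ["created"]               # garbage collection: no release at all

assert callable_operations(explicit) == ["created", "released"]
assert callable_operations(guarded) == ["created"]
assert callable_operations(gc_style) == ["created"]
```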

My goal in the next few weeks will be to create the OO side of MAF to show clearly that OO is a particular case of MOP. Ideally, I would want MAF to support all the OO concepts found in different languages: JavaScript, C#, C++, Java, Ruby... That's the litmus test I have to pass. After that I will take on both SCA and .NET RIA Services. My last step will be to take on the field of BPM. If MAF can successfully describe these three domains (OO, SOA, BPM), I will consider the approach viable.

04/30/09 :: [SOA] Coupling [permalink]

I have started a discussion with Ian Robinson on his latest post looking at Temporal and Behavioral Coupling.

His post echoes somewhat Jim Webber's post on Cohesion and Loose Coupling. For some reason, I felt that both these posts came to erroneous conclusions. Yes of course principles of cohesion and coupling ought to be considered when designing a system. This is not what I am arguing.

It struck me this morning that SOA has changed software engineering forever by enabling contracts to become "virtual". I don't think anyone is more passionate about this topic, or explains it better, than William Oellermann. I have also expressed many times that the Contract is about expressing "intent", not invoking a particular piece of code. Applying good old software engineering principles (which actually all revolved around the idea of designing better contracts) is at best risky.

IMHO, both Ian and Jim do not take into account the ability to decouple the contract from the implementation. I would argue that this is probably the most fundamental principle of a Service Oriented Architecture, or more exactly of a connected system. It is critical to define very clear intent (action) and whether or not a particular intent was achieved successfully (event). Of course, a few annotations later, most product vendors have completely forgotten this principle, but it nevertheless is the most foundational principle set forth a decade ago.

That being said, I would argue that it is wrong to oppose commands and events; they are complementary in establishing intent. I would also argue that a synchronous interaction, where the intent is directly wired to a particular agent and the completion is notified synchronously to the sole requestor, is the root of all problems in the design of connected systems.
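The decoupling argued for above can be sketched as a tiny intent bus: the requestor expresses an intent without knowing which agent fulfils it, and completion is published as an event to every subscriber rather than returned synchronously to the sole requestor. All names and the bus itself are invented for illustration.

```python
# Sketch of intent/event decoupling: the contract is the intent name, the
# binding to an agent happens at runtime, and completion is an event
# broadcast to subscribers, not a synchronous return to one caller.

class IntentBus:
    def __init__(self):
        self.agents = {}
        self.subscribers = []

    def register(self, intent, agent):
        self.agents[intent] = agent          # binding chosen at runtime

    def subscribe(self, listener):
        self.subscribers.append(listener)

    def express(self, intent, payload):
        outcome = self.agents[intent](payload)
        for listener in self.subscribers:    # completion as an event
            listener(intent, outcome)

bus = IntentBus()
bus.register("ship-order", lambda order: f"shipped {order}")
events = []
bus.subscribe(lambda intent, outcome: events.append((intent, outcome)))
bus.express("ship-order", "order-42")
assert events == [("ship-order", "shipped order-42")]
```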

This is why I am so opposed to the (other) REST as a connected-system programming model. Not only does REST have no ability to express intent and support events, it offers absolutely no ability to decouple intent from implementation.

04/20/09 :: [MOP] The End IS Near [permalink]

Sorry, I did not mean to scare anybody with this gloomy post title. I was referring to the post I wrote 7 years ago where I was talking about "the future of the application model". At the time, I had made this prediction:

So let's face it there will be probably 3 infrastructure players left within a couple of years: IBM, Oracle (only if they care to be more than a database vendor) and Microsoft.

Ah... I got the timeframe wrong (ok, I was very naive), but the players right. This year, IBM will buy Red Hat, and Microsoft and SAP will merge. In the meantime, there will be 3 regional players left: Software AG, Progress and Axway, and OSS will be under the total control of IBM and Oracle (ironic, isn't it?). BTW, Sun didn't buy MySQL to do something with it, it bought it as bait.

Interestingly, the world is suddenly divided between the SCA-rich and the SCA-less folks. I am wondering what a Cloud strategy could look like when someone does not understand composition. Maybe Dave Chappell could explore this question.

Oracle's execution has been nearly flawless since the Collaxa acquisition. Today, Oracle's strategy is all but unbeatable. I am actually surprised that very few people have talked about "the Cloud" in the context of Sun's acquisition. Can you imagine the (enterprise) Cloud that Oracle could deliver (only if they care to be more than a business app and middleware vendor)? SalesForce is going to be a joke: any enterprise application, on demand!

Now, it is possible that IBM could feel threatened enough to buy SAP instead... Either way, ABAP will not make it through the next decade.

We are definitely living the end of an era. Alea jacta est: the battle is now in the (enterprise) Cloud. Ah yeah, Google? I would say "not a chance". I even think, today, that Amazon will have a tough time competing with Oracle, IBM and Microsoft. They may not have capitalized quickly enough on their first-mover advantage to cater to the enterprise.

Now, about the "future of the application model": well, it seems that I still need to speak in the future tense. When you see the level of the discussions happening around BPMN, what can I say? I could point out that none of these companies has a strong interest in creating a "process centric" application model, sadly.

One last point: I cannot but reflect on the energy that has been wasted in the last ten years, the number of projects that have been trashed and shelved. It will never be as visible as Pittsburgh's Rust Belt, but it sure feels like it. The way I see it, the areas of innovation left open are security, (business) management and monitoring, and information collection, access, search/rationalization and mining.

04/20/09 :: [MOP] M is the new O [permalink]

M is the new O. OOP no longer exists, in case you have not noticed; what everybody is doing is MOPing around without guidance or lights.

You want proof? Look no further than Kris Horrocks's post. Kris gives a classical view of Model Driven Engineering; he paints the "picture" of HTML:


Yes, this is exactly how Model Driven Engineering has been hashed for nearly two decades. I have a simple question for Kris: where does JavaScript fit? Yes, even the AJAX guys are MOPing. HTML without JavaScript has little value. Anemic DSLs have little value. Do you really think the Web would be where it is today without JavaScript, i.e. with an anemic HTML? I mean that in a RESTless way. The Web is what it is because of MOP, not because of REST. Lots of people have been MOPing with "code behind" or with "annotations" for quite some time. How long will we have to go before we leave the "O" behind? I say that respectfully: OOP has opened avenues for Software Engineering that were unimaginable 30 years ago, but Object Orientation has reached its limits.

If you want to understand the source of all software engineering aches and pains, look no further than this figure in the MOF specification, which states that everything MUST-BE-A class:

You are probably wondering if Eclipse's ecore can do better. No! Vladimir Bacvanski and Petter Graff make it very clear in this ecore tutorial:

ecore allows you to define structural models

These models are often found in organizations as:

  • UML class diagrams
  • XML Schema Definitions
  • Entity Relationship Diagrams

Why one more essential modeling structure?

  • Ecore is focusing only on the essential information
  • EMF provides tools that support
    - Code generation
    - Import/export to/from various other forms
  • It has IBM support

Kris, this is precisely the problem: everything HAS-BEEN a class in "classical" MDE. For decades now, MDE has been trapped in the Caudine Forks of OOP. For decades, we have trained millions of developers to ignore the subject over the object: an Object-based M3 layer enforces (anemic) "essentialism" over (vibrant) "existentialism".

So when Kris quotes Ben Gillis who expresses that:

Many applications are so large and complex even the most knowledgeable working on them can’t answer the above questions beyond generalities.  And, general answers aren’t good enough for a lot of development and support scenarios.

You guessed it, the question is: how can OOP survive architecture? No, not everything IS-A class and HAS-SOME operations. This view has hurt us so much in the last decade, in the wake of the rise of architecture. How can EMOF (Essential MOF) map to the HTML modeling structure? Where does JavaScript fit?

We are in dire need of going from MOF to MAF (Meta-Architecture-Framework); we are in dire need of a modeling framework that enables us to deliver architecture-friendly programming models without the stone-age techniques of code behind and annotations.

Kris concludes:

Today, application (meta)data is strewn across a wild west of distant, isolated towns with fractured infrastructure, poor communication, and little to no law and order. This is true both on the Microsoft platform and across the industry at large. It's time for something better.

Yes, it is indeed time for something better, but O-based approaches will not drag us out of the current vortex. This is not just a "metadata" problem; it is about enabling M as the new O.

04/16/09 :: [MOP] Why M3 is critical to MOP? [permalink]

Jorge Ubeda commented on my criticisms of Oslo. He emphasized two very important quotes from Charles Young:

We need to be able to specify the metamodel very precisely in order to ensure that our models are valid and well-formed in respect to BizTalk Server. However, we will probably be less interested, in this scenario, in ensuring that our metamodels conform to some meta-metamodel. One reason for this is that we probably won’t have a compelling need to exchange BizTalk-specific metadata with other systems and applications.


There is no need for developers to engage in a steep learning curve in respect to meta-metamodelling, no need to conform to unfamiliar APIs and no need to inject an unnecessary level of platform independence in the way models are specified.

Au contraire, Charles. M3 is essential to the mission Oslo has set for itself: being able to deliver, among other things, a "Connected System Programming Model". M3 is not just about "metadata exchange". Who said so?

The whole goal of Metamodel Oriented Programming is to create programming models independent of architectures; however, MOP does imply a "programming model", actually a meta-programming model. If we look at OO through the eyes of MOP, we can see how critical it is to be able to stereotype the elements of the metamodel, as this "metametadata" will define, in particular, their behavior in the "implementation" elements.

In OO, we know that a "class" can be "instantiated". In MOP you can have many different entities that don't behave like "classes". So, when we are able to write:

Class a = new Class();

It is because instantiation is defined at the M3 layer. M3 will tell me that elements of type "class" are instantiated that way, while, say, a primitive type has a different instantiation mechanism. You can easily see that in a connected system a "Resource" behaves very differently from a "Service", and both of them behave very differently from a "Class", at least from a lifecycle perspective. As I mentioned earlier, there are also fundamental differences between Methods and Operations.
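To make the idea concrete, here is a minimal Python sketch (all names hypothetical, nothing Oslo- or MOF-specific) of an M3 layer that records a different instantiation rule for each kind of M2 element:

```python
class M3Element:
    """M3: describes a kind of M2 element and its instantiation rule."""
    def __init__(self, kind, instantiation):
        self.kind = kind                    # e.g. "Class", "Service", "Resource"
        self.instantiation = instantiation  # how instances of this kind come to be

# M3 says: a Class is instantiated by a constructor call ("new")...
clazz = M3Element("Class", "constructor")
# ...while a Service is not "new-ed" by its caller: it is deployed once and
# then bound to, and a Resource is created/retrieved via its container.
service = M3Element("Service", "deploy-and-bind")
resource = M3Element("Resource", "container-managed")

for e in (clazz, service, resource):
    print(e.kind, "->", e.instantiation)
```

The point of the sketch is only that the rule "this kind of element is instantiated this way" lives one layer above the metamodel elements themselves.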

Not convinced? Look at methods. Why can't methods be "instantiated"? At what level (class or instance)? Ah, yes, some OO languages allow you to do that... How would you express that? Where do you define a "metamodel" that can describe any OO language? Well, this is actually a metametamodel. This is why M3 is so important.

An OO developer has no knowledge of M3, and in the case of OO, M3 is exactly equal to M2, which contributes to the perception that M3 is useless.
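Python happens to demonstrate this collapse directly: the built-in `type` is the class of every class and also of itself, so the reflexive "top" layer is invisible to the everyday developer. A tiny sketch:

```python
# In Python the metamodel layer is reflexive: 'type' describes every class,
# and it also describes itself, so the layering stops there -- which is why
# an OO developer never needs to look above it.

class Order:          # an ordinary class
    pass

o = Order()           # an instance

assert type(o) is Order       # an instance is described by its class
assert type(Order) is type    # a class is described by the metaclass 'type'
assert type(type) is type     # 'type' describes itself: the layer is reflexive
```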

You could easily imagine that we could create an M3 layer called the Meta Architecture Facility (MAF) that specifies all the types of elements that can be encountered in a connected system (or a robotic system, or a musical system, ...). Anemic DSLs probably don't need such a strong M3 layer but MOP cannot exist without it.

The M3 layer could also be used in the runtime implementation to weave some aspects (monitored, secured...)
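As a hedged illustration of that weaving idea (the `M3_ASPECTS` table and `weave` helper are invented for this sketch, not part of any real framework), a runtime could consult M3-level stereotypes to wrap elements automatically instead of requiring hand-placed annotations:

```python
import functools

# Hypothetical M3 metadata: element kinds are tagged with aspects.
M3_ASPECTS = {"Service": ["monitored"]}

calls = []  # what the monitoring aspect records

def weave(kind, fn):
    """Wrap fn with every aspect the M3 layer declares for this element kind."""
    if "monitored" in M3_ASPECTS.get(kind, []):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            calls.append(fn.__name__)   # the woven-in monitoring aspect
            return fn(*args, **kwargs)
        return wrapped
    return fn

def get_quote(symbol):
    return {"symbol": symbol, "price": 10.0}

# Weaving is driven by the M3 metadata, not by annotations in the code.
get_quote = weave("Service", get_quote)
get_quote("MSFT")
assert calls == ["get_quote"]
```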

So we still agree on the fact that the users of MOP don't need to know as much about M3, and this is typically a layer where vendors will play. At a minimum, M3 will let me define the rules with which I can write implementations using the metamodel elements. And I am not even talking about how useful M3 becomes when you need to version your metamodel or your metametamodel.

As I understand it, Oslo has given itself the goal of enabling something at the level of MOP; however, trying to do it from a pure "syntax/parser" perspective is bound to fail. Today Oslo gives me the ability to define a syntax and generates a parser that will let me parse files defined using that syntax. That's a key building block, agreed. Yet, unlike OpenArchitectureWare, Oslo doesn't let me easily define the metamodel underlying the syntax (at least based on my understanding). I just get a "tree" of metadata (the equivalent of a DOM), and I am on my own to consume that tree, assuming an underlying metamodel. One could argue that this is OK, since the runtime implementation is really the one that knows about the metamodel. However, if you now introduce "implementation" elements (such as an orchestration, or a more traditional implementation a la OO), then you need general rules (just like in OO) that express how a metamodel element behaves in an implementation.
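The difference between a bare metadata tree and a metamodel-backed parse can be sketched as follows (hypothetical structures, not actual Oslo or xtext output):

```python
# What a parser-only toolchain hands you: an untyped tree (a DOM), where the
# consumer must *assume* the metamodel to make sense of the nodes.
tree = {"node": "service", "children": [
    {"node": "operation", "name": "getQuote"},
    {"node": "operation", "name": "placeOrder"},
]}
op_names = [c["name"] for c in tree["children"] if c["node"] == "operation"]

# What a metamodel-backed toolchain (xtext/EMF style) hands you: typed
# objects whose structure comes from an explicit metamodel.
class Operation:
    def __init__(self, name):
        self.name = name

class Service:
    def __init__(self, operations):
        self.operations = list(operations)

svc = Service([Operation("getQuote"), Operation("placeOrder")])
assert op_names == [op.name for op in svc.operations]
```

Both carry the same data; only the second makes the metamodel explicit and therefore checkable.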

So ignoring M3 has nothing to do with classicism or pragmatism. M3 is the foundation of MOP, without it you are building on sand.


04/16/09 :: [Cloud] Upgradability [permalink]

Cloud has many benefits that represent key adoption factors (KAFs).

The ones that come up most often are perhaps:

Elasticity -> Pay for use

Turnkey -> Low or no effort to set up environments and migrate solutions from one to another

I think, however, that Upgradability is going to be one of the major factors behind cloud adoption, i.e. the successful companies in the Cloud will have to provide services that are elastic, turnkey and upgradable. The ones that do not build upgradability at their core will eventually be out-graded.

Of course, Richard Webb will argue that Operability is another key aspect of the cloud, but that might come into play as you create composite solutions from different elements under different domains of control.

What do you think?

04/15/09 :: [MOP] Language Design [permalink]

I started to look at Mgrammar and I did not see the light: the approach of parsing a given grammar in relation to the production of an output is fragmented, so unlike xtext, it seems that you are pretty much on your own to create the metamodel behind your grammar. I need a bit more time to come to a definite conclusion, but it looks like xtext is a lot smarter in the way you can attach a true metamodel to your syntax. Of course xtext has an M3 layer. It seems, in effect, that Oslo has no real M3 backbone: you can define any output you want and then build an engine that can consume it.

Serendipitously, Jonathan Allen published a news item on InfoQ about Andrej Bauer's essay, On programming language design. Something caught my eye; I think this is an oxymoron:

A language must support the programmer’s laziness by providing lots of useful libraries, and by making it possible to express ideas directly and succinctly.

Look around: developers have become scripters, and there are a gazillion APIs, poorly or smartly designed. These APIs change rapidly and trap business logic with more force than a black hole's attraction. The whole idea behind MOP is to create a programming model that is API-less. APIs are used in the implementation of the engine(s) implementing the programming model.
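A minimal sketch of this "API-less" idea, with invented names: the business logic references only metamodel-level concepts, and each engine, where the concrete APIs would live, interprets it for a different architecture:

```python
# Hypothetical sketch: the business logic below mentions only metamodel
# concepts (an action, a destination, a message); no API calls appear in it.
logic = {"action": "notify", "to": "customer", "message": "order shipped"}

def smtp_engine(step):
    # a real engine would call an SMTP library's API here
    return f"SMTP -> {step['to']}: {step['message']}"

def queue_engine(step):
    # a different engine would call a message-queue API instead
    return f"QUEUE[{step['to']}] <- {step['message']}"

# The same business logic survives an architecture change: only the engine
# (where the APIs live) is swapped, never the logic itself.
assert smtp_engine(logic) == "SMTP -> customer: order shipped"
assert queue_engine(logic) == "QUEUE[customer] <- order shipped"
```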

Then Andrej made my day:

Many languages are advertised as “simple” because in them everything is expressed with just a couple of basic concepts. Lisp and scheme programmers proudly represent all sorts of data with conses and lists. Fortran programmers implement linked lists and trees with arrays. In Java and Python “everything is an object”, more or less. It is good to have a simple language, but it is not good to sacrifice its expressiveness to the point where most of the time the programmer has to encode the concepts that he really needs indirectly with those available in the language.

Precisely. How many times have I heard "I love this language, everything IS-A Process", "IS-A Resource", "IS-An Object"... This is precisely what MOP is about: aligning the expressivity of the language with the concepts the developer needs to manipulate to construct the solution. MOP is not a "problem-side" approach; it focuses only on the people constructing the solution.


04/14/09 :: [MOP] Understanding Oslo [permalink]

First and foremost, I am not the only one to have concerns about Oslo's "pragmatism". As I said, this was kind of a disaster in WCF; I am quite surprised that pretty much the same people get to try the same thing again.

I certainly support Lars' idea to bring together in the same room people like the oaW team, the EMF team and the Oslo team (it would be quite something to meet at the M3 bar in Oslo). Software engineering would make a huge leap forward if that were to happen.

You gotta love Dave Chappell's humor too:

"Initially, Oslo referred to a lot of different things," said David Chappell, principal of Chappell & Associates in San Francisco. "Now, Oslo refers to modeling technologies and the repository. So just in terms of clarity, that's progress."

Indeed, progress it is...

Oslo seems quite ambitious despite this demotion. Burley Kawasaki explains:

Oslo will provide a consistent application model across both on-premise and Cloud environments...Oslo as a general purpose modeling platform can target any number of environments... one of the big breakthroughs that we see is that Oslo was built from the ground up assuming there was going to be this composite world

Doug, with such an ambitious target, why are you wasting your (and our) time on CRUDish-REST services? When can we see this in action? Which programming model are you going to use? So far it is not very clear that you even understand what "composite" means. As a reminder, Microsoft tried to kill all "choreography" standards (I was present both times it happened), declined to participate in some aspects of SCA and has no understanding whatsoever of the BPM field. So I really can't wait to see what you guys are going to come up with (hint: a bunch of webparts doesn't qualify as a composite application). Again, looking outside the box would really help you.

One of the key pieces that seems to be missing in Oslo to "target any number of environments" is a transformation capability. This is related to the Runtime vs. Code Generation discussion. I am not a big fan of "code generation", but artifact generation is kind of a must in such a framework.

Shawn Widermuth has written a good introduction to MSchema:

MSchema is a language for defining your data store and relationships between data that Oslo uses to define how to handle storage.

I don't care about that. As a matter of fact, Thomas Huijer wrote a hilarious post in which he claims:

I recently was in a session on Oslo, when someone at the very end asked: “But this MSchema thing isn’t really different than T-SQL, is it?”. Well, that person completely missed the point of M and Oslo.

While it is agreed that MSchema is more constrained than T-SQL in what it can do, Thomas doesn't seem to realize that MSchema is just a "cleaner" way to write some T-SQL. T-SQL is metadata, just like C# is metadata... ah, and just like MSchema. This is where model transformations come in handy.
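To illustrate why this is a transformation question rather than a competition between languages, here is a toy model-to-text transformation (invented model shape, not Oslo's actual pipeline) that emits T-SQL from a small schema model:

```python
# A toy model-to-text transformation: a tiny schema model is transformed
# into T-SQL. Whether the source notation is MSchema or anything else,
# what matters is the transformation from model to artifact.

model = {"entity": "Customer",
         "fields": [("Id", "int"), ("Name", "nvarchar(100)")]}

def to_tsql(m):
    """Emit a CREATE TABLE statement from the schema model."""
    cols = ",\n  ".join(f"{name} {typ}" for name, typ in m["fields"])
    return f"CREATE TABLE {m['entity']} (\n  {cols}\n);"

print(to_tsql(model))
```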

Creating database schemas and storing data is not such a great problem to tackle. Murl is another one of these problems that are not really going to drive Oslo's requirements very far. I mean, come on, Doug, do you need a DSL to create CRUDish HTTP requests?

Lots of people seem to agree:

I just found out about Oslo the other day and it seems like it has incredible potential. However, the more I use it, the more wasted potential becomes apparent. The Oslo FAQ attempts to stress that the Oslo repository is not yet another database, but it doesn't seem to provide much to back that statement up. That's because right now, there isn't anything to back that up. [don't forget to read the remainder of the post].

So, if MSchema is pointless, how about MGrammar?

MGrammar is what you use to define textual DSLs. I did not get a chance to create MGrammar's metamodel, i.e., to a first approximation, Oslo's M3 layer (if I understand your architecture correctly). James Clark provides a good analysis of the tool, which is not much more than a clone of xtext.

There's a bigger issue lurking here.  I think Microsoft see Mg as more than just a nifty library.  It's part of their vision for a next generation application development platform, where developers become more productive by using custom DSLs rather than XML [as a common syntax]. I have mixed feelings about this.... How do you make your platform encourage developers to use a DSL where it makes sense, and discourage them when it doesn't? Up to now, part of the answer was that libraries made it a bit easier to use XML (or some other standard format) rather than some completely custom syntax; so unless there was a substantial benefit from a custom syntax, developers wouldn't bother...It doesn't help users if they have to learn a new syntax for every application...I definitely don't want every application to be using its own completely custom syntax.

I think James hit the nail on the head. Doug, what's the point of using a DSL when you can create a library with a well-defined API in a particular programming language? Oslo lives at a much higher level, the one Burley is talking about, or what I call Metamodel Oriented Programming.

Ray Ozzie, Chief Software Architect at Microsoft, ended his PDC keynote ... by cautioning that the new technology is "nascent." Hopefully Microsoft won't have to use forceps like in the good CSD days.

Doug, Charles, I would certainly spend quite some time correcting the perception that quite a few of us have (hopefully this is just a disconnect between us and you guys), or, if it is more serious, I would look at correcting course by spending some time away from Redmond.

04/13/09 :: [MOP] M3 [permalink]

This is such an interesting discussion. What a change. Charles provided some background information about M3 and Oslo.

First let me take a few items off the discussion. What I like about Oslo is that, for the first time, a modeling framework is not afraid of surfacing the continuum that exists between code and models. The MDE community took a wrong turn a while ago when it decided to oppose models and code. This was actually a tragic mistake (like so many others; some people could learn from that and avoid eradicating concepts simply because they don't fit their day-to-day job).

If you (not you, Charles) still don't see it today, well, you need some glasses: code-less DSLs have had a marginal impact on software engineering (I'd like to argue they have had none), and code is full of... metadata sprinkled wherever a developer decided to land it: annotations, descriptors, graphical tools... That mess has to stop. My mission in life is to help stop that mess by surfacing more general programming models (from a connected-systems point of view, for instance) and helping change people's view of MDE towards MOP. I don't work for a software vendor or a standards organization, so believe me, my agenda is as clear as what I have stated.

I also would like to separate "Metadata Management" from "Semantics". XMI, MOF compliant repositories... are all great, and they are a consequence of the existence of the M3 layer. But I don't care, really. For me it is more important to first focus on the kinds of problems your modeling framework is going to tackle, i.e. the semantics.

I am not an OMG guy, I actually never attended a single OMG meeting in my life. I like the organization, they have a good process. They produce well designed specs (most often...) and I know some great people that contribute such as Conrad Bock and Fred Cummins (and many more).

So when you say:

'Oslo' really isn't expressed in terms of 'classic' metamodel architecture.

I am still a bit worried. Microsoft has often "innovated" well outside the "classical" roads in the last 10 years, but I would argue that it was most often out of ignorance rather than based on a bold new way of thinking. A lot of teams at Microsoft simply never get out of the Redmond campus or even read what's going on outside of the Microsoft world. I have no doubt that some bright people at Microsoft can push the boundaries of classicism (Volta comes to mind), but my word of caution is that 9 times out of 10 this proves to be shortsightedness.

Again, if 'M3 exists', as in 'gravity exists', there is nothing 'classical' about it. When you design something to be used on earth, you can't just pretend that gravity is too 'classical' and you have had the brilliant idea to simply ignore it in your new designs.

I'm schooled in the old linguistic philosophy that languages (including modelling languages) are built on the 'trinity' of syntax, semantics and pragmatics.

I am a bit worried, again, about the 'pragmatics' aspect. Pragmatism works well with humans, a tiny bit less well with software, especially when you are talking about building something as significant as:

M is designed to be used at any level within a metamodel architecture. 

If M allowed me to build my own M3 layer, that would be just fine. That's totally doable, but I am not sure that is what M does today, or even tomorrow. This kind of thing does not happen serendipitously, or even pragmatically.

It seems that M is built more around the premise that a particular formalism (graph theory) is good for expressing lots of stuff, that you can even build a general repository to store graphs of stuff, and that you then let it loose and see what people do with it.

So, while I appreciate Jean-Jacques' comments about pragmatics, I think that this is potentially the area of greatest strength in terms of what Microsoft is doing. 

Again, Microsoft has used this excuse so often that it's not even funny to point out that pragmatism has this unique ability to drive you fast where you don't want to be. Based on the CSD track record I would actually be very wary of focusing only on 'pragmatism'.

Let's talk about M3 layers for a second. The most famous ones are probably MOF and Eclipse's ECore. The beauty of an M3 layer is that it is reflexive and should be capable of describing itself, so there is no need (that I know of) for an infinite number of layers. The problem in the OMG's modeling architecture is UML. UML was just a particular (and large) metamodel supposedly designed to model OO solutions. Unfortunately, UML started to take off at the same time as "architecture", but UML was never designed to model architected (i.e. n-tiered) systems. Even though I use UML to communicate a few things, I see it as vastly useless for an architect, and so does pretty much everyone. The reason I say that is that UML has been made "extensible" with profiles, so nobody is really using "UML". As soon as people started to use profiles, UML got promoted to the M3 layer and created even more problems there. The UML metamodel is way too complex for an M3 layer, and people change the semantics of UML elements (via stereotypes) as widely as they please. As an M2 technology UML is quite dated (UML2 added a few interesting new concepts, but again missed the mark on architecture), and as an M3 technology UML is a real disabler of Model Driven Engineering. In particular, UML's implied programming model is inappropriate for connected systems (again with some improvement in UML2). UML should actually serve as a warning that a modeling architecture is a modeling architecture and you can't mess with it "pragmatically".

If you want another warning, Microsoft itself suffered quite a bit from an ill-designed M3 layer in the Operations and Management area. Microsoft started with SDM, claiming that:

There is an elegant simplicity to modeling in SDM

You want to laugh when you hear something like that. Microsoft created a cluttered M3 layer, a la UML, and what happened later? They came out with SML, a much cleaner M3 layer, but probably a bit too thin to be useful. As I mentioned, I am not religious about a particular M3 layer; I am simply trying to warn you that "the 'trinity' of syntax, semantics and pragmatics" is bound to drive you in the wrong direction without the lights of the M3 layer. It is very important to understand and manage that layer appropriately, not because of repository and serialization concerns but precisely because of the semantics you are going to be able to express at the M2 level. M2 is where people work; M3 is where Microsoft should work. Now that you are bringing "implementations" into the picture, it is even more important to drive the design at this level. So it is not as if Microsoft has no experience with all this, but I am very worried about the casual approach the Oslo team is taking with these extremely complex issues.

So when you say:

M3 is always there in some sense, but doesn't always need to be represented explicitly within a given context.  

I agree, but don't use that as an excuse to advise Microsoft to ignore this layer; that is the worst disservice you could do them. As a user of Oslo, I don't need to go back to this level, agreed.

I need more time to understand your diagram; however, I don't get the M1-Mn arrow. Again, everything stops at M3; there is no "Mn". All I know is that M3's goal should be to allow you to avoid "extensions" and "stereotypes". As I alluded to here, there is not one "uber" M3 metametamodel. SML is an M3 layer for building operations and management systems. So it would be perfectly honorable for Oslo to allow you to build your own M3 layer, but that's different from "I don't need an M3 layer".

Oslo, the stated intention is to reduce the barrier between models and runtimes to the point of near-invisibility. The Oslo goal is to promote the direct consumption of models (including metamodels and meta-metamodels) within a wide variety of runtimes.

These are good intentions, nicely said, but I am, again, very concerned by the latter part of your sentence ("direct consumption of models (including metamodels and meta-metamodels) within a wide variety of runtimes"): runtimes consume models, possibly implement metamodels, and usually have nothing to do with metametamodels.

Again, I would strongly suggest that, instead of just thinking outside the box, the Oslo team also look outside the box. Not to mention that defining very precise goals (such as the ones around architecture) really helps: it's way too easy to build something that "can do everything" and claim later that people simply don't know how, or are too stupid, to use it.


That should not be interpreted as some conspiratorial (and completely pointless) attempt to undermine well-accepted standards.

We all know that Microsoft's "bratty" attitude in this area has made it lose a lot of credibility, both technically and leadership-wise. Who cares today about what Microsoft is doing in standards? Just tell me one standard where Microsoft has shown technical excellence and provided industry leadership towards enabling a large market for our industry. Being one of the editors of the OASIS ebBP specification, I can speak from experience. Who today comes up with a standards idea and wants to bring Microsoft in? Who wants to join a standard led by Microsoft? Again, looking outside the box would certainly help a lot.

04/08/09 :: [MOP] Where is Oslo Going? (III) [permalink]

Doug Purdy continued the discussion. He also commented on Charles Young's comment. Whoa, what a change from the REST debates: no more discussion about the meaning of PUT or arguing about URI templates. We should have stopped arguing about that a long time ago. I mean, how can you not want to discuss a sentence like this?

‘Mn’ agnosticism, I suggest, is an aspect of the true foundation on which Microsoft is constructing Oslo.

Doug added on top of that statement that "Oslo can do everything, data, metadata,...", yeah.

I actually don't disagree that Oslo can handle all Mn levels; it would be quite surprising if it were designed with a missing level. The fundamental question that Charles raises is: can you define your own M3 layer in Oslo, or is the M3 layer defined by Oslo? The additional question is how "implementations" are defined in Oslo's M3 layer.

Let me explain why I talk about "implementation" rather than "method". As you know, the metamodel of a Class in an OO runtime has both attributes and methods. If you look at a service, you are talking about operations. Sometimes an operation maps well to a method (some people actually believe that a Service operation IS-A class method), but not always. You can create a Service implemented as an orchestration, and it has ONE implementation that integrates multiple operations. So unless the M3 layer of Oslo defines an implementation element, I can safely argue that Oslo won't be usable to create a programming model for a large class of connected systems. It is not a matter of grammar or "Mn agnosticism"; it is a matter of defining M3 properly, which MOF or Ecore can't do. So if it is not too much to ask, could you please clarify the M3 layer of Oslo or, possibly, the extensibility mechanisms you provide at the M3 level to support this kind of concept.
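The Method/Operation distinction can be sketched as follows (a hypothetical illustration, not Oslo's metamodel): the service below exposes several operations but has a single orchestration-style implementation, a shape that an M3 layer needs an "implementation" element to describe:

```python
# In OO, each method has its own body. This Service instead has ONE
# implementation (an orchestration-like dispatcher) realizing several
# operations -- so "operation" and "method" are not the same M3 concept.

class QuoteService:
    operations = {"getQuote", "placeOrder"}

    def implementation(self, operation, payload):
        # one implementation integrating multiple operations
        if operation == "getQuote":
            return {"price": 10.0}
        if operation == "placeOrder":
            return {"status": "accepted", "symbol": payload}
        raise ValueError(f"unknown operation {operation}")

svc = QuoteService()
assert svc.implementation("getQuote", "MSFT") == {"price": 10.0}
assert svc.implementation("placeOrder", "MSFT")["status"] == "accepted"
```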

So, as you can see, the question is a lot more fundamental than your answer seems to indicate, and I would like to continue to argue that if you keep mapping M to code snippets, you might miss very important requirements.

Doug, on the other topics (productivity, hi-REST...), let's agree to disagree; they are actually not as important as the M3 question (incidentally, nobody had ever told me that REST clients were so hard to build...).

I like your quote, by the way: "All Applications in the World are CRUD" :-) It is with this kind of visionary statement that Software Engineering keeps making leaps and bounds...

04/08/09 :: [MOP] Where is Oslo Going? (II) [permalink]

Doug Purdy responded to my post yesterday.  So let's discuss the most important points.


Doug, I urge you to contact a sample of your customers and look at the annual pipeline of projects that the business is asking IT to do. Compare the requests from the business with what was approved and budgeted, as well as with the actuals at the end of a particular year (say 2008). Then derive the average productivity gain that is needed to allow most IT organizations to deliver 100% of what the business needs in a given year. Don't forget that some projects don't even make it to the list, so allow for another 20% of orphan projects. I think you would be very surprised how much 10X would buy you.

Now, in terms of productivity in the Cloud: as you may have noticed, the Cloud provides "turnkey" operations, so right there you have quite a productivity gain. I have several personal data points showing that Cloud-based development environments already deliver about, or slightly more than, 10X improvements. However, their programming models are still too basic to tackle all IT challenges, but they will get there, make no mistake. If productivity cannot be improved in traditional on-premise IT architectures, these architectures will disappear entirely within 10 years; so, frankly, the clock is ticking (no pressure).


The current DSL tools (the DSL Toolkit) are for visual DSLs, not for textual DSLs. We want an architecture that supports both textual and visual DSLs operating over the same model.

Doug, the part I like most about Oslo is that finally someone understands that anemic DSLs are pretty much useless. The fact that you understand that M is about programming models and not just DSLs was very encouraging. The distinction between vDSLs and tDSLs is purely stylistic and frankly totally uninteresting. The fact that there is a continuum between programming models and DSLs is, IMHO, far more important. Hence my disenchantment with your approach of constructing Oslo based on REST samples. If you think you need all this machinery to CRUD or to convert a 3-line code snippet into an HTTP request, that's quite overkill. BTW, I am sorry, but I don't see any hi-REST in the samples that you are providing. Hi-REST involves HATEOAS at a minimum, and there is no evidence of HATEOAS in anything that I have seen.

If you really want to make an impact and be relevant, you have to tackle real IT problems, and URI templates are not real IT problems. You have to deal first and foremost with the heterogeneity of IT. I was quite encouraged to see at MIX that a new Microsoft is emerging: some product divisions, such as Azure or the one led by Scott Guthrie, are not assuming Microsoft technologies all the way. Which product, more than Oslo, could afford not to deal with this heterogeneity? IT is in great need of decoupling programming models (note the plural) from architectures (plural again). As the first non-anemic DSL framework, this is the mission of Oslo, whether you want it or not. CRUDing or lo-REST is a distraction, by far. There is so much else to do; I mean real-world, pragmatic things.


You are misunderstanding this sentence and you are forgetting about an important aspect of this technology: managing the application. Applications are in silos today (how do you think about the apps on a box when you are managing them?), the applications themselves are composed of different silos (the presentation silo, the middle-tier silo, the data silo, etc.). Our hope is to make it easier to design, develop and manage the applications across all of these silos. In the limit, we will have a unified model for all aspects of the application. Pragmatically, there are going to be N models (legacy silos, silos that are “opaque”, etc.) and we want to have some level of support for all of these.

Well, I'll wait and see what you come up with in this area. First, the problem is not just about layered architectures; it is also about connected systems and composite apps. Even if you consider SOA and Composite Apps as particular areas of applications, you cannot abstract composite programming concepts away from the core design of Oslo; otherwise it will not be capable of enabling people to define their particular composite programming model. In other words, there is a set of M3 concepts that you need to take into account to let people design M2 programming models, with which they will in turn implement M1 solutions. Ignoring, once more, a "connected system" programming concept would make Oslo just as successful as WCF.

Second, I am still a bit turned off by your statement that "there are going to be N models". I am not convinced you need these models. Each architecture layer has a programming model today. If you provide a good transformation framework to go from the programming model to these concrete models, maybe this approach could yield some benefits. I would like to suggest that it would be better to focus on a "DSL connector framework" that would allow you to deploy a programming model defined and managed by Oslo into a variety of architecture layers. I spoke yesterday with Nikhil, who mentioned the "metadata pipeline" in .Net RIA Services; I think this is a very important concept, and ultimately you could see some integration points between the pipeline and Oslo.


I like many aspects of your MOP formulation. I do not like the explicit transformation, but otherwise it seems reasonably sound at a high-level after skimming it.

Thanks. Actually, if you read my post, I explain that MOP may alleviate the need to create the grand MDD machinery, which is composed of problem and solution metamodels, models, and transformations between them. I am not ruling it out, but MOP, via increased productivity, may simply not require much improvement in the way we capture the problem, since you could theoretically iterate fast enough to deliver the desired solution. The purpose of the figure was to show what people are trying to do in general and where MOP fits in relation to the general MDD approach.

So I am still, for the most part, unconvinced by your response. Until I see how Oslo can help me create programming models that will allow me to evolve my architecture without having to rewrite my core business logic, I don't see any benefit for IT. I mean, for SOA's sake, some people are still running business logic that was written in the 70s. At some point you have to realize that the evolution cycles that all vendors (not just Microsoft) are pushing on IT are simply unsustainable. Which vendor can guarantee today that some code written in XXX will run in 2050? Shouldn't Oslo be the element that enables that? Don't you think people would pay far more money for that than for optimizing the way they CRUD, or for simplifying the inextricable problem of writing URI templates?

04/08/09 :: [MOP] Where is Oslo Going? [permalink]

I had an interesting discussion about Oslo with a reader, Sándor Nacsa, in the wake of my post on .Net RIA Services on InfoQ. Sándor provides plenty of new links about this Microsoft project. What is clear is that now Oslo is only about "modeling"; SOA and Composite Applications are simply areas where Oslo can be applied. That's a bit different from the original announcement of the project in the fall of 2007, and pieces like "Dublin" seem to have fallen off Oslo, which is now only three elements: a metadata repository, Quadrant and the M language.

Before I start commenting on these new links, I'd like to make my position very clear. When it comes to MDD, MDE and MDA I support an approach that I call "Metamodel Oriented Programming" (MOP).

MOP is an approach which seeks to create a programming model independent of architecture, such that architects can architect and developers can build the solution in an architecture-independent way. This is in line with projects like Microsoft Volta, which talked about "Architecture Refactoring" concepts. Incidentally, Volta has disappeared from the face of the earth, what a shame. MOP is not about creating a single programming model; it is, again, rather an approach to separate architecture(s) from programming model(s). I, of course, focus on SOA and Composite Apps, but as a good software engineer, I believe that anything I do is so general that MOP can be applied to anything...

So let's talk first about the approach that the Oslo team is taking to drive its direction and requirements. If I understand it correctly, they are using "FTA":

what is it? ‘FTA’ stands for Federated Task Assistant, which could just be descriptive enough to tell you what it does, but probably isn’t. So, let’s start from this premise: you have a bunch of tasks you have to perform, from a bunch of sources and as parts of many bigger things. For example, I need to order a new credit card because my current one’s magnetic strip is failing; I need to write people’s reviews because it’s that time of year; I have to write a blog entry on ‘FTA’; I have to ensure my wife’s rather substantial list of tasks for me is at least partially addressed, and so forth. Some of these tasks are personal, some are work-related; some get tracked through other systems (like our internal bug tracking system, our HR systems, TFS, and so forth) and some get forgotten if they don’t get tracked (well, for me, that’s most things).

IMHO, it is not a good start. If you build something like Oslo, you start with a programming model like .Net RIA Services (or anything you want that tries to do the same thing) and then you build Oslo to make it easy(ier) to build something like .Net RIA Services. In case you have not noticed, MOP has already happened: all the annotations in Java or C# are MOP layered on top of OO. But MOP layered on top of OO does not provide a clean separation between Architecture and Programming Models. That is the mission that Oslo should set for itself. So starting with an "app" is, of course, a traditional Microsoft approach. But this is the wrong level. It is actually catastrophic to start at this level; it ensures they will never deliver something at the MOP level. What our industry needs today is not a better way to write code snippets or string templates, it needs a way to express business logic in a sustainable way, i.e. outside a given architecture.
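To make the "annotations are MOP layered on top of OO" point concrete, here is a rough sketch in Python, with decorators standing in for Java/C# annotations; all the names here are hypothetical, this is an illustration, not anyone's actual API:

```python
# Architecture metadata is layered onto plain business logic via decorators
# (hypothetical names), the way Java/C# annotations layer @WebService or
# [ServiceContract] onto ordinary classes.

def service(name):
    """Mark a plain class as a service; a deployment layer reads this later."""
    def wrap(cls):
        cls._service_name = name
        return cls
    return wrap

def operation(fn):
    """Mark a method as an externally reachable operation."""
    fn._is_operation = True
    return fn

@service("purchasing")
class PurchaseOrders:
    @operation
    def submit(self, po):
        return f"submitted {po}"

    def _internal_audit(self, po):   # not exposed: carries no metadata
        pass

# The architecture (SOA, REST, EDA...) discovers the programming model from
# the metadata; the business code knows nothing about the architecture.
ops = [n for n, m in vars(PurchaseOrders).items()
       if getattr(m, "_is_operation", False)]
```

The separation is only partial, which is exactly the point made above: the metadata still lives inside the OO artifact instead of being a programming model of its own.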

Sándor provides another link, on "Oslo Q&A". It is probably Doug Purdy who is answering these questions. After a little bit of blabla:

“Oslo” will help provide greater levels of agility and productivity by greatly simplifying the development of applications.

They seem to understand the problem that MOP addresses as they talk about "Visibility into Distributed Solutions".

“Oslo” will bring together a connected view of today’s models, which are often built in vertical, isolated silos.

But frankly I am a bit scared by this sentence. MOP is not about stitching together the programming models across the layers of an architecture; MOP is the other way around: it is about creating and deploying a unified programming model onto all the layers of an architecture (whatever this architecture is: SOA, WOA, EDA...). MOP is a much safer bet when you look at how efficient our industry has been at delivering stable architectures that last more than a Gartner business cycle.

So when you combine the blabla, an out-of-the-gate flawed approach and an undefined shipping date ("We are not disclosing the release schedule at this time."), I don't see why I would spend any time on Oslo.

If you want further signs that Oslo seems to be going completely off track, I'd like to point out that:

a) Oslo can't eat its own dog food: "EF and EDM are important technologies for Microsoft. “Oslo” full embraces EF/EDM as a primary mechanism for “Oslo”-based runtimes to access the repository. "

b) Doug Purdy touts a 10X improvement as a great achievement; he obviously has not done any homework on what IT needs, or on what Cloud Computing already provides today in terms of a programming model that abstracts architecture and delivers 10X or more

c) "M lets developers build out domain-specific languages (DSLs) relatively easily" is no longer the problem, Microsoft has DSL tools for that; the question that the Oslo team should ask itself is: does M stand for Model or Metamodel?

d) It seems that the Oslo team is heavily (and emotionally) involved in REST, actually let me rephrase that, involved in both lo-REST and CRUD-REST. That's pretty pathetic for a team that thinks it is working on models. One of the key foundations of MDE is precisely to avoid CRUDing at the programming level.

I would conclude that, yes, the direction of Oslo is fairly clear based on all the links you provided; sadly, this project is focused on solving problems that people have already solved while completely missing the mark on MOP.

04/02/09 :: [MDD] What's Next? [permalink]

Promise, I'll let the middleware hobbyists discuss how four little verbs and an infinitely hackable URI syntax can create a connected system programming model. It actually seems that the RESTafarians are reaching the state of Epektasis if I read Bill's POST correctly.

For those of you who care Epektasis can be defined as:

We move “out of,” in a continuous “epektasis,” beyond the stage we have reached to make a further discovery.

In French the Epektasis represents the effort that one makes towards the "divine" and is considered the state you reach beyond ecstasy.

So, no more debates over "my URI syntax is better than yours" or "you are obviously using the wrong verb", no more APP crutches, partial or complete CRUD, I promise; there is really nothing to add. Frankly, the RESTafarians are just a bunch of Dog Wagers. I am fairly confident that we will look back in a couple of years and say, yeah, what? "simplicity", "discoverability" and "serendipitous" "reuse", you must be kidding me. People will say, "we spent a couple of years staring at URI templates, what were we thinking?" And, yeah, some people get money to teach other people how to write URI templates... Man, what is the G-20 doing? Here is the solution to world hunger: let's sell URI templates, that's much better than printing money, and anybody can do it, even my mum could do it. I bet we could plug AIG's hole with just a few of those.

So that's it.

What's NeXT? Well, the fundamental problem remains: how do you improve productivity in IT? Yeap, you guessed it, URI templates and navigational data models are not going to cut it. Next week I am meeting with Nikhil Kothari, architect on the .Net RIA Services project at Microsoft. It is very refreshing to see a company like Microsoft or Oracle investing heavily in advanced programming models for enterprise information system construction. In case anyone doubts what the problem at hand is: that is the problem, whether the systems are in the cloud or not.

What's important to realize is that somewhere along the way the architecture subsumed the programming model. In the last 15 years we lost the path: we let architecture take over and were left to use whatever programming model was available in a given layer: SQL, EJB/Java, HTML/JavaScript... not to mention all the frameworks and libraries developed here and there because of the shortcomings of the programming model of a given layer and its mismatch with the others (O/R mapping anyone?), and ah... I almost forgot, the communications between these layers. As you realize by now, it is hopeless to think that 4 verbs and some corny URI templates are going to help this picture in any way. How could one think that we can improve productivity without creating a programming model independent of the architecture? Incidentally, when you do that you get architecture refactoring for free (almost; let's say you are in a much better position to perform architecture refactoring activities). When you see the current middleware mess compounded by the rate of evolution of frameworks and libraries, not to mention programming languages, you look at this and you say: guys, do you have any idea what IT is going through? Information Technologies are killing IT. I bet a lot of people would pay a lot of money just for that. In case you wonder about the cloud... yeah, you get the idea: the cloud makes it even more mandatory to create a programming model independent of the architecture of the solution. Yes, it is possible to create a world where architects architect and where developers are actively building the solution.

So this is what I plan to work on NeXT: yes, wsper and Metamodel Driven Development, so hopefully this is not a real surprise, but I hope most of you understand how critical this is and how far URI templates are from the solution.

04/02/09 :: [Cloud] Open Cloud Manifesto [permalink]

So, here we go again. Just as if WS-* was not a deterrent to such initiatives, and as if we didn't get a sense that vendor-driven standards were a (nearly) complete waste of time, some feel that there are never enough "standards" and that we should of course "open" the cloud. Hum... what are we talking about here? IaaS, PaaS, SaaS? The manifesto argues that an "open" cloud should give:

Choice – Organizations should be able to freely choose between different vendors.

Flexibility – Organizations should be able to cooperate even if they are using different clouds.

Speed and Agility – Organizations should be able to easily build solutions that integrate public and private clouds.

Skills – Organizations should be able to have access to people whose qualifications are not tied to a particular cloud.

While I could easily see companies like Akamai and Cisco looking for some standardization at the network services level, I have real trouble believing that every company which signed the manifesto is looking forward to enabling this:

Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

Does it even make sense to think that you could benefit from "Open Cloud Computing"? Cloud Computing is about providing "turnkey" and "elastic" IT capabilities/services. The Cloud is about mixing and matching these capabilities to support your business. No matter how you look at it, the Cloud IS-An SOA; what would be the point of standardizing all the Services in an SOA? It does not make much sense to me: "turnkey" and "elasticity" are very desirable "features", for which I would easily trade "interchangeability". Let's face it, IaaS is a big-vendor play, but the services and capabilities that run on IaaSs are an open field where everyone can play, and as a user I pick and assemble the ones which make sense to support my business. It is the responsibility of the service or capability provider to make sure that as many "consumers" as possible can consume its service, but thinking that every service provider has to deliver an "open" service is, IMHO, looking at the problem upside down and the best guarantee that the Cloud will deliver no value at all, while slowing its adoption to a point where all the investments made in it go sour.

So let me be clear: looking at what happened with WS-*, it would be a complete disaster to restart a wave of standardization. Why don't we let the vendors build their IaaS and their services and then decide what, if anything, we need to standardize? Nice try guys.

04/01/09 :: [REST] Today is a Great Day [permalink]

Today is a great day, I decided to free myself from the REST debate. I exchanged a last email with Francois Leygues, who argued that even if REST added nothing to Middleware, "we would not be losing anything" by using REST, so what is my problem? Francois, I give up, in front of such a pile of BS, I have nothing else to say, you win. For the Nth time: you lose bidirectional interfaces, assemblies, forward-compatible versioning, orchestration... you lose the foundation to build true connected systems, and I can freely and safely conclude that you have no idea what you are talking about. You are simply XML challenged and you have no idea what a composite application and a connected system are. Unfortunately, I have to live every day with the limitations of the synchronous client/server programming model that you are so much in love with.

REST gives you a choice between the plague and the cholera. You can choose between CRUDing and hand coding your IDL. Yeap, those were the days when you opened a socket, hand coded your request in a String and wrote your own parsers to parse the request or the response. The RESTafarians (a.k.a. the middleware hobbyists) want you to return to pre-CORBA days; then a genius will automate all this hand coding and show how they can layer CORBA on top of HTTP. All the progress made in Service Orientation, Event Orientation and even Resource Orientation is about to be washed away.

Even Tim Bray himself can't use REST in a Resource Oriented way. How ironic, Tim.

So, so long and thank you for wasting everybody's time, this is exactly what our industry needs. I can't wait until billions of resources are deployed right and left, and then we will talk again. This seems in line with our society, so who am I to say anything. Today is a great day.

03/23/09 :: [REST] To Crud or not To Crud... That is the QUESTIon ... [permalink]

Tell me if you CRUD and I'll tell you who you are... It looks like, after all, Roy does not like CRUD. He is gently but surely asking the RESTafarians to CRUD no more. You must admit, how low can a RESTafarian go? My eyes fell off my head when I read this comment from Mike Kelly on Roy's post:

... I’m still struggling to understand why ‘monitored’ state should be preferred as non-editable.

I'd be curious to know what Roy thinks of all this CRUD. I can guarantee you that more than 95% of the people using "REST" will CRUD.

Stefan, for his part, comments:

I often find that I chose PUT instead of POST (and end up creating an additional resource in the process) because the behavior requires idempotence. That seems preferable IMO to making POST idempotent.

Stefan was laughing at my proposal to declare idempotency at the message level and not at the verb level, yet he decides which verb to use based simply on the idempotency property. Don't you find that odd? Wouldn't it be a lot more natural to reserve a verb for "updates" and a verb for "actions", some actions being idempotent? Surprisingly, even Roy seems to be a bit confused on the question:

Stefan, I think it is better to say that we only use PUT when the update action is idempotent and the representation is complete.

Stefan, come on... how can someone like you recommend CRUDing? Don't you see the coupling? How could Roy agree with that? The reason why I say Roy is confused is the combination of "idempotent action" and "representation is complete". Probably the only action that generally matches these requirements is a "Replace" of the content of the resource (which should always be forbidden unless you are dealing with very simple lifecycles like the one of a Web page). A typical action will simply convey the resource action identifier and the arguments that are necessary to perform the action. Associating actions with a single verb (POST) is the way to go.
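The proposal can be sketched as a toy model (Python, hypothetical names; this is an illustration of the idea, not a real framework): idempotency is declared per action/message rather than per verb, a single verb carries all actions, and "replace" is just the special case of an idempotent action with a complete representation:

```python
# Idempotency declared per action, not baked into the verb.
class Resource:
    def __init__(self):
        self.state = {"status": "new", "notes": []}

    ACTIONS = {
        "replace": {"idempotent": True},   # complete representation (the PUT-like case)
        "cancel":  {"idempotent": True},   # an idempotent *action*
        "comment": {"idempotent": False},  # appends, hence not idempotent
    }

    def post(self, action, payload=None):  # one verb for every action
        if action == "replace":
            self.state = dict(payload)
        elif action == "cancel":
            self.state["status"] = "cancelled"
        elif action == "comment":
            self.state["notes"].append(payload)

r = Resource()
r.post("cancel")
r.post("cancel")   # replaying an action declared idempotent is harmless
```

Note that "cancel" is idempotent without being a replace of the whole representation, which is exactly what the verb-centric view cannot express.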

I guess Roy's clarifications settle the little discussion I had with Anne.

The RESTafarians have entered the second phase of their battle. Now that they have made enough noise to have REST tried out by lots of vendors, and some vendors actually providing an implementation, "anything" HTTP can do is RESTful. REST does not have "actions"? No problem: Tim Bray adds a controller and some actions, or Stefan and Anne suggest CRUDing. I am really impressed at your integrity, guys.

As a result, Roy is in the uncomfortable position of always reminding people what is RESTful and what is not, yet he wants REST to expand. Sadly, no one even draws a simple state machine to reason about the problem.

REST is the equivalent of the Real Estate bubble: vendors and developers adopt it under the wonderful premise of ultimate simplicity, the RESTafarians jubilate, and when developers finally understand the trap they fell into, it will be too late. Many apps will simply fall under the weight of their CRUD.

The (other) REST is just a fallacy, deeply and totally fallacious, guys.

03/21/09 :: [REST] Action, States, ControlleR...? [permalink]

As you may know, I have spent a good 18 months trying to debunk some of the biggest BS the RESTafarians have spread on our industry, in particular the "uniform interface" concept. Well, yesterday Tim Bray wrote on "REST Casuistry".

In his post Tim discusses the "whys" of the Sun Cloud API. Listen to this:

why, to create things, for example a VM in a Cluster, you needed to POST to a special create-vm URI. Why not just POST the representation of the VM to the cluster?

Yes, you heard it right: there are "actions", and they don't even bother modeling them as nouns anymore, thanks to a new pattern invented by Tim Bray himself:

Uh, well, because when we cooked up the idea of special purpose “controller” URIs,

Tim, whoa, that's news! You mean the "interface" is not "uniform"? Boo...

So read my lips, I have said it many times: REST is going to be steamrolled by MVC, and this is one of the first signs of it. A "controller", Tim? You must be joking, right? Stefan? Steve?

If that wasn't hilarious enough... Tim continues:

The next argument is about all the other “controller” functions. Deploying a model, starting and stopping and rebooting a machine, attaching networks. The argument is that it’d be more RESTful to have some state fields in the appropriate representations, and just update those fields to the desired new state values.


But you’re not really changing a state, you’re requesting a specific set of actions to happen, as a result of which the state may or may not attain the desired value.

 In fact, when you hit the deploy switch, the state changes to deploying and then after some unpredictable amount of time to deployed. And the reboot operation is the classic case of a box with a big red switch on the side; the problem is how to push the switch.
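Tim's deploy example can be sketched as a toy model (Python, hypothetical names; nothing here is the actual Sun Cloud API): the client requests an action, and the state is something it monitors, not something it writes:

```python
# The "controller" is an action request; state is observed, never CRUDed.
class VM:
    def __init__(self):
        self.state = "stopped"

    def deploy(self):
        """The big red switch: request a transition, don't set the state."""
        if self.state != "stopped":
            raise ValueError("cannot deploy from state " + self.state)
        self.state = "deploying"       # intermediate state the client observes

    def _deploy_completed(self):
        """Happens server-side, after some unpredictable amount of time."""
        self.state = "deployed"

vm = VM()
vm.deploy()
observed = vm.state    # "deploying": never a value any client wrote
vm._deploy_completed()
```

A client that "updated a state field" to deployed would be asserting an outcome it cannot guarantee; requesting the action and monitoring the state keeps the lifecycle on the provider's side, which is the whole point of the quoted passage.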

Really? I mean really? Hurray, Tim has landed on earth!!! No surprise that Steve Vinoski is ending his column. How could he not end his column after such a statement (very timely)? Incidentally, it is no surprise that Steve is now going to write on Functional Programming, a programming model that negates the very notion of mutable state. It is encouraging to see people at Microsoft rethinking the programming model with a focus on "domain objects" (not OO).

How many times did the RESTafarians swear, hand on heart (and tongue in cheek), that my thinking along the lines of actions and states was complete baloney? John Heintz? (we don't hear you on this topic anymore...) Joe Gregorio? Bill deHora? Steve Vinoski? Where are the RESTafarians? Shouldn't Tim be banished from RESTafaria? Or do you still believe that a "client" can "naturally adapt" to any arbitrary change to the set of actions without any changes?

And Tim, maybe you'll need a contract one day to express all these wonderful actions... and maybe, just maybe, you'll need to version them, possibly in a forward-compatible way. You might also think that the latency suggests a dose of asynchrony? Orchestration anyone? Assembly of Cloud resources?

The (other) REST is just a fallacy, deeply and totally fallacious, Tim.

03/11/09 :: [Other] Cause and Effects [permalink]

The Wall Street Journal had a couple of interesting pieces today. First the effects of the crisis:

Forbes found 793 billionaires in 2009, down 30% from a year earlier. This is the first decline since 2003.

The total net worth of people on the magazine's list this year fell 46% to $2.4 trillion. The average billionaire is now worth $3 billion, 23% less than in 2008.

So the "Entrepreneur" Bill Gates wins over the "Investor" Warren Buffett. Same for Michael Bloomberg with Reuters.

The second piece was an essay by Alan Greenspan seeking to exonerate himself from being one of the causes of this crisis.

First he explains that:

 the presumptive cause of the world-wide decline in long-term rates was the tectonic shift in the early 1990s by much of the developing world from heavy emphasis on central planning to increasingly dynamic, export-led market competition. The result was a surge in growth in China and a large number of other emerging market economies that led to an excess of global intended savings relative to intended capital investment. That ex ante excess of savings propelled global long-term interest rates progressively lower between early 2000 and 2005.

He continues:

Global market competition and integration in goods, services and finance have brought unprecedented gains in material well being.

He concludes happily:

It is now very clear that the levels of complexity to which market practitioners at the height of their euphoria tried to push risk-management techniques and products were too much for even the most sophisticated market players to handle properly and prudently.

At the end of the day it is difficult not to believe that someone has been eyeing the treasure chest of the baby boomers. I belong officially to the last generation of Baby Boomers, being born in 1964, so I don't really feel like one, but looking at the wealth evaporation that happened in the past 12 months I can't stop thinking that this crisis was aimed at them and their golden retirement. Mr. Greenspan seems quite naive to believe that a bank CEO would have no idea of the risk being taken by his very own company. Bank panics are not new, and thinking that none of that would happen in this instance makes us wonder whether Mr. Greenspan was really the right guy for the job. It also makes you wonder why Mr. Bush, who shone by appointing friends and common idiots like John Snow at the Treasury, all of a sudden appointed heavyweights such as Paulson and Bernanke months before the crisis started. It is also coincidental that the uptick rule was removed just weeks before all that happened. The trap may have been bigger than this treasure chest and targeted China's and the Sovereign Funds' piggy banks as well, not to mention an evil plan to punish Europe for its lack of cooperation with the Bush grand plan, using AIG and Madoff to propagate the bank panic across the Atlantic.

I am sure a lot of people will debate the causes; I think that monitoring the effects will most likely point us to the cause. Let's see who the new world "leaders" are in a couple of years.

03/08/09 :: [Other] I am a PC [permalink]

A lot of my friends who work at Microsoft think that I don't like the company and I never miss an opportunity to criticize it. Some might say that I do that because Microsoft never extended me an offer to work for them. I did interview for Don Box's team back in 2005 and had a phone screen in 2007 with the BizTalk team. I think it is actually a good thing that the CSD never wanted to hire me. I was a big fan in 2003-2005 and was happy to recently join again a Microsoft shop. Unfortunately the CSD only shines by the lack of progress it exhibits over the years. I don't know what they are doing there all day, unless it is the new "fixed time, flex scope" approach they have taken. Or maybe, the CSD is actually using a flex time, fixed scope approach. These things are so confusing.

I won't go into too much detail, but here is what happened to me in the last couple of days of trying to use CSD's wonderful technologies. I now use the BizTalk server, hence I have to use VS2005. Ok, so what? I don't see any difference between VS2008 and VS2005, except that they can't share anything. As one of my Microsoft friends told me, VS2008 is a "new platform". So I wanted to build a mockup service to invoke from BTS. I remembered that I had to download the WCF extensions to be able to do that, so I searched for them on MSDN. You would never guess what happened to them: they are gone, you can only get the ones for the "new platform", you have to go to "" to get them (illegally). I mean, come on, is the CSD really part of Microsoft? My "reconnect" adventure with the CSD did not stop here. I have an MSDN subscription, so I installed the "new platform" and got the VS2008 extensions for WCF. (Can't wait to see what will happen with VS2010.) I had a WSDL to start with for my mockup service, so I searched on Yahoo for the antinomic query "WSDL-first WCF". Don Box never quite understood how WSDL-first differed from Contract-first, and WCF has always had the hardest time dealing with it. He realized after the fact that WCF could not do it and basically put the baby in the hands of patterns & practices, which did not really solve the problem. The Yahoo search gave me a great reference written by Dino Chiesa, Microsoft's Interop Guru, in September 2008. I'll let you read it... I just want to point out that my WSDL was basic, especially the message types, and had only one operation, and SVCUTIL choked on it. After a couple of days of adventures with WCF, I went to download WSO2 WSAS (I love the form BTW) and it took me less than 30 minutes to deploy my mockup service. Oliver Sharp could possibly reflect on the fact that an ultra-small outfit in a developing country was able to deliver a first-class job on one of the most basic SOA problems.

But I digress. I wanted to write this post to express that I am a PC. I have owned a Mac for a couple of months (I miss Objective-C) and, after checking with a Mac user that I was not hallucinating, I started to wonder how Microsoft's marketing could let Apple run these silly ads. It took me 12 years to give some money to Steve Jobs after he left the NeXT of us behind. OS X is a joke, as far as I am concerned. Even the hardware itself is a complete joke. My top 3 complaints are major:

  • Can't switch between users: when my kids want to use the Mac, I have to log out or let them use my account
  • Can't deal with dual monitors very well: the software can't position the second monitor where it physically is, and the hardware offered me a mere 1280x1024 on my 37" 1080p monitor. What a rip off...
  • I don't know if this one should be first or not, but Apple still has a single menu bar, just as if I rarely switched between apps. That's very handy in dual monitor mode, as you can imagine.

I bought the iWork suite and the software is abysmal, I would easily pay double for Office. Not to mention that the Finder made me appreciate MS Explorer.

Anyone who has used a NeXT in the early 90s would agree that OS X has made hardly any progress since then. Even the documentation is using the same old style. OS X and iWork are about 15 years behind.

So I am a PC, and yes, when my PC breaks down, I have several others at home that I can use in the meantime. I don't think I'll ever buy a Mac again; I don't see why they can charge 30-40% more for 30-40% less functionality and efficiency. At that level, I am happy to pay for the extra 2Gb of RAM I need or the silly little flash memory to "ReadyBoost" my computer.

Now, it does not mean that Windows is perfect or can't be improved, but I am a PC and happy so.

03/01/09 :: [Other] To Count or not To Count... [permalink]

I must admit that I have an odd political boundary. I could probably define myself as an ultra-liberal (in the US sense; in France it means the opposite) true capitalist. In other words, I believe in an innovative, service-based, compassionate society. Innovative, because innovation is what carried us to where we are and will sustain us into the future. Service-based because, unlike the alternative, servant-based society, it distills the best everyone has to give. Compassionate because everyone deserves to be free to be who they are, with the promise that if you contribute positively to society, society will not make it too hard to live a decent life. After all, we have well enough food, lodging, healthcare and education for everyone, not to mention parks and libraries.

The role of government is to define policies that foster innovation with enough regulation to make it safe and sustainable, provide the infrastructure to support service orientation, provide some services such as justice, police, education,… and make sure the level of compassion is adequate and environmentally sustainable. Strangely enough, I believe that the best orientation for policies is to create maximum wealth, maximum wages, and maximum sizes for companies. I believe that small is better than big when it comes to sustainable capitalism. Imagine if we had only one farm to feed everyone and that farm went out of business… Capitalism is not about employing as few people as possible; Capitalism is about driving people's activities to meet the needs of the people.

The engine behind capitalism is innovation. There might be a time when we won’t have much to innovate, but I hope I would long be dead by then.

President Obama released his budget yesterday and I must admit I read accounts of it with horror. Most people could have thought that I would be happy to see all the additional taxes on the wealthy, oil & gas companies and what not, but that budget looks more like a scapegoat hunt than a budget that will restore innovation, service orientation and compassion. The problem is not so much to limit the concentration of capital as to make sure that the people who hold the capital use it to spur innovation and service orientation in a compassionate way. In the last 15 years, the people who concentrated the capital used it to build useless structures (mansions, yachts, jets and fast cars) supported by a servant-based economy, while paving the path of "least innovation": crippling R&D expenses compounded with very high levels of "me-too" R&D projects.

Niall Ferguson seems to go at the heart of the problem:

There is something desperate about the way people on both sides of the Atlantic are clinging to their dog-eared copies of Keynes's General Theory. Uneasily aware that their discipline almost entirely failed to anticipate the crisis, economists seem to be regressing to macro-economic childhood, clutching the multiplier like an old teddy bear.

Let's be very clear: we have reached a point of absolute absurdity where pretty much everyone in the world is in debt, massive debt (people, cities, counties, states, countries…). Adding more debt is not going to help; western countries have been walking on their heads for at least the last 30 years: they replaced inflation with debt and shone via their ability to create policies that killed innovation.

Bill Burke asks us to give President Obama a chance, and nothing would make me happier than his success. But how can you expect to change anything when all the people that drove us to where we are, are still at large and just as ready to ride an economic recovery again by pumping commodity prices as hard as they can? I mean, come on, people are storing oil in supertankers with the hope that oil will rise again… Additional taxes on the wealthy may look popular (and could result in smaller mansions and more schools), but the key is not to level the field from the bottom; this is what socialism and communism have done, and we know where that goes. The key is to drive capital away from short-term profits, especially when it involves commodities such as energy and food, and into the creation of jobs, i.e. into the hands of true entrepreneurs, not the ones running after a quick buck, fast cars and big houses. The biggest failure of capitalism has been the creation of "Billionaires", which created role models across the world that pumped pretty much any human activity into their personal high/bio tech boom. Does it really make any sense that Michael Eisner, former CEO of Disney, was making close to $1M per business day? What activity could he possibly be doing that would mandate such an hourly wage?

Let’s admit that leverage generated tremendous accounting errors, errors that are impossible to fix across the globe without a massive debt-forgiveness initiative or hyperinflation. Let’s make sure that the money exchanged in any kind of deal is proportional to the activity being performed. When Sun buys MySQL for $1B, it is buying 16,000 man-years of work at the average US salary. Does that make any sense? Who will pay the difference? If banks’ leverage lets everyone make the same kind of “error”, who will ever repay that debt? When the two dentists I use for myself and my children work only 3 days per week, I think they could work 5 days and lower the cost of care by 40%.

America’s greatness came each time she put her people ahead of special interests (and the government). Today is no different: the government alone, or insipid TV ads, cannot return America to where it once was; only its people can, if only the “top” would let us do something. I’ll never say it enough: there is enough food, lodging, healthcare and education available to meet the (reasonable) needs of everyone; the only reason it does not happen is that the people who manage the capital use it to fuel their lifestyle, and nothing other than their lifestyle. And frankly, as humans, we don’t want any charity from them.

02/27/09 :: [REST] Are you Link'in? [permalink]

I have been thinking this past week about a potential implementation of HATEOAS that is not CRUD oriented. As I have mentioned several times, CRUD introduces the worst coupling possible between a consumer and a provider, not to mention that most providers would never trust a consumer to CRUD them into the right state. Just ask the Société Générale how much money you can lose when you let people CRUD around your systems.

So I will take CRUD off the discussion for the rest of this post; if all you want to do is CRUD, JAX-RS lets you do that copiously...

Here is the typical real-world example that I like to use to test concepts. It shows two resources, a PO and a shipment, each of which has a lifecycle represented below as a series of states and transitions. Transitions occur as actions are invoked on the resource.
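To make the lifecycle concrete, here is a minimal sketch of such a state machine in Java. The created, submitted and paid states and the submission and payment actions are taken from this example; the shipped/shipment pair is an assumption for illustration:

```java
import java.util.Map;

// Minimal sketch of a PO lifecycle: each (state, action) pair maps to the
// next state, and any other combination is an illegal transition.
public class PoLifecycle {
    enum State { CREATED, SUBMITTED, PAID, SHIPPED }

    static final Map<String, State> TRANSITIONS = Map.of(
            "CREATED:submission", State.SUBMITTED,
            "SUBMITTED:payment",  State.PAID,
            "PAID:shipment",      State.SHIPPED);   // assumed final transition

    State state = State.CREATED;   // reached with the initial PUT

    // A POSTed "action" resource drives the transition.
    State post(String action) {
        State next = TRANSITIONS.get(state + ":" + action);
        if (next == null)
            throw new IllegalStateException(
                    action + " not allowed in state " + state);
        state = next;
        return state;
    }
}
```

The point of the sketch is that the provider, not the consumer, decides whether a transition is legal, which is exactly what CRUDing gives away.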

A REST implementation typically uses a POST of a resource to the target resource. For instance, in order to transition to the submitted state (from created, which is itself reached with a PUT), you would POST a submission; in order to transition to paid, you would POST a payment; and so on...

This is based on RFC 2616, which defines POST as follows:

"The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line." (trans.: if you create a new resource "inside" an existing URI, use POST: this applies if you are doing something like creating a new resource and you don't know what its URI will be).

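To make these POST semantics concrete, here is a self-contained sketch using only the JDK's built-in HTTP server and client (the /po/submissions path and payload are made up for the example): the client does not know the subordinate's URI in advance; the origin server assigns it and returns it in the Location header of a 201.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicInteger;

public class PostSubordinate {
    public static void main(String[] args) throws Exception {
        AtomicInteger ids = new AtomicInteger();
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/po/submissions", exchange -> {
            if ("POST".equals(exchange.getRequestMethod())) {
                exchange.getRequestBody().readAllBytes();
                // the origin server, not the client, picks the new URI
                exchange.getResponseHeaders().add("Location",
                        "/po/submissions/" + ids.incrementAndGet());
                exchange.sendResponseHeaders(201, -1);  // 201 Created, no body
            } else {
                exchange.sendResponseHeaders(405, -1);  // only POST creates subordinates
            }
            exchange.close();
        });
        server.start();

        URI uri = URI.create("http://localhost:"
                + server.getAddress().getPort() + "/po/submissions");
        HttpResponse<Void> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(uri)
                        .POST(HttpRequest.BodyPublishers.ofString("<submission/>"))
                        .build(),
                HttpResponse.BodyHandlers.discarding());
        System.out.println(resp.statusCode() + " "
                + resp.headers().firstValue("Location").orElse("?"));
        server.stop(0);
    }
}
```

Running it prints `201 /po/submissions/1`: the submission became a new subordinate of the collection resource, which is the letter of the spec quoted above.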
The problem I have had with the claims made by the RESTafarians is that HATEOAS (Hypermedia as the Engine of Application State) is supposedly enough to manage "application" state. Of course, they should first define what "application" means in a "connected system", as there should not really be any application boundaries, but let's pass on that; one day they'll wake up and realize how much BS the (other) REST is as a programming model. Again, nothing I say pertains to Roy's REST.

Stefan, for instance, has been claiming that all he has to do is "POST" resources to advance the state of a process. He has never provided an example to show how this would really work. Savas and Jim don't even understand the difference between a resource lifecycle and a business process, so it is hopeless to think they would successfully explain how to implement a process-centric scenario in REST.

The best I can come up with to implement an action that follows the POST specification (without CRUDing with a PUT, of course), and therefore without changing anything in the target resource, is to have predefined links in the parent resource. So the PO representation would look like this once it is instantiated:

<link rel="canonical" href="" />
<link rel="submission" href="" />
<link rel="payment" href="" />
<link rel="shipment" href="" />


As "action" resources get POSTed, the links in the target resource become active; they would otherwise return a 404 error. This model has a nice side effect: the red arrows in the figure above, which represent state alignment messages, can be implemented without exchanging messages until that information is needed. Of course, once you need the information you must navigate the link, but isn't that the premise of HATEOAS, since REST can't do joins? HTTP is no XQuery, but who cares, right? So all is for the best in the best of all possible worlds, as our old friend Voltaire would say. We can all go back to landscaping our backyards.

Well, not quite. The first problem is that you have to navigate all the links to figure out the current state of a PO. In particular, this has to happen each time an action is POSTed. The second problem is that links are unordered in a resource representation. Again, REST was designed for humans, even if Roy kind of denies it, and humans can make sense of everything they can process (yes, a link labeled in a foreign language may be as dead as static text). So unless you share this ordering information out-of-band, or come up with a state machine microformat, you can't really figure out which state the resource is in.
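The link probing just described can be sketched as follows; `isActive` is a hypothetical stand-in for a GET on a link that returns 404 until the action resource exists, and the lifecycle ordering baked into the map is precisely the out-of-band knowledge that the representation itself does not carry:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Derive a PO's current state by probing its links in lifecycle order.
// Link relations are taken from the sample representation above.
public class StateFromLinks {
    static String currentState(Predicate<String> isActive) {
        // Insertion order encodes submission -> payment -> shipment:
        // this ordering is NOT in the representation, hence the problem.
        Map<String, String> relToState = new LinkedHashMap<>();
        relToState.put("submission", "submitted");
        relToState.put("payment", "paid");
        relToState.put("shipment", "shipped");

        String state = "created";
        for (Map.Entry<String, String> e : relToState.entrySet()) {
            if (!isActive.test(e.getKey()))   // e.g. the GET returned 404
                break;
            state = e.getValue();
        }
        return state;
    }
}
```

Note that every single state query costs up to one round trip per link, which is the first problem above.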

Overall, it looks to me that REST would gain immensely by defining a proper action mechanism (and while they are at it, inter-actions and trans-actions, not to mention a contract and versioning, but that's still a touchy subject despite Subbu's work). Today the RESTafarians are either CRUDing or wiring POSTs directly to the target resource via a command pattern, in complete violation of the POST semantics.

Guys, more semantics are necessary, be it for actions or, for instance, for "joins" that would bring information automatically into a consumer via HATEOAS links without having to navigate the corresponding link. Subbu seems to be working along these lines; I wish he would go all the way. There is simply no other way.

Now, I apologize that I have trouble understanding the full meaning of Roy's definition of a REST API. So maybe I am completely off base here. Maybe he considers CRUDing acceptable; at least I could not rule it out from his definition.

02/15/09 :: [REST] Towards an Identity Mechanism [permalink]

Subbu reported that "things can be accessed from multiple URIs, and without analyzing representations, it is difficult to conclude that those URIs are indeed pointing to the same thing."

Search engines have now recognized this issue, and would like servers to include a new canonical link relation to inform them of a canonical URI. For search engines, this simplifies de-duping.

Here is the example from the Yahoo! search blog.

<link href="" rel="canonical" />

This is similar to one of the solutions I suggested in my original post on resource identity.

That's great news that REST is "evolving", even though some people have claimed before that "REST" was all you'll ever need.


02/13/09 :: [SOA] WSPER's DSL [permalink]

Here is my first pass at WSPER's DSLs. This is still quite rough, but it shows the general idea:

an EDM DSL    (resource) wsper/v0.1/edm.xtxt

a Message Type DSL (query, events, actions) wsper/v0.1/mtdsl.xtxt

a Service Contract DSL (interactions) wsper/v0.1/mtdsl.xtxt

the BPMN DSL (process and assembly)  wsper/v0.1/bpmndsl.xtxt

a State Machine DSL (resource lifecycle) wsper/v0.1/wsper.xtxt


02/09/09 :: [SOA] A Message Type Architecture for SOA - comments [permalink]

So far the article seems to have gotten positive reviews, which makes me happy. I think it starts to address a vexing problem that has been described many times, most recently by Peter Rajsky, yet for which no real solution has been brought forward.

As I commented, I have seen this approach succeed with an Adaptive Software registry, using UML as the metamodel for the EDM. I had designed the UML profile but left the company before the project was complete, so I cannot comment precisely on what was achieved, other than that the consultants and my former manager told me they had succeeded. I later tried without Adaptive (for budget reasons), using oAW's M2M UML capability, but failed due to the complexity of navigating the UML metamodel (as I needed to pick up the UML profile elements) to generate the XML Schema.

Kjell-Sverre wrote a nice post on “Business Event Message Models” in the wake of the article. He adds:

I just think that we need to model also the business capabilities and interactions that utilize the message types to get a complete set of artifacts for service contracts.

One difference is that I prefer using a common information model (CIM) as the basis for modeling the message types, rather than an enterprise data model (EDM).

First, on the EDM effort, yes, it is kind of true: we had bought an Insurance EDM from a vendor, so we did not have to create one. Adaptive Software also has tons of adapters that can collect your system metadata, which you can then assemble into an EDM while keeping traceability to the physical elements. If you have neither, yes, you are on your own and that’s a daunting task, but I don’t see why you have to do it all up front. You can model the high-level entities first and then, as needed, model the basic entities.

I think this CIM/EDM debate is going to be with us for a long time. Just to throw in my two cents: if you look at Fig. 9 in the article, you will see that my vision for SOA is to create services on top of the systems of record. I have argued very often that it is even OK to start with Just a Bunch of Web Services (JaBoWS), as long as you have the vision to evolve such a service over time into an enterprise service.

For instance, say you want to automate a billing business process, so at some point you are going to create a “payBill” operation as part of your Bill Service, such that your process can invoke it and users do not have to open the billing system and enter the information that results in paying the bill. It happens that you also have 3 billing systems, and the process that you are automating only needs to talk to one. So what do you do? Do you spend a bunch of time in governance to get the funding to build an “enterprise class” Bill Service? Or do you start with just what you need? I would argue that the latter is more likely than the former. Is that bad? No, provided that you provision the capabilities that will make your service versionable and compatible. When another process comes around, you’ll expand the footprint of the payBill operation without breaking the initial process.

This is a very likely outcome if you do things right; unfortunately, none of the vendors, pundits or analysts ever told you that. Actually, some of the vendors wanted to sell you their ESB on the very premise that it could help with versioning…. You get the picture. What I am talking about is not hard to achieve; it simply requires a bit of discipline and an understanding of how technologies (such as XML, XSD, WSDL) support this approach. Believe it or not, I first wrote about this capability back in 1999. I had written a much more comprehensive paper (which was never published and which I have lost) on “Extensible Object Models”. At the time, I had to fight the CommerceOne guys who had come up with an “Object Oriented XML” (nice reification again: can you be more clueless about SOA than reifying SO behind OO? Yet pretty much everyone does it…). They tried hard to push it in the W3C. I don’t know by which miracle extensibility remained open, as the W3C working group was considering eliminating it. I guess each actor/vendor decided to ignore it, so it did not matter to them whether XML was extensible or not.
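To illustrate the kind of extensibility I am talking about, here is a hypothetical schema fragment (not taken from any actual contract): the xs:any wildcard is the extension point that lets a newer provider add elements that an older consumer validates leniently and ignores, which is the heart of a forwards-compatible versioning scheme:

```xml
<!-- Hypothetical message type for the payBill example above.
     The wildcard at the end of the sequence is the extension point:
     a v2 provider can add elements there without breaking v1 consumers. -->
<xs:complexType name="PayBillRequest"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:sequence>
    <xs:element name="billId" type="xs:string"/>
    <xs:element name="amount" type="xs:decimal"/>
    <!-- future versions add content here; lax means "validate if you can" -->
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```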

Kjell-Sverre continues:

it is unrealistic that you will be able to avoid mediation completely in your service bus.

First, I like to think of an ESB as a service container, not as something in the middle. So for me, Consumers and Providers sit in an ESB; it can be the same one or a different one (obviously you don’t want too many, but 2 or 3 different ESBs that offer different capabilities and scalability models can be considered). As I said in the paper, if mediation is needed, I prefer it on the consumer side. I don’t think it is a strong requirement, but again, with the right versioning strategy, mediations should be minimal. Lots of mediation is going to happen between the service provider interface and the back-end systems. That’s trickier.

Composite services that compose business capabilities across two or more domains will require mediation between the message models

I am not sure this is always true. Again, either a service is enterprise class or it is not. Furthermore, the versioning strategy that we presented supports “consumer areas” where consumer-specific variations can be added; otherwise, if the variations are non-breaking, compatibility will take care of them without mediation. In that case, again, you don’t need mediation between the service provider and consumer. As a general rule, you should avoid mediation. Believe me, there is enough mediation between these and the back-end systems on the provider side, and towards the presentation layer (or other back-ends) on the consumer side. So for me, I reiterate that the service interface is the CIM, and that you get a homogeneous set of semantics because it is based on the EDM.

I guess what I am trying to say is that you should use a Message Type Architecture and a versioning strategy that avoid mediation (even if they only succeed 90% of the time). Too many people have decided to bite the bullet and mediate everywhere; I think this is not a viable strategy for SOA. Mediation happens between the service interface and the system of record, not between service consumer and service provider.

Now, I’d like to comment more generally on the article. Stefan and I had agreed to restart a discussion about REST, but we could not get past the “caching” question. Enterprise data does not cache very well; who can argue otherwise? How many times a day (or a month) do you need to look at the same bill? The problem I have with arguing with Stefan, Tim Bray, Steve Vinoski or Jim Webber is that their only answer to everything is REST, just REST and nothing but REST. There is supposedly nothing that REST can’t do “out-of-the-box” in a way superior to anything else. This is wrong; this is the attitude that killed our industry. I think pretty much everybody gets it today, except for them maybe. REST is a great technology; it brought us the Web. However, the Web is not the Enterprise, and like any technology you have to learn how to use it and where to use it. I simply wish that Stefan and I could agree on that last sentence, clearly and unequivocally. That would go a long way.

I wanted to show in this article with as much precision as possible:

a) How Resources (the EDM’s entities), Events (as occurrences of state) and Services fit together from a modeling perspective. No, service interfaces are not randomly designed (I have seen that before, and yuck, what a mess).

b)    How incomplete the REST vision is when it comes to actions and events (events cannot be supported without polling in REST; you can imagine how well that flies). REST was designed for the Web, where the actions on a page are extremely limited, and you can bet a page has no events of its own. A page is not an information entity that has a lifecycle. A link is not a relation, and an HTTP verb with its error handling is only meaningful to a Page “entity”, not to an order, invoice, or bill.

Peter asked why I had to "fight EDA". As I explained, it is not so much EDA as the reification of SO and RO behind EO. Why would you have to move to EDA because you need an event somewhere, or simply because you need some asynchronous interaction? This is what I am fighting: this system that says you have to use uniform semantics and reify everything behind them. The message I want to convey is that Resources are great, Events are great and Services are great. And by the way, when you surface them properly, you get business processes for free.

Solomon Duskis commented on my post on RESTful patterns.

I also don't understand what your issues are with POST ... Adding POST to a REST call explicitly states that the action performed will create a system state change.

Yes, Solomon, we agree; this is exactly how I interpret it, but I am not sure the RESTafarians will agree with you. It breaks the “resource-only” approach that they tout to everyone who wants to listen. POST is not, and cannot be, just about adding a resource somewhere, like a payment resource to an /order/payments/ collection to show that an order has been paid. Most often, people use POST merely as an action message that will change the resource state (for instance, setting “paid” in the order's status element). If that’s the case, then REST brings absolutely nothing to the table and removes all the major advances that SOA brought us (bidirectionality, assembly, orchestration, compatible versioning,...). The way people use and will use REST is as a protocol, not an application protocol.

The claim that I have heard from the RESTafarians over and over is that I can model some process activity by just “adding” resources here and there in some directory. If this were true, I just want someone to explain to me how I answer the question "was this order paid or not?"

If indeed people have to do this extra hop to the parent resource to change its status value to “paid”, guess what’s going to happen 99% of the time? They are going to CRUD the state of the order to the right value instead of bothering to POST a payment somewhere else. Actions are my only issue with REST. No actions means no interface; no interface means no boundary (to manage, monitor, and all the runtime governance stuff that Stefan dismisses), no compatible versioning, no assembly, no orchestration,... No actions means CRUD (for both actions and content updates) and RPC (for non-actions) will dominate the programming model. This is my only issue with REST and always has been. An interface is not uniform; you want part of the interface to be uniform, but it is an illusion to think that a 4-verb interface is "enough".
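The extra hop can be made concrete with a small sketch (class and member names are purely illustrative, not from any API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "extra hop": a payment resource is POSTed into the order's
// payments collection, yet the order's own status element still has to be
// updated separately before anyone reading the order can see it is paid.
public class OrderWithPayments {
    final List<String> payments = new ArrayList<>(); // /order/payments/ collection
    String status = "submitted";                     // the order's status element

    // Hop 1: POST a payment resource into the collection.
    void postPayment(String payment) { payments.add(payment); }

    // Hop 2: CRUD the parent resource's status, i.e. the step people will
    // shortcut to directly, skipping the payment resource altogether.
    void putStatus(String newStatus) { status = newStatus; }
}
```

After hop 1 alone, the order still answers "submitted"; only the second, CRUD-style hop makes the payment visible, which is exactly why the shortcut is so tempting.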

So yes, we can circle back and forth for another 50 years; we can keep pretending that EDA can do everything, or ROA can do everything, or SOA can do everything (or OO, or EJBs, pick whichever one you want). We have seen where this kind of conjecture drove us: each and every time, into the wall. I mean, how stupid can you be to believe that events can be implemented in REST, or to say that you'll never need asynchronous and/or pub/sub interactions? This is the problem, not the solution. We could also listen to the hand-waving dudes who produced SOA-RM, SOA-RA and SoaML; we could think that the W3C SOA RA is cool too. In reality, as long as we do not express the nature of the articulation between Resources, Events and Services, we will remain in this near-death state where nothing works, where everything we do costs tons of money (because of the reification), and where the pundits and analysts can claim anything they want and pretend they understand this space, when in reality the only thing to understand is that we each hold a piece of the same puzzle. The resources (entities) of an EDM can be projected into Service calls; yes, events can be clearly marked as Event messages; and no, you don’t have to emulate request-responses with pub/sub event messages.

Where are all the enterprise architects? Where is the Open Group? The CBDI Forum? Are they asleep at the wheel? We can’t expect any vendor to come up with a model like this, because no middleware vendor can offer it today or tomorrow. Middleware vendors will stay at the protocol level for decades to come. Look at guys like Keith Swenson or Ismael Ghalimi: the only thing they want to sell you is their products and their corny vision of BPM, circa 1995-1999, à la XPDL-BPML sauce. Guys, we are well into the 21st century; it is time to wake... up!

02/09/09 :: [HP] Buyer Beware  [permalink]


If you plan to buy an HP computer in the near future, I would have second thoughts. I bought a top-of-the-line HP Pavilion last March. I like to run VMs, so I got lots of memory, the fastest hard disk possible and the fastest processors available.

The machine performs pretty well. However, in the last month both the motherboard and the fan failed. I had mentioned the fan when they repaired the motherboard, but sure enough I had to send the machine in again one month later to get the fan changed, as it made way too much noise.

HP technicians are so zealous that they reimaged my hard disk after changing my fan. Yes, my fan. I called HP, and they just said they were sorry.


01/31/09 :: [REST] Finally Some Sanity...  [permalink]

I could not believe my eyes. One of Sun's CTOs, Ross Altman, wrote this about REST in a presentation called "Question some assumptions":

Assumption: RESTful Web services should be used instead of WS-* Web services

A case can be made for the use of RESTful services for Opportunistic applications. However, for Systematic applications, the Qualities of Service that are required would have to be built on an ad hoc basis.

As a result, the cost of RESTful services would go up and interoperability would go down

If the counterparties building a RESTful interaction have to “reinvent the wheel” of runtime governance standards, the costs and complexity of RESTful Web services would increase dramatically, undermining the attractiveness of the REST model

Since the runtime management capabilities for each RESTful Systematic application would be developed in a non-standard way, interoperability between RESTful implementations would drop drastically

Not surprisingly, Stefan, Steve and Tim are not referencing or commenting on these words of wisdom (which should be obvious to anyone). It was a hard-fought battle, but how could the RESTafarians ever think they would win with their fallacious arguments? It is amazing how much, in the last 10 years or so, a bunch of people thought that everyone else could just swallow any kind of garbage. We had to deal with the WS guys, and now the RESTafarians.

The sad part is that there is merit to the concepts of Resource Orientation, as I have said many times, and an articulation between SO, RO and EO is necessary to move forward.

The worst contribution of people like Tim Bray, Stefan Tilkov or Steve Vinoski to our industry, beyond pushing the envelope on the myth that spewing BS is socially acceptable (as long as you have a network of followers), is that they made us lose at least 3 years in coming up with a unified model, if we ever come up with one.

In addition, the JAX-RS implementers (a.k.a. the middleware hobbyists) have shown with great precision what is wrong with our industry's approach to middleware. Mark Little interpreted my earlier comment as "middleware is a failure"; this is not what I meant. I am trying to pinpoint that this particular approach to middleware is a failure. Which approach? The approach of tying every middleware concept to the Same Old (and Rusty) Programming Model. The failure of middleware (not "middleware is a failure") is the failure to recognize that the middleware is the application model and the application model is the middleware. We can no longer ignore "the message" (and therefore asynchrony) as a primary programming construct. People will some day realize how much of a mistake it was to deal with "messages" through an "API" within the traditional programming model. The POSIX days are mostly gone, so in recent years part of this API has been abstracted with annotations that automate the receiving/activation part, but make no mistake, it is the same old programming model.

My statement can be illustrated by J.D. Meier's usage of patterns to implement RESTful services:

Implement a unified interface to a set of operations to reduce coupling between systems.

... [he explains]

Rather than defining chatty object based operations a façade combines multiple operations into a single interface.

As I have said several times, REST opened the passage to an "application protocol"; there is a lot to learn from there, as long as you consider the Web as an application and Enterprise Information Systems as something different from the Web. (Hence Roy's REST is not enough for EISs, but the concepts are spot on.)

What people have been doing for over 10 years now, be it with CORBA/IIOP, JEE, REST or WS-*, is to work on the protocol and stop right there. They keep wiring these innovative concepts to the same piece of code behind.

This is very clear in the way JAX-RS or Microsoft are using REST. This is why Microsoft, for instance, can confidently SOAPify REST invocations. For them, and actually for all RESTafarians, except maybe Subbu, REST is just a protocol, not an application protocol. (Roy is not a RESTafarian.)

The only thing that JAX-RS has achieved is to transform a URI syntax into an RPC protocol. I can confidently say that JAX-RS transformed the semantics of URI into URA (Universal Resource Access). Well done, guys!

You will probably notice at some point that, with REST, it is going to be really hard to "reinvent the wheel" as Ross puts it, since REST is RPC and not message oriented.

I'll let Ross make the conclusion:

Assumption: Over the next five years, SOA will be dismissed as “just another over-hyped idea”

The terminology may change, but the architecture will remain until we’re no longer implementing Composite Applications that rely on integration between heterogeneous technical and business domains

Indeed... So, Stefan, will you have the integrity to comment on these new developments? Or is your "opinion" simply that all this is rubbish?

So what's NeXT now that REST is unfortunately dead? Well, our industry will turn to a new "protocol", MVC, where endpoints are associated with actions and not resources. Make no mistake, they'll call it an "application protocol" again, and once more they'll wire this protocol to the Same Old Code. We will happily lose another 2-3 years. By 2015, after the death of IT, somebody will wake up and claim this is a mess; he or she will assemble all these concepts into a uniform connected-system programming model, and we'll finally have the capability to create Composite Applications, after about 15 years of religious wars that profited a few and killed so many. (Isn't that the definition of a religious war?)

So some people might ask why I am so upset at this "wild bunch of high flyers" who travel the world, wine and dine in the best places, and pontificate on the best way to do this or that. The only reason I am so mad at them is that so many have to pay the price for the inefficiencies they create, be it when they market a useless product, based on fallacious technical recommendations, that an army of developers and consultants have to fudge to get anything out of. I am mad to hear that vX of a product (where X>15) is now "working", that the product team finally understood what they had to do, and that all the previous (incompatible) versions on which you have written all your code are good for the trash. IT pays a heavy, heavy price for these few people. How many IT projects failed because of these "wonderful" technologies (including CORBA, JEE or WS-*)? How many projects never saw the light of day because the ROI was simply not there? I worked with these people, or for them, or against them, most often in spite of them. I know them inside and out; I have tried every possible way to talk with them, but you can't. They are the "TechnOracles" of the world (such serendipity, Duane...); you just can't question their recommendations. Yet their recommendations drove us into a wall. Yes, Joe (McKendrick) or Nick (Gall), maybe the press and the analysts could wake up a bit and actually raise questions rather than "reporting" or, for some, fantasizing about the death of SOA. Across-the-board skepticism is a good thing, and it is about time these people stopped steamrolling the rest of us.

01/26/09 :: [SOA] In defense of SOA standards bodies...  [permalink]

Duane Nickull and Joe McKendrick responded to my post on the SOA Soup. Thanks, but no thanks. One of the comments, from Alexis, reminded them that maybe, just maybe, they could at least look at SAE and Praxeme. Couldn't Duane (and Joe) ask themselves this simple question: how come such small groups of people produce work that is vastly superior to that of the mighty vendors' representatives, with all the structures and processes of organizations like the OMG and OASIS? Ah... I almost forgot: Alexis, Duane is "the" TechnOracle; he does not ask questions, he has only answers.

Do they even care to look? Could they compare the petty work of SOA-RM, SOA-RA and SoaML with these two pieces of work? No, I forgot, they don't look; they defend their crappy work. How crappy? Let's take versioning, for instance. Miraculously, SOA-RA talks about versioning. For Duane's working group, versioning is an "Architectural Implication"; the "implication" is that we need:

mechanisms to support the storage, referencing, and access to normative definitions of one or more versioning schemes that may be applied to identify different aggregations of descriptive information, where the different schemes may be versions of a versioning scheme itself;

Boy, aren't we glad that this committee is not designing airplanes. Joe, reading behind this arcane language, you see that they don't even think of runtime "compatibility" (a major architectural implication). No, what they have in mind is "source-control-style versioning", at design time. Do you think they would actually reference the work of Dave Orchard on the topic? Here is a guy who spent a good chunk of his career building one of the most critical pieces of SOA, by himself, in the shadow of this machinery. A forwards-compatible versioning scheme is the most essential element of SOA; this is what makes SOA different: never before has the software industry had such a capability. Wouldn't you think that such a foundational piece would deserve at least a link to Dave's work (say, as an example of a versioning scheme)? Noooo....

SoaML has zero references to Service Versioning or Version. You mean Version is not even an attribute of a Service Description? Joe, what is there to defend? Please tell us. Read the spec.

Let's take another example, just as foundational: the relationship between an Enterprise Information Model and Message Types. How are these two specs treating such a foundational problem? They hand-wave. They don't care. It is way above their heads. When will those bozos stop wasting our time with these "specs to nowhere"?

And I could go on and on and on. Joe, how come it took until June 2007 to complete the WS-* stack? Don't you think we could have done that earlier? How come a spec as innovative as WS-CAF was killed on the altar of vendor politics, to be replaced by something vastly inferior (WS-TX), designed by a couple of big vendors and handed out for signature at OASIS? Why has WS-I made bidirectional contracts an outcast of SOA?

Neither SAE nor Praxeme is vendor driven... their goal is not to bake a product, or somebody's career or pet project, into a standard; it is to actually produce something that can be used by everyone. Duh...

I feel sorry to have to write it this way, but people like Duane or Cori will never understand. There is no other way. They are "standards pros", they are "thought leaders". They think and behave like wrestlers who have to win a competition where everything is permitted, from reification to spec injection and even back-stabbing. When the Burton Group writes "SOA is Dead" just to get enough publicity to acquire a few more "clients" (at the cost of wasting everyone else's time and pausing a few SOA initiatives that suddenly felt the urge to investigate whether SOA really is dead or not), you understand how sick our industry has become. Actually, Anne was quite right: this SOA, the one of the standards, the vendors and the analysts, is dead. The users are taking matters into their own hands. They are tired of waiting for these guys to produce anything of value.

Joe, unless we change the system, we'll stay in this garbage-in, garbage-out mode. Don't you think that we, the users, deserve better? Don't you think we have listened to these guys enough? Look where they drove us.


01/26/09 :: [REST] RESTful Patterns [permalink]

Lots of activity on REST last week. Bill Burke released RESTeasy and J.D. Meier's team came out with a set of implementation patterns that facilitate RESTful service implementations.

The good thing about all this RESTful activity is that we finally have some very concrete stuff to talk about: no more hand waving by the RESTafarians, who like to pass off their opinions on whatever topic as solid reasoning.

Let's start with RESTeasy. As you may be able to tell, I was very interested to see how JAX-RS and Bill used "POST" in their model and how HATEOAS would surface. Well HATEOAS is "out-of-scope" for that and POST is used classically to add resources "around" an existing resource (e.g. POST a reservation to a tennis court resource to perform the "reserve" action).

JAX-RS is very sparse on the usage of @POST. This is the only time it is mentioned in the 49-page spec:

@Path("widgets")
public class WidgetsResource {
  @GET
  @Produces("application/xml")
  public Widgets getAsXML() {...}

  @GET
  @Produces("text/html")
  public String getAsHtml() {...}

  @POST
  @Consumes("application/xml")
  public void addWidget(Widget widget) {...}
}
Now, adding resources right and left is what REST wants you to do. As I mentioned, this is quite an odd model, because the question is: how do you figure out whether a tennis court is reserved for a given time? Do you "try" to POST a reservation and wait for an error, and if there is none, immediately cancel it? Do you ask for all the reservation resources and figure it out for yourself? Do you ask the "tennis court" if it is reserved? If so, how? Remember, your request must return a resource representation of some sort. In reality, the wonderful world of REST is going to turn into a massive CRUDing orgy that has the potential to kill the concept of "connected systems". RESTafarians, shortsighted as they can possibly be, don't see the coupling that is introduced by CRUDing, even if, like Microsoft, you use the incredibly innovative concept of "Application Lifetime States".
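To make the ambiguity concrete, here is a minimal, purely hypothetical sketch (plain Java, in-memory, no real HTTP; all names are mine, not from any spec) of the two client strategies in question: "try" the POST and treat a conflict as "already reserved", or GET everything and scan for yourself:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for the "tennis court" resource.
class CourtResource {
    private final Map<Integer, String> reservations = new HashMap<>(); // hour -> who

    // POST /court/reservations : true plays the role of 201 Created,
    // false the role of 409 Conflict.
    boolean postReservation(int hour, String who) {
        return reservations.putIfAbsent(hour, who) == null;
    }

    // GET /court/reservations : the client must fetch everything and scan.
    Collection<Integer> getReservedHours() {
        return reservations.keySet();
    }
}

public class ReservationDemo {
    public static void main(String[] args) {
        CourtResource court = new CourtResource();

        // Strategy 1: "try" to POST and interpret a conflict as "reserved".
        boolean first = court.postReservation(10, "alice");  // true  (created)
        boolean second = court.postReservation(10, "bob");   // false (conflict)

        // Strategy 2: GET all reservations and figure it out client-side.
        boolean reserved = court.getReservedHours().contains(10);

        System.out.println(first + " " + second + " " + reserved); // true false true
    }
}
```

Either way, the "is it reserved?" question is answered by CRUD side effects or client-side scanning, never by a first-class operation on the court.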

J.D. Meier's view on REST implementation is no different. Here is the key to Microsoft's vision for implementing REST: the Entity Translator pattern:

Implement an object that transforms message data types to business types for requests and reverses the transformation for responses.

... [he explains:]

Resources exposed by the service represent an external contract while business entities are internal to the service. As a result, translators are required to move data from one format to another.
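For what it's worth, the pattern itself is trivial to sketch. Here is a hypothetical translator (all type names are mine, not from the guidance) that maps a wire-level message type to an internal business entity on requests and reverses the mapping on responses:

```java
// Hypothetical wire-level contract: loosely typed, part of the external contract.
class WidgetMessage {
    String id;
    String priceCents;
}

// Hypothetical internal business entity: strongly typed, private to the service.
class Widget {
    long id;
    long priceCents;
}

// The Entity Translator: message -> entity for requests, entity -> message for responses.
class WidgetTranslator {
    static Widget toEntity(WidgetMessage m) {
        Widget w = new Widget();
        w.id = Long.parseLong(m.id);
        w.priceCents = Long.parseLong(m.priceCents);
        return w;
    }

    static WidgetMessage toMessage(Widget w) {
        WidgetMessage m = new WidgetMessage();
        m.id = Long.toString(w.id);
        m.priceCents = Long.toString(w.priceCents);
        return m;
    }
}

public class TranslatorDemo {
    public static void main(String[] args) {
        WidgetMessage in = new WidgetMessage();
        in.id = "42";
        in.priceCents = "1999";
        Widget w = WidgetTranslator.toEntity(in);          // request direction
        WidgetMessage out = WidgetTranslator.toMessage(w); // response direction
        System.out.println(w.id + " " + out.priceCents);   // 42 1999
    }
}
```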

J.D. shows either how ignorant he is about REST or how blind the RESTafarians are about actions (whichever you prefer), since he recommends using the Façade pattern just behind the Translator pattern:

Implement a unified interface to a set of operations to reduce coupling between systems.

... [he explains]

Rather than defining chatty object-based operations, a façade combines multiple operations into a single interface.

So just how POSTy is Microsoft's view of REST? Check this wonderful "Promote" action!! Of course, J.D. is going to explain to you that the purpose of the wonderful machinery he has put on top of the promote action is precisely to enable consumers to POST a "promotion resource".

But then the question is: what for? What have we gained, since we are calling the same old code? Absolutely nothing. However, in the process we have lost the key advances of this decade in terms of middleware:

  • bidirectional interfaces
  • forward compatible versioning
  • assemblies (such as SCA)
  • orchestration languages (such as BPEL)

This is incredibly sad; the middleware intelligentsia has lost its way. They simply don't understand that to make any progress, the application model and the middleware have to complement each other; in other words, the application model needs to be message oriented. Adding annotations to corny OO concepts (which are used to generate boilerplate code) is what is wrong and has been wrong ever since CORBA, whether you Object Orient, Service Orient, Resource Orient or Event Orient these annotations. How could all these people believe that this is an "annotation" problem?

Never has my 2002 post on "The End in Mind" been so current. Yes, Stu, our industry only knows how to circle back... no matter what you feed it.

Yes, you guessed it: if all that the RESTafarians achieve is to "route" calls to the Same Old Code, we will have lost everything: Service Orientation, Resource Orientation and Event Orientation, all together, in one fell swoop. Thank you guys! What an achievement! I do not mean it has to be that way at all. I respect Resource Orientation as a concept as long as it is well articulated with Service Orientation and Event Orientation, but this is not what the "Wild Bunch" has in mind. They couldn't care less about articulations. They want bloody dominance (a.k.a. reification), and we all know how this will end up: dominance only leads to demise.


01/19/09 :: [Other] Really Shortsighted Syndication [permalink]


Last week the Wall Street Journal had a really interesting article about Microsoft's tortuous Online Advertising History: Microsoft Bid to Beat Google Builds on a History of Misses.

The part of the article that caught my eye was this figure:

[search ads spending]


I had read a couple of years ago in Google's financial results that Search was growing faster than display ads, but I had no idea that the damage was so extensive. I say "damage" because I can see a strong correlation between this trend and the disappearance of the press. Everywhere across the country, the press is being hammered by decreasing advertising revenues. Even the country's most prestigious newspapers are not immune to the phenomenon. Our own Seattle-PI may disappear in a few months.

Now, I am a geek. Dave Winer is also a geek; he invented RSS (Really Simple Syndication). Everyone I know at Google, Microsoft, Yahoo and Amazon is a geek. I am sure there are a few who are not, but their business model is also "geeky".

For instance, let's look at Google's business processes and technologies in more detail. Google's processes were born after some insight into Overture's business. Overture didn't have a "search" business; it was managing online advertising for large Web sites. Google automated Overture's business processes with... search, a natural choice at the time. In the process, Google opened up the online advertising market to scores of Mom & Pop web sites. When Google decided to tie online advertising and search, they sealed the fate of the press. They sealed the fate of the press not just because they spread the advertising pie over more "publishers"; they sealed its fate because they decoupled advertising from "content". Did they do it on purpose to capture a larger share of revenue? Was it a geeky mistake (as in an honest mistake)? It does not matter; the damage is done.

Incidentally, Google's advertise-to-cash business processes are fundamentally flawed. If you look at them in detail, there are only two roles outside of Google: the advertiser and the ad publisher. Please note that I do not say "content publisher". This again looks harmless. Yet when Google decided to aggregate the publisher and the content author roles into one, they basically forced people like me (content authors) to become publishers, and they forced publishers to "hire" content authors. Again, I could easily argue that this was a geeky mistake, but in this day and age it is accelerating the press's agony while crippling the online advertising market, as advertisers experience more and more difficulty relating "content" to advertising.

I would like to argue four points:  

1) The rise of search is destroying the Web as a branding platform: while search may indeed find some "instant" customers, advertisers are returning to traditional media (TV, Radio and... Press) for branding activities, which is one of the core markets for ads. This is why the online advertising market is saturating and never reached the projected $80B.

2) The rise of search is partly due to click fraud: I have not checked lately whether Google has improved its algorithms to detect fraud, but I ran an experiment two years ago where I used two different AdWords accounts to advertise a free community teaching tool that helps children learn how to read. As you can guess, one was set up with "search" and the other was set up to display ads on related web sites. Surprisingly, the display account would reach its limit within the first hour of the 24-hour period, while the "search" account would build up nicely over the 24-hour period and reach its limit randomly. As a result of potential click fraud, advertisers are limiting their display dollars, which in turn impacts the press.

3) If we were to create additional syndication business processes that relate publishers and content authors, we would enable the press to increase its revenue: publishers could select more content without having to set up complex relationships and revenue-sharing strategies, and similarly, their content could be syndicated in other newspapers, blogs or social networks, again bringing more revenue to the publisher.

4) RSS and Atom are terribly shortsighted syndication technologies: unfortunately, these technologies were invented by geeks for geeks. They are useful, yes, but their side effects are dire. We need a technology that allows syndication with revenue sharing, while leaving the publisher some control over integrating the content and the ads into its site layout.

 I hope that Google, Microsoft, Yahoo and Amazon realize how critical online advertising is to a strong and independent press, and ultimately to society, and that the current online advertising business model is killing it. I also sincerely hope they realize that the press can thrive again if only the eternal bond between content and advertising is renewed.

01/17/09 :: [SOA] The New Whatever World [permalink]

Miko Matsumura recently renamed his blog the "WhateverCenter" (previously known as the SOACenter) in response to Anne Thomas Manes's publication of SOA's death certificate.

In a recent blog post, Stefan Tilkov discusses Subbu's question about how reliable the self relationship defined in Atom is. Stefan concludes:

But in conclusion, I stand by my opinion that URIs can and should be used for identity – whatever “identity” might mean for you.

Even though Subbu keeps pounding on the fact that RFC 3986 clearly states:

URI comparison is not sufficient to determine whether two URIs identify different resources.

In other words, URIs cannot safely be used for identity purposes. The method URI.equals(URI) returns an undefined result; actually, I can safely say that it returns a whatever result.
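Subbu's point is easy to demonstrate with java.net.URI itself. The first two URIs below identify the same resource under HTTP semantics (an omitted port means the default, 80), yet equals() reports them as different; this is a sketch of the pitfall, not an exhaustive treatment:

```java
import java.net.URI;

public class UriIdentity {
    public static void main(String[] args) {
        // Same resource under HTTP semantics: an omitted port defaults to 80.
        URI implicit = URI.create("http://example.com/widgets");
        URI explicit = URI.create("http://example.com:80/widgets");

        // URI.equals compares components syntactically: undefined port != 80.
        System.out.println(implicit.equals(explicit)); // false

        // Meanwhile, scheme and host ARE compared case-insensitively, so the
        // comparison mixes pure syntax with scheme-specific knowledge.
        URI upper = URI.create("HTTP://EXAMPLE.COM/widgets");
        System.out.println(implicit.equals(upper)); // true
    }
}
```

Two "equal" URIs do identify the same resource, but two "unequal" ones may or may not, which is exactly what RFC 3986 warns about.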

What's interesting about Stefan's response is the "Whatever" word he used. We now live in a world where any claim has merit. REST has some little snags here and there? No, not in the whatever world, you redefine your expectations to match what REST, i.e. whatever, does, and voila, you just got a whatever solution to a whatever problem. You can say anything you want, short of "gravity does not exist" and get away with it. You just need a social network of believers. The more believers, the more "truth" there is to your statement.

This whatever world is quite magical. You want to start a war? You just fudge some intelligence data, and voila, the whatever world lets you get away with it. Your product can't do SOA? No problem, define SOA as "whatever" and your product can now do SOA. SO is different from OO? No, not in the new whatever world. All this OO stuff is now SO, i.e. a class is now a service (and hence a service is a class and an operation is a method). Facts, statements such as the one from RFC 3986? Minor annoying details; they now count for far less than someone's opinion. I am sure that Bush's opinion when he started the war was that Saddam had WMDs.

What does Stefan really mean by "whatever identity might mean for you"? What does "I stand by my opinion that URIs can and should be used for identity" mean? You mean a question as essential as identity only deserves an "opinion"? Yes, but of course, as long as it has been published (on the Web) it is truthful. Isn't it?

01/11/09 :: [SOA] The SOA Soup [permalink]

There is no better way to kill a SOA initiative than handing out the trilogy of the most vaporous specs the SOA community has ever produced: SOA-RM, SOA-RA and SoaML. Considering that Duane Nickull is behind the first two and Cori Casanave is behind the third, this should be no surprise to anyone.

It's interesting that people can still fund participation in this kind of initiative. If SOA-RA's figure 2 does not scare you enough, you'll spend much of your time wondering whether this spec is just shallow or plain idiotic. For once I could easily side with the RESTafarians, who look at this kind of work as a justification for adopting REST. Check this out: SOA-RA does have a "resource" as part of its RA. However, a Resource is the parent type of a description, as in a Service Description. This is hopeless. The saddest part of this document is figs 21 & 26, as they describe the relation between the information model and the service interface. Of course, SOA-RA had to have a layer of Social BS, otherwise it would not be "complete". A bit of light with the action model, but that's not enough to salvage the spec. Fig 30 is really rocket science; I am glad they thought of it. Fig 31 is the pinnacle of this spec, service orientation at its best. Thank god they were able to derive the W3C reference architecture (figure 37) from their RA. What would we have done without the W3C vision for SOA? On the highest note of the document, they do talk about service collaborations (p72). I now know where Jim Webber got his initiation to "Service Oriented Business Processes":

Business processes are comprised of a set of coherent activities that, when performed in a logical sequence over a period of time and with appropriate rules applied, result in a certain business outcome. Service orientation as applied to business processes (i.e., “service-oriented business processes”) means that the aggregation or composition of all of the abstracted activities, flows, and rules that govern a business process can themselves be abstracted as a service [yeah right...]


Fig 50, 51 and 52 are hilarious. Isn't UML wonderful? Who knew SOA Governance could be defined with a handful of classes?

SoaML is not bad either. As a starter, you get that they already got the camel case a bit wrong. If only that were the only problem. You'll notice that there is a bunch of vendors behind it. Now, I like the OMG, it is a great institution that usually comes up with great specs, and the SOA Consortium has published countless amounts of wisdom to help you drive your SOA initiative. So where did SoaML come from? This is Back to the Future XV. Already, back then, Cori Casanave was plotting to align ebXML BPSS with OMG's BOCA, even though they had nothing really in common. BOCA was a great spec, way ahead of its time, but what a stretch to extend it to B2B collaborations. No worries, the software industry loves to use one thing for another; reifying, they call it. Actually, I learned this word from Cori himself (sorry, in France we don't reify; this word is banished from our vocabulary). I feel sad that Fred Cummins got dragged into that spec; there is certainly a big mismatch there. So what did Cori do this time? With the help of Antoine, they plugged ebXML BPSS into SoaML. In reality, it's BOCA that he has tried to plug in. How about DCE? SGML anyone? Anyone remember OBI?

You will notice on figure 4 how much of a dead end UML profiles are, or how bad MOF is, whichever you prefer. They need two "interfaces" to describe a "Service Interface" because of the bidirectionality. I should be happy; unlike the idiots at WS-I, they support bidirectionality.

I like the hat on fig 18... LoL

Figure 22 shows again how wrong it is to go the UML / UML profile route. This is pure hand waving. Come on, Fred, you can't let that pass. Can't anyone get the relationship between an information model and a message model right? You can't tell me you can't.

You see, it really takes a couple of guys here or there to screw up an entire field, or even an entire country for that matter. I've got an idea: you guys could spend some quality time defining the REST-RA? You know that is a very promising field. I am sure you can get your names up high on the boards and magazines. If you are lucky, analysts will even ask you some questions and write about how cool your REST-RA is.

My advice is that you use CBDI's SAE and Praxeme. These guys really know what they are talking about.

The tragedy behind SOA is clear: it lacks a real RM, RA and ML/programming model. Over the last ten years, everybody and their brothers, sisters and mothers told us what they thought SOA was. They pompously initiated debates around data, integration, JaBOWSs... before they proclaimed their own ignorance and closed-mindedness and told us XXX is dead (where XXX is ebXML, Web Services, SOA, ESB...). In the meantime, vendors like Cori or Duane (and frankly many, many more) reified their products, thinking, programming models, databases... you name it, behind the SOA banner... ah no, it was actually the other way around: they came out with their petty vision of SOA and explained to us that their silly product was SOA. WCF, for instance, came out and reified SOA behind OO; now the same team is reifying REST behind SO, i.e. OO. You take Bill Burke with RESTeasy, and what does he do? He makes REST so easy to use that you actually add a couple of annotations to a Java class and you are done. Happy CRUDing...

So I don't know what to say anymore. I don't know what the 2010s will bring us. I just know that the 2000s brought us nothing, and SOA-RM, RA and ML are an excellent summary of what happened in the last 10 years.

01/11/09 :: [SOA] Analysts are Dead, Long Live IT [permalink]

If this deeply and totally stupid discussion has proven one thing, it is that as of 1/1/2009, analysts are dead. The people who make their living by listening to rumors, who collect a couple of data points and pompously write "research" papers about their Starbucks discussions, are gone. They won't be funded in 2009. No more Psychedelic Quadrant, no more bogus measures of product capabilities and market numbers, no more SOA-EDA-WOA crap. Not only can't they make up their minds (how could they, just by talking here or there over coffee), the direction they set is as bogus as their approach to setting a direction for IT.

Analysts have eradicated any effort to get it right, to build the right products and deliver the value that customers need so desperately. They single handedly ruined our industry by:

  • driving good architects to adopt the same tactics and slowly drift into using bogus claims to stir the technology pot to their advantage
  • enabling the emergence of a group of pseudo-architects who write white papers and guidance all day and say whatever they want (there will always be an analyst to say this is great, since they heard it from an "architect")
  • discouraging architects who try to put some sense behind all this. Their management, or their management's management, comes to them and says: but what you are telling us is "failing", why should we invest there? Their message is geeky by nature (no, CxOs don't need to understand XML extensibility) and, yes, this stuff requires that you scratch your head a little bit. That doesn't seem to be a capability that comes with the vast majority of analysts.

As they understand that they are becoming increasingly irrelevant, they have decided to take a "Paris Hilton" approach to their recommendations: the more dramatic, the bigger the audience.

Analysts should be all over people like Dave Orchard, having him explain how Service Versioning plays in an SOA, why it is critical to reuse, and why the lack of a service versioning strategy could explain some low ROI. Do they even understand what Service Versioning is? There are many people who have spent their careers getting this stuff right, and there are many smart people who get it. But that does not "sell". How come Gartner can still come up with a new acronym and make the "news"? How come Anne writes a post with no facts and totally bogus claims and that makes the "news"? This wonderful system works like People Magazine and Paris Hilton: they can't exist without each other. As sad as it may be, some people have even successfully injected that way of thinking into the IEEE itself.

There are a few analysts who are still doing the (valuable) job that we are expecting them to do. The CBDI Forum comes to mind.

01/09/09 :: [SOA] Can Anne be more Wrong? [permalink]

Anne decided to drop the rhetoric and explain what her motivations were.

My real point is that we should not be talking about an architectural concept that has no universally accepted definition and an indefensible value proposition. Instead we should be talking about concrete things (like services) and concrete architectural practices (like application portfolio management) that deliver real value to the business.

Anne is asking us to turn away from Architecture. This is precisely what has been so wrong with SOA for 10 years now. This is what no one ever got at Systinet; this is why Systinet ended up being a "SOA registry company" after realizing that Infravio had opened up that market. We have no universally accepted definition and an indefensible value proposition precisely because we are not talking about it, or when we talk, we get lost in silly rhetorico-political discussions (pick your favorite: Integration, JABOWS, REST, Dead or Alive...) with people who have no objective other than sticking their little name up the software industry post.

Anne's statement is so silly that even Duane Nickull was able to make a sensible comment about SOA, which happens probably once every 5 years.

Now correct me if I am wrong, but SOA is "Service Oriented Architecture". Is Paul implying that architecture that is oriented around services is itself dead, yet the services will exist? This makes no sense as everything has an architecture, whether explicit or not.

Actually, Anne's argument is so silly that I even have to agree with Stefan, who equates CRUDing with SOA:

I think there is a strong case to be made for the core ideas of SOA. In fact, I don't think there's an alternative.

Tremendous architecture concepts have been born over the last 20+ years. We have a unique opportunity to bring these concepts together (unlike the OASIS SOA RM), and all that a little clique of pundits can come up with is "let's not talk about it"... (with my Clouseau accent) "Zhere is a financial crizis, su let zus duck behind our desk and sip our lattes".

Note that this is also true of BPM, so there is nothing specific about SOA here.


01/04/09 :: [SOA] Great News, Application Architecture is now Stable (at least at Microsoft) [permalink]

Who knew that we would see such a glorious day? I could not believe my eyes. J.D. Meier runs an interesting blog; you can sense that the chap spends a lot of time thinking. So I opened his application architecture guide, vintage 2.008 (J.D. is apparently the lead on this wonderful document).

Here is the v2.008

And here is the v2.002

In 6 long years, this picture has not changed. Wow! Houston, the eagle has landed.

Yet this architectural guide is full of "modernism" (actually structuralist post-modernism): for instance, it speaks about REST at length, yet REST has absolutely no impact on the logical view of the application model... nice! Why do we even bother? Hum... I am actually not sure people at Microsoft are teaching RESTful principles and RESTfulness in a way that the rest of the industry would consider RESTful (Dare, are you ok with that?). So I am certain the RESTafarians are going to love Microsoft's architectural definition of REST:

REST is based on HTTP, which means that it works very much like a Web application, so you can take advantage of HTTP support for non-XML MIME types or streaming content from a service request.

REST works... very much like a Web application (especially the ones you build with ASP.Net). Ha, ha, ha. I don't think I have laughed as much in a long time.

The whole document is actually hilarious. This one is not bad either:

The main difference between these two styles [REST and SOAP] is how the service state machine is maintained. Don’t think of the service state machine as the application or session state; instead, think of it as the different states that an application passes through during its lifetime.

HATEOAS is about application "lifetime" states. Wow. So much to learn, so little time.

I am wondering if Doug Purdy agrees with that one:

SOAP is much better suited for implementing a Remote Procedure Call (RPC) interface between layers of an application.

It gets better. Who knew REST could "provide" anything you need?

The WS-* standards, which can be utilized in SOAP, provide a standard and therefore interoperable method of dealing with common messaging issues such as security, transactions, addressing, and reliability. REST can also provide the same type of functionality, but you must create a custom mechanism because few agreed-upon standards currently exist for these areas.

I love the "you must create a custom mechanism". The more custom, the more chances it could become an agreed-upon standard.

I left the best for the end:

The most common misconception about REST is that it is only useful for Create, Read, Update, and Delete (CRUD) operations against a resource. However, REST can be used with any service that can be represented as a state machine. In other words, as long as you can break a service down into distinguishable states, such as “retrieved” and “updated,” you can convert those states into actions and demonstrate how each state can lead to one or more states.

You mean "retrieved" is an "application lifetime state"?

At Microsoft, in the CSD, you can convert "states" into "actions" (they finally found a use for the Alchemy project). Unless they have the same linguistic problems at Microsoft as at JBoss; I think they cheated off jBPM's documentation. Now that I think about it, they also forgot the "created" and "deleted" lifetime states; that would make a new acronym we have never heard before: CRUDed.

See, guys, even REST is not going to resist this phenomenon. Ted Neward was right on the money. Let's give J.D. and his team a lollipop of appreciation for such innovative contributions to REST (sorry, no gold star due to the financial crisis).

I don't know what to say anymore; our industry has become a junkyard where nothing matters. As Dave Chappell told me once: you are wasting your career, the truth is whatever Microsoft (or XXX) says it is (he consults for Microsoft all the time). Yes, I still can't understand how a company of this caliber can let this fly. What's the purpose? Kill REST by uneducating the masses? Don Box's team screwed up a couple of annotations, so the whole world has to be uneducated? Is it really 2009? Are we in America?

Microsoft is sure good at shipping: Ship, Ship, Ship, Shipped... surely the guide was.

Subbu and I live less than a mile from each other; I'll suggest next time I see him that we create the "Dead Architects Society". We could arrange weekly meetings at the Ale House and talk about the good old days of architecture while sipping an IPA or two. Software Architecture no longer exists. At least, I can't hear a pulse anymore.

01/04/09 :: [BPM] Grosse Fatigue (Dead Tired) [permalink]

I really don't know why I spend any part of my Sunday looking at jBPM; maybe I am always hoping that one day one of the BPM thought leaders of our industry will come up with something of value that can really bear the name "BPM". It's true that I had given up on jBPM a few years ago, and I wanted to see if it was still fair to ignore it.

I browsed through the documentation.

As you can imagine I was intrigued by the "state" activity. Unfortunately Tom needs to take some linguistic classes. Here is how he defines a state (he defines all states that way):

<state name="Verify supplier">
    <flow name="Supplier ok" to="Check supplier data" />
    <flow name="Supplier not ok" to="Error" />
</state>

There are two kinds of verbs: action verbs and state verbs. Action verbs show that somebody does something; state verbs state that something IS. I assume that in his mind "Verify" is a state verb. This is elementary school grammar. I don't know Dutch, but I assume it makes the same distinction.

The section on "variables" is marked "TO DO", so I have no hope of seeing any resource lifecycle. Ever since I came across Tom's work, he has had a UML-activity-ish approach to BPM, and adding a "BPMNish" look and feel is not going to change anything.

Subbu forwarded me this quote from Ted Neward:

XML Services: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop.

It looks to me like our industry has entered "the bigger fool" phase. Everything is up for grabs, and anyone can pretend that his or her product/project does "foo" if "foo" is what makes the product/project sell.

01/01/09 :: [SOA] SOA [permalink]

As many of you know, I am pretty passionate about the work I started about a decade ago around SOA/BPM (some would say too passionate). I hope some of you find my points of view helpful. Johan den Haan wrote me an encouraging email a couple of days ago thanking me for the discussions about SOA we have had over the last 12 months or so. I'd like to thank him back, because any discussion is helpful to both protagonists.

Maybe it is just a sign of the times, maybe it is just our inability, as humans, to clearly express, and hence understand, intent when communicating, but as of 12/31/2008, people are still completely confused about what SOA is. At this rate, I have no hope whatsoever that this will change in 2009. It seems as if the software industry is unable to create long-range coherence and articulate different concepts together. For decades now, vendors and the people who work for them have looked at creating standalone products and technologies, leaving just enough room for plugins. For decades these vendors never ever thought about "composition", be it at the infrastructure or the solution level. Computer Scientists don't have a much better track record. For some reason they never expanded their horizons beyond Turing, maybe to Petri (State) or Milner (Inter-process Communication). The ones who did immediately took on the task of creating "Turing complete" technologies. I am actually surprised that the RESTafarians never claimed HATEOAS to be Turing complete. What we have also noticed for decades is this: whatever concept someone comes up with, there will always be someone else who will use it for something completely different and claim that it can do everything possible. In physics, the equivalent would be someone trying to use Newtonian Mechanics to explain all aspects of the universe, large and small, Energy or Matter. Why is it that in physics this approach would be considered ludicrous, while in Computer Science it is perfectly ok?

Back to SOA. SOA is very easy to understand as soon as you understand what reuse and composition mean.

Reuse at the software library level does not work in the information system world. In the enterprise, what you want to reuse is information and information management, not some Fast Fourier Transform algorithm (that problem was solved decades ago). Reusing information means that you have to reuse it "in place": you just can't duplicate information the way you duplicate an algorithm. Could we just expose a database connection then and call it done? Not quite, because you would miss the information management piece. Information alone is worthless without the metadata, business logic and reporting that surround it. Well then, can't I reuse that business logic packaged as components or libraries? Not quite either. The reason is "control" and the necessary alignment between information and information management: these two can never go out of sync. So if I give you a set of components and a database connection, the day the information owner wants to change something in the way this information is managed, you are going to have to change lots of things in the code that uses those components.
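To make the argument concrete, here is a minimal sketch (all names invented, no real product implied) of reusing information "in place" through a service contract: the information and the business rule that governs it live together behind one interface, so the information owner can change the rule without breaking consumers.

```python
# Hypothetical sketch: information and its management owned by one service.
# CustomerService, credit_limit, and the data are illustrative assumptions.

class CustomerService:
    """Owns both the information (the records) and its management
    (the business rule). Consumers never touch the raw data."""

    def __init__(self):
        # The information stays "in place" here; it is never duplicated
        # into consumer code the way a library algorithm would be.
        self._records = {"acme": {"revenue": 1_000_000}}

    def credit_limit(self, customer_id: str) -> int:
        # The business rule sits next to the data it governs; if the
        # information owner changes this rule, consumers are unaffected
        # because they only depend on the service contract.
        revenue = self._records[customer_id]["revenue"]
        return revenue // 10

# Consumer code depends only on the contract, not on the data layout:
service = CustomerService()
print(service.credit_limit("acme"))
```

Contrast this with handing out a database connection plus a components library: the rule and the data are then duplicated into every consumer, and they inevitably drift out of sync.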

Computer Scientists and Middleware hobbyists don't get that. For them, information is this dirty, annoying stuff that either sits in memory or flies over the wire. Code (and scripts) rule. They focus all their energy on producing reusable code. LoL. They want to know as little about information and information management as possible. All software architectures are based on this tragic, deliberate will to ignore information and information management. And the people who claim SOA is a failure are simply basing their reasoning on "code". They all naively believe that this is just a problem of shoveling data back and forth between physical media and objects or in-memory data structures. Even the mighty RESTafarians do that. They cleverly stop at the "Resource Representation" level, ignoring "information" and "information management". The RESTafarians' REST is just an "on-the-wire" thing.

I won't say that Web Services/SCA give me native reuse capabilities for information and information management. After all, they were designed by a bunch of Computer Scientists and Middleware Hobbyists. Yet, they serendipitously (and involuntarily) laid some important foundations for reaching that goal. The day someone associates service orientation with resource orientation and event orientation, we will have a robust infrastructure for reusing information and information management.

But to reuse, you also need to understand how to "compose" or "federate". There are three levels of composition/federation: information, process and presentation. Of course, the Computer Scientists are once more confusing the space by reifying all composition mechanisms to the presentation layer ("mashup", they say). This is music to the ears of the RESTafarians. Unfortunately, ignoring information and process composition is not going to get us to the level of information and information management reuse that is needed. And one more time the analysts will be quick to conclude that XXX does not work (I predict that XXX will equal REST towards the end of 2009).
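The three levels can be sketched side by side; this is a toy illustration with invented functions and data, not a framework, but it shows why collapsing everything into the presentation layer loses the other two.

```python
# Illustrative sketch of the three composition/federation levels.
# All function and field names are hypothetical.

def order_info(order_id):
    # Information level: a federated view composed from several
    # information owners (say, a customer service and a logistics service).
    customer = {"name": "ACME"}           # would come from a customer service
    shipment = {"status": "shipped"}      # would come from a logistics service
    return {**customer, **shipment}

def fulfill(order_id):
    # Process level: steps composed over time, each step owned by a
    # different service; here just recorded as a trace.
    steps = ["reserve_stock", "charge_card", "ship"]
    return [f"{step}({order_id})" for step in steps]

def order_page(order_id):
    # Presentation level: the "mashup" -- it can only render what the
    # two lower levels already composed; it cannot replace them.
    info = order_info(order_id)
    return f"{info['name']}: {info['status']}"

print(order_page(42))
```

A mashup-only approach keeps `order_page` and throws away `order_info` and `fulfill`, i.e. exactly the information and process composition the text argues we cannot skip.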

So by the end of 2009, nobody will be able to argue otherwise: we lost a decade, a precious decade. We wasted ten years on stupid battles and meaningless arguments, promoting rhetoric over reason. Make no mistake, the unified, federated programming and middleware model that should have emerged in this decade will be critically missing for Cloud Computing. As a result, Cloud Computing will lead to a babelization of the software construction landscape, creating major infrastructure and solution silos (and customer lock-in), because no single vendor stood up to enable composition and reuse. Yes, I agree, these two words are indeed very scary for a vendor. Actually, a siloed Cloud Computing facility is any vendor's dream, with customers paying rent for decades to come.

I won't be writing as much in 2009. Happy New Year!

12/29/08 :: [BPM] The BPM Soup  [permalink]

Christina Lau, a distinguished engineer at IBM, shared a presentation about BPM Zero, a project she is working on.

BPM's history has been quite hectic over the last decade and frankly overwhelmingly disappointing. All kinds of people make all kinds of claims while butchering perfectly good technologies and never talking to each other.

You have the BPMN camp, which claims that BPM is about notation and that a notation can be made executable. That camp has been the most successful so far, and I would say legitimately so. I have argued, however, that this model is incomplete (and flawed), and that if we want to continue making progress we need to introduce at least two concepts: a task container independent of the process engine, and "Resource Lifecycle Services". These two architectural add-ons would make a tremendous difference in building process-centric solutions, but this camp is completely closed to any discussion: notation rules. People like Bruce Silver, Sandy Kemsley and the OMG/BPMN 2.0 authors refuse any discussion on that topic.
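A rough sketch of what a task container decoupled from the process engine could look like (class and method names are purely illustrative, not any product's API): the container owns the human tasks, and a process engine merely creates tasks and subscribes to their completion.

```python
# Hedged sketch of a task container independent of the process engine.
# TaskContainer and its methods are hypothetical names for illustration.

class TaskContainer:
    """Holds human tasks independently of any process engine; engines
    only create tasks and get notified when a task completes."""

    def __init__(self):
        self._tasks = {}
        self._listeners = []

    def create(self, task_id, payload):
        # Called by a process engine (or anything else) to enqueue work.
        self._tasks[task_id] = {"payload": payload, "done": False}

    def complete(self, task_id, result):
        # Called from the task UI; the container, not the engine,
        # tracks task state, then notifies whoever subscribed.
        self._tasks[task_id]["done"] = True
        for notify in self._listeners:
            notify(task_id, result)

    def on_complete(self, callback):
        self._listeners.append(callback)

# A process engine is just one possible subscriber:
events = []
container = TaskContainer()
container.on_complete(lambda tid, res: events.append((tid, res)))
container.create("approve-po-7", {"amount": 1200})
container.complete("approve-po-7", "approved")
print(events)
```

The point of the decoupling is that tasks outlive and stand apart from any one engine: several engines, or none, can create and observe tasks through the same container.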

You have the lost BPMN <=> BPEL camp (dubbed the "roundtrippers"), initiated by Intalio. Intalio seems to be the only one left in this camp. People have long deserted this path, which leads nowhere and frankly does not make any sense.

You also have the cynical camp, best represented by Mark Masterson, who claims BPM does not work and only social networks can salvage this space. Sure, whatever, as if a little bit of collaboration and five-star activity ratings would help you create efficient processes. Why didn't we think of that before?

Most of the RESTafarians have stayed clear of the BPM space, and rightfully so, as REST is an antithesis of BPM. In case no one has noticed, Roy's REST and the Web have nothing to do with business processes. Sure enough, a few RESTafarians hand-waved a "since everything is a resource in REST, a process is-a resource, therefore BPM can be made RESTful". Whatever.
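For the record, that hand-waved claim boils down to something like the following sketch (endpoints and names are hypothetical, and no real framework is used). Note what the uniform interface alone cannot express: the asynchrony and correlation that a real process engine handles.

```python
# Minimal sketch of the "process is-a resource" claim: process instances
# exposed under URIs, manipulated with uniform-interface-style verbs.
# Endpoint paths and the in-memory store are illustrative assumptions.

processes = {}
_next_id = [1]  # mutable counter standing in for server-side state

def post_process(definition):
    # POST /processes -> creates a new process instance resource
    pid = _next_id[0]
    _next_id[0] += 1
    processes[pid] = {"definition": definition, "state": "running"}
    return f"/processes/{pid}"

def get_process(pid):
    # GET /processes/{id} -> current state of the instance
    return processes[pid]["state"]

uri = post_process("order-fulfillment")
print(uri, get_process(1))
```

This shows a process *instance* as a resource, but nothing here captures long-running conversations, correlation of inbound messages to instances, or engine-side dehydration, which is precisely the gap the hand-wave glosses over.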

Well, the BPM space has a new thought leader, Christina Lau. What a presentation she gave! Can you be more buzzword compliant? REST, BPM, SOA, BPM-as-a-Service, BPMN, BPEL... Her presentation is quite unfortunate. IBM has had a fairly decent and robust BPM story built on years of research, from Stephen White to Ksenia Wahler and the whole process server team. All these people's work has been swept away in a single 20+ slide presentation. BPMN? A business process model notation? No way, this is the ultimate scripting capability to compose REST services. BPEL? Way too complex, and by the way everybody knows that Send, Receive and Invoke are "HTTP activities", entirely RESTful, of course. Who cares about asynchrony or correlation in BPEL? Hydration/dehydration? You must be kidding! No need for this kind of crap, REST is synchronous. I am curious to see what the RESTafarians are going to say about how RESTful her model is. Christina's notion of RESTfulness, of course, mandates that we start from scratch while trimming and reifying well-established concepts from BPMN to BPEL (and I include REST in that set).

BPM could have been a great technology providing a high level of efficiency. Instead, we got a bunch of people who invaded that space and trashed it entirely. Even IBM is not immune to the phenomenon. At least this latest project bears its name with honesty: BPM Zero has zero BPM in it. It could just as well be called REST Zero too.