08/17/08 :: [MDE] The Model Driven Engineering Revolution [permalink]


I started to use model-driven development environments back in 1991 with NeXTSTEP. It was natural to use model-driven approaches for industrial process control systems, since these production systems are composed of a variety of elements that can be up or down, present or removed from the system... While most of my competitors were using "objects" to model buttons and knobs, I was using objects to model the physical components of the systems and their relationships, consuming "services" such as a "recipe" service or a "graph" service. The very nature of the model behind these components made it possible to develop general services even though their individual behavior with respect to, say, a graph or recipe service was quite different.

When I taught an OOP class at the School of Computer Science of the Faculté des Sciences de Luminy (home of Prolog), it was natural to teach OOP in the context of MDE, even though I had no idea that this was something special. The semester-long project I gave to my students aimed at building a model-driven GUI builder for Sun's OpenView.

Today, the Model Driven Engineering revolution has somehow already happened: models have (slowly) become a common artifact of project deliverables and solution construction. Ever since UML and UML tools came about, people have had a way to create models, all kinds of models, and communicate about them. In the wake of OMG's MDA, both Microsoft and Eclipse furthered this foundation by providing simple means to link models and code.

I believe that we are at the onset of the second wave of the Model Driven Engineering revolution, the one that will actually engulf most developers and architects, like OO did in its time. It will happen with developers weaving models into their day-to-day work, or more exactly, weaving their code onto models.

Model Engineering makes a huge difference. If you think of an industrial process control system, from the end user's perspective there is absolutely no difference between a factoring at the buttons-and-knobs level and a factoring at the process-control-component level. You can also imagine, though, that the impact of one factoring vs. the other on development, QA, and maintenance is huge.

At the end of the day, I strongly believe that OOP has dramatically hindered the awareness and adoption of Model Driven Engineering approaches, because most often the models are not trivial (as they are in process control systems) and do not map well to OO-based models. In retrospect, OOP could appear to have been the tree that hid the forest. With that in mind, I really like Jean Bézivin's position statement at OOPSLA 2003:

  • Model engineering is the future of object technology.
  • Model engineering subsumes object technology (it goes beyond but does not invalidate it).
  • Many general principles learnt during the development of object technology may be applied to the development of model engineering.

Everyone should reflect on these three bullet points. To better understand his position, he also provides an anonymous quote from 1980:

Because of the wonderful unifying properties of the object paradigm, the transition from procedural technology to object technology will bring huge conceptual simplification to the software engineering field. Since everything will be considered as an object, we shall observe a dramatic reduction in the number of necessary concepts

Let's see how it would look with "resources" instead of objects:

Because of the wonderful unifying properties of the resource paradigm, the transition from distributed technology to REST will bring huge conceptual simplification to the software engineering field. Since everything will be considered as a resource, we shall observe a dramatic reduction in the number of necessary concepts...

Wow, that sounds really good. Isn't that what Tim et al. keep spewing RESTlentlessly?

Let's try it with MDE:

Because of the wonderful unifying properties of the model driven engineering paradigm, the transition from traditional programming to MDE will bring huge conceptual simplification to the software engineering field. Since everything will be considered as a model, we shall observe a dramatic reduction in the number of necessary concepts...

It works! MDE is obviously the greatest nextest thingy that will make IT [fill in the blanks]...

So why do I say that OO hindered MDE? First, the "Everything is an Object" proposition does not make any sense. There is actually a "model" behind an OO "object", and this model, though quite general, is not appropriate as the blueprint of anything that can be coded. What does this model look like? Something like this:

Unfortunately, I cannot create a "service" simply by creating a class "service" (though a lot of pundits would like you to believe so). The OO model has been used (most often unconsciously) as a meta-metamodel (layer M3 of the modeling architecture). GoF patterns represent other entities that cannot be described by instantiating a "class". Actually, pretty much no concept in the solution domain can be designed by designing a "class".
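To make this concrete, here is a minimal sketch (all names are hypothetical, not any real tool's API) of the difference between merely naming a class "Service" and actually capturing "service" as a metamodel concept. In the sketch, the concept lives at M2 as data a generator or runtime could interpret, and a concrete service is just an instance of it:

```java
import java.util.Arrays;
import java.util.List;

public class MetamodelSketch {
    // Declaring "class Service {}" would only create an ordinary class;
    // the word "service" would carry no semantics a platform can act on.
    // Instead, the concept "Service" is captured as an explicit M2 element:
    static class ServiceType {
        final String name;
        final List<String> operations;
        ServiceType(String name, List<String> operations) {
            this.name = name;
            this.operations = operations;
        }
    }

    // M1: a concrete model element instantiating the M2 concept
    static ServiceType purchasing() {
        return new ServiceType("PurchasingService",
                Arrays.asList("submitOrder", "cancelOrder"));
    }

    public static void main(String[] args) {
        ServiceType s = purchasing();
        System.out.println(s.name + " exposes " + s.operations.size() + " operations");
    }
}
```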

You could say: big deal, most good developers are very conscious of what they build, and they use patterns to manipulate model elements:

  • If a model element can't be instantiated with Thing t = new Thing(); we use Creational patterns.
  • When model elements need to work together, we use Structural patterns.
  • When model elements have a complex behavior, we use Behavioral patterns.

and so on...
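As a hedged illustration of the first bullet (all names hypothetical), here is the kind of Creational stand-in developers reach for when a model element's construction rules cannot be expressed with a bare new: a factory method carries them instead.

```java
public class CreationalSketch {
    // A "Connection" model element cannot be obtained with plain `new`:
    // its construction involves rules a constructor alone cannot express,
    // so a factory method (a Creational pattern) stands in for the
    // metamodel support that OO does not surface.
    interface Connection {
        String endpoint();
    }

    static Connection connect(final String host) {
        // construction logic hidden behind the factory
        final String uri = "tcp://" + host + ":7400";
        return new Connection() {
            public String endpoint() { return uri; }
        };
    }

    public static void main(String[] args) {
        System.out.println(connect("plant-floor").endpoint());
    }
}
```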

What is essential to understand is that in model driven engineering, there is an underlying modeling architecture:

The model of the solution (M1 level) is often poorly expressed because OO does not bring any consistency to the process of surfacing the metamodel. Two developers on the same team having to implement two instances of the same model element type might use different sets of classes (think of a Purchase Order type and a Customer type, for instance, both being information entities). In that case, the OO model is itself used as an M2 layer, and people pay absolutely no attention whatsoever to the M3 layer. When OO works as intended, model elements are all (and only) specific classes. How often does that really happen?
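A sketch of what surfacing that M2 layer could look like (the names are hypothetical): with an explicit "information entity" concept, both developers instantiate the same element type instead of inventing unrelated class shapes for Purchase Order and Customer.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntityMetamodel {
    // M2: every information entity is an instance of this one concept,
    // so two developers surface the same structure for the same kind
    // of model element.
    static class InformationEntity {
        final String name;
        final Map<String, String> properties = new LinkedHashMap<>();
        InformationEntity(String name) { this.name = name; }
        InformationEntity property(String propName, String type) {
            properties.put(propName, type);
            return this;
        }
    }

    // M1: PurchaseOrder and Customer are both instances of InformationEntity
    static InformationEntity purchaseOrder() {
        return new InformationEntity("PurchaseOrder")
                .property("orderId", "String")
                .property("total", "BigDecimal");
    }

    static InformationEntity customer() {
        return new InformationEntity("Customer")
                .property("customerId", "String")
                .property("name", "String");
    }

    public static void main(String[] args) {
        System.out.println(purchaseOrder().name + " / " + customer().name);
    }
}
```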

On the other hand, Model Driven Engineering today is going a bit too deep and too fast into the modeling architecture. The four layers above are of course totally justified. But could we pragmatically apply them in a more efficient way? Sure enough, being able to generate graphical tools, repository schemas, or reusable execution runtimes for different metamodels (or DSLs) is a great capability to have, and people should continue working on this problem. But should anyone use them systematically?

A more interesting question to reflect on is: "is there really a discontinuity between Models and Code?" IMHO, the key success factor for MDE is to not become religious about any of the concepts of the modeling architecture (while keeping them in perspective and applying them pragmatically). So how could we be pragmatic?

First, you need to understand the difference between the problem and solution domains. Yes, both can and should be modeled. Intentional Software, for instance, puts the emphasis on problem domains, which is a very interesting approach but somewhat more risky, because too much formalism might hurt and limit the way you define the problem. Others are looking for transformations between one and the other, both to simplify the development of the solution and to create a view of the solution (once built) for the business partners who defined the problem. But sometimes developing this transformation is completely non-trivial, and some of the solution domain model ends up spilling into the problem domain model, making it hard for the people who define the problem to do so with the precision and flexibility they would expect from a well-designed problem domain model.

Second, you need to break away from OO: the more you think about your solutions in terms of objects, the more likely your solution will become difficult to implement and inflexible. EJBs come to mind. The model behind EJBs makes (almost) total sense; its realization in OO was a disaster.

How do you break away from OO? It is actually quite simple. The meta-metamodel of OO can be viewed as composed of three elements: types, properties, and implementations (everything in the OO metamodel is one of these three things). The way you break away from OOP is by using the same meta-metamodel but associating "implementations" (or whatever you want to call them: methods, member functions...) with any of your model elements.

In other words, a method does not systematically belong to a class. Let me take an example. If you take a service, you can argue that a service has one or more operations. Each operation may have zero or one method. The service itself may have zero or one method. How is that possible? Think of a BPEL implementation exposing a service. All the service operations are going to be tied to the same BPEL implementation. In that case the operations have zero implementations and the service has one. This is what the metamodel of a service looks like:


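The BPEL scenario above can be sketched in code (a hypothetical mini-metamodel, not any real container's API): both the service and its operations carry an optional implementation slot, and in the BPEL case only the service-level slot is filled.

```java
import java.util.ArrayList;
import java.util.List;

public class ServiceMetamodelSketch {
    static class Implementation {
        final String artifact;  // e.g. a BPEL process or a code body
        Implementation(String artifact) { this.artifact = artifact; }
    }

    static class Operation {
        final String name;
        Implementation method;  // zero or one
        Operation(String name) { this.name = name; }
    }

    static class Service {
        final String name;
        final List<Operation> operations = new ArrayList<>();
        Implementation method;  // zero or one, e.g. a BPEL process
        Service(String name) { this.name = name; }
    }

    // BPEL-style case: the service carries the single implementation,
    // the operations carry none.
    static Service bpelExposedService() {
        Service s = new Service("OrderService");
        s.operations.add(new Operation("submit"));
        s.operations.add(new Operation("cancel"));
        s.method = new Implementation("order-process.bpel");
        return s;
    }

    public static void main(String[] args) {
        Service s = bpelExposedService();
        long unimplemented = s.operations.stream()
                .filter(o -> o.method == null).count();
        System.out.println(unimplemented + " operations share the service-level implementation");
    }
}
```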
This looks trivial, but the change is extremely profound. In particular, people who are looking to adopt an MDE approach with no implementation elements at all will almost systematically fail. The problem OO has introduced is that most people think of OO's metamodel as sacred, when in essence it is just a template for Model Driven Engineering. The key value of OO is in fact its meta-metamodel. OO is an M2-level technology, but everyone is using it (often unconsciously) as an M3 level. That's why a lot of solutions are extremely hard to build with OO alone and need to rely on a sophisticated set of patterns. This also explains why Spring had so much success in fixing enterprise Java: it created a clean separation between the EJB metamodel and any other solution domain metamodel.

This is also why a lot of model elements end up looking like plain old classes. This is probably most visible in Service Orientation, where most service containers map a service to a class implementation and operations to method implementations, leaving absolutely no room for a BPEL-like implementation or for bidirectional interactions, even though they are at the core of Service Orientation.

Please note that the model elements (just like in OO) are entirely accessible from the implementation elements of the metamodel, including constructors and destructors. Again, OO is a template for how all this works. Incidentally, one of the major issues I have with UML when applied to DSLs is that UML does not surface "implementations"; it instead embeds and somehow reinforces OO's metamodel. This is also where the assembly of several DSLs can be made easier via implementations, tolerating a certain degree of overlap in the semantics of each DSL.

Third, developers need to understand the difference between platform-independent solution models (PIM) and platform-specific solution models (PSM). In many respects this looks like the Intentional Software approach, but applied to solution domains only. In other words, why bother creating a PSM? Isn't it easier to create a compiler from the PIM directly to an executable artifact? I know that this is a bit provocative, but this is really a (pragmatic) question of level of effort rather than a theoretical question: is it harder to create a PIM, a PSM, a transformation between the two, and then a compiler from the PSM? What really makes a model platform-independent is the "implementation" elements baked into the model. Sure, you can use a Java or C# syntax for specifying these implementation elements, but that does not make them platform-specific: you don't have access to any platform-specific library here. This is the key separation of concerns that needs to happen, and that's why implementing a parameterized compiler from a PI(S)M looks pragmatically good enough. You can write a compiler for Java, for .Net, something that leverages JEE or SCA, or whatever. The key is that your PI(S)M will always remain platform-independent while containing most if not all of the behavior of the system.
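A toy illustration of that parameterized-compiler idea (everything here is hypothetical and drastically simplified): the model element stays platform-independent, its implementation body included, and only the compiler's parameter decides the target platform.

```java
public class PimCompilerSketch {
    // A platform-independent element: its behavior is baked in as an
    // "implementation" body that calls no platform-specific library.
    static class PimService {
        final String name;
        final String body;
        PimService(String name, String body) {
            this.name = name;
            this.body = body;
        }
    }

    // The compiler, not the model, knows the platform; the same PIM
    // can be compiled for several targets.
    static String compile(PimService s, String platform) {
        switch (platform) {
            case "java":
                return "public class " + s.name + " { /* " + s.body + " */ }";
            case "csharp":
                return "public class " + s.name + " { // " + s.body + " }";
            default:
                throw new IllegalArgumentException("unknown platform: " + platform);
        }
    }

    public static void main(String[] args) {
        PimService s = new PimService("Quote", "return price * qty;");
        System.out.println(compile(s, "java"));
    }
}
```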

My recommendation to developers and architects is: metamodel (as a verb), metamodel completely and thoroughly, and even if you don't create a (PI) model of your solution and a compiler (based on this metamodel), write code with the metamodel in mind (this will end up looking like a framework, of course). For instance, define precisely what a business entity is, an association, a business process, a task... Remember, you are NOT creating an OO model, you are creating a metamodel. Every solution domain has a metamodel. There is nothing absolute about it: the metamodel of an information system is different from the metamodel of an industrial process control system, and what works for a travel company may not work for an insurance company.
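As a sketch of "metamodeling completely" (every concept below is hypothetical and would differ per solution domain), the recommendation does end up looking like a small framework that pins down each solution-domain concept once:

```java
import java.util.ArrayList;
import java.util.List;

public class DomainMetamodel {
    // Each concept of the solution domain is defined once, precisely;
    // every concrete element must be an instance of one of these.
    static class BusinessEntity {
        final String name;
        BusinessEntity(String name) { this.name = name; }
    }

    static class Association {
        final BusinessEntity from, to;
        final String role;
        Association(BusinessEntity from, BusinessEntity to, String role) {
            this.from = from; this.to = to; this.role = role;
        }
    }

    static class Task {
        final String name;
        Task(String name) { this.name = name; }
    }

    static class BusinessProcess {
        final String name;
        final List<Task> tasks = new ArrayList<>();
        BusinessProcess(String name) { this.name = name; }
    }

    // A sample M1 process built from the metamodel concepts
    static BusinessProcess fulfilment() {
        BusinessProcess p = new BusinessProcess("Fulfilment");
        p.tasks.add(new Task("pick"));
        p.tasks.add(new Task("ship"));
        return p;
    }

    public static void main(String[] args) {
        BusinessEntity customer = new BusinessEntity("Customer");
        BusinessEntity order = new BusinessEntity("PurchaseOrder");
        Association places = new Association(customer, order, "places");
        System.out.println(places.role + " / " + fulfilment().tasks.size() + " tasks");
    }
}
```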

Let me tie this to REST. If the RESTafarians were to create the metamodel of the Web and the metamodel of an information system, they would understand immediately where "everything is a Resource" (i.e. REST) falls apart. Unlike what Tim is saying, the Web (the most successful information system) does not look at all like enterprise information systems, as Dino reminded him.

In case you care, I have created the metamodel of most WS-* specifications (here). It is a MagicDraw file which can be opened with the community edition of the tool.