I was once an Architect and a Modeler until…

… I realized how broken the conceptual foundation on which languages and systems are built really was. Here is an account of my 20-year journey.

Some Background (to show that I am just an Old Fart)

Back in 1997, at the height of the OOP hype (and the emergence of UML), I interviewed with Craig McDermott, who was at the time the VP of Architecture at Sapient. We had lunch at a little café on the banks of the Charles River at One Memorial Drive, where he dropped a bomb: “I think you are an Architect”. Wow, that sounded really good. I was sold. Imagine that: I was an Architect.

Minutes after I accepted that generous offer, I was thrown onto a big project that was failing: an oil pipeline management system. The project was coded in C++ and driven by the industry’s Bible at the time, the infamous GoF Design Patterns book. At that point, it had become all but intractable. There was something odd about this project: its goal was to automate business processes, and yet there was no “business process” in its programming model. Imagine debugging and maintaining that kind of project. Needless to say, I was quickly taken off the project, but I was given a couple of months to build a business process engine, in Java, with one of the first ORMs (I can’t believe they are still around, built as they are on such a broken idea!). Ultimately, Sapient was not the right environment to build a product, and I moved on to NEC Labs and then to eXcelon, which productized my designs in the context of B2B (the ancestor of the API Gateway). We were then at the height of the B2B hype, with Market Places (Ariba and CommerceOne). The concept of a Business Process Engine never quite lived up to that hype, and it is for the most part irrelevant today: most of the revenue comes from managing simple workflows, not from automating information system processes.

I became a passionate modeler in 1999 when working on ebXML. My modeling mentor was Karsten Riemer, who was working at Sun on B2B, and I must admit that the rigor and expressivity of UML was love at first sight. There was something incredibly sexy about being able to represent and communicate abstract concepts with such precision rather than just powerpointing about them. But it seems that only a handful of people in the world shared that love.

In spite of all this precise modeling, we failed our first test when the specification was used by the STAR-XML consortium to model and implement B2B interactions in the Retail Automotive industry. We had some of the best minds, people like Karsten, who had put years of work and real-world experience into that specification, and it failed with the first real use case someone threw at us. Not the second or third. The first one.

So here I was: as an Architect it was clear that the conceptual foundation of programming was flawed, and as a Modeler it was clear that Modeling could not help much either. We could have upgraded the ebXML BP metamodel to support that use case, but what about the next one, and the one after that? Sure enough, I went on to build a number of metamodels (PLM, CRM, Mobile…) and the same reality hit each time. The only successful metamodel I feel I built was the “Common Information Plugin”, but that was a very narrow scenario (creating message formats from ER data models, not entire information models); it just happened that a DSL was a great solution for that problem.

The bottom line was that it did not matter whether your programming language was conceptually poor or your metamodel was rich. Large solutions always reached a “quick sand” point, i.e. a point where their complexity overwhelms the underlying conceptual framework and the resulting chronic pain becomes unbearable. (Yes, I saw a large $1B CRM project collapse in a big bank under the brittleness of its architecture, the vertical slice architecture.)

So why not combine Programming Languages and Metamodels?

I tried… I explored “Metamodel Oriented Programming”. Something interesting comes up when you classify software building blocks along two axes:

  1. How rich the conceptual foundation is: monadic (single concept, e.g. Class or Function) vs. polyadic (multi-concept, e.g. a DSL)
  2. How much “code” you can express: anemic (none) vs. cogent (any amount)


I realized that there was, and still is, a quadrant that is vastly uncharted (the upper right one): we don’t have many technologies that are both cogent and polyadic (RoR should probably fit there?). Functional Programming and Object Oriented Programming are both cogent, but they are monadic (and pretty much two faces of the same coin).

Model Driven Software Engineering is generally polyadic/anemic. I tried to come up with an example of what a cogent, polyadic programming environment could look like with the project WSPER, and then, once Xtext was robust enough, with the projects Chorus.js and Canappi, but to be frank, something was still not quite right. A rich, polyadic and cogent conceptual foundation did not necessarily yield the results I was expecting; I always got to the point where it was hard to deal with specific use cases. Cogency was always better than polyadism.

Programming Languages (such as Java, C# and now JavaScript) have introduced a number of annotations, which act as a polyadic appendage in lieu of libraries, but they have done so by stereotyping existing language constructs rather than introducing first-class citizens. Nothing is dumber than stereotyping a class as a message definition or a class method as an API call.
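To make the stereotyping point concrete, here is a minimal Java sketch (the `@Message` annotation and the `PurchaseOrder` class are my own, purely illustrative): the compiler still sees an ordinary class, and the “message” semantics exist only for whoever remembers to read the sticker back via reflection.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A hypothetical marker annotation: the language has no "message" concept,
// so we stereotype an ordinary class instead of having a first-class citizen.
@Retention(RetentionPolicy.RUNTIME)
@interface Message {
    String name();
}

// To the compiler, PurchaseOrder is just a class; nothing enforces
// message semantics (immutability, serializability, versioning...).
@Message(name = "PurchaseOrder")
class PurchaseOrder {
    String sku;
    int quantity;
}

public class StereotypeDemo {
    public static void main(String[] args) {
        // The runtime only "knows" this is a message because we ask politely.
        Message m = PurchaseOrder.class.getAnnotation(Message.class);
        System.out.println(m.name());
    }
}
```

The annotation is pure metadata bolted onto a construct designed for something else, which is precisely the polyadic appendage being criticized.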


Then there is System and Enterprise Architecture…

In 2006 I joined the ranks of IT after spending my whole career in product companies. I truly wanted to get a deep understanding of the problems we were all facing and get to the bottom of it (and I did…). This was at the height of the SOA hype, and my title became even more gratifying: I was a freaking Enterprise Architect. This was when Information Systems started to evolve from monolithic to “composite”, i.e. when a solution relies on more than one system of record. There were various reasons for that evolution, from agility and reuse to ownership boundaries (these days, people talk pedantically about Conway’s Law, or, even more pedantically, the Inverse Conway Manoeuvre). In the end, the role of an Enterprise Architect is about controlling the boundaries of the Systems of Record, which in turn make it easier or harder to automate processes and build composite solutions. Sure, a big part of the job is supposedly about defining standards, especially the technology portfolio, but in a rapidly evolving landscape does this still make sense? And I won’t even talk about coding standards or technology selection… (Shall I talk about that director of Architecture who looked at the design of my SOA and told me that I was missing a point-to-point capability? … nah)

Of course, our industry had to come up, yet again, with an obvious… I mean, obviously broken conceptual foundation for SOA: REST, another monadic/anemic conceptual framework where everything is a “resource”. When that didn’t work, we invented something even more monadic and anemic (consistency, anyone?): Microservices, along with all the Cargo Cult practices that go with them, such as Event Sourcing and Domain Driven Design.

I don’t want to bore you with more stories from an otherwise dull career; I just want to point out some food for thought, because you may have missed it (I didn’t want to make it too obvious): when you think of it, there is a common thread between GoF, Business Processes, B2B, Architecture, Enterprise Architecture, SOA and even REST. We are missing a type of building block, one that is otherwise rather intuitive to humans: State. You can think of state as the property of an engine being “started” or “stopped”, of a light being “on” or “off”. Interestingly, Mathematics and Computing have no concept of “state”, and in pretty much 99% of programming languages state is reified as a property of a type (from databases to data structures). As a matter of fact, State is so hard to deal with in software engineering (and not just from a scalability point of view) that we strive to make everything “stateless”, whatever that means.
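Here is a minimal Java sketch of that reification (the `Engine` class and its methods are mine, purely illustrative): the engine’s state lives in an ordinary field, and every legal transition has to be guarded by hand-written checks, because the language itself has no notion that “start” is only valid from “stopped”.

```java
// State reified as a mere property of a type: the norm in 99% of languages.
enum EngineState { STOPPED, STARTED }

class Engine {
    private EngineState state = EngineState.STOPPED;

    // Transitions are guarded by hand-written runtime checks; the type
    // system knows nothing about which transitions are legal.
    void start() {
        if (state != EngineState.STOPPED)
            throw new IllegalStateException("already started");
        state = EngineState.STARTED;
    }

    void stop() {
        if (state != EngineState.STARTED)
            throw new IllegalStateException("not running");
        state = EngineState.STOPPED;
    }

    EngineState state() { return state; }
}

public class StateDemo {
    public static void main(String[] args) {
        Engine e = new Engine();
        e.start();
        System.out.println(e.state());
        try {
            e.start(); // an illegal transition, caught only at runtime
        } catch (IllegalStateException ex) {
            System.out.println("rejected: " + ex.getMessage());
        }
    }
}
```

Nothing here makes state a first-class building block: forget one guard clause and the “engine” happily enters a nonsensical configuration, which is exactly the chronic pain described above.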

The goal of this post was to set a 20-year context for an article I am writing that explains what I think I discovered and why you should pay attention to it. The important point to remember is that

Large solutions always reach a “quick sand” point

Stock Photography: © WikiMedia
