This post is a synthesis of two posts I originally published on my other blog "unRelated".

One of the key foundations and most attractive principles of Agile and Lean methodologies is that "everyone can help each other remain focused on the highest possible business value per unit of time".

I am certainly a strong supporter of that principle. However, value is often difficult to assess. I would actually argue that it is easier to identify what has less or little value; what we think of as valuable can potentially lead to many false positives, or simply be "business-as-usual" and hide the broader structure of the solution.

"User Stories" are the cornerstone of identifying and delivering value:

An argument can be made that the user story is the most important artifact in agile development, because it is the container that primarily carries the value stream to the user, and agile development is all about rapid value delivery.

In practice, very few people focus on the benefit part of a user story. All the user stories I see are either what we used to call "requirements" (just phrased slightly differently, but isomorphically) or "tasks" needed to advance the state of the project.

However, there is a fundamental flaw in the construction of user stories, even when they are properly written: they make an assumption about the shape of the solution and drive the author to switch almost immediately into solution mode, leaving no room for creative, out-of-the-box thinking.

Let's compare the metamodel of a User Story to the formal definition of a Problem. The metamodel of a User Story looks like this (using the BOLT notation):

As a <role> I want to <action> so that <benefit>


I define a problem formally as a nonexistent transition between two known states [1]. The metamodel of a problem looks like this:



A solution is a way to transition between these two states. Please note that both the actors and the actions are part of the solution:
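To make the metamodel concrete, here is a minimal sketch in JavaScript; the names and shapes are illustrative, not a formal BOLT definition:

```javascript
// A problem is a nonexistent transition between two known states.
const problem = {
  start: { entity: 'vehicle', state: 'at rest' },
  end:   { entity: 'vehicle', state: 'in motion' },
};

// A solution supplies the actors and the action that realize the
// transition. Note that the actors and the action are NOT part of
// the problem definition.
const solution = {
  actors: ['engine', 'transmission', 'wheels'],
  action: 'drive',
  from: 'at rest',
  to: 'in motion',
};

// A solution solves a problem when its transition connects the
// problem's start and end states.
function solves(s, p) {
  return s.from === p.start.state && s.to === p.end.state;
}

console.log(solves(solution, problem)); // true
```

A horse pulling the vehicle would be a different solution object solving the exact same problem, which is precisely why the action does not belong in the problem definition.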



This is where the problem lies when using User Stories: you are specifying the requirements with the solution in mind. There is, of course, a general relationship between some of the actors and entities of the system and the "start" and "end" states of the problem. The problem states are always defined in terms of the entities' respective states (possibly as a composite state), but it is a mistake to think that the actors and entities that perform the actions, as part of the solution, are always the same as the actors and entities related to the (problem) states.

Hence, an action is solution-centric and should not be part of the problem definition. As soon as you pick one, you have put a stake in the ground towards the direction you are going to take to solve the underlying problem. The other issue is that the start and end states are never clearly identified in a user story, leading to confusion in the solutioning and verification process, since the problem is not defined with enough precision. Benefits may sometimes align with the target/desirable state, but their definition is often too fluffy and too goal-centric to effectively represent that (problem) state.

Ultimately, the relationship between problems and solutions is a graph (states, transitions as problems, actions as solutions), and this is where the coupling between the problem space and the solution space at the User Story level becomes unfortunate. It means that User Stories cannot be effectively nested and clearly cannot fit in hierarchical structures (which are common to most Agile tools I know). This problem is quite acute, as teams struggle to connect business-level user stories with system-level or solution-level user stories. The concept of a single parent directly conflicts with the possibility of multiple transitions into a single state, and with decomposition principles where the same problem appears in the decomposition of several higher-level problems.
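The graph structure can be sketched as follows; the state and problem names are illustrative, and the point is that a state can be the target of several transitions, so no single-parent hierarchy fits:

```javascript
// States are nodes; problems are the (missing) transitions between them.
const problems = [
  { id: 'P1', from: 'draft',    to: 'reviewed' },
  { id: 'P2', from: 'reviewed', to: 'released' },
  { id: 'P3', from: 'draft',    to: 'released' }, // alternative transition
];

// The same state can be reached through several transitions, so a
// problem cannot have a single parent: the structure is a graph,
// not a hierarchy.
function inbound(state) {
  return problems.filter(p => p.to === state).map(p => p.id);
}

console.log(inbound('released')); // ['P2', 'P3'] - two distinct "parents"
```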

I feel that this distinction is profound because we can now clearly articulate:

a) the problem statements with respect to each other (as a graph of states and transitions)

b) the solution in relation to the problem statements

c) the verification (BDD) in relation to the problem and solution [2]

d) the Business Strategy [3], the Problem Statement, the Solution and the Verification within the same conceptual framework

e) the Organizational IQ, derived from the problems being solved on an everyday basis

To the best of my knowledge, none of these articulations has been suggested before, and no one has ever provided a unified framework that spans such a broad conceptual view, from Business Strategy to Verification. In the proposed framework, the business strategy is simply a higher-level, specialized view of the problem and solution domains, using the exact same semantics (which are described here). In other words, the enterprise is a solution to a problem, which is itself a composition of smaller problems and more fine-grained solutions, and so on. This has an extremely important implication for the execution of the strategy: the Strategy and its Execution are now perfectly aligned at the semantic level. The strategy, problem, solution and verification graph represents a map that everyone in the organization can refer to.

To take advantage of this new conceptual framework, I suggest that we make a very simple and easy change to Agile and replace "user stories" with "problem statements". Each problem must be "solutioned", either by decomposing it into simpler problems or by solutioning it directly. Value can still be used to prioritize which problems are addressed first; that part of the Agile and Lean movement is very valuable, so to speak. But the focus on problems and solutions opens new flexibility in how we handle the long-range aspects of the solution, while enabling the highest level of creativity and, ultimately, a direct articulation with the IQ of the organization.

As problems are decomposed, we will eventually reach a point where the subproblems are close to, or isomorphically related to, the solution. But it would be a mistake not to clearly delineate problems from solutions simply because, at the lowest level, they appear isomorphic.

If we start drawing some BOLT diagrams, a problem lifecycle can be defined as:

The fact that this lifecycle is pretty much identical to that of a user story enables most Agile processes and tools to work nearly unchanged.

You may want to know: "How do I write a Problem Statement?" Personally, I don't like canned approaches. Obviously, here the mere definition of the two states (low value and high value) is enough to describe the problem. If a solution already exists (i.e. it is already possible to transition between these two states), you may want to describe some characteristics of the new solution. I googled "How to write a Problem Statement?" and felt there was already a good alignment between the results and the abstract definition provided above. For instance:

We want all of our software releases to go to production seamlessly, without defects, where everyone is aware and informed of the outcomes and status. (Vision)

Today we have too many release failures that result in too many rollback failures. If we ignore this problem, resources will need to increase to handle the cascading problems, and we may miss critical customer deadlines, which could result in lost revenue, SLA penalties, lost business, and further damage to our quality reputation. (Issue Statement)

Here we see two states for a release: the initial, low-value state (tested) and the high-value state (in production). There is also an undesirable state (failure) that the new solution will prevent reaching. For me, the most important thing is that the problem statement must avoid at all costs referring to the solution. Even if the people specifying the problem statement have an idea about the solution, they should capture it separately.
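Restated with the formal definition above, that problem statement boils down to a few states and a missing transition; here is a sketch, with illustrative names:

```javascript
// The release problem statement, restated as two known states, a
// missing transition between them, and a state the solution must avoid.
const releaseProblem = {
  start: { entity: 'release', state: 'tested' },        // low-value state
  end:   { entity: 'release', state: 'in production' }, // high-value state
  avoid: { entity: 'release', state: 'failed' },        // undesirable state
};

// Note that no actor or action appears here: those belong to the solution.
console.log(releaseProblem.end.state); // 'in production'
```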

This new focus on problem & solution provides a rich conceptual framework to effectively organize the work of a team. After all, we have been innovating, i.e. creating solutions to problems, for thousands of years, so it is no surprise that our vocabulary is quite rich. Here are a few concepts that could be used:

Goal: a goal is not a problem, but you often need to solve problems to reach goals, so it's important to keep them in mind.

Fact: facts often constrain the solution, so they need to be clearly surfaced and accounted for.

Assumption: assumptions are very important because they also constrain the solution, but in a more flexible way. Assumptions can be changed, facts generally cannot.

Statement: the problem statement is what physically replaces the user story.

Hurdle: during the decomposition of a problem, hurdles might be identified. They are not problems per se, but they impact the solution. For instance, a resource may not be available in time to meet the deadline.

Snag: a problem can be downgraded to a snag when the solution is obvious to the team and represents a low level of effort. It can also be a small unexpected issue that needs to be quickly resolved.

Dilemma: a problem can be upgraded to a dilemma when several solutions are possible and it is not clear which one to choose.

Setback: the team can suffer a setback when it thought it had found the solution but it didn't, or when it could not find a solution and needs to reassess either the problem or the approach.

On the solution side, we can also capture different elements and stages of the solutioning process:

Answer: Findings related to a question raised in the problem statement.

Result: A validation that the solution conforms to a Fact

Resolution: The choice made after reaching a dilemma

Fix: a temporary solution to a problem or a snag, used to make progress towards the solution to the greater problem.

Development: An element of the solution, usually the solution to a subproblem or a snag

Breakthrough: The solution found after reaching a setback

Way out: a solution was not found; nevertheless, the project reached a satisfactory state that meets some or all of the initial goals.


From a management perspective, the Solution or Delivery Manager can escape the bureaucracy that Agile has created. Ironically, moving stickers around is a zero-value activity, with zero impact on the organizational IQ. The solution manager can and should be responsible for the IQ of the project, which rolls up to and benefits from the IQ of the organization. They should keep track of the elements that are incorporated in the solution as problems are solved, encourage team members to be creative when necessary and to shamelessly adopt existing solutions when it makes sense, and help resolve dilemmas and push for breakthroughs.

The PMO organization becomes the steward of the Organization's IQ.

As we define problems and solutions in terms of entities, state, transitions and actions, the BOLT methodology provides a unified conceptual framework that spans from Business Strategy to Problem and Solution Domains to Verification (BDD).

To summarize,

1) We have provided a formal model of a problem and a solution, and how they relate to each other

2) This formal model offers the ability to compose problems and solutions at any scale, over the scope of the enterprise

3) Problems and Solutions can be composed from Business Strategy down to Verification

4) We suggest that Agile methodologies replace User Stories with Problem Statements

5) With the renewed focus on "problems", we can also integrate the work of Prof. Knott on Organizational IQ in the whole framework

Last, but not least, decoupling problem definition and solution yields a tremendous benefit in the sense that both can evolve independently during the construction process. 


[1] For instance, when you build a vehicle, you obviously want the vehicle to transition to the "in motion" state. Different "actions" will lead the vehicle to reach that state (a horse pulling; an engine, transmission and wheels; a fan, ...).

[2] BDD Metamodel (Scenario):



[3] Living Social Business Strategy mapped using the same conceptual framework (Source: B = mc2)


There has been a lot of talk in our industry about what software engineering is and how it should be done. For those who don't know, the foundation of Computer Science, and unfortunately of Software Engineering, is λ-calculus. If you are not familiar with it, it's an algebra that defines formally how "computations" are performed (from Wikipedia's article):

  • a variable, x, is itself a valid lambda term;
  • if t is a lambda term, and x is a variable, then (λx.t) is a lambda term (called a lambda abstraction);
  • if t and s are lambda terms, then (t s) is a lambda term (called an application).

If this is not clear enough, here is how lambda expressions are defined and computed (again from Wikipedia's article):

A lambda abstraction λx.t is a definition of an anonymous function that is capable of taking a single input x and substituting it into the expression t. It thus defines an anonymous function that takes x and returns t. For example, λx.x²+2 is a lambda abstraction for the function f(x) = x² + 2, using the term x²+2 for t. The definition of a function with a lambda abstraction merely "sets up" the function but does not invoke it. The abstraction binds the variable x in the term t.
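For readers more at home in code, the same abstraction can be written as a JavaScript arrow function:

```javascript
// λx.x² + 2 as a JavaScript arrow function: the definition merely
// "sets up" the function without invoking it.
const f = x => x ** 2 + 2;

// Application substitutes the argument for the bound variable x.
console.log(f(3)); // 11
```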

An application t s represents the application of a function t to an input s, that is, it represents the act of calling function t on input s to produce t(s).

There is no concept in lambda calculus of variable declaration. In a definition such as λx.x+y (i.e. f(x) = x + y), the lambda calculus treats y as a variable that is not yet defined. The lambda abstraction λx.x+y is syntactically valid, and represents a function that adds its input to the yet-unknown y.

Bracketing may be used and may be needed to disambiguate terms. For example, λx.((λx.x) x) and (λx.(λx.x)) x denote different terms.
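To make the three term forms tangible, here is a small, illustrative interpreter in JavaScript; it performs one step of β-reduction and deliberately ignores subtleties such as variable capture:

```javascript
// The three term forms: variable, abstraction, application.
const Var = name => ({ kind: 'var', name });
const Lam = (param, body) => ({ kind: 'lam', param, body });
const App = (fn, arg) => ({ kind: 'app', fn, arg });

// Substitute term s for variable x in term t (capture-naive sketch).
function subst(t, x, s) {
  switch (t.kind) {
    case 'var': return t.name === x ? s : t;
    case 'lam': return t.param === x ? t : Lam(t.param, subst(t.body, x, s));
    case 'app': return App(subst(t.fn, x, s), subst(t.arg, x, s));
  }
}

// One step of β-reduction: (λx.t) s becomes t[x := s].
function reduce(t) {
  if (t.kind === 'app' && t.fn.kind === 'lam') {
    return subst(t.fn.body, t.fn.param, t.arg);
  }
  return t;
}

// (λx.x) y reduces to y
const result = reduce(App(Lam('x', Var('x')), Var('y')));
console.log(result.name); // 'y'
```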

λ-calculus comes from a time when the main value proposition of "computers" was to compute ballistic trajectories, decode encrypted messages and occasionally crunch a few profit reports.

Times have changed; we now use "computers" for a great many other things. Anybody who has had to write some code has probably noticed that we typically wrestle with four concepts: actions, types, relationships and states.

These concepts form an interesting symmetry that spans four views: physical/conceptual and static/dynamic. The very problem introduced by λ-calculus is that a variable, x, is used interchangeably to deal with a type, a relationship or a state, but types, relationships and states are fundamentally different from one another. Great developers will natively sort them out; others will, at best, create a maze.

Object Orientation, for instance, is just a bunch of actions and types. There is no way to express states and relationships directly. States are systematically reified behind type properties, and relationships behind type composition mechanisms (a.k.a. containment). Even in SQL, where relationality is the norm, relationships are "coded" as an attribute of a type. Conceptual frameworks like UML have tried to correct this myopic behavior by adding a layer of higher semantics, but again, if you look at the structure of UML, i.e. MOF, it is "essentially" physical (actions and types).
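As a quick illustration of that reification, consider a typical class where the state hides behind a property and the relationship behind a plain attribute; the names here are hypothetical:

```javascript
// In a class-based design, the order's state is reified as a "status"
// property and its relationship to the customer as an attribute -
// neither is a first-class concept in the language.
class Order {
  constructor(customer) {
    this.customer = customer; // relationship coded as an attribute
    this.status = 'created';  // state coded as a property
  }
  pay() { this.status = 'paid'; }
  ship() {
    // The legal state transitions live implicitly inside the actions.
    if (this.status !== 'paid') throw new Error('cannot ship an unpaid order');
    this.status = 'shipped';
  }
}

const o = new Order('some customer');
o.pay();
o.ship();
console.log(o.status); // 'shipped'
```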

One could actually argue that the adoption of OO has been so widespread precisely because it enables developers to code "what they see" (the physical view). You see a "customer", hence you code a Customer class... right?

Depending on the domain, relationships and states will be more or less important/trivial, which will make traditional λ-calculus-based software engineering paradigms more or less effective. But thinking that you can translate state and relationship semantics reliably and consistently into actions and types is the biggest fallacy of software engineering, and the root cause of pretty much every problem you see today.

Thinking that one can fix software engineering without changing its foundation, or simply by developing higher semantics that suffer from the same myopia, is a bit like a doctor trying to treat a fever with a couple of Tums.

Here is a fun project you can do with an entry-level Raspberry Pi board. In this project you'll learn how to create a set of APIs that control the Raspberry Pi GPIOs, and an orchestration that turns on and off an LED connected to one of the output ports of the board's GPIO. We use a kit such as this one.

Disclaimer: there is a real risk of damaging your board if you connect the pins incorrectly.

Here is what our project looks like:


Chorus.js generates a complete environment (APIs+Orchestration) which is deployed to and therefore runs on the Raspberry Pi.


Figure 2.1 Architecture of the Raspberry Pi project



Figure 2.2. API Orchestration

Let's start by implementing a service that will control the LED. Conveniently, there are several Node.js modules that can be used to read from or write to the board's IOs. We used the rpi-gpio module. The service implements three operations: turnOn, turnOff and read.


Figure 2.3. The service definition

We introduce here a new module concept: the service definition references the gpio module, which is defined as follows:

This code is added to the generated service code. The name of the module is used as:

var moduleName = require('moduleName');

If the name of the module is different, you need to add a module statement such as the one above:

var gpio = require('rpi-gpio');

You can then invoke these functions from the operation implementation:
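As an illustration, the operation implementations might look something like the following sketch. This is not the code Chorus.js actually generates; the gpio module is injected so the logic can be exercised without hardware (on the Pi, pass in require('rpi-gpio')). The pin numbers match the wiring described below (physical pin 24 as output for the LED, pin 12 as input for the switch):

```javascript
// Hypothetical sketch of the three operations: turnOn, turnOff, read.
// The injected gpio object is expected to expose the rpi-gpio API
// (setup, write, read, DIR_OUT, DIR_IN).
function makeLedService(gpio) {
  var LED_PIN = 24;    // physical pin 24, configured as an output
  var SWITCH_PIN = 12; // physical pin 12, configured as an input
  return {
    turnOn: function (done) {
      gpio.setup(LED_PIN, gpio.DIR_OUT, function () {
        gpio.write(LED_PIN, true, done); // drive the pin high
      });
    },
    turnOff: function (done) {
      gpio.setup(LED_PIN, gpio.DIR_OUT, function () {
        gpio.write(LED_PIN, false, done); // drive the pin low
      });
    },
    read: function (done) {
      gpio.setup(SWITCH_PIN, gpio.DIR_IN, function () {
        gpio.read(SWITCH_PIN, done); // done(err, value)
      });
    }
  };
}
```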


You can also take a look at the message and entity definitions:



You are now ready to orchestrate these APIs:


The last step is to define the environment in which the service and the orchestration will be deployed:



The cord definition can be downloaded here, and here is the native types declaration. You are now ready to deploy the files to your Raspberry Pi. We use the Arch Linux distribution, on which we successfully installed Node.js and MongoDB. All you need to do is add the rpi-gpio module:

$ npm install rpi-gpio

You may run into an error if the time is not initialized properly on your Raspberry Pi. Depending on its configuration, it will not pick up the time from the network, and npm will fail if the computer time is set to 1970. We highly recommend that you install the GPIO utility to be able to troubleshoot your electronics. The "gpio reset" command is pretty handy... After that, just type:

$ npm start

and you should see something like this:


You can curl a request like this one:

POST /turnOnOff/v1/receive/switchRequest HTTP/1.1
Content-Type: application/json
{ "input" : { "IOPort" : "18"}}

Here are some more details to complete the project. Our GPIO service uses 4 pins:

  • Pin 1: 3.3V
  • Pin 6: GND
  • Pin 12: BCM GPIO18 (configured as an input)
  • Pin 24: BCM GPIO08 (configured as an output)

We turned off the ability to read/write other pins, but the code is easy enough to modify to use the port value passed in the API invocation, should you want to extend the project. Raspberry Pi boards come in different shapes and sizes. We recommend that you check your connector rather than trusting our schematics.



The input as such is configured with a switch to change its value:



An LED is mounted on the output. The cord should turn the LED on, then off.


There is an area of Service and API engineering that has never really been solved: how do you associate the request and response message formats with their underlying information model? We are not talking about a physical data model; the underlying information model is often known as the "Common Information Model" or "Logical Information Model" and provides a coherent structure to all the message types of a service/API or set of services/APIs.

Developers design message formats and interactions by hand, without deliberate traceability to any kind of information model.

People usually try to factor their XML Schemas into an information model (complex types) and a message model (derived from these types). However, these initiatives end up in failure, because the information model and the message model need to evolve independently, and the structure of XML Schemas does not allow that to happen. When you use XML Schema imports and includes, a change in the information model definition is immediately reflected in all the message definitions that depend on it, making it impossible to deploy that hierarchy at runtime. So you would have to create individual copies of all the versions of your XML Schema-based information model. Eventually, people realize that this approach amounts to creating self-standing message models. However, in doing so, message types quickly become misaligned with one another.

Due to the same broken design, XML Schema cannot solve another important problem: the asymmetry between queries and commands. Queries are generally implemented using a "query by example" (QBE) mechanism and require a flexible data structure where most, if not all, of the data elements are optional. Commands, on the other hand, require a rigid format to help validate the input data prior to executing the command. Yet both query and command types are derived from the Common Information Model. Developers typically resort to using the common denominator, making all the elements of a type optional, rather than having types for queries and other (albeit similar) types for commands.

This is not specific to XML Schema; any schema language built following the same principles would exhibit the same problems. For instance, in Swagger's Pet Shop example, all entities are defined with optional properties:


Figure 1. Swagger's Order Definition (note everything is optional)


Since when is a purchase order a valid structure when it does not have an id? An item id? A quantity? ... Is that truly "the model"? The reason is that this particular structure fits all usage scenarios: create (where the id is not known yet), query by example, or even a partial update of a particular property (e.g. quantity).
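A small sketch shows why the all-optional shape is problematic; the validate helper and the field lists are hypothetical, not part of Swagger:

```javascript
// With nothing required, an empty order "validates" - even when it is
// supposed to be a create command.
function validate(order, requiredFields) {
  return requiredFields.every(f => order[f] !== undefined);
}

// All-optional model (as in the Swagger example): nothing to check.
console.log(validate({}, []));                            // true

// What a create command actually needs:
const createRequired = ['petId', 'quantity', 'status'];
console.log(validate({}, createRequired));                // false
console.log(validate({ petId: 7, quantity: 2, status: 'placed' },
                     createRequired));                    // true
```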

Chorus.js solves that problem directly at the language level by introducing a new concept, projections, which connects message definitions to the Common Information Model with precision. This approach is based on an article published on InfoQ in 2009: "Message Type Architecture".

With Cord, you can define your Common Information Model (in this case, the Pet Order, Figure 2) and then define message projections for various scenarios (create, query, update...). For instance, the CreateOrderMessage expresses that to create an order you must specify a petId, a quantity and a status, but the shipDate is optional, as it may only be known (and updated) later, once you have made shipping arrangements. Note that the order id is now optional. And that's all we had to specify: the "deltas" between the base type and the message type.

Figure 2. Cord's Order definition (note the Order's projections to create and query orders)

Similarly, the query-by-example message definition specifies that we can query by petId, by status (or both), and by id, but we cannot query by quantity. The quantity property has been "subtracted" from the Order base type.
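The projection idea can be sketched as deltas applied to a base type; to be clear, this is illustrative JavaScript, not actual Cord syntax:

```javascript
// Base type: every property of the Order is required by default.
const orderBase = {
  id:       { required: true },
  petId:    { required: true },
  quantity: { required: true },
  status:   { required: true },
  shipDate: { required: true },
};

// Apply a projection: listed fields become optional, subtracted fields
// disappear from the message type altogether.
function project(base, { optional = [], subtract = [] }) {
  const message = {};
  for (const [field, spec] of Object.entries(base)) {
    if (subtract.includes(field)) continue;
    message[field] = { required: spec.required && !optional.includes(field) };
  }
  return message;
}

// Create: id and shipDate become optional, the rest stays required.
const createOrderMessage = project(orderBase, { optional: ['id', 'shipDate'] });

// Query by example: everything optional, quantity subtracted entirely.
const queryOrderMessage = project(orderBase, {
  optional: ['id', 'petId', 'status', 'shipDate'],
  subtract: ['quantity'],
});
```

Because only the deltas are stated, the base type and each message type can evolve independently, which is precisely what the XML Schema import/include approach could not offer.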

We can now create a service definition. Let’s start with the operation definitions:

Figure 3. Cord's Service definition

The message type definitions use the base entity and the projection for proper validation. Here is the WSDL file that was generated by Chorus.

As usual, Chorus also generated a PlantUML class diagram. 

Figure 4. Class Diagram of the PetShop example, as generated by Chorus.js

The past few weeks have been really productive on the Chorus.js front. This weekend I was able to implement the sample Chan N. Nguyen used in his great BPEL tutorial.

The goal of this sample is to invoke a few Math APIs to calculate the area shown in blue in this figure:

Chorus' Cord for that sample looks like this:

Of course, Chorus generates the (PlantUML) sequence diagram for it:

and here are the three files generated to run the project (compute_v1_0.js is the set of Math apis):

 As usual, to run this project (after installing node and mongodb):

$ cd /directory/of/the/sample
$ mkdir logs

[copy the sample files in the sample directory]

$ npm install
$ node calculateArea2_v1.js

You can send a sample request using curl:

curl -H "Content-Type: application/json" -d '{ "input" : { "a" : "60" , "b" : "200" }}' 

As always, feedback is welcome!



I am very excited to announce a project I have been contemplating for a long time. Today I published chorus.js, an open source API orchestration capability (not an engine) for Node.js apps.

Chorus.js comes with its own orchestration language, Cord, and uses MongoDB to store the context of orchestration instances.



