
State Machines and Computing


 

Abstract

The goal of this post is to start developing the metamodel of computing: when you write an algorithm, or any piece of code, what are the fundamental elements that you need to build and assemble to achieve the desired goal(s)? The post itself was inspired by, and relies heavily on, Dr. Lamport’s article “Computation and State Machines”.

The major issue discussed here is the concept of state. The general view of “state” in our industry is “an assignment of variables”. “Control state” (the state that enables or disables actions) is thus reified as a variable assignment. That definition appears erroneous, and the reification unnecessary. We suggest that assignments of variables are just that, inherently controlled by a “Type”, while (control) state is more effectively associated with ranges of values of specific properties. From there, we derive a simple metamodel of computing in which State, Type, and Action are precisely articulated, without reification, and provide a DSL to illustrate it.

_________

I have always wondered about the relationship between Mathematics and Computing. This spectacular paper from Dr. Lamport, Computation and State Machines, provides clear insights into that question. Anyone who writes software should read it.

The core thesis of the paper, and the foundation of TLA+, is that an expression (in a programming language) can be viewed as a transition from one state to another. I am at odds with that thesis and its main corollary:

a state is an assignment of values to all possible variables.

I understand that it is difficult to disagree with such a statement; however, there is an example in the paper, the Factorial function, that illustrates well the point I am trying to make.

Beyond this axiomatic definition of state, Dr. Lamport uses that example to ask several important questions:

we must decide what constitutes an [action]. How many [actions] are performed in executing a statement such as f = i ∗ f?

Does a single [action] change the value of f to the product of the current values of i and f ?

Are the evaluation of i ∗ f and the assignment to f two [actions]? Are there three [actions]: the reading of i, the reading of f, and the assignment to f?

[Since] there is no standard way to describe a relation on the set of states, [should we] use a transition predicate, which is [defined as] a Boolean-valued formula N whose free variables are primed and/or unprimed state variables?

 
Fig. 1 – The Factorial Algorithm (a- Program, b- Corresponding State Machine, c- TLA+ implementation)

 

Dr. Lamport uses labels (Fig. 1a) to align the computation of the algorithm with the states of the corresponding state machine (incidentally, I have a new appreciation for the infamous goto statement…). 

The state machine (Fig. 1b) is my representation of the algorithm’s state machine. Nothing out of the ordinary: an initialize action bootstraps the process, and the transitions operate until the test automatically transitions to done (i.e. when i < 2). The multiply action performs the computation:
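In essence, f = i ∗ f followed by i = i − 1. Below is a minimal Python sketch of the whole machine; i, f and the action names come from the figure, while the function structure and the assumption that initialize sets i to n and f to 1 are mine:

```python
# Minimal sketch of the factorial state machine of Fig. 1b.
# i and f are the algorithm's variables; initialize and multiply
# are the actions; the test moves to "done" as soon as i < 2.

def initialize(n):
    """Bootstrap the computation (assumed to set i to n and f to 1)."""
    return {"i": n, "f": 1}

def multiply(s):
    """The multiply action: f becomes i * f, then i is decremented."""
    return {"i": s["i"] - 1, "f": s["i"] * s["f"]}

def run(n):
    s = initialize(n)
    while s["i"] >= 2:      # the test: keep multiplying while i > 1
        s = multiply(s)
    return s["f"]           # done

assert run(5) == 120
```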

The question that comes to mind, assuming a state to be “an assignment of values to all possible variables”, is: what (control) state are you in at the end of that computation? When Dr. Lamport translates the factorial algorithm into a TLA+ definition (Fig. 1c), he is forced to introduce a new variable (pc) whose possible values are exactly the set of (control) states of the corresponding state machine (test, mult and done). In one of the simplest cases possible, the definition of a state (as an assignment of values) fails to be correct unless you introduce a new variable whose possible values are the (control) states themselves. I believe that is a reification, and that reification leads to a definition of state that is incorrect. If that definition is incorrect, the key implication is that an expression is not a transition. I would actually argue that an expression is an action.
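To see the reification concretely, here is a minimal sketch of the “state = assignment of values to all variables” view (my own rendering in Python, not a reproduction of Fig. 1c): the state has to carry an extra pc field whose values are precisely the control states test, mult and done.

```python
# The control state is reified as a variable: pc takes exactly the
# values "test", "mult" and "done" of the state machine in Fig. 1b.

def next_state(s):
    """One transition of the factorial machine, pc included."""
    if s["pc"] == "test":
        return {**s, "pc": "mult" if s["i"] > 1 else "done"}
    if s["pc"] == "mult":
        return {"i": s["i"] - 1, "f": s["i"] * s["f"], "pc": "test"}
    return s  # in "done", no transition is enabled

s = {"i": 5, "f": 1, "pc": "test"}
while s["pc"] != "done":
    s = next_state(s)
assert s["f"] == 120
```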

In the TLA+ example, we can see that variables (i.e. types), states, actions and transitions are fully coupled. This is an unwanted and unnecessary coupling. It may well be mathematically correct (i.e. it does not violate the semantics of mathematics), but fails to represent the semantics of states, types, actions and transitions.

As I suggested in my previous post, the semantics of the system should include a “type” that is solely responsible for deciding which state the state machine is in once an action has been applied and new values have been assigned to the variables.

Fig. 2 – Proposed Metamodel of Computation (BOLT notation)

With these kinds of semantics, the factorial algorithm could be represented as:
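Below is a minimal sketch of such a representation in Python (a stand-in for a DSL); the class and method names are mine, while the states “mult” and “done”, the actions initialize and multiply, and the properties i and f come from the discussion above:

```python
# Sketch of the proposed semantics: the Type owns the properties and
# derives the (control) state from their ranges; actions only propose
# new property values, which the Type accepts or refuses.

class Factorial:
    def __init__(self):
        self.i, self.f = 1, 1            # default values, hence state "done"

    @property
    def state(self):
        # State is a range of values of i, not a stored variable.
        return "mult" if self.i >= 2 else "done"

    def apply(self, i, f):
        """Accept (or refuse) the values proposed by an action."""
        if i < 0 or f < 1:               # illustrative constraints
            raise ValueError("refused")
        self.i, self.f = i, f

# Actions are plain functions over property values, decoupled from the Type.
def initialize(n):
    return n, 1

def multiply(i, f):
    return i - 1, i * f

fac = Factorial()
assert fac.state == "done"               # default state
fac.apply(*initialize(5))
while fac.state == "mult":
    fac.apply(*multiply(fac.i, fac.f))
assert fac.f == 120
```

Note that no pc-like variable appears: the control state is derived from the value of i, not stored.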

The system is by default in the state “done”, which allows a single action: initialize (which in the real world would be parameterized). When the system is in the state “mult”, the action multiply can be applied until it reaches the state “done”, at which point the system can only be reinitialized. Incidentally, I prefer the notion of a default state to that of an init state.

You may also have noticed that the synthetic state “test” is no longer part of the definition of the system. I strongly question whether “test” can be a state at all, since its meaning is quite different in nature from “done” or “mult”. If it is not, then why can this value be assigned to the same variable as “done” and “mult”?

This new (meta)model no longer requires that “IF … THEN … ELSE” statements “be added to mathematics”, as was necessary when Dr. Lamport designed TLA+. As a matter of fact, they can be re-interpreted, mathematically, as “ranges”: the predicate i > 1 simply means that when i is in the range [2, +∞), we are in the “mult” state. Ranges are the very fabric of “state”.
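In my notation, the predicate i > 1 partitions the values of i into two ranges, and those ranges are the states:

```latex
% The ranges themselves play the role of the control states.
\[
  \mathit{state}(i) =
  \begin{cases}
    \mathit{mult} & \text{if } i \in [2, +\infty) \\
    \mathit{done} & \text{if } i \in (-\infty, 1]
  \end{cases}
\]
```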

The next question is then: what is the best (hence lowest) coupling between actions and types? In other words, are actions only coupled to a subset of a type’s properties, or are they requests to type operations? For instance, to align with Abstract Data Types, I could have written the system’s description as:
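As a sketch of that Abstract Data Type flavour, in Python again and with names of my choosing, the actions become operations of the type itself:

```python
# ADT-style coupling: the actions are operations owned by the type,
# instead of functions over a subset of its properties.

class FactorialADT:
    def __init__(self):
        self.i, self.f = 1, 1

    @property
    def state(self):
        return "mult" if self.i >= 2 else "done"

    def initialize(self, n):             # actions as type operations
        self.i, self.f = n, 1

    def multiply(self):
        self.i, self.f = self.i - 1, self.i * self.f

fac = FactorialADT()
fac.initialize(5)
while fac.state == "mult":
    fac.multiply()
assert fac.f == 120
```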

I prefer the former (actions coupled to properties), which is far better aligned with Functional Programming and provides the lowest coupling with a type (since I could reuse actions over different types that share the same set of properties). The resulting property values are passed to the type (as a request to change its state), which validates a set of constraints (not represented here) before it decides to accept (or refuse) the new values.

With the exponential growth of Compute, Network and Data resources (e.g. Cloud, Mobile, IoT…), the complexity of distributed systems is about to increase by an order of magnitude or more. I suggest that it may be time to realign the foundations of computing towards a proper articulation of its core concepts (Fig. 3): State, Type, Action and Relationship (STAR). As Dr. Lamport suggests:

Computer Science should be about concepts, not languages.

Fig. 3 – The Double Symmetry of the core concepts of Computation
