I am very excited to announce the SCM project, an open source STAR-based Component Model. SCM is derived from the TLA+ Specification Language and is the direct product of email conversations Dr. Lamport had the kindness to have with me during the month of January. 

The SCM project is open source (Apache 2.0) and is currently implemented in Java only. JavaScript will be next.

Formal methods are increasingly used in our industry to solve complex problems. For instance, the AWS team recently published this article summarizing their experience using TLA+:

Dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular “rare” scenario. We have found that testing the code is inadequate as a method for finding subtle errors in design, as the number of reachable states of the code is astronomical. 

According to the AWS team, the key to solving that problem lies in increasing the precision of their designs. They found in the TLA+ Specification Language the ability to express all the legal behaviors of a system.

The SCM project delivers some of the semantics of TLA+ directly at the programming level (with a couple of noticeable deviations) and as such, enables a broad community of developers to harness the power of formal methods. What kind of power can you expect? Here is how the AWS team sees the benefits of TLA+:

In industry, formal methods have a reputation for requiring a huge amount of training and effort to verify a tiny piece of relatively straightforward code, so the return on investment is justified only in safety-critical domains (such as medical systems and avionics). Our experience with TLA+ shows this perception to be wrong. At the time of this writing, Amazon engineers have used TLA+ on 10 large complex real-world systems. In each, TLA+ has added significant value, either finding subtle bugs we are sure we would not have found by other means, or giving us enough understanding and confidence to make aggressive performance optimizations without sacrificing correctness. Amazon now has seven teams using TLA+, with encouragement from senior management and technical leadership.

Just as with TLA+, SCM allows you to express:

  • Safety. What the system is allowed to do. 
  • Liveness. What the system must eventually do.

Even though SCM amounts to writing code that could potentially be deployed in a running system (as a Component), deployment is not its primary intent. An SCM Component is optimized for rapidly converging to its most precise design, which in turn may not be the highest-performance implementation possible.

SCM is based on the STAR conceptual framework (State, Type, Action, Relation):

STAR framework

Fig. 1 - The STAR conceptual framework

STAR is a style in which every concept we express is clearly positioned as a State, Type, Action or Relationship. STAR has applications in Strategy/Innovation, Problem/Solution Specification and today, with SCM, in Programming/Verification as well.

Combining the foundations of STAR and TLA+, I derived the following SCM Metamodel: 

Fig. 2 - The SCM Metamodel

What the model tells us is that an SCM component has a set of behaviors defined by a combination of States, Actions and Types (and Relationships). The model deviates somewhat from TLA+ in the sense that what STAR calls a State would be called a "control state" in TLA+. In TLA+, a state is simply an assignment of values to variables (which here is embodied by the Type), and control states appear as a variable (generally named pc) whose values are the labels of the control states. Furthermore, unlike in TLA+, it is the Type, and not the Action, that is responsible for determining the resulting state. That is why the second association between Actions and States is dashed: it is inferred, not specified. In SCM, Actions are purely functional and their results are presented to the Type, which may or may not accept them based on internal constraints. Once the Type accepts the new values, the new state is determined from a mapping between a set of Ranges of values and States. (Control) states are thus Ranges of values, not "assignments of values".
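The articulation just described (purely functional Actions, a Type that accepts or rejects their results, and control states inferred from Ranges of values) can be sketched in a few lines of Java. All the names below are illustrative, not the actual SCM API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.IntPredicate;
import java.util.function.IntUnaryOperator;

// Minimal sketch of the Action/Type/Range/State articulation.
// Every name here is invented for illustration.
class SketchType {
    private int value;
    private final IntPredicate constraint;   // the Type's internal constraints
    // state name -> range of values that characterizes it
    private final Map<String, IntPredicate> ranges = new LinkedHashMap<>();

    SketchType(int initial, IntPredicate constraint) {
        this.value = initial;
        this.constraint = constraint;
    }

    void addRange(String state, IntPredicate range) { ranges.put(state, range); }

    // The action is purely functional; the type accepts or rejects its result.
    boolean apply(IntUnaryOperator action) {
        int proposed = action.applyAsInt(value);
        if (!constraint.test(proposed)) return false;  // type rejects the new value
        value = proposed;
        return true;
    }

    // The (control) state is inferred from the ranges, never set by the action.
    String currentState() {
        return ranges.entrySet().stream()
                .filter(e -> e.getValue().test(value))
                .map(Map.Entry::getKey)
                .findFirst()
                .orElse("unknown");
    }
}
```

Note how the action never names a target state: it only proposes a value, and the mapping from ranges to states does the rest.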

This is how the model operates:

Of course, these semantics can be implemented in TLA+, but with substantial complexity, even though the notion of a "Range of Values" clearly belongs to mathematics. When the action is not in a position to deterministically determine the target state at design time (a very common case), people are forced to introduce a synthetic state (generally called "test") whose purpose is to make that determination. This can be seen in the Factorial example of Dr. Lamport's paper on "Computation and State Machines" (see the TLA+ definition on page 12).

Let's explore now how we use SCM. A couple of examples are provided in the SCM repository (the Factorial and DieHard algorithms). 

Based on Dr. Lamport's article, the design of the Factorial algorithm can be summarized as follows (BOLT notation):

Note that:

  • Start is a standard action of the component model which determines the initial state of the component based on its values
  • Multiply and Set are associated with an automatic transition, i.e. as long as the control state of the system is "mult", the multiply action is invoked automatically
  • We chose not to implement a Zero state and rather forbid any initial value which is less than one
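The bullets above boil down to a simple run loop. Here is a hypothetical sketch (not the actual SCM engine) of how the Factorial component behaves, using the fact/i variables of the example:

```java
// Illustrative sketch of the "automatic transition": while the control state
// is "mult", the multiply/set steps run without any external stimulus.
class AutoTransitionSketch {
    int fact;
    int i;
    String state;

    AutoTransitionSketch(int n) {
        if (n < 1) { state = "inputLessThanOne"; return; } // forbidden initial values
        fact = 1;
        i = n;
        state = (i == 1) ? "default" : "mult";             // Start determines the state
    }

    void run() {
        while (state.equals("mult")) {                     // automatic transition
            fact = fact * i;                               // multiply action
            i = i - 1;                                     // set action
            state = (i > 1) ? "mult" : "default";          // state inferred from ranges
        }
    }
}
```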

The initialize action illustrates well the relationship between action, type and state, since the resulting state can only be determined once the action has completed. Of course, the action could make that decision itself, but that would be an obvious and unwanted coupling between the type and the action, since the resulting state could be influenced by parameters outside the scope of the action.

It is that decoupling between actions and types which confers on SCM a balanced articulation between Functional Programming (Actions), Object Orientation (Types/Actions) and State Machines (States/Actions).

The design of the component is expressed in the Component's constructor; this is where the elements of the metamodel are assembled.

public Factorial(int i) throws Exception {
	fact = new FactorialType(i) ;
	multiply = new MultiplyAction("multiply()", fact) ;
	initialize = new InitializeAction("initialize()", fact) ;

	mult = new State("mult", multiply, true) ;
	def = new State("default", initialize) ;
	forbidden = new ForbiddenState("inputLessThanOne") ;
	desired = new DesiredState("lastMultState") ;

	//SCM comes with a couple of configurable ranges:
	//SimpleRange and Interval
	SimpleRange defaultRange = new SimpleRange("i", Operator.equals, new Integer(1)) ;
	SimpleRange multRange = new SimpleRange("i", Operator.greaterThan, new Integer(1)) ;
	//Concrete ranges are OK too
	LessThanOneRange fRange = new LessThanOneRange() ;
	EqualsTwoRange desiredRange = new EqualsTwoRange() ;
	fact.addRange(def, defaultRange)
	    .addRange(mult, multRange)
	    .addRange(forbidden, fRange)
	    .addRange(desired, desiredRange) ;

	behavior = new FactorialBehavior(fact)
	        .add(def) ;
	this.add(behavior) ;

	State currentState = fact.currentState() ;

	//Add trace
	trace = new Trace("fact_trace") ;
}
To facilitate the study of a Component's behavior, SCM generates PlantUML activity and state diagrams, either as traces of execution or after a "walk". For instance, the Factorial algorithm will generate this state diagram:

Factorial Algorithm State Diagram

Please note that the "d1" state is a desired state, not a control state. Here is the corresponding PlantUML descriptor (generated by SCM):




I'd be more than honored if you would give SCM a try. If you have any questions, feel free to reach me at jdubray@xgen.io


Hypermedia: don't bother


It is no secret that I am not a big fan of applying Roy's REST to anything other than the Web Architecture. Hipsters have pushed "REST" to architect distributed systems and craft APIs, but after a decade of RESTfulness, the RESTafarians have little to show for it: the so-called "uniform interface" is not that uniform (it's rather verbose), "resource-orientation" is just a fancy name for CRUD, schema-less contracts lead to ... Swagger and ... back to code generation, and so forth. I can't think of a single recommendation from Tilkov, Vinoski, Bray, Burke... that turned out to be correct.

Since by definition a hipster would never admit defeat, the only strategy possible is the "fuite en avant": everything else was wrong, no worries, we got it right this time: the correct way to do REST is "hypermedia". Right?

I find it fascinating that hardcore RESTafarians like Stefan Tilkov or Mike Amundsen are now using some of my arguments to "REST" correctly, as they introduce the concept of a state machine to identify the verbs. Yes, you heard it right, the verbs of "the" once-thought-to-be-uniform interface.

In case you are not familiar with the role of state machines in computation, I beg you to read this outstanding paper from Dr. Lamport on Computation and State Machines. Anyone who writes code and designs interfaces must read it.

With that in mind, let's explore the use of Hypermedia in distributed systems (M2M). Five years ago, I published this post detailing the metamodel behind REST to surface its semantics (interestingly RESTafarians are allergic to metamodels...):

There are three parts to REST:

  1. REST core (derived from Roy’s Thesis),
  2. What I call the practical REST which added query parameters, collections and properties of resources.
  3. What I call the “full REST” which of course includes states, transitions and actions.

If you disagree with me on the "full REST" and the connection of non-deterministic state machines with hypermedia, look no further than the great work of Ivan Zuzak et al on the “Formal Modeling of RESTful Systems using State Machines”.

I do agree that Hypermedia is the keystone of the architecture of the Web, and without it, the Web, or what's left of it, would look very different. But unless you have never written a single line of code in your life, you'd quickly spot that Hypermedia works because there is a user in the loop; in other words, a human, who is capable of interpreting the links and deciding which action to take next. Arbitrary strings of characters are as good as graffiti on the wall facing a computer's "web"cam when it comes to helping a machine decide what to do next.

What can Hypermedia teach us about API design? Hypermedia clearly shows that the interface to a resource is not "uniform": it never has been, and never will be. Duh!! At last, after nearly a decade of "nounsense", Stefan and Mike use verbs... how refreshing.

Is hypermedia hyped today with the same kind of bullshit intensity we heard when REST started? You bet! Look no further than this summary of RESTFest. Imagine... with Hypermedia you can build "bots" which, without any knowledge of chess, can play chess against each other... OMFG, Hypermedia can even do machine learning. I can't wait to see how Amazon's drones and Google's self-driving car use Hypermedia.

But, to be fair, when you take away the people who are financially and/or emotionally attached to REST, you get a reasonable set of arguments behind the use of hypermedia, or so it looks. For instance, Glenn Block wrote this post in 2012, where he explains:

In a hypermedia based system, links are generated based on a state machine. Depending on the state of the resource, different links apply. [This means that] you will have to deal with how to handle the generation [and interpretation] of links.
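Glenn's point is easy to reduce to code: the links returned with a representation depend on the state of the resource. A minimal sketch (the order states and link names below are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of hypermedia link generation driven by a resource's state:
// depending on the state of the order, different links apply.
class OrderLinks {
    static List<String> linksFor(String orderState) {
        List<String> links = new ArrayList<>();
        links.add("self");                                 // always present
        switch (orderState) {
            case "created": links.add("pay");   links.add("cancel"); break;
            case "paid":    links.add("ship");  links.add("refund"); break;
            case "shipped": links.add("track"); break;
            default:        break;                         // terminal states: no actions
        }
        return links;
    }
}
```

A human reading such links knows what to do next; a machine client still needs out-of-band agreement on what "pay" or "cancel" mean, which is exactly the point made below.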

Model-driven engineering has been around for quite some time, and I have been building model-driven software since 1991 (thanks to NeXTStep and Interface Builder, and things like EOF…). Embedding metadata in a server's response for the client to interpret has probably been around since the 60s; I would guess the first distributed system ever built was already using that "pattern".

So what do you gain by paying the penalty of inflating your responses with next-action metadata? Can you spontaneously teach a machine how to play chess that way? Well, no… From Roy’s thesis:

Machines can follow links when they understand the data format and relationship types.

Yes, a small detail that a lot of hipsters fail to surface is that machines need to form a shared understanding (out-of-band) of the data formats and the relationship types. Since they need to form a shared understanding in the first place, what could be the value of redundantly Hypermediating machine responses in M2M scenarios?

First, in complex non-deterministic state machines, it might be great to give some hints (a.k.a. constraints) to the client as to what is possible in case the developer(s) of the client(s) can’t read the doc and create a client that works properly. Ah, but I almost forgot, you remember what the Vinoskis of the world were telling us in 2007? that client SDKs were no longer needed with REST, that simple human readable doc was enough? How did that pan out? Yep, the serious API providers have to hand-code at great cost what WSDL was generating for free. So it is unlikely that this kind of metadata will ever become a major driver for Hypermedia adoption, because the hypermedia will be buried behind the client SDK, and since the API providers write both the SDK and the APIs, they don't really need hypermedia to support the shared understanding between client and API.

Second, one could use the link metadata to construct UIs (with queries, actions…). RoFL!! You mean the RESTafarians could convince someone to rewrite a proprietary HTML engine, just for the privilege of using hypermedia? I am sure some people will pay top $$$ just for that privilege, just ask some of the people who paid north of $300/hr to learn how to convert verbs into nouns. After all they have already convinced our industry that it was cool to hand-code ESBs, client SDKs… So why not add an HTML engine, if they can find people who are ready to pay for learning how to do that?

The problem in finding value for M2M-Hypermedia, however, is not just shared understanding, it is shared state. Again, most of the Web's architecture was built with the user in mind: when we talk about the "application" state, we talk about the user session, not much more. When we talk about M2M, we talk about many machines interacting with (and therefore changing the state of) each other (imagine a server booting: what does it mean to hold a representation of its state?). REST is essentially client/server, and therefore not an architecture suitable for distributed computing. REST's fundamental assumption is that the state you are handed changes infrequently (as in never), and even if it changes, the stale state is still actionable (as in you can navigate to something that makes sense regardless). Web Apps have been architected with the constraint that the state is solely changed at the initiative of the client.

In M2M, state is likely to be stale, irrelevant, because another machine would have changed that state before any action could be taken on a given representation. IoT is not the WoT, otherwise it would be called the Web of Things. So, what is the point of having the knowledge of the "affordances" on a piece of state that is no longer relevant? Interesting question, isn't it?
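The staleness problem described above is easy to reproduce with a version-checked (optimistic) update; the sketch below is illustrative, not a claim about any particular framework:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: in M2M, the representation a client holds can be stale because
// another machine changed the resource in between; a version check exposes it.
class Resource {
    private final AtomicInteger version = new AtomicInteger(0);
    private String state = "initial";

    // a client takes a representation (here reduced to its version number)
    synchronized int read() { return version.get(); }

    // the update succeeds only if the client's representation is still current
    synchronized boolean update(int seenVersion, String newState) {
        if (seenVersion != version.get()) return false;   // stale representation
        state = newState;
        version.incrementAndGet();
        return true;
    }
}
```

Any "affordances" attached to the representation machine A holds are as stale as the representation itself once machine B has acted.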

So why are the RESTafarians pushing hypermedia so hard today when there is absolutely no value to show for it?

The true reason is because REST has a flaw at its core, a major flaw, that the hipsters don’t know how to deal with, perhaps other than by pulling in the machinery of hypermedia. They won’t tell it to you that way, but I reckon most of the hypermedia you will see will be used to deal with that issue.

The problem with REST is that it couples access with identity.

When you write:



you are coupling the identity of the order (123) with the way you access its "state". That kind of URI can't be opaque to a machine/client (again, all this works nicely when a user is in the loop). So at a minimum, REST needs an identity relationship that provides an iURId (as in: immutable Uniform Resource Identity) that a client can rely on to retrieve that resource at a later time (generally via a mapping from its own internal identity of the resource).

When you don't have an iURId to rely on, a new version comes out and all the plain old URIs of the orders you hold on the client side become obsolete.

So, how do you know that these two URIs http://api.blackforest.com/v1/orders/123 and http://api.blackforest.com/v2/orders/123 point to the same order?

This is also true of “views” of a resource, how do you know that these views point to the same order?



Well, you really don’t know they are the same, unless you have a shared understanding with the resource owner that 123 is the identity of the order and http://api.blackforest.com/v1/orders/ is (one of) the ways you access it.
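That shared understanding amounts to a client-side convention for extracting the identity from the access URI. A hypothetical sketch (the URIs are the ones from the example; the convention itself is the out-of-band agreement):

```java
// Sketch: a client-side mapping from access URIs to an identity,
// which is what an "iURId" would make unnecessary.
class OrderIdentity {
    // Extracts "123" as the identity shared out-of-band with the resource owner:
    // the last path segment is the order id, the prefix is just one way to access it.
    static String identityOf(String uri) {
        return uri.substring(uri.lastIndexOf('/') + 1);
    }

    static boolean sameOrder(String uriA, String uriB) {
        return identityOf(uriA).equals(identityOf(uriB));
    }
}
```

Of course, the moment a client does this, the URI is no longer opaque, which is precisely the coupling being described.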

Yep, you don’t hear the hipsters telling you much about these little rough edges of REST. My good friend Pete Williams does explain it, but in a different way:

A URI that is constructed by a client constitutes a permanent, potentially huge, commitment by the server. Any resource that may be addressed by the constructed URIs must forever live on that particular server (or set of servers) and the URI patterns must be supported forever

Effectively, you are trading a small one time development cost on the client side for an ongoing, and ever increasing, maintenance cost on the server side.

But who, who in their right mind, would let their client trust and follow a dynamic link based on a label (relationship name)? Are client-side URIs really such an acute problem? Isn’t compatible versioning enough to deal with these server-side costs?

The bottom line is that Hypermedia has zero value, especially in the context of M2M (again, I am not talking about the Web’s Architecture when a human is in the loop). I am certain that someone somewhere will find an application (or two) that shows some value, just like any interesting software pattern, beyond that, hypermedia will be a big waste of time for most people, just like the uniform interface, the coupling of identity with access, http caching, verbs vs nouns, human readable documentation… have been thus far.

Can we just turn the page (pun intended) on REST, once and for all? 


The Essence of (my) Life

Today was a beautiful day in the Puget Sound. I took this picture in downtown Bellevue and it got me thinking. 


The Essence of (my) Life

My kids are going to be 14 and 18 in just a few months, and I just crossed 50. So it's probably a good time to tell them the handful of things that I learned in life. 

1. There is nothing more powerful than Human Dignity

When you hesitate about which way to go, pick Human Dignity above all. There is never a time you will regret traveling that path, ever.

2. Be a Heartist

Saint-Exupéry wrote in The Little Prince that "It is only with the heart that one can see rightly; what is essential is invisible to the eye". There is simply no amount of rationale that will ever weigh as much as a heartbeat.

3. Don't let "things" dictate what you do

There is no possession that's worth anything: a house, a car, a boat, jewelry whatever it is you think you should own, it's not worth the time you spend working for it. Yes, you need some money to live and raise a family, but there is always a way to make it work. Connecting with someone’s heart is priceless.

4.  Stay away from Sociopaths

It took me a while to understand that Sociopaths are actually quite common in our society. They are easy to spot once you understand how they operate. When you meet someone and they behave in a sociopathic way, run, run as fast and as far as you can.

5. Live free

You can introspect everything you do, every action, every choice you make, and figure out if it is dedicated to reaching its purpose. There is nothing you are not in control of as long as you don’t react to what someone else does. Freedom is a state of mind. It is not about being able to do anything you want, it is about choosing everything you do, with purpose, heart and human dignity; in other words, it is about becoming.

I know that's pretty simple, perhaps even pathetically naive, but I don't have stronger certainties today. I know that's all I would have needed to know to guide my life properly.


I'd like to add a couple of notes:

1) Seattle is a pretty interesting place, with people like Bill Gates or Jeff Bezos who are constantly in the news. What's fascinating about them is that we can now see that their fortune means nothing. Bill Gates can buy Da Vinci's manuscripts, but he’ll never be his equal, not even a fraction. Bezos may have built the best commerce engine in the world and funded Cloud Computing, but where's the human component in Amazon? Is Bezos shaping the future of humanity as algorithms, drones and robotic workers that help us buy more crap?

Even Bill Gates' fortune is puny when you put it in human terms. Bill Gates, the richest man in the world, who was able to collect for himself, one way or another, $70 per Microsoft customer (~1 billion of them), can only ... hire the extended city of Seattle (Everett, Bellevue, Tacoma) for one year! (700,000 people at $100,000 GDP per worker = $70B). That's it: 1 person out of every 10,000 on earth would work for him for a year and he'd run out of money. In the end he'll just be remembered as the guy who brought us Windows 95, or Clippy, not to forget bullet-point thinking.

On the other hand there are also some very successful people who seem to live these values every day like Howard Schultz or companies like REI. What can be more satisfying than giving decent, human-centric, sustainable jobs across the world with health care and education?

2) At the time of this writing, I wanted to add that I am not able to see my children when they need me. I suggested to their mother that a teenager has a complex enough life between school and friends that a parental schedule should not be a document that dictates his life. He should be in charge of the schedule; not me, not his mother. He should be free to follow his heart, and become a fully functioning human on his terms. His mother replied to me:

"You cannot decide when you will see [your son] or not.  I do not want that sort of thing for [him] as it can hurt him. I want him to have clear expectations. There is a schedule for that reason. All divorcee parents work with it. So we will do as everybody else. You will not rule. Bringing the argument that I don't want you to see [him] WILL NOT work."

This is the parental plan my ex-wife demanded that I sign so she could maximize the amount of child support she could get:

"The father may have up to 8 hours of residential time with the children every weekend, provided that he gives mother 24 hours’ notice that he intends to exercise the time.

The father may also see the children every Wednesday, or another mutually agreed upon weekday, from 5 pm to 8 pm provided that he gives the mother 24 hours’ notice that he intends to exercise the time."

As part of the divorce settlement, my ex-wife, a finance manager in a large Telco, got our two houses and condo, with a rental income of $30,000/year. She also demanded to have full decision control over the children. 

During that beautiful day, yesterday, my son was left alone at home until 2pm as his mother had things to do. That is the kind of people our society rewards: people who take everything, have no heart, no empathy, and want to control everyone else's actions.

I am free.


Following my post last summer detailing how Chorus.js would solve the problem of expressing message formats in relation to a "Common Information Model", I am happy to report that I have completed the development of a DSL which I believe is a major advance in the way we manage API, Resource and Service contracts (yes, Cord includes an equivalence between Resource-Orientation and Operation-Orientation; how could they not be equivalent?).

This DSL solves a key problem of schema technologies (XSD, JSON-Schema, ...), which cannot properly manage the relationship between a data model and message formats. These technologies are capable of either defining the structure of a data model or the validation rules of message formats, but they are not capable of managing both at the same time. The flaw in these schema languages stems from the import mechanism, which forces any change to the data model to become visible to message format definitions without any possible isolation. For more details, please read my earlier post.
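The coupling the import mechanism creates can be illustrated outside schema languages too. In the hypothetical Java sketch below, a message format that references the shared entity type directly sees every data-model change, while a projection type (its own view of the entity, as in the Cord DSL below) isolates itself from them:

```java
// Sketch: shared data model vs. a message-format projection.
// Field names follow the Entity1/QueryEntity1 example in the DSL code below.
class Entity1 {                 // the shared data model
    int id;
    String prop1;
    String prop5_v2;            // added in v2.0 of the data model
}

class QueryEntity1 {            // projection: the message format's own view
    Integer id;                 // multiplicity relaxed (may be absent)
    String prop1;
    // v2.0 additions are NOT visible here unless the projection opts in
}
```

With a plain schema import, there is no QueryEntity1: every message format references Entity1 directly and absorbs its changes on the spot.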

The solution I developed not only decouples the evolution of the data model from the message formats, but the DSL also explicitly manages the versions of both the entities of the data model and the message formats, which is unheard of in most software technologies (Object Orientation, Functional Programming, ...).

I have always found it quite ironic that, for an industry proud to enable "change", none of its technologies, languages and frameworks can deal with versioning (i.e. change) explicitly, let alone effectively. Versioning code, schemas, config files... ends up as a nightmare of epic proportions. This is particularly true of message formats.

The CIM DSL is part of the "Cord" language, developed as part of the Chorus.js project. The DSL was developed using Eclipse Xtext. Currently the plugin generates XSDs, WSDLs and PlantUML class diagrams. I am currently working on generating Swagger API definitions as well.

There are only four steps needed to use this tool:

  1. Download Eclipse Xtext here
  2. Install the Chorus.js plugin into Eclipse: http://www.chorusjs.com/latest/site.xml
  3. Create a new project
  4. Add a new file to the project (with a .cord extension) and copy/paste the code below into it; the WSDL/XSD generation happens automatically when you save the file:
package cim {
	native integer
	native string
	native boolean
	native any

	entity Entity1 {
		identifier id : integer { ## 'unique identifier' }
		prop1 : string
		prop2 : string?
		prop3 : integer
		prop4 : string*
		version 2.0 {
			## 'Here are the properties of version 2'
			prop5_v2 : integer
			prop6_v2 : integer
		}
	}

	entity Entity2 {
		identifier id : integer
		prop1 : integer
		prop2 : Entity1
		version 2.0 {
			//This version substitutes the Entity1 type
			//for its 2.0 version
			//note: the _v2 suffix is not required, it is just added for clarity
			alter prop2 { cast Entity1(2.0) }
			prop3_v2 : string
		}
	}

	projection QueryEntity1[Entity1] {
		// We alter the multiplicity and remove properties of Entity1
		alter id { min 0 }
		alter prop1 { min 0 }
		- prop2
		- prop3
		- prop4
	}

	projection CreateEntity2[Entity2] {
		alter id { min 0 }
	}

	projection CreateEntity2_v2[Entity2(2.0)] {
		alter id { min 0 }
	}

	message QueryEntity1Request {
		part input : QueryEntity1
	}

	message Entity1Response {
		part output : Entity1
	}

	operation Entity1Response queryEntity1(QueryEntity1Request)

	message CreateEntity2Request {
		part input : CreateEntity2
		//message versions match service interface versions
		//this message type will be used to generate
		//the v1.1 wsdl
		version 1.1 {
			//projections cannot be versioned
			//we create a new projection
			//which is based on Entity2 v2.0
			part input : CreateEntity2_v2
		}
	}

	message CreateEntity2Response {
		part output : Entity2
		version 1.1 {
			part output : Entity2(2.0)
		}
	}

	operation CreateEntity2Response createEntity2(CreateEntity2Request)

	service example {
		version 1.0 {
			interface crud { }
			port test { address 'v1/entities/' }
		}
		version 1.1 {
			interface crud { }
			port test { address 'v1.1/entities/' }
		}
	}
}
Here is the PlantUML diagram which is generated from the CIM definition:

The cim.cord file can also be downloaded here.

The generated WSDL files can be downloaded here (v1.0) and here (v1.1).

Here is the full documentation of the DSL:


I don't think there is an industry more pompous than the Information industry. Yet the hype went up a notch lately: "I" no longer stands for Information but for Intelligence. I can already see the who's who of Gartner's shiny new IT magic quadrant!

But who could be so stupid as to believe that Artificial Intelligence is around the corner?

You want some tangible proof? Just look at Google, the company that mastered Information and now touts its "Intelligence" in just about every network packet.

What happened this week is beyond me, and a sad reminder that we, as humans, need to understand what we are so "intelligently" designing.

I have run my blog, ebpml.org, since 2001. I am proud that it is one of the oldest blogs about Service Oriented Architecture. I never engaged in SEO for myself or anyone else. I am a scientist; I respect people's ideas and work, and provide references (i.e. links) religiously. This is not true of everyone: recently someone took one of my posts to build half of a conference presentation without any acknowledgement. I have no respect for that kind of behavior whatsoever; his name is not even worth mentioning here. Strangely enough, Google's algorithms don't pick up on such despicable behavior. They might even reward it...

I just received the following response from Google. They had sent me an email complaining about a suspicious link on my blog (the name of the page clearly pointed at a hack, and the page was not even there), so after checking that the link was not there, I asked Google for a "reconsideration request":



Yes, that was not good enough: the fact that the link was not there was not enough to return to a clean status. I was "violating" Google's rules, and 15 years of work needed to be thrown into darkness "to preserve the quality of Google's Search Engine".

A few days earlier, I had also received a panicked email from a web site I reference for the simple reason that I use their content for free, per their licensing policy.



And this is what happened to Jordi Cabot, who runs a popular and very useful blog on Model-Driven Engineering:


What's next? Self-driving cars texting drivers up the road to "Get out of the way, or else"?

So here is what I have to say, if there is anyone from Google who reads this post:

Dear Google,

Do you think that as software engineers we can believe more than a nanosecond that humans should have to take any action to facilitate the work of your algorithms?

Do you think, as humans, it is fair that you tell us that "something is wrong with our online presence and you'll censor it if we don't change it"? When you say "violated", are you referring to some kind of crime? (bonus question: are you some kind of sociopath?)

While we are at it, I also wanted to thank you for giving us a glimpse of the world to come: we'll no longer have to deal with senseless "processes" or even cruel "dictators", we'll have to deal with your "intelligent" algorithms.

Congratulations, you've just earned the #1 spot in the "superficial intelligence" magic quadrant. Your depth of vision and market leadership is simply in a class of its own.


Jean-Jacques Dubray

P.S.: I never used Bing before, but it is now my default search engine. Bing Maps are actually a lot better than Google Maps.




