Fiction in software – part one

Designing software that tells a story

In the winter of 1900, after he had exhausted every common-sense idea to address the Ultraviolet Catastrophe at the core of how Classical Physics treated radiation, the German physicist Max Planck decided to try a rather preposterous mathematical trick to solve it. By assuming that radiation could only be emitted and absorbed in discrete packets of energy (later called quanta), he obtained a formula that matched every experimental observation so far.

Without knowing it at the time, Planck had changed the narrative of radiation from a continuous, stream-based description to a discrete, quantum-based one. That change would stand and eventually become one of the foundational pillars of Quantum Electrodynamics, the most accurate theory, in terms of matching experimental results, that humankind has ever produced.

Like Physics, the design of software applications relies profoundly on narratives to come up with ways to address whatever problem software designers have at hand. Narratives crystallize during software design sessions, get portrayed in the code, and stay there long after the people who came up with them are gone.

In software, as in any other field, narratives are fiction. Narratives have actors, actions, and a certain sense of purpose. They organize reality in ways that make some part of it comprehensible, usually taking the form of stories. In software architecture, there is an additional restriction on the kind of stories we can tell: narratives must be useful to build software applications upon. Hence, software applications do not respond to reality itself but to the narrative that the participants in their design invented.

This process of creating a consistent narrative to underpin the development of valuable software is a key element of software design. As with Classical and Quantum Physics, organizing the same real facts into different stories yields different outcomes.

As I explained in the post “But, what is the Domain?“, the software community uses the term Domain for the collection of true data of any kind that may be relevant to the production of a new application. Within any given Domain (for instance, Invoice Accounting or Message Posting in a social network), there are alternative narratives to tell: stories that make sense inside the Domain and, at the same time, yield useful outcomes once they are implemented in code.

What kinds of narratives are most commonly used in the software industry? Well, as a matter of curious coincidence, two of them are the continuous one, which sees a business process in the Domain as a continuous entity that must be handled as a whole, and the discrete one, which models individual, though often correlated, facts instead.

Let’s explore both.

A continuous narrative of change

Messages and pictures posted on a social network, shipping goods from one place to another around the planet, a couple getting a mortgage loan for their new house, or drugs prescribed to a patient by her doctor are tasks that people rely on software to manage.

A thorough observation of any of the processes above shows that they are made of many small steps that together lead to the fulfilment of some goal. These steps often last a very short time and are surrounded by longer periods during which nothing seems to happen.

For years, software designers have been telling stories like those above with the fulfilment of that goal in mind. So they invented a continuous entity called the Business Transaction. Depending on the context, these Transactions might be called Orders, Loans, or Stories, even though the concept behind them is always the same: a single process composed of a chained series of steps that must be completed in sequence from beginning to end.

If, for example, a client orders a coffee at a coffee shop, the software application in the shop will tell the story of an Order. There is a Client, there is a list of coffee variants, there are ceramic mugs or paper cups, there is sugar, a price, etc. And, most importantly, there is a sequence of steps to follow: order taken; price paid; coffee prepared; coffee grabbed.
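The step sequence above can be sketched as a minimal Python class. This is purely illustrative: the class and step names are my own, and I am assuming the defining trait of a Business Transaction is that steps must complete strictly in order.

```python
from enum import Enum, auto

class OrderStep(Enum):
    ORDER_TAKEN = auto()
    PRICE_PAID = auto()
    COFFEE_PREPARED = auto()
    COFFEE_GRABBED = auto()

class CoffeeOrder:
    """A continuous Business Transaction: one entity whose steps
    must be completed in sequence, from beginning to end."""
    _sequence = list(OrderStep)  # declaration order is the required order

    def __init__(self):
        self._completed = []

    def complete(self, step: OrderStep):
        expected = self._sequence[len(self._completed)]
        if step is not expected:
            raise ValueError(f"expected {expected.name}, got {step.name}")
        self._completed.append(step)

    @property
    def fulfilled(self) -> bool:
        return len(self._completed) == len(self._sequence)

order = CoffeeOrder()
for step in OrderStep:
    order.complete(step)
print(order.fulfilled)  # True
```

Note how the whole process lives inside a single object: that is the continuous narrative in miniature.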

Software applications to handle business transactions like Orders have been designed for decades. In general, Order-based Codomains tend to be compact pieces of code with one central persistence layer (a database) attached. Although nothing in the Domain prescribes that grabbing a coffee be modelled as an Order, it feels quite natural because of the “our business serves clients’ orders of coffee” narrative that lies behind it.

By the way, if you got lost with the term Codomain that I just used, check this post out: But, what is the Domain?.

With the coffee in her hands, the same client might sit on a chair and check the status of her Loan in a bank’s mobile app. Loans would be called Mortgage Orders, had people in Finance got used to that name centuries ago.

Let’s see what the same client ordering some furniture for her new house would look like in a picture:

Keeping track of the whole picture of an Order tends to prevent local improvements, in warehouse management or shipping operations, for example, from being implemented. And yet, that is what eventually happened: organizations started reorganizing themselves, and their software applications, in pursuit of local improvements instead of venturing into overall transformations that are riskier and more expensive.

Once this vision shift occurred, software designers started telling different stories.

A quantum narrative of change

As Max Planck did, software designers eventually started thinking of a different way of telling stories about clients grabbing coffees. They stopped modelling business processes like this one as a continuous transaction and started focusing on the moments when something actually happens.

This new narrative makes sense because, from the point of view of the software application handling clients grabbing a coffee, the whole process can be seen as a chain of correlated moments (taking the order, preparing the coffee, getting the money from the client, and so on) scattered across a long stretch of time that is, for all practical purposes, empty.

Based on this narrative, software designers came up with Event-based Codomains.

As you may remember, Facts are the smallest unit of change in any Domain. In the example of a client grabbing a coffee, taking her order would be a fact, getting her money another fact, and giving her the coffee once ready would be a third fact. Additional facts might be considered too, depending on how every particular coffee shop operates its business.

Yet, software applications do not deal with Facts but with their representations. In software, it is common to call these representations of Facts Events. Events are what give Event-based Codomains their name. Let’s see a picture of the same flow above, this time modelled as an Event-based Codomain:
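To make the idea concrete, here is a minimal sketch of what such Events could look like in Python. The event names and fields are illustrative assumptions, not a prescription from the post.

```python
from dataclasses import dataclass

# Each Event is an immutable record of one Fact. The order_id is a
# correlation id tying together Facts that belong to the same flow.
@dataclass(frozen=True)
class OrderTaken:
    order_id: str
    items: tuple

@dataclass(frozen=True)
class PricePaid:
    order_id: str
    amount_cents: int

@dataclass(frozen=True)
class CoffeePrepared:
    order_id: str

# The same coffee-shop flow, now told as a stream of Events
# rather than as one continuous Order entity.
stream = [
    OrderTaken("o-1", ("latte",)),
    PricePaid("o-1", 350),
    CoffeePrepared("o-1"),
]
```

There is no single object holding the whole process any more; the story is reconstructed, when needed, from the stream.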

The major benefit that Event-based representations bring to software applications is decoupling. Decoupling is what enables the local improvements I mentioned above. Certainly, a software component that handles Items in a warehouse does not really care about the existence of Customers or Bills. It must be proficient in the activities and properties of Items that are stored, counted, and moved around for any valid reason. That is what the component does, and it can improve far more easily when that is all it does.

Thanks to decoupling, a component no longer needs to wait for a response from the component immediately after it in the chain of correlated components that make up the application. In other words, once every autonomous piece of code can operate on its own, there are no waiting times, even though reaching the goal still relies on all the pieces completing their tasks. This property is called asynchronicity, and it underpins plenty of modern software applications.
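A toy sketch of this fire-and-forget style follows. The bus here is my own minimal invention, not any particular library: the publisher enqueues an Event and returns immediately, and delivery to subscribers happens later, so no component blocks waiting on the next one.

```python
from collections import defaultdict, deque

class EventBus:
    """Minimal in-memory bus: publish() returns immediately;
    subscribers are invoked later, during drain()."""
    def __init__(self):
        self._handlers = defaultdict(list)
        self._pending = deque()

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        self._pending.append(event)   # fire and forget: no response awaited

    def drain(self):
        # In a real system this loop would run concurrently; here it is
        # explicit to keep the sketch deterministic.
        while self._pending:
            event = self._pending.popleft()
            for handler in self._handlers[type(event)]:
                handler(event)

received = []
bus = EventBus()
bus.subscribe(str, received.append)   # illustrative: plain strings as events
bus.publish("PricePaid:o-1")          # publisher moves on immediately
bus.drain()                           # delivery happens asynchronously
print(received)  # ['PricePaid:o-1']
```

The key property is visible in the shape of the code: `publish` knows nothing about who, if anyone, will consume the Event.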

Another benefit of decoupling in Event-based Codomains is that it relaxes the need to coordinate the data shared among components. In a truly decoupled Event-based application, Events should not depend on their future use. The software design pattern that breaks this rule, and hence defines the content of an Event according to whatever other components are going to do with it, is called Teleology.

Teleology means that there must be some kind of negotiation to coordinate the definition of terms: the name, the format, and the meaning of the data that is going to be shared. Coordination is another word for coupling, so it is not good news. However, this is one of the toughest problems to address in decoupled software architectures and, even though Teleology breaks the very definition of what Events are (representations of Facts), it is a pattern used a lot in practice.
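The contrast can be sketched in two event definitions. Both shapes below are hypothetical examples of my own; the second shows the teleological smell, where fields exist only because a specific downstream consumer asked for them.

```python
from dataclasses import dataclass

# Fact-shaped Event: describes only what happened.
@dataclass(frozen=True)
class GoodsShipped:
    shipment_id: str
    item: str
    units: int

# Teleological Event (the anti-pattern): its extra fields exist only
# because a downstream invoicing component negotiated for them, so the
# shipping component is now coupled to invoicing concerns.
@dataclass(frozen=True)
class GoodsShippedForInvoicing:
    shipment_id: str
    item: str
    units: int
    customer_billing_address: str  # not a shipping Fact
    invoice_template_id: str       # pure consumer knowledge
```

A quick smell test: if removing a consumer would let you delete fields from an Event, the Event was teleological.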

Most of the time, the correlation among Events comes from causal relations among Facts (i.e. we are shipping these goods somewhere because a Client submitted a request for them). Sometimes, though, one Fact triggers a Policy without any causal relationship behind it. That would be the case when a Replenishment Request is sent to a Supplier because the minimum stock of an Item is reached after some units of that Item were shipped.
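A minimal sketch of such a Policy, assuming a hypothetical warehouse component that reacts to shipped-item events; the names and the threshold are invented for illustration:

```python
MIN_STOCK = 5  # assumed policy threshold

class Inventory:
    """Hypothetical warehouse component: it updates stock when an
    Item is shipped and applies the Replenishment Policy as a side
    effect, without any causal link to the Client's request."""
    def __init__(self, stock):
        self.stock = stock
        self.replenishment_requests = []

    def on_item_shipped(self, item, units):
        self.stock[item] -= units
        # The Policy: not caused by the shipment, merely triggered by it.
        if self.stock[item] <= MIN_STOCK:
            self.replenishment_requests.append(item)

inv = Inventory({"beans": 7})
inv.on_item_shipped("beans", 3)
print(inv.replenishment_requests)  # ['beans']
```

The shipment Event does not "ask" for replenishment; the Policy observes the Fact and decides on its own.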

How to represent Causality in general is a major topic in software design, so it will be the topic of a future post.

Copyright notice: picture taken from The Immortal Fitness Blog without permission.
