Global Software Architecture Summit 2019 – episode I

A painted portrait of Peter Naur by Duo Duo Zhuang

This is the first episode in a series of posts reviewing the first Global Software Architecture Summit, which took place in Barcelona on the 10th of October. To see the whole list of episodes, please visit this page.

Code is your partner in thought

George Fairbanks

George Fairbanks (@Fairbanks) was the first speaker. He introduced us to some ideas which originally appeared in a 1985 paper by Peter Naur, the most relevant of them being the realization that Programming, understood as the overall set of activities involved in the production of software, is in itself a task of inventing theories to understand a problem and of finding suitable solutions to address it. In fact, the title of Naur's paper was “Programming as Theory Building“.

To achieve a solution to a given problem, we first invent theories, and the code we eventually write is a representation of those theories. This is another powerful reason why code must be as understandable as possible: code is the main instrument for communicating that theory to other people, now and in the future; a purpose, by the way, that the old-fashioned procedure of writing tidy documentation upfront, before the code, was never able to fulfill.

Computer programs only make sense to the team who developed them

Peter Naur (1985)

In other words, the code repository is not only where the documentation of the solution, and of the problem it addresses, lives. Since that code is also a representation of the theory we invented while understanding the problem and generating ideas to handle it, git is not merely in charge of a repository of code, but of a repository of knowledge.

Actually, this is the idea which stands out as the pivotal summary of the whole event: the creation of software is an exploration of thoughts inspired by the necessity to address some problem, and it looks, and should be practiced, more like Science, via the scientific method: come up with an idea, run some experiments to test it, and, if the experiments end well, make it happen in Production, before carrying the exploration onward.

Even more, this procedure of coming up with a theory, testing it in practice, and eventually putting it into production in the form of runnable software is a long-term learning process. As a metaphor, we might imagine a wider cycle wrapping the quicker Agile cycles we usually call sprints: a process which is extremely important, in my opinion, to keep Scrum teams from falling into the sickness of delivering new features time and again without ever reaching any goal.

Fairbanks dropped other interesting thoughts, such as the concept of the Model-code gap (in essence, the troubles which arise whenever the representation of our thoughts that we put in our code is crooked, or biased, instead of the closest match to them possible), or the idea that documentation is a good place to explain those ideas or decisions which did not make it into the code: precisely because the code does not represent them, it may be good practice to include them in the code repo, as README files for instance.

In essence, it was a very nice introduction to the main topics that would keep showing up throughout the whole event.

Copyright notice: Painted portrait of Peter Naur by Duo Duo Zhuang.

Monoliths are averse to change

Angkor Wat view

The fight over microservices

The debate between those who advocate keeping monolithic architectures and those who uphold decoupled architectures, such as the popular microservices, heats up and cools down time and again, with no sign of remission any time soon. And the pros and cons of monoliths (or, in reverse, of microservices) keep mounting on both sides, for it is a debate in which plenty of different perspectives (purely architectural, economical, organizational, even social) converge.

Should we keep our current monolith just because our team is not skilled enough to deal with microservices, and we are short of resources either to train them (for years!) or to fire them and form a brand new team?

Should we embrace microservices because they are what everyone is talking about, because they are the newest thing? Or because microservices are what the biggest companies, with their almost endless resources, are doing? Bearing in mind those very companies make enormous profits, and gain market influence, thanks to selling us tools designed for handling microservices, this seems at least questionable.

As a matter of fact, I have always been an advocate for change, in all areas of my life; so, monoliths being the older of the two software architectures, I would say I am even naively inclined by nature in favor of decoupled software architectures, just because they stand for a change in the status quo.

However, I am a software engineer by profession, not by inclination. Actually, I obtained my degree in Physics, not Engineering, so my mindset is settled around Science and the scientific method. In other words, I am used to doubting everything, all the time, and to an ongoing search for evidence.

And evidence, in favor of or against monoliths, is what I have been struggling to find for months. What follows is what I learned in my search.

Change as a motive

After more than 3 years working intensively, and almost exclusively, on decoupled architectures, I have arrived at a conclusion about a key feature of their opposite, monoliths: everything is easier with monoliths, except making them change. Such an annoying feature to exhibit, and one that hangs over our decisions every time we must pick a team and produce some code to face the tough challenges that organizations need to overcome in order to thrive, keep going, or simply survive.

The defining feature of monoliths is their aversion to change

Our community is full of stories about how hard monoliths are to change. Even refactors look difficult: to make, to test, and to deploy. Think about it for a moment: is that a reasonable tradeoff for keeping things simple?

On the contrary, the ability to face change is crucial leverage for businesses, and history is full of human enterprises of all kinds which died precisely because of their inability to handle change: environmental, social, technological; pick the area you like most and watch the examples pile up.

Jared Diamond’s Collapse

Just to cite a famous example, take Jared Diamond’s book, Collapse. In this case, the book is not the only illuminating part; the foreword the author includes on his webpage (please, do follow the link above) is worth reading too.

Another popular topic is Joseph Schumpeter’s idea of Creative Destruction, though I guess this one would be less appealing to those who found their jobs destroyed by creative rival companies.

How does change produce value?

So, yes, this ability to facilitate change in software production that deeply decoupled architectures provide is invaluable; at least as invaluable as history shows it to be in other planes of human existence. But how is this value delivered in practice?

There are two ways change drives value into organizations. The first one is immediate: by making it quicker and less painful to upgrade to newer versions of supporting software (OSs, databases, programming languages, assistant applications); to produce, and refactor, code; to deploy more often and more quickly; to recover from disasters; etc.

Also, since everything you might try out is easier to test when the code is less reluctant to change, you get used to trying out more, and more often. And you can measure the outcome, so you learn more too (more, and at a faster pace, than you would with a monolith). This also makes future tests smarter, even when the output is considered a failure and is discarded. Which, by the way, is not a tragedy of expensive loss, precisely because a system that is easy to change requires, by definition, far fewer resources to change as well.

A system easy to change is cheaper to change too

Marketing teams have been reaping benefits from this for years, precisely because the results far outweigh the costs. By decoupling the software, the teams who produce code get empowered to do the same.

Actually, software teams have also been doing it for years, for instance via feature flags. However, they are not used to doing it with entirely expendable services. Now this is a dream come true: A/B tests need not be confined to marketing; software teams may also produce, and even test in production, alternative versions of a given autonomous piece of code (a microservice, or a lambda) to measure (instead of presuming in advance) which version performs better, which one is more reliable, or which one takes less time to get up and running, and then get rid of all the others with no regrets: that is how cheap they were to produce.
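Purely as an illustration of that idea, here is a minimal sketch (mine, not from any of the talks) of routing a share of production traffic to an alternative, fully expendable version of a small service, so that both versions can be measured before the loser is thrown away; all names are hypothetical:

```typescript
// Minimal sketch: two interchangeable versions of the same expendable service.
type Handler = (request: { userId: string }) => Promise<string>;

const stableVersion: Handler = async () => "result from version A";
const candidateVersion: Handler = async () => "result from version B";

// Roughly 10% of requests hit the candidate version.
function pickVersion(candidateShare = 0.1): Handler {
  return Math.random() < candidateShare ? candidateVersion : stableVersion;
}

async function handleRequest(request: { userId: string }): Promise<string> {
  const handler = pickVersion();
  const startedAt = Date.now();
  const result = await handler(request);
  console.log(`handled in ${Date.now() - startedAt} ms`); // measure, do not presume
  return result;
}
```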

The second outcome of facilitating change takes more time to unfold. Since everything in organizations eventually becomes cultural (which is, by the way, what explains Conway’s law), embracing code that is easier and easier to change will impact the kind of people on your team and the procedures they follow, which will consequently impact the kind of solutions you produce. Ultimately, it will impact how you thrive.

Years ago I had the opportunity to work as a consultant for organizations which had got stuck on expired technologies (OS, database, and programming language all on the same metal), precisely because their teams had been picked to work with those technologies. It was so tough for them to change that eventually everything became fossilized: the code, the applications, but also the people, and the overall organization.

To avoid becoming fossilized, organizations need to accept they must change

Unfortunately, I keep seeing similar attitudes, though not so extreme, right now. Plenty of companies recruit developers according to the technologies they already work with. I understand they do this because they believe their priority must be to ensure that their teams deliver code in every sprint; but, what code?

Would it not be more reasonable to recruit people according to how they think, how eager they feel to participate in a team effort, or how adaptable they are? To achieve this, organizations need to treat the technologies currently in use as easily replaceable. Easily changeable, either by substitution or by coexistence. Remember, the point is not to be changing things all the time, but to avoid keeping anything running as it is just because it would be difficult to replace.

Achieving this is hard. For the sponsors of change, because they must focus on strategy more than on tactics; more on what might be done than on what everyone knows can be done. And for developers too, for they would have to choose expanding their skills over digging deeper into a few, which would likely make them look less employable in the eyes of conventional organizations.

Ironically, it should be easier for CEOs in comparison: they are supposed to have learned this already, either at college or by simple observation of the market.

Angkor Wat photograph credit: https://pixabay.com/users/falco-81448/

Going further with Events

Recently, Mathias Verraes published several posts on his blog about patterns for Events. This was very fortunate indeed, not only because they contain valuable insights for those who architect decoupled applications using Events as a key instrument, but also, quite incidentally, because they gave me the opportunity to streamline my own ideas on Events and on asynchronous decoupled architectures in general.

Inspired by some of the ideas he introduces there, I came up with the concept of a Story as an implementation of the Event Summary pattern. Let’s see how and why.

Event Summary pattern

One of the patterns that intrigued me most is the Event Summary pattern. In short, this is an individual Event whose body gathers data already present in the bodies of other, previously dispatched Events, according to some relationship they share that makes it convenient to dispatch them together and consume them all at once as part of an aggregate.

Let me give you an example. Imagine a music concert and a service that dispatches a SongPlayed Event every time the band finishes playing a song. Now, also imagine another service that acts only when a music concert ends, for instance, to communicate the list of songs played to a music rights management entity.

Clearly, all the SongPlayed Events dispatched during a particular music concert share a common relationship: they were all played that day, in that music concert. This shared fact is what may make it useful to dispatch a summarising message at the end of the concert, as sketched below.
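Purely as an illustration, this is what those two kinds of messages might look like; the field names are mine, not part of any standard:

```typescript
// Illustrative message shapes for the concert example (names are mine).
interface SongPlayed {
  name: "SongPlayed";
  occurredAt: string;          // the instant the band finished the song (ISO 8601)
  concertId: string;
  songTitle: string;
  durationInSeconds: number;
}

// The summarising message gathers data already present in the SongPlayed
// Events, so the rights-management service reads one message instead of many.
interface ConcertSetlistSummary {
  name: "ConcertSetlistSummary";
  concertId: string;
  startedAt: string;
  endedAt: string;
  songsPlayed: Array<Pick<SongPlayed, "songTitle" | "durationInSeconds" | "occurredAt">>;
}
```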

In this scenario, would it not be easier (and cheaper) for that latter service to consume just one Event Summary dispatched at the end of the concert, with information gathered while the songs were played, or even taken from the bodies of those SongPlayed Events, instead of consuming all those annoying SongPlayed Events one by one?

Likely everyone, myself included, would answer YES.

Regardless of my affirmative answer, there are some details around this Event Summary pattern that, in my opinion, deserve further attention before we move on to my proposal for an implementation.

A tale of species

The circumstance that we have an infrastructure for asynchronous messaging should not drive us to say that every structured message travelling up and down that infrastructure is an Event.

In my opinion, we should define a kind of taxonomy of messages, so that applications may behave one way or another depending on what kind of message they have to deal with. In practice, automated services are ready to handle a short list of messages, and silently let the others go by. However, the list of message patterns is growing, and IMO we architects would appreciate having them organised somehow around their features: headers, body, name, etc.

Following the thread of posts presented by Mathias Verraes in his blog, there are at least two main features which may help us build that taxonomy:

  • Fullness, which allows us to classify messages depending on whether they bring all the information available in the context of the message dispatcher, or they leave some data behind for whatever reason, in particular because it stayed unchanged (i.e., its values were the same before and after the fact the message informs about happened).
  • Time Atomicity, which allows us to classify messages as atomic, if they represent a fact that occurred at a given instant in time, or not atomic, if they bring information related to a happening that spans a broader period of time.

A definition of Event

With all that arsenal on hand, let’s start by streamlining the definition of Event:

Events are full representations of facts that happened in a moment in time.

When we say that Events represent Facts, we mean that Events bring what happened in the real world into the application, using the terms the application understands. So Events not only transport information, but also translate it from the common-language words we all understand into the specific terms that we would find in the code were we reading it, terms which were invented in the process of designing the application.

If we look at the definition of Event in the light of the two message features above (a brief sketch follows the list):

  • All the raw information available in the context of the fact should be included in the body of the Event that represents it. There are operational details to account for here, but let’s just say that the context of a fact is all the data that we may collect about it without performing any additional operation, like querying a data source or making some calculation.
  • Facts refer to an instant, roughly a date and time, down to the second or as finely as needed. This usually means that facts get an Event representing them as soon as they end. In English, this is indicated by past participles of verbs.
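As a sketch of both features together, here is what such an Event could look like; the shape below is an assumption of mine, not a prescribed schema:

```typescript
// Illustrative Event shape: a full payload plus the instant the fact happened.
interface DomainEvent<Payload> {
  name: string;        // a past participle: "OrderPaid", "SongPlayed", ...
  occurredAt: string;  // the instant the fact happened, as precise as needed
  payload: Payload;    // ALL the raw data available in the fact's context,
                       // with no extra queries or calculations performed
}

const orderPaid: DomainEvent<{ orderId: string; amount: number; currency: string }> = {
  name: "OrderPaid",
  occurredAt: new Date().toISOString(),
  payload: { orderId: "order-42", amount: 15.99, currency: "EUR" },
};
```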

So, according to the remarks above, it seems clear that messages from the past which are selective, rather than exhaustive, when taking data from the fact being represented should not be considered Events.

This condition of Fullness avoids dispatching Events with only the data any given Consumer would ask for. Consumers must take all or nothing, and simply ignore whatever data they do not need from the body of the Events they consume.

No matter how reasonable this Fullness may look, it is just a convenience. Actually, it is easy to imagine scenarios in which breaking this Fullness rule is by far the better option. Mathias Verraes also talks about this in the post Segregated Event Layers in his series.

Fullness is not Completeness

I think it is worth mentioning that the Fullness condition introduced above refers to each particular message dispatched in the application. We may in fact be dispatching messages which are selective, in the sense that some properties available in the context of the dispatcher at dispatch time are not included in the message body, instead of exhaustive; such messages could not be called Events.

Even so, we may be dispatching an Event whenever a fact happens in the business context mapped in our application via one, or more, services. In this case, we would be talking about Completeness. I also took this term from Mathias Verraes’ blog, where he calls it a Completeness Guarantee.

Obviously, it takes much more effort to accomplish that Completeness Guarantee than just to ensure that every message dispatched is full. In my opinion, it is a feature of our applications that is achieved incrementally, in a journey which might well never reach its end (unless we started out with the Completeness Guarantee as a requirement in mind from minute zero).

Stories

Summary Events are a message pattern which breaks the rule of Atomicity, because they represent facts whose life spans longer than just an instant. These messages may, or may not, fulfil the Fullness rule. Whenever they do, I propose to call them Stories.

Stories are full representations of facts that spanned a period of time.

(I must admit that this is not a very original name: we are living in the times of Snapchat’s stories and Instagram’s instastories.)

The key point in the definition of Stories is the past tense. Nothing can be considered a fact if it is still going on, not only because we cannot otherwise know when it ends, but also because we cannot say for sure that it will end.

Therefore, anything that should be triggered in an application during the lifespan of a still ongoing Story should come out of true facts (for instance, an already played song in a still ongoing music concert), which means an Event. This is how the Event Summary pattern gets implemented with Stories.

In general, uninterrupted periods of time can easily be represented as Stories. However, there are situations in which facts remain consistently unique even though they were interrupted; for instance, a Wimbledon tennis match which was interrupted by the rain. That interruption should be included in the Story’s body, in order to make sense of the partial facts, or Events, which are part of it.

In practice, there are (at least) two ways of implementing Stories:

  • As a deferred fact.
    We know for sure that the fact is going to end at some moment in the future, so we keep gathering information out of its context up to the very moment it ends, and then we dispatch the whole Story downstream.
  • As a projection produced after the last Event in the Story is dispatched.

An example of a Story: a motorbike race

Let me finish this long post with an example taken from the company I am currently working at: consider a motorbike race of the World MotoGP Championship.

A race has a clear starting point in time, but it is not possible to know the exact time at which it will end. However, it constitutes a unique fact in time, which makes it eligible to be represented as a Story.

There are plenty of facts that happen at a precise instant during a race, some of them expected (e.g., the first rider crosses the line after completing the first lap), whereas others are unexpected (e.g., accidents, rider falls, red flags, etc.). Every time a relevant fact happens, an Event should be dispatched. This is what makes this a Story, instead of simply a collection of data we might produce in anticipation.

Actually, this kind of Story is extremely simple to implement: the data is gathered from the context as it is, with no further operation, and, at the end of the race, when the last rider crosses the line, the Story is dispatched.
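A minimal sketch of that implementation, with hypothetical names, treating the Story as a deferred fact: Events are recorded while the race is ongoing, and the whole Story is dispatched only when the last rider crosses the line:

```typescript
// Illustrative only: a race Story built as a "deferred fact".
interface RaceEvent {
  name: string;        // "RiderCrossedLine", "RiderFell", "RedFlagShown", ...
  occurredAt: string;
  data: Record<string, unknown>;
}

interface RaceStory {
  name: "RaceFinished";
  raceId: string;
  startedAt: string;
  endedAt: string;
  events: RaceEvent[];   // everything relevant that happened during the race
}

class RaceStoryCollector {
  private events: RaceEvent[] = [];

  constructor(private raceId: string, private startedAt: string) {}

  record(event: RaceEvent): void {
    this.events.push(event); // each fact is also dispatched as its own Event
  }

  // Called when the last rider crosses the line: the fact has ended, so the
  // whole Story can now be dispatched downstream.
  finish(endedAt: string): RaceStory {
    return { name: "RaceFinished", raceId: this.raceId, startedAt: this.startedAt, endedAt, events: this.events };
  }
}
```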

Event Summary pattern revisited

However broad the range of real situations in which they may be the best message representation of choice, Stories are not a simple aggregation of data. They collect the information available about a longer-than-one-instant fact and bring it into our application.

Going back to the post on Mathias Verraes’ blog where the Event Summary pattern is presented, a daily sales report summarising all the sales in a given day is clearly not a Story, because all those sales operations are not related whatsoever, except for the circumstance that they happened on the same day.

Did I come up with a name for this third kind of message? Not with one that convinces me: I’d call them Reports. But let’s leave them for another day.

Photograph credit: https://pixabay.com/users/geralt-9301/

TransIDs


An ID for every transaction

A decoupled application may be depicted as a collection of autonomous services talking to each other via common protocols like HTTP, and relying on common standards like REST, asynchronous messaging, events, etc.

According to the Holographic Principle, we will say a decoupled application is holographic whenever all the information any given service has about any other service in the application comes out of their contact surface, i.e., the place where their transaction contracts live.

One of the key consequences of this approach to implementing decoupled applications is this:

Every Entity id must be solely local, only transaction ids (ephemeral ids) are shared

In other words, unique ids must be kept private inside the Service they belong to. In DDD terminology, we would say that Entity Ids must live strictly inside their bounded context, and only there.

This severe restriction on how decoupled Services may communicate is, in fact, an extremely powerful vector for providing freedom to our applications. Let’s see how.

Identify access, not resources

Mutual communication between services must be preceded by a negotiation on terms and conditions. Once an agreement is reached, the whole list of agreed terms and conditions is what we call a contract. These contracts are also furnished with a unique identification.

Once a contract is considered valid by both the Service Provider and the Service Consumer, every new conversation initiated by either of them is uniquely identified by one transaction id (or transid). This transid shall be shared throughout the conversation’s lifespan, to be ultimately expired and archived afterwards.

Under this schema, transids carry plenty of information, especially in comparison to traditional URIs, which identify resources only (see the sketch right after the list):

  • The resource which is going to be accessed.
  • The contract partners, in particular the Consumer willing to access that resource (since the Service providing the resource is always the same, of course).
  • The actual transaction happening between both Services. This includes details like the actual date and time when the activity started, its lifespan, the price for that consumption, as well as the currency and other payment conditions, etc.
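As a sketch of the kind of record a transid could resolve to on the provider’s side; the field names are mine, since the idea above does not prescribe any schema:

```typescript
// Illustrative record behind a transid (provider side); names are hypothetical.
interface TransactionRecord {
  transId: string;            // the only identifier ever shared with the consumer
  contractId: string;         // the agreed terms and conditions this access runs under
  consumerId: string;         // who is accessing the resource
  internalResourceId: string; // the provider's private Entity id, never exposed
  startedAt: string;
  expiresAt: string;
  priceAgreed: { amount: number; currency: string };
}
```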

In other words, it should be clear now that:

Transaction Ids do not uniquely identify resources, but accesses to resources.

As I mentioned above, a key consequence of the Holographic Principle, and of the definition of the Transids, is that endpoints do not pinpoint resources anymore.

For instance, a Products Service exposing an endpoint to get a Product’s information (i.e., name, description, features, etc.) would, under a holographic architecture, expose a different endpoint for every Service willing to access that Product’s information.

Since contract specifics are included (implicitly) in every hit to a given endpoint, we can now implement automated software to filter access to our public resources depending on the identity of the Consumer wanting to access them, exactly like firewalls filter network traffic, just by reading the data inscribed in the transid.

You may read about other examples of these kinds of Services here and here.

Perdurable transIds

May transIds extend their lives further? Let’s say, for as long a period as safety measures advise (which is actually a topic I am elaborating on in my next post); or, given that every combination of one resource, one contractor, and one contract is unique, let their transId stay alive (meaning reachable) for as long as the shortest-lived of the three, the contract.

The answer is, of course! Actually, lots of situations, and likely the cheapest ones in implementation terms, fit this definition of perdurable transIds.

For example, consider a Service A using a Picture Storage Service to store and read some pictures, say, up to 1000 times a week. The easiest way to implement this use case seems to be to provide A with a perdurable transId for every picture; these transIds would last either 1 week, or until A had (successfully) accessed them 1000 times in less than 1 week, and would be withdrawn afterwards.
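A minimal sketch of that perdurable transId, with hypothetical names: it stays valid for one week or until it has been used 1000 times, whichever limit is hit first:

```typescript
// Illustrative only: validity check for a perdurable transid.
interface PerdurableTransId {
  transId: string;
  issuedAt: number;       // epoch milliseconds
  maxUses: number;        // e.g. 1000
  usesSoFar: number;
}

const ONE_WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function isStillValid(t: PerdurableTransId, now = Date.now()): boolean {
  const notExpired = now - t.issuedAt < ONE_WEEK_MS;
  const notExhausted = t.usesSoFar < t.maxUses;
  return notExpired && notExhausted; // withdrawn as soon as either limit is reached
}
```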

However, there may be other alternatives for implementing this use case. Just imagine that the Picture Storage Service is not the actual storage provider, but just a mediator proxying other storage providers; depending on the fees and the actual usage of the pictures, the Picture Storage Service could relocate them from one storage to another, in search of better margins for itself, as happens in other industries where specialisation is more mature than in software.

Production of transIds

The actual form of these transids is an issue every implementation must address. The alternatives do not stop at those defined by RFC 4122, the IETF standard most commonly used right now; other examples are Twitter’s Snowflake, the prefixed URNs used at LinkedIn, etc.
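For instance, the simplest option, a plain RFC 4122 version 4 UUID, is one import away in most runtimes; this is just one possibility, not a recommendation:

```typescript
import { randomUUID } from "node:crypto";

// One possible shape for a transid: a plain RFC 4122 version 4 UUID.
// Snowflake-style ids or prefixed URNs would serve just as well.
const transId: string = randomUUID();
```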

Summary

Compliance with the Holographic Principle in a way which is not anaemic (i.e., not just a one-by-one translation of internal ids to external ids) suggests the introduction of Transaction Ids (or transids for short), which do not identify resources, but the access to resources.

The condition every unique id must meet to be a valid transid is to identify one, and only one, combination of resource + consumer + contract. Regarding their lifespan, transids may:

  • Be single use, meaning the transid is expended after one transaction is completed.
  • Be perdurable, meaning the transid may last either a fixed period of time, say 1 week, or until a fixed condition is met, say 100 transactions in less than 1 week.
  • Be permanent, meaning the transid is valid throughout the whole lifespan of the contract.

These access-oriented identifiers provide decoupled applications with a whole new way of handling relationships between Services, which might be especially useful whenever the Service providers are independent from each other.

 

Photograph credit: https://pixabay.com/users/geralt-9301/

Non-persistent holographic services

network

In the first post in this series about the Holographic Principle, I presented the example of a Cat Service. In that example, a service provider is available for other services to store pictures of cats. This example comes out of a common situation in applications, but there are other typical examples of services which do not involve persistence, and which may be holographic as well.

Non-persistent Holographic Services

Let’s imagine a geolocation service, which returns the country where a device is located given its public IP, or a currency exchange rate service, which returns the current exchange rate between two given currencies.

Both services, as well as plenty of other similar services, have in common that they do not provide a persistence feature to their clients. In other words, unlike the Cat Service, their transactions are utterly ephemeral. Therefore there is no need for the service provider to return any trans-uuid to the consumer, since there is no future transaction related to any particular resource which would require identification.

However, by definition Holography happens on contact surfaces, and those surfaces must be uniquely identified because they relate services one to one. This is what allows the service provider (and the consumer as well) to hide its internals, and at the same time it establishes a commercial infrastructure.

To operate, this commercial infrastructure makes use of the transaction uuids, because they not only identify the resources shared between the services involved in the transaction but, most importantly, also uniquely identify the service consumers and the contract they signed with the service provider.

In other words, urls like these:
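(The original example URLs are not reproduced here; the two below are purely hypothetical illustrations of the idea, with an invented host and invented transids.)

```
https://exchange-rates.example.com/rate/eur/usd?transid=7f3a...   (hypothetical)
https://exchange-rates.example.com/rate/eur/usd?transid=c91b...   (hypothetical)
```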

may pinpoint exactly the same endpoint in the Currency Exchange Rates service, yet they are unique, because they carry information which is only shared between this service and contractor A.

A final note: the urls above are examples which show the service endpoint in plain form. In real cases, they would likely take the form of dynamic links, like those provided by Google’s Firebase, Branch, etc., or of shortened links, as provided by Bit.ly or similar.

 

Photograph credit: https://pixabay.com/users/free-photos-242387/

The Holographic Principle

cosmic black hole

This is the start of a series of posts around a new Principle I came up with some months ago, and which I have been refining up to the preliminary form I present here for the first time in public. Hopefully, comments and suggestions will help either to throw it away as useless, or to turn it into a valid alternative for producing new software.

Inspiration

The Holographic Principle in software is inspired by the Holographic Principle in Physics, more specifically in Cosmology and Quantum Gravity. In essence, it states that all the information we may get about certain Spacetime volumes (the so-called hidden regions, which no external observer is allowed to probe) is the information encoded on the surface which encloses the volume. The most common case of these volumes are black holes, whose Event Horizons enclose all the information we might ever gather from them.

Here you have a short video explaining all this, from Quanta Magazine:

The Holographic Principle in software

In short, what this Principle says is that everything we may know about a given Service must come from the contact surfaces it holds with other Services.

The contact surface between two Services is the collection of all the contracts between them on behalf of which their relationship takes place. No further restrictions on the kind of these relationships, or on their actual implementation, are imposed by the Principle.

In other words, what the Holographic Principle states is that any Service’s internals, meaning any information about it which is not needed to fulfil the conditions of any of its contracts with its Related Services (any Service with which a contact surface exists), must be kept hidden.
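As a sketch of what a contact surface boils down to under this Principle; the field names are mine and purely illustrative:

```typescript
// Illustrative sketch: a contact surface is nothing but a collection of contracts.
interface Contract {
  contractId: string;
  providerId: string;
  consumerId: string;
  allowedOperations: string[];            // what the consumer may ask for
  maxRequestsPerHour: number;             // requests accepted before denial of service
  validFrom: string;
  validUntil: string;
  fee: { amount: number; currency: string };
}

// Everything a Service may ever know about another Service lives here;
// anything not needed to fulfil these contracts stays hidden.
type ContactSurface = Contract[];
```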

The meaning of the Holographic Principle

The Holographic Principle is not new technology, just a different way of organising the moving pieces which we are already dealing with when handling software.

In summary, the Holographic Principle proposes a shift from our common view of decoupled applications, focused on semantics and operations with entities, to a new relation-centric view. In other words, Holographic software deals more with the relationships among services than with the data actually transferred.

An example: a Cat Picture service

Let’s start with a Cat Picture service which allows Users to send, store, and collect pictures of cats. In its persistence layer, our Cat Picture service would assign a uuid to every picture, as well as other attributes it may consider of interest for those aforementioned Users.

So Cat Picture service exposes two main endpoints:

This Cat Picture service would likely expose other convenient endpoints, for instance:

We may also secure this Cat Service so that only authorised Users may make use of it to store their pictures, which means that an authentication phase must be completed successfully somehow, before any of the former endpoints may be requested.

In addition, we may gather stats information about those Users, as well as of the whole usage of the Cat Picture service (date and time during the day, geolocation of Users, etc).

Now let’s talk about the same Cat Picture service under the Holographic Principle.

First of all, the Cat Picture service publisher should advertise the Service so that it is available for discovery in the market. Don’t freak out at those terms; just translate them as “deploy into production” and “the application” if you need to stay in a more recognisable scenario.

Once available for discovery, the Cat Picture service might be contacted by other Services around, in search of a potential provider for cat picture storage. An interested Service BB would then open a contract negotiation with the Cat Picture service and, eventually, a contract may be signed between them. This contract should stipulate under which conditions BB may use the features provided by the Cat Picture service, as well as which kind of retribution is due (call it gas, if you like Ethereum terminology, or simply a fee), the period of time during which the Cat Picture service will accept requests from BB, the maximum number of requests per hour accepted before a denial of service signal is sent, etc.

With all these steps fulfilled, BB may start sending requests to the Cat Picture service. Since no pictures other than its own may be collected from the Cat Picture service, the first request must be one of these:

But now the uuid produced by the Cat Picture service, and sent to BB afterwards in its Response, is not the uuid of the Picture in its persistence layer, but a uuid uniquely identifying the request made by BB. These transaction ids are not unique per Picture, because in that case they would expose Cat Picture internals to BB (at least), an exposure which is forbidden by the Holographic Principle.

From the point of view of BB, these transaction ids might be seen as common uuids. But they might not be, because BB may be storing the same pictures in another Cat Picture service. Also, it will likely have an internal uuid assigned to them, which is supposed to be permanent, while the transaction ids which relate them to the Cat Picture Service are temporary.
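A minimal sketch of what that separation could look like inside the Cat Picture service, with hypothetical names: the transaction id handed to BB maps, privately, to the picture’s real uuid, which never leaves the service:

```typescript
// Illustrative only: the provider's private mapping from transaction ids to internals.
import { randomUUID } from "node:crypto";

const transactions = new Map<string, { consumerId: string; internalPictureId: string }>();

function storePictureFor(consumerId: string, pictureBytes: Uint8Array): string {
  const internalPictureId = randomUUID(); // stays private inside the Cat Picture service
  // ... persist pictureBytes under internalPictureId ...
  const transId = randomUUID();           // identifies this particular access by this consumer
  transactions.set(transId, { consumerId, internalPictureId });
  return transId;                         // the only identifier BB ever sees
}
```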

Once stored in the Cat Picture service, this other endpoint is also available to BB:

Further details of this example, as well as other examples, will come in the second instalment of this series.

Benefits of the Holographic Principle

The most immediate benefit of applications following the Holographic Principle is that the focus is kept on the relationships among services instead of on the data they share. This allows service providers to set security access levels, as well as service usage quotas, per individual service instead of per role.

Also, since every url brings information about the requesting service as well as about the resource being handled, it provides service providers with additional instruments to secure their services.

Afterword

This is the first of a series of posts I am writing about some specifics of the Holographic Principle. Please do not hesitate to send me your comments; our collected minds are surely far smarter than mine alone.

 

Photograph credit: https://pixabay.com/users/deselect-521336/ 

The fallacy of state recovery from event storage

Here are the contents of my talk at the 3rd DDDBarcelona meetup, on the 8th of November. It was an excellent occasion to learn about the experiences with DDD of other fellows in the audience, as well as to learn from the other two speakers: Aleix Morgadas, who talked about adopting DDD in teams, and Daniel Solé, who introduced us to the techniques of Domain Storytelling.

My talk was about how possible it may be to reproduce knowledge about the Domain from an Event Storage. However, for a better understanding of the topic, I thought a brief introduction might be helpful. You will find the slides and video below.

Events

Events are representations of Facts, and Facts are those petty pieces of external reality such as A Customer purchased 3 units of Item A for 15.99€, for instance.

In other words, facts are taken as atoms of truth. Hence actual facts might be wrong (i.e. that did not happen that way, or did not happen at all), but they cannot be false.

For software production purposes, facts are the atoms of truth.

That truth is carried over to the Domain by making Events immutable: once they are dispatched, Events stay readonly.

Events map Facts to the Domain, meaning that they express facts in the words of the Ubiquitous Language, which is the language the Domain speaks.

In that sense, Events are not neutral. As with any other representation, some details from the Fact are taken, and translated, into the Event, whereas other details are ignored.

State

Events bring change into the Domain by triggering Commands, Application Services, or Domain Services to run. From the point of view of the Domain, this change is perceived as modifications in the values of properties of some Entities, as the creation of new instances of some other Entities, or the removal of instances of other Entities.

These changes in the Domain triggered by Events allow us to define States.

A State is any summary of the values of some properties of Entities at any particular moment in time

There is no restriction on what kind of States we may define. In Finance and Accounting, people have been dealing with States for years, even centuries: Profit and Loss statements, as well as Balance Sheets, are States as we just defined them.

Example: inventory

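The slideshow from the talk is not reproduced here; as a minimal sketch of the idea it illustrated (the names below are mine): the stock of an item is a State obtained by folding the Events that affected it, and replaying only the Events up to a given instant yields that State as it was at that moment:

```typescript
// Illustrative only: inventory stock as a State folded from Events, optionally
// replayed only up to a given instant to recover the State as it was then.
interface InventoryEvent {
  name: "ItemsReceived" | "ItemsSold";
  occurredAt: string;   // ISO 8601 instant
  itemId: string;
  quantity: number;
}

function stockOf(itemId: string, events: InventoryEvent[], asOf?: string): number {
  return events
    .filter(e => e.itemId === itemId && (asOf === undefined || e.occurredAt <= asOf))
    .reduce((stock, e) => stock + (e.name === "ItemsReceived" ? e.quantity : -e.quantity), 0);
}

// The same Event log produces both the current State and a past one.
const log: InventoryEvent[] = [
  { name: "ItemsReceived", occurredAt: "2018-11-07T10:00:00Z", itemId: "A", quantity: 10 },
  { name: "ItemsSold",     occurredAt: "2018-11-07T19:30:00Z", itemId: "A", quantity: 3 },
  { name: "ItemsSold",     occurredAt: "2018-11-08T09:00:00Z", itemId: "A", quantity: 2 },
];
console.log(stockOf("A", log));                          // 5, the State right now
console.log(stockOf("A", log, "2018-11-07T20:00:00Z"));  // 7, the State at yesterday 8:00pm
```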

Given that Events bring change into the Domain, it looks like a healthy measure for data consistency (i.e., to ensure that every change in data is traceable) to reverse the scenario and state that no change in the Domain should happen unless it is triggered by some Event.

No matter how hard this goal might look, it is not impossible: Accounting stands as proof that it is achievable, thanks to a very short list of States to keep an eye on, and an Event storage (in this case, the ledger books) filled under a very strict set of rules.

State Recovery

So, provided that every State was triggered by Events, we should be able, in theory, to rewind and rerun the Events from the start up to any specific point in time, and so produce any State exactly as it was then.

Trouble comes when we have to deal with either incomplete or mistaken information. Is it possible to fix the past, i.e., to act retrospectively on States? With Events being immutable as they are, how could we, for instance, modify right now a State referring to yesterday at 8:00pm?

This was the topic of my talk, which I hope you’ll enjoy.

Video

Slides