We have nothing forbidden to learn

Featured image: Ryoanji garden

Coding by request

Even though it is not a common discussion topic in our community, we would all agree that software is created under the implicit assumption that it is going to last forever. Engineered as a definitive solution to a particular feature demand, or as a fix to a reported bug, once our code gets deployed into Production we never expect to revisit it (new bugs aside), unless additional requests arise, or to copy some of its lines and paste them somewhere else.

Of course, some code is more prone to change than other code. For instance, public APIs and integration code tend to stay more stable, as do Domain entities and services, whereas Application services (use cases) and User Interfaces may change very often. But overall it is assumed that code will stay still unless there is a reason to change it. I dare say that to stay stable while giving service is the Nirvana of code.

There are pretty obvious reasons for this: to optimize the cost-benefit ratio, to invest our always limited resources in new things, to minimize risks due to human error, etc. The old “if it ain’t broke, don’t fix it”.

The Nirvana of code is to stay unchanged and continue giving service

All tactics have trade-offs, though. For instance, let’s look at this true story taken from my own experience on the job.


Months ago, in 2019, we were developing some new software with a feature that required integration with a third-party application. This integration works through the interchange of messages written in a proprietary data format. As files, those messages are not formatted in any of the standard file formats commonly in use, such as JSON, YAML, or XML. Not even CSV. This proprietary format defines content via two mechanisms: the column number and the line prefix. For instance, some figures stand for a tax amount if they are located in any line with prefix T and aligned at column 42, whereas they stand for a net discount if they appear aligned at column 50 in the same line (or in another line with the same prefix). Files formatted this way do not include any header line either.

Though it looks old-fashioned, this approach can work perfectly well, and actually it has been working for 20 years. But it exhibits a true fault: meaning is not included, and it must be inferred from elsewhere, beyond the file itself. When parsing files formatted this way, it is not possible to say what any given value stands for, only where it sits (horizontal and vertical position, plus the line prefix), so the code does not map values to meaning, it maps locations to meaning. Since we have self-descriptive file formats at our disposal, picking a format that explicitly avoids self-description looks quite like an anti-pattern to me: an easily redeemable weakness they decided to keep.
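To make the difference tangible, here is a minimal TypeScript sketch of what parsing such a file looks like. The prefixes, columns, and field names are invented for illustration, not taken from the actual format; the point is that the layout knowledge lives in the code, not in the file.

```typescript
// Hypothetical layout: the meaning of a value is encoded in the line prefix
// and the column where the value starts. Nothing in the file says "tax" or "discount".
interface FieldSpec {
  prefix: string;   // line prefix, e.g. "T"
  column: number;   // zero-based starting column
  length: number;   // fixed width of the field
  meaning: string;  // the knowledge that lives outside the file
}

// Invented positions, for illustration only.
const LAYOUT: FieldSpec[] = [
  { prefix: "T", column: 42, length: 8, meaning: "taxAmount" },
  { prefix: "T", column: 50, length: 8, meaning: "netDiscount" },
];

function parseLine(line: string): Record<string, number> {
  const result: Record<string, number> = {};
  for (const spec of LAYOUT) {
    if (!line.startsWith(spec.prefix)) continue;
    const raw = line.substring(spec.column, spec.column + spec.length).trim();
    if (raw.length > 0) result[spec.meaning] = Number(raw);
  }
  return result;
}
```

With a self-descriptive format, that LAYOUT table would simply not exist: a JSON file would carry the name taxAmount next to the value itself.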

Why would anyone give up widely used, widely supported standards, with literally thousands of free tools available to help developers do their jobs, and instead keep using an anti-pattern, one that was already old-fashioned at least fifteen years ago? And what kind of developer would fit in an organization that voluntarily decides to fossilize its software?


Inertia

I am sure you have all had experiences similar to the one above, and I am convinced it is legitimate for those of us in charge of creating software to ask ourselves: why are we so eager to keep our code unchanged once deployed? Did we just surrender to the people sitting at the cost management desk? Or do we feel so proud of our work that we truly believe the first operable version of any code is surely going to be the best possible version of it?

Or maybe we just love playing with new toys as much as we can, so we’d rather start working on new code, and forget the code just finished, as fast as possible. Or maybe we think that pursuing greater quality prevents us from quickly delivering something usable that meets the business requirements.

Whatever reason we might have to accept, or do, this, plain observation shows that we end up producing huge structures of code with a shocking resistance to change. From an architectural point of view, we may say that this inertia has its roots in coupling: there are so many tightly entangled pieces in coupled applications that we’d rather not touch them unless it is strictly necessary.

Certainly, some progress towards decoupling has been rolling out for years. I still remember my first job in this industry, working with an IBM AS/400 with all the pieces (operating system, database, programming language) fused together into a single block. That level of coupling is pretty uncommon right now. However, the idea persists that coupled is easier and for everyone, whereas decoupled applications are for advanced teams, and should only be adopted with justified caution.

In summary, here are the trade-offs:

  • Coupled applications are easier to operate, but exhibit high inertia: code is harder to change, and so very tough, or even unaffordable, to keep updated. Besides, it becomes difficult to learn and read too.
  • Decoupled applications are harder to operate, but exhibit far lower inertia: code is easier to learn and read, to change, and to keep updated.

Tests Pass = Definition of Done

So, should we keep producing coupled software applications just because that is what we can afford? Before answering this question, there are more clues to examine apart from architectural styles. For instance, methodology. Currently, almost everyone in our field works under the iterative procedures that became popular after the Agile Manifesto was signed. The Manifesto explicitly advocates for code’s responsiveness to change:

Responding to change over following a plan

And it even insists on the importance of the changeability of code:

Welcome changing requirements, even late in development.

We may say that code is produced under request compliance: code is considered done (and it is assumed that it should not change anymore) once the requirements are met. Therefore, code should not change unless requirements change. Eventually, this is what leads to TDD (Test-Driven Development), for TDD relies on the assumption that the condition “all tests pass” stands for “all the requirements are fulfilled”.

Even though this idea, once implemented, should allow us to change and deploy code as often as necessary, and it truly does, the underlying condition is still there: nothing changes unless a requirement (or a bug) says so. (1)
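As a toy illustration of that assumption (the requirement and the numbers are invented), consider how a passing test comes to stand for “done”:

```typescript
// Toy requirement, invented for illustration: orders above 100 get a flat 5-unit discount.
function discountFor(orderTotal: number): number {
  return orderTotal > 100 ? 5 : 0;
}

// Under request compliance, once these assertions pass the code is considered done:
// nothing here asks whether the rule itself will still make sense next year.
console.assert(discountFor(150) === 5, "large orders get the flat discount");
console.assert(discountFor(80) === 0, "small orders get nothing");
```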

This way of thinking is not original to our industry; it is part of the baggage we brought from Engineering, understood as a set of practices to guarantee that, within a given context, all kinds of mechanisms (lamps, airplanes, bridges, etc.) will work as we designed them to. This is important when dealing with engines because they are material, which means it would be extremely expensive to change them once they leave the factory. Just imagine a world in which material artifacts such as refrigerators, cars, or even utility infrastructures (water, electricity) were changing week after week.

Trouble comes when we realize that software applications are not made of matter, even though they rely on physical machines to be operated and yield their value. On the contrary, this capacity to be changed as much as we want is an essential property of code, and I believe it is a mistake not to make extensive use of it.

If I am not wrong, and this idea of “provided code meets all requirements (provided all tests pass), our job is done” comes from Engineering, and, as I pointed out just above, engines and software are not alike, should we not think of alternative ways of producing software? Is that even possible? I mean, are there alternative mindsets that might fit better?

I say there may be one: Science.

Science starts precisely where Engineering ends, just outside the boundaries of those safe contexts in which Engineering can fulfill its promised guarantees. For the Scientific Method stands for a continuous challenge of the theories accepted at the time, a continuous exploration of the lands beyond the context where those theories proved they work, until checking new hypotheses in broader and broader contexts reveals some failure in the theory behind them. This process makes all scientific theories temporary.

In my opinion, Software Creation should move forward from the current methodologies based on request compliance, for they come out of that Engineering mindset reluctant to change items once finished, and adopt this temporary trait common in Science.

There is a key distinction between Software and Science though, for their purposes do not match. Science looks for knowledge, whereas Software Crafting, like Engineering, seeks practical purposes. That is precisely why software by requirement emerged and looks sensible. So what we need is to consider Software Crafting as something new. And the first step towards that end may come out of a combination of the practical sense of Engineering with the valuable addenda of the Scientific Method. Let’s see how.

Wabi-sabi

So the key element in our adoption of the Scientific Method in the creation of software is to consider all our applications as temporary: no matter how successful they may be right now, we must assume that software applications are just temporarily successful. Therefore, we must treat our code as temporary, even if all tests pass, and all requirements are fulfilled.

Assuming that the shining results of our work are plainly ephemeral should not be so hard once we realize that nothing around us remains stable either. Like us humans, the organizations we produce code for are continuously maneuvering to stay afloat in the troubled waters of economic or political crises, punched by competitors, laws and regulations, social conflicts, and whatever else we may imagine. They are not static entities, but fools struggling for brief moments of stability that are nothing more than an illusion.

In this state of permanent flux, there is no guarantee that any option that now seems right, that fulfills its requirements and passes the tests, will stay right for long. Precisely because everything is temporary, no right option stays right forever.

No option stays right forever

Japanese aesthetics have a term for the acceptance that nothing is perfect, nothing is permanent, and nothing is complete: wabi-sabi or 侘寂.

Accepting that software is never perfect, never permanent, and never complete should lead us to create software on the premise that every piece of code we make is expendable. Wabi-sabi could be a very cool name for this methodology, though for now I have decided to keep with our community traditions and name it Continuous Change.

In summary, Continuous Change stands for treating every piece of software as temporary, no matter whether it is working in Production exactly as the requirements dictate, and it holds as many procedures as we may invent in order to implement this principle in practice, in our daily work.

Challenges

When inspiration comes, may it find me working.

Pablo Picasso

How do we know which changes are worth trying out? As in Science, we must produce our code according to its requirements and, once it is up and running with no errors, cross the boundaries of that context and expand it further. This means that we must continuously invent ways to challenge our applications. And there must be a method behind how we prepare, run, and study the results of those challenges, so that we learn something out of every one of them.

In short, these challenges must be planned, must be based on a single hypothesis, should be repeatable, and their outputs must be measurable.
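As a sketch of what “planned, single hypothesis, repeatable, measurable” could look like in a team’s daily work, here is one possible, entirely hypothetical shape for recording a challenge (all names and values are invented):

```typescript
// One possible shape for a Continuous Change challenge record.
interface Challenge {
  hypothesis: string;   // exactly one hypothesis per challenge
  plannedFor: Date;     // challenges are planned, not improvised
  procedure: string[];  // explicit steps, so the challenge can be repeated
  metric: string;       // what we measure to judge the outcome
  baseline: number;     // value observed before the challenge
  result?: number;      // value observed after running it
  learning?: string;    // what the team learned, even if the challenge "failed"
}

const example: Challenge = {
  hypothesis: "Replacing the in-memory cache with a shared one keeps p95 latency under 200 ms",
  plannedFor: new Date("2019-11-04"),
  procedure: ["deploy the variant to 10% of traffic", "observe for 48h", "compare p95 latency"],
  metric: "p95 latency (ms)",
  baseline: 180,
};
```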

Nothing yields value unless it is in code, and there is nothing worth coding without motive. 

Luckily for us, this is not new: Chaos Engineering has been following this practice of posing and proving hypotheses one at a time for some years already. The only difference is that whereas experiments in Chaos Engineering are limited to failure injection, challenges in Continuous Change may be related to anything the team believes is relevant to check out.

We should not assume there are particular experiments better to run than others, and there are several, independent reasons for this:

  • The organization, and the maturity of the team, will surely shape the kind of suggestions they consider worth exploring. With some organizations and teams, this starting point might look quite poor. The good news is that the benefits of this workout are incremental: every step makes the next one longer.
  • Every experiment, even one that fails, teaches us something valuable.
  • Inspiration may come from anywhere, not only from inside the organization: a blog post, a book, a podcast, something a speaker said at a conference, a coffee chat with a colleague, etc., might suggest ideas worth trying out.
  • And then, there is also the unexpected you might find in your way.

Exploration is always exciting and deeply rewarding, although it comes with its trade-offs too: because it requires building new ways of producing code that make it as easy to change, or replace, as possible, Continuous Change is far more demanding than the usual development by request.

As a never-ending story of continuous learning, Continuous Change can be exhausting. But we have nothing forbidden to learn, so I’d say let’s not limit ourselves in advance, and embrace the change.

 

NOTE 1:

To add more pressure against unsolicited changes in code, we also have the “Do not deploy on Fridays” rule: we are so convinced that shit happens, so convinced that there are dwarfs hidden inside our machines waiting for us to lose attention so they can wreak havoc, so scared that we might not be in control, that we’d rather stay put. Though, if that is so, why deploy on Thursdays?

I am not going to enter this debate, for I consider it closed already. Just read Charity; she has spent plenty of time explaining it so much better than I possibly could. To be clear, let me rephrase this practice this way: we produce hardly changeable code because we hope this will guarantee that it will not fail unless operational conditions change (for instance, a power outage).

 

Top image credits: this image was downloaded from Pixabay: https://pixabay.com/illustrations/arrows-center-inside-middle-2034023/

Entities of a different kind

Featured image: Hexagonal Architecture

Note: In what follows, the term “Entities” refers to both Entities and Aggregates indistinctly.

The Hexagonal Architecture sets a clear distinction between the Domain and Application layers, which drives us to locate all entities in the Domain layer, for it is there that the meaning (what the software is about) has its home, whereas whatever runs actual changes on those entities, as use cases require, must go in the Application layer.

For example, Customers and Items, as entities, belong to the Domain layer, whereas Sell an Item, Bill an Order, or Ship some Items belong to the Application layer, because they represent in the application (translated into the language the software can understand) what in real life we would consider facts that happened concerning particular Customers and Items.

Just as a brief reminder, in this same Hexagonal Architecture, the UI layer is where whatever triggers those changes lives, whereas the Infrastructure layer is where we put every piece of software needed to have the application run (frameworks, ports connecting to databases, the file system, message transporters, etc.).
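A minimal sketch, with invented names, of how that split between the Domain and Application layers might look in code:

```typescript
// Domain layer: entities carry the meaning of what the software is about.
class Item {
  constructor(public readonly sku: string, public price: number) {}
}

class Customer {
  constructor(public readonly taxId: string, public name: string) {}
}

// Application layer: the use case represents the fact "this customer bought that item".
class SellItem {
  execute(customer: Customer, item: Item, quantity: number): number {
    // Orchestrates the change; the meaning of Customer and Item stays in the Domain.
    return item.price * quantity; // stand-in for the real work of recording the sale
  }
}
```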

Not all entities are created equal

A simple inspection of the list of entities in any given application shows that some entities play a role, and have a lifespan, fundamentally different from others:

  • There are Master Entities, which represent, in the Domain model that every particular application implements, what in real life we would also call “entities”: Customers, Items, Suppliers, Service Providers, Employees, etc. One of the elements of external reality (external to our software) they exhibit is an id of their own: whether or not you decide to rely on those ids to identify instances of these entities in your application, that is a property (ISBN, Tax ID, Social Security Number, etc.) you are likely to handle somehow.
    Master entities change at a very slow pace: once created, and approved, you will change them seldom, for instance to add a new Customer Address, update the description of an Item, or mark an Employee as no longer working in the organization. Accordingly, master entities show a long lifespan, ordinarily of years.
  • There are Transaction Entities, which represent in the Domain model what in real life are Orders of any kind (sales, purchases, work orders, etc.), Bills, Shipments, etc.
    Transaction entities may undergo plenty of changes, for each change represents a step in the fulfillment of the business process in progress. However, once completed, these entities never change again, and their lifespan is very short, commonly no more than days.
    Unlike Master Entities, your application will surely need to provide unique ids for every instance of these Transaction Entities (see the sketch below).
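A minimal sketch of that contrast, with invented names and properties:

```typescript
import { randomUUID } from "node:crypto";

// Master entity: long-lived, changes seldom, and carries an id from the outside world.
class Customer {
  constructor(
    public readonly taxId: string,   // external, pre-existing identifier
    public name: string,
    public addresses: string[] = [],
  ) {}
}

// Transaction entity: changes at every step of the business process, then freezes;
// it needs an id minted by our own software.
class SalesOrder {
  public readonly id: string = randomUUID();
  public status: "draft" | "confirmed" | "shipped" = "draft";
  constructor(
    public readonly customerTaxId: string,
    public lines: { sku: string; qty: number }[],
  ) {}
}
```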

That distinction between master entities and transaction entities has indeed a long history behind it, if you accept database tables as true ancestors of entities. In fact, I took both names from the field of Business Intelligence, in which, roughly speaking, transaction entities constitute the data origin read to feed the so-called fact tables, whereas master entities play the role of dimension tables, due to their role as axes in the charts where the calculations provided by the BI engine are pictured.

Incidentally, there is another feature that also differs in magnitude between master and transaction entities: their number. Over time, the number of transactions grows far quicker than the number of instances of any master entity. Although this is not true in all cases, for it depends on each particular organization, it is often true that the number of Customers is far smaller than the number of Sales Orders.

So, we might be tempted to state that master entities are fundamental and transaction entities are accidental, for master entities are supposed to be there, perfectly defined and steady, even while nothing happens to them.

José Ortega y Gasset

However, as the Spanish philosopher José Ortega y Gasset said, “I am I and my circumstance”; hence, Mr. Ortega would be a master entity, and his circumstances (what impacted, or happened to, him), a plethora of transaction entities.
In other words, no matter how permanent we humans might look, our self is defined by a never-ending interaction with our surroundings, in a way that makes us a work in progress that never gets finished. It is that dialogue that really makes us.

Everything in its right place

Time to come back to software. In my opinion, the different roles that both master entities and transaction entities play in the application constitute a fundamental trait which must be made apparent in how we organize the elements of the code.

One easy way to achieve that purpose is to keep them both in the same Domain layer, but segregating one set of entities from the other, for instance in separate folders. This option is quite conservative, so it might avoid unexpected surprises for eyes casting a casual gaze on a repo organized like this. But it looks wrong because it ignores the key fact that master entities exist before any use case might run, whereas transaction entities are tightly intertwined with the execution of use cases: with no execution of a particular use case, there cannot be any instance of the related transaction entity whatsoever.

We might say that master entities keep the major part of the meaning of the Domain, whereas the transaction entities only carry an operational meaning. Though appealing, this is wrong, because ultimately all entities are defined out of operational foundations. For instance, Customers are defined because of their action of buying something from the organization; Items are defined because they are what the organization sells, buys, or utilizes somehow.

All Entities, either master or transaction, are defined out of operational traits

Therefore, what looks more natural to me is to relocate the transaction entities in the Application layer. For they belong there just as the use cases do, and, as I said above, with no use cases there are no transaction entities either. IMHO, the fact that transaction entities are persisted as well is not enough for them to keep their place in the Domain layer, compared to the key facts we have seen (see the sketch after the list below):

  1. Master entities have a lifespan far longer than transaction entities.
  2. Master entities pre-exist transaction entities, for the former may exist without the latter, but the opposite is not true.
  3. Master entities have got their own unique ids, whereas transaction entities need to be provided with one by our software.
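Here is one possible, purely illustrative way of laying this out; the folder and class names are invented, not a prescription:

```typescript
// One possible layout under this idea:
//
//   src/domain/Customer.ts         <- master entity stays in the Domain layer
//   src/application/PlaceOrder.ts  <- use case
//   src/application/SalesOrder.ts  <- transaction entity lives next to its use case
//
// The use case is what brings the transaction entity into existence:
import { randomUUID } from "node:crypto";

class SalesOrder {
  public readonly id = randomUUID(); // minted by our software, not taken from reality
  constructor(
    public readonly customerTaxId: string,
    public lines: { sku: string; qty: number }[],
  ) {}
}

class PlaceOrder {
  execute(customerTaxId: string, lines: { sku: string; qty: number }[]): SalesOrder {
    // No execution of this use case, no SalesOrder instance.
    return new SalesOrder(customerTaxId, lines);
  }
}
```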

Does this feel right to you?

Copyright notice: featured image taken from the Java Viet Nam blog, though it is not original to the blogger.

Monoliths are averse to change

Featured image: Angkor Wat view

The fight over microservices

The debate between those who advocate keeping monolithic architectures and those who uphold decoupled architectures, such as the popular microservices, heats up and cools down time and again, with no sign of remission any time soon. And the pros and cons of monoliths (or, in reverse, of microservices) keep mounting on both sides, for it is a debate in which plenty of different perspectives (purely architectural, economic, organizational, even social) converge.

Should we keep our current monolith just because our team is not skilled enough to deal with microservices, and we are short of resources either to train them (for years!) or to fire them and form a brand new team?

Should we embrace microservices because they are what everyone is talking about, because they are the newest thing? Or because microservices are what the biggest companies, with their almost endless resources, are doing? Bearing in mind those very companies make enormous profits, and gain market influence, thanks to selling us tools designed for handling microservices, this seems at least questionable.

As a matter of fact, I have always been an advocate for change, in all areas of my life, so, monoliths being the first of the two software architectures to come to life, I would say I am even naively inclined by nature in favor of decoupled software architectures, just because they stand for a change in the status quo.

However, I am a software engineer by profession, not by inclination. Actually, I obtained my degree in Physics, not Engineering, and so my mindset is settled around Science and the scientific method. In other words, I am used to doubting everything, all the time, and to an ongoing search for evidence.

And evidence, in favor of or against monoliths, is what I have been struggling to find for months. What follows is what I learned in my search.

Change as a motive

After more than 3 years working intensively, and almost exclusively, with decoupled architectures, I have arrived at a conclusion about a key feature of their opposite, monoliths: everything is easier with monoliths, except making them change. Such an annoying feature to exhibit, one that hangs over our decisions every time we must pick a team and produce some code to face the tough challenges that organizations often need to overcome in order to thrive, just keep going, or plainly survive.

The defining feature of monoliths is their aversion to change

Our community is full of stories about how hard monoliths are to change. Even refactors look difficult: to make, to test, and to deploy. Think of it for a moment: is that a reasonable tradeoff for keeping things simple?

On the contrary, facing change is a crucial lever for businesses, and history is full of human enterprises of all kinds that died just because of their inability to handle change: environmental, social, technological; pick the area you like most and watch the examples pile up.

Collapse, by Jared Diamond

Just to cite a famous example, take the book by Jared Diamond, Collapse. In this case, not only is the book illuminating; the foreword that the author includes on his webpage (please, do follow the link above) is worth reading too.

Another popular topic is Joseph Schumpeter’s idea of Creative Destruction, though I guess this would be less appealing to those who found their jobs destroyed by creative rival companies.

How does change produce value?

So, yes, this ability to facilitate change in software production that deeply decoupled architectures provide is invaluable. At least as invaluable as history shows it to be in other planes of human existence. But how is this value delivered, in practice?

There are two ways change drives value into organizations. The first one is immediate: by facilitating (i.e., making it quicker and less painful) the upgrade to newer versions of support software (OSs, databases, programming languages, assistant applications); to produce, and refactor, code; to deploy more often and quickly, to recover from disasters; etc.

Also, since everything you might try out is easier to test when the code is less reluctant to change, you get used to trying out more, and more often. And you can measure the outcome, and so you learn more too (meaning more, and at a faster pace, than you would with a monolith). Which also makes future tests smarter, even when the output is considered a failure and is discarded. Which, by the way, is not a tragedy of expensive loss, precisely because a system that is easy to change requires, by definition, far fewer resources to change as well.

A system easy to change is cheaper to change too

Marketing teams have been reaping benefits from this for years, precisely because the results outweigh the costs by far. By decoupling the software, the teams who produce code become empowered to do the same.

Actually, software teams have also been doing it for years, for instance via feature flags. However, they are not used to doing it with entirely expendable services. But now this is a dream come true: A/B tests need not be just a common practice in marketing; software teams may also produce, and even test in production, alternative versions of a given autonomous piece of code (a microservice, or a lambda) to measure (instead of presuming in advance) which version performs better, which one is more reliable, or which one takes less time to get up and running. And then get rid of all the others with no regrets: that is how cheap they were to produce.
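As a rough sketch of the idea (endpoints and percentages are invented, and a real setup would more likely do this at the router or service-mesh level rather than in application code), splitting traffic between a stable version and an expendable candidate, and measuring the outcome, could look like this:

```typescript
type Variant = "stable" | "candidate";

// Hypothetical endpoints for two expendable versions of the same service.
const ENDPOINTS: Record<Variant, string> = {
  stable: "https://orders.internal.example/v1",
  candidate: "https://orders.internal.example/v2",
};

// Latencies recorded per variant, so the decision is measured, not presumed.
const latencies: Record<Variant, number[]> = { stable: [], candidate: [] };

async function handle(path: string): Promise<unknown> {
  const variant: Variant = Math.random() < 0.1 ? "candidate" : "stable"; // 10% to the candidate
  const start = Date.now();
  const response = await fetch(ENDPOINTS[variant] + path); // call the chosen deployment
  latencies[variant].push(Date.now() - start);
  return response.json();
}
```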

The second outcome of facilitating change takes more time to unfold. Since everything in organizations becomes cultural (which is, by the way, what explains Conway’s law), embracing code that is easier and easier to change will impact the kind of people on your team and the procedures they follow, which consequently will impact the kind of solutions you produce. Ultimately, it will impact how you thrive.

Years ago I had the opportunity to work as a consultant for organizations that got stuck on expired technologies (OS, database, and programming language all in the same metal), precisely because their teams had been picked to work with those technologies. It was so tough for them to change that eventually everything became fossilized: the code, the applications, but also the people, and the overall organization.

To avoid becoming fossilized, organizations need to accept they must change

Unfortunately, I keep seeing similar attitudes, though not so extreme, right now. Plenty of companies are recruiting developers according to the technologies they can already work with. I understand they do this because they believe their priority must be to ensure that their teams deliver code in every sprint; but what code?

Would it not be more reasonable to recruit people according to how they think, how eager they feel to participate in a team effort, or how adaptable they are? To achieve this, organizations need to treat the technologies currently in use as easily replaceable. Easily changeable, either by substitution or by coexistence. Remember, the point is not to be changing things all the time, but to avoid keeping anything running as it is just because it would be difficult to replace.

Achieving this is hard. For the sponsors of change, because they must focus on strategy more than on tactics; more on what might be done than on what everyone knows can be done. And for developers too, for they should choose expanding their skills over digging deeper into a few, which would likely make them look less employable in the eyes of conventional organizations.

Ironically, it should be easier for CEOs in comparison: they are supposed to have already learned this, either at college, or by simple observation of the market.

Angkor Wat photograph credit: https://pixabay.com/users/falco-81448/

Global Software Architecture Summit 2019 – episode V

Sketch by Simon Brown on Software Architecture for Developers

This is the fifth, and last, episode in a series of posts reviewing the first Global Software Architecture Summit that took place in Barcelona on the 10th of October. To see the whole list of episodes, please visit this.

Architectural Thinking

Mark Richards

This talk, given by Mark Richards (@markrichardssa), was the best first class in Software Architecture I have ever seen, and the best definition of what Software Architecture is that anyone might find out there. He also emphasized the natural existence of tradeoffs in every architectural alternative there is, and the key role of software architects in making their pros and cons explicit, in order to pick the one that fits the scenario at hand.

So, should you have the opportunity to attend one of Mark’s courses, or any of his talks, just go!

He also gave us the advice to spend the first 20 minutes of our day learning something, and in particular something we were not even aware existed: the unknown unknowns. How could you learn about anything that you didn’t know you did not know? By staying tuned to what is said out there, in related media: podcasts, blogs, aggregators, meetups, etc.

Effective Architecture Practices Debate

This third debate was moderated by Álvaro García (@alvarobiz), and the invitees on stage were Sandro Mancuso (@sandromancuso), George Fairbanks (@GHFairbanks), Juan Manuel Serrano (@juanshac), Carlos Blé (@carlosble), and Eoin Woods (@eoinwoodz).

Their interventions eventually centered on how to implement changes of architecture style in practice, with particularly wise advice from Sandro Mancuso. For instance, it is always key to have a roadmap as an expression of a direction from now to where the organization envisions going, while being conscious that the straight line drawn in the roadmap is going to become a curvy road in real life.

Incidentally, someone mentioned the risk of overengineering that might arise when teams try out what influencers in technology are claiming. IMHO, this is a debate born out of a prejudice. In my more than 20 years in the profession, I have only found one example of overengineering, whereas I had to struggle with plenty of examples of bad code, bad practices, or bad architecture. I am not saying overengineering is a fake problem, just that I feel we should put our efforts into the 95% of the issues before facing the (being generous) 5% of them.

Choosing the Right Architecture Style Debate

The last debate, and the last session, of the event was moderated by Christian Cotes (@ccotesg) and had Alex Soto (@alexsotob), Viktor Farcic (@vfarcic), and Ana-Maria Mihalceanu (@ammbra1508) as guests.

Again, the debate circled around the adoption of microservices as architecture style, and more in general about how hard it seems to distinguish between what is hyped and what is reasonable. It was certainly a very lively conversation, mostly because of the funny style of Viktor Farcic, the author of the book “DevOps Paradox“.

Unfortunately, there was nothing new to add to what we had already heard during previous sessions: the microservices debate, and the fear of change some professionals exhibit, though they likely consider themselves wise rather than fearful. Maybe the organizers should consider how to avoid this reiteration of topics in further editions of the event.

Data-Driven Scalability and Cost Analysis for Evolutionary Architecture In the Cloud

Between the two debates above, Ian Gorton presented the results of a study that the research team he leads at Northeastern University in Seattle conducted to analyse how cloud costs vary depending on the programming language, and also how those costs depend on the values we set in the scale configuration panels of cloud services.

In my opinion, it was not a surprise that Go was the most performant programming language in Google Cloud, not because both are initiatives born at Google headquarters, but because the Go community has shown it has one, and only one, focus: performance.

Nor was it surprising that the default values suggested by Google Cloud were not in the end the cheapest. That is why there exist companies whose whole business model is to help organizations reduce their cloud bills.

Copyright notice: Featured image taken from PlanetGeek.ch blog. Visit the blog for a full set of other enjoyable pictures.

Global Software Architecture Summit 2019 – episode IV

This is the fourth episode in a series of posts reviewing the first Global Software Architecture Summit that took place in Barcelona on the 10th of October. To see the whole list of episodes, please visit this.

Reactive Architecture Patterns Debate

This second debate was moderated by Jakub Marchwicki (@kubem), and the speakers were David Farley (@davefarley77), Mark Richards (@markrichardssa), Len Bass (@LenBass4), and Christian Ciceri. It had the Reactive architecture style, and incidentally the Reactive Manifesto and its advocacy of asynchronous procedures, as its key topic, often drifting into complexity too.

Actually, I signed the Manifesto 5 years ago (under the name of my company at the time, though), and have been working with asynchronous messaging ever since, whenever no dialogue with humans is required, because it improves performance at the UI level, and that is something that makes immediately obvious sense.
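A minimal sketch of that pattern, with an invented queue interface: the caller gets an immediate answer, and the slow work happens asynchronously behind a message.

```typescript
// Invented for illustration: a tiny abstraction over whatever message broker is in use.
interface Queue {
  publish(topic: string, payload: unknown): Promise<void>;
}

async function confirmOrder(orderId: string, queue: Queue): Promise<{ accepted: true }> {
  // The slow work (billing, shipping, notifications) happens later, elsewhere,
  // driven by consumers of this message.
  await queue.publish("order.confirmed", { orderId, at: new Date().toISOString() });
  return { accepted: true }; // the UI gets its answer right away
}
```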

The Reactive architecture style has become extremely popular: Node.js was asynchronous from scratch, all programming languages got their implementations of asynchronicity in recent years, and several flavours of event-centric architecture styles arose as well. Even a Reactive Foundation showed up, now part of the Linux Foundation. Clearly, async is here to stay.

However, if decoupled transactions, cut into pieces to be handled by autonomous services, were already hard to govern, making them asynchronous turns out to be even harder, as Len Bass repeatedly said during the debate. Are our teams ready to handle this complexity? And more importantly, is this complexity necessary in all the contexts where it is being applied now? Unfortunately, this debate on complexity kept the speakers from exploring other implications, or technologies implementing reactiveness that might not be known to the audience.

The End of Work, book cover

Another fascinating idea dropped by one of the speakers, but not followed up, was this pursuit of humanlessness that technology seems to be engaged in recently. I am afraid people with no technical background are increasingly feeling upset, even scared, by this pursuit, and I do not see people with a technical background (like the speakers, for instance) spending enough time stating clearly that work is not ending. What we seek is the automation of tasks that, by their own nature, do not require understanding to be completed.

For Artificial Intelligence, no matter what its name says, is not turning machines into smart entities: machines can be trained to recognise cats in pictures, but not to know what a cat is. Some people believe that machines might achieve that ability (let’s say true intelligence) in the future, but I do not see this happening soon, because just a few animal species reached that point, and evolution needed thousands of millions of years for them to appear. Beholding tiny algorithms transcending their machine nature and becoming beings is not a spectacle I am expecting to see in my lifetime.

Global Software Architecture Summit 2019 – episode III

ci-cd pipeline picture

This is the third episode in a series of posts reviewing the first Global Software Architecture Summit that took place in Barcelona on the 10th of October. To see the whole list of episodes, please visit this.

Software architecture evolution, what to expect? Panel Discussion

This panel was moderated by Alvaro García (@alvarobiz) and the participants were Eoin Woods (@eoinwoodz), Michael Feathers (@mfeathers), and Ian Gorton. They had an animated conversation, during which the trouble with Microservices came up again.

No one doubts that Microservices is a very demanding architectural style. Everything is easier if done in a monolith than with microservices. Unfortunately, as I pointed out here, monoliths’ aversion to change prevents organizations from evolving, by making that evolution tougher day by day. And there are plenty of reasons for organizations to embrace change: it makes them more competitive, quicker to respond to unexpected shifts in the economy, or in laws and regulations; in a word, it keeps them safe from obsolescence.

So, on the one hand, monoliths are easier and cheaper, but they age us. On the other hand, microservices (as an example of decoupled architectures, or, in a wider sense, of those more complex architecture styles we heard about in Episode II of this series) keep us fresh, though they are tough, and expensive. What to do, then?

As I see them, microservices and monoliths are two opposite poles in a whole spectrum of factually possible adoptions of architecture styles. I mean, it looks reasonable (and I saw it frequently in the past three years) to have 5, 10, or 15 microservices running in the very same ecosystem as a heavier set of coupled functionalities that we may still call a monolith. This is not a mess; this is an implementation in progress, moving step by step towards a more and more efficient ecosystem.

Monoliths and microservices are two poles in a full spectrum of possible architectures

Maybe microservices detractors are afraid of gigantic projects whose purpose is to completely decouple a fully functional monolith into an ecosystem of microservices, all within a timeframe of, let’s say, two or three years. Well, I must agree with them that this kind of transformation is not only too risky but, in most cases, unnecessary. Mixed architectures, lying somewhere in that spectrum I mentioned above, may be the best possible answer.

The panel also emphasized the key importance of experimentation and observation. Again, they advocated the adoption of the Scientific Method (come up with a theory, run experiments, gather information, and obtain useful knowledge), in a never-ending process a colleague of mine named “trial and error”. But this name does not feel right to me, because I am afraid it misses the relevant part of the point and makes it something trivial, when it is not.

I’d rather call this Continuous Change, as a practice and as a third companion to the Continuous Integration/Continuous Delivery duo. To me, Continuous Change would be a more general approach to the creation of software, whereas CI/CD would be a particular application of it. For CI/CD, as in the famous diagram modeled on the Möbius strip, tightly integrates the creation of the code with its deployment, in a way that, for instance, makes testing in production an obvious consequence.

But they are not. The Möbius strip is one possible topology to picture the production of software, among others. In this topology velocity prevails, in a way that the bigger the number of iterations in a given period, the better. Actually, the CI/CD cycle prevents experimentation unless it is reducible enough to be feature flagged, a mechanism whose purpose is to make changes as atomic as possible. Taking this approach further is how even committing in small chunks comes to look self-evident.

Unfortunately, I was not able to find a better idea to integrate experiments in the common CI/CD cycle in a more satisfactory way. A topic for another day, I am afraid.

Copyright notice: Featured image taken from the Linux Juggernaut web site without permission.

Global Software Architecture Summit 2019 – review series

GSAS 2019 logo

Thanks to my employer, I had the opportunity to attend the first Global Software Architecture Summit, which took place in Barcelona on the 10th of October. The event, organized by Apiumhub, consisted of a one-day track of speeches, debates, and one panel, as well as the usual breaks to mingle with the community, all around the topic of Software Architecture. Eight different sessions, provided overall by a bunch of illuminating, and funny, speakers who brought a huge level of expertise to the stage, and that ended up being an extremely inspiring, though exhausting, day for me.

So, I decided to share here the ideas I came up with during the event, hoping that some of you might eventually feel that inspiration too, at least as much of it as I can recreate. The problem is, there was so much to say, and so many paths to follow starting from any of the topics dropped by the speakers during the event, that it eventually became obvious that I should cut the whole review into smaller, digestible pieces.

This is how this series came to light. The episodes are the following:

Summary

Some of the episodes are longer than others, whereas some topics show up repeatedly in some, or all, of the sessions. Should I list the main topics, I’d clearly separate the incidental from the fundamental: microservices, Kubernetes, and complexity as incidental; and the high value driven by experimentation as fundamental.

Certainly, overall the event was highly inspiring, and I left the place with plenty of ideas that, I am sure, are worth spending some time exploring.

As a final note, my advice to everyone involved in producing software: attend events, mingle with the community, learn, and enjoy!

Copyright notice: Featured image taken from the GSAS Twitter account without permission.

Global Software Architecture Summit 2019 – episode II

A depiction of observability by Haproxy

This is the second episode in a series of posts reviewing the first Global Software Architecture Summit that took place in Barcelona on the 10th of October. To see the whole list of episodes, please visit this.

Applying architectural principles, processes, and tools

Len Bass

The second speaker was Len Bass (@LenBass4), who started by giving us what seemed to me a first-day lesson in college on the 5 key characteristics a software architect must be proficient in, but when the time for questions arrived, dropped plenty of valuable thoughts.

One of the main vectors of change in how software is created has been the recent incursion of Operations into it. Traditionally, developers cared very little about how the code they produced was actually run. Since DevOps, the role of developers has evolved quite quickly, and right now a developer who willingly ignores the operational part of the software easily becomes (or should become) unemployed.

Which, incidentally, leads to another of the main leitmotifs of the whole event: what level of architectural complexity should we accept because of that invasion of Operations into software? Illustrated, of course, with the never-ending monolith versus microservices debate. Some of the speakers in the debates were against the adoption of microservices and others were in favour of them, if and only if microservices fit the problem and the context. The latter seems a pretty cautious position to me, but it was definitely not conservative enough for those who complain every time things evolve faster than they can follow.

We will have time to come back to this. Before that, I would like to note three other thoughts from Len Bass. One was the idea that it is very rare for software crafters to participate in the creation of an application that is not similar to other applications already created. An example of a true novelty is Amazon’s invention of a bunch of new technologies, tools, and procedures in its quest for infrastructure scaling. In general, though, we should keep ourselves aware of what other people are doing all the time, as well as being open and communicative so that others may also benefit from our work.

A second thought is that log files are rather more important than we think: log files mainly exist to help first responders in case of an emergency. But they also help us gather deeply valuable information about our software’s behaviour. Which reminded me of Observability, a field that fascinates me for its promise of providing us with the ability to query the system at any time, and which depends on log files to deliver its magic.
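A small illustration of why this matters (the field names are invented): the difference between a log line written for a human first responder and a structured event that can be queried afterwards.

```typescript
// A bare log line: helps a first responder, but is hard to query later.
console.log("order 4711 failed");

// A structured event: the same fact, but every dimension can be filtered and aggregated.
console.log(JSON.stringify({
  event: "order_failed",
  orderId: "4711",
  customerId: "C-42",
  durationMs: 834,
  error: "payment_timeout",
  timestamp: new Date().toISOString(),
}));
```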

For those also interested in Observability, I highly recommend this introductory page on the Honeycomb website (I have no affiliation with this particular company, except that I was completely mesmerized by a speech of their former CEO, and current CTO, Charity Majors, back in 2018 in London). There is a podcast that I follow too: o11ycast. And a conference, O11ycon, which happened in San Francisco last year. Slides and videos are free to access online.

A third idea from Len Bass that shocked me was that Microservices potentially drive technology inconsistency into our application ecosystem. This is a common criticism, though I doubt that it is fair. It seems reasonable to assume that there is a limit to how much diversity organizations can handle. Where this limit lies, and why the blame is put on microservices when they merely expose this limit, are the two factors that look unfair to me.

Bearing in mind that Microservices allow organizations to isolate disparate technologies as long as they communicate via standard protocols, like HTTP or asynchronous messaging (by the way, another shocking moment was when someone in the audience asked a question in a way that made me think this person believed microservices are asynchronous per se, which is wrong), the potential inconsistency in adopting those disparate technologies cannot come from them as an architecture style. For it is completely possible, even reasonable in some contexts, to adopt microservices using just one technology.

I believe that microservices allow us to pick the technology that best fits each case, but this is my belief, not something that microservices force organizations to do. And I even accept that this is an ideal: in practice, resources are scarce, and organizations must focus on fewer than 5 technologies. I worked in a company where even more than 5 technologies were in use, and everyone was aware of the reasons why, but I neither recommend that, nor see it happening a lot. On the contrary, my advice would be to focus on the operational, and the infrastructure, issues, and get skilled at that, before bringing in additional diversity.

In my opinion, inconsistency comes out of us making bad decisions, not from the adoption of any particular architecture style. Microservices, like any other architecture style, are neutral, optional. Pick them if they fit your needs; simply ignore them if not.

Copyright notice: Featured picture taken from the Haproxy website without permission