We have nothing forbidden to learn

Featured image: Ryoan-ji garden

Coding by request

Even though it is not a common topic of discussion in our community, most of us would agree that software is created under the implicit assumption that it is going to last forever. Engineered as a definitive solution to a particular feature demand, or as a fix to a reported bug, once our code gets deployed into Production we never expect to revisit it (new bugs aside), unless additional requests arise, or we want to copy some lines of it to paste somewhere else.

Of course, some code is more prone to change than other code. For instance, public APIs and integration code tend to stay stable, as Domain entities and services do too, whereas Application services (use cases) and User Interfaces may change very often. But overall it is assumed that code will remain still unless there is a reason to change it. I dare say that staying stable while giving service is the Nirvana of code.

There are pretty obvious reasons for this: to optimize the cost-benefit ratio, to invest our always limited resources in new things, to minimize risks due to human error, etc. The old “if it ain’t broke, don’t fix it”.

The Nirvana of code is to stay unchanged and continue giving service

All tactics have trade-offs, though. For instance, consider this true story taken from my own experience on the job.


Some months ago, back in 2019, we were developing new software with a feature that required integration with a third-party application. This integration works through the exchange of messages written in a proprietary data format. As files, those messages are not formatted in any of the standard file formats commonly in use, such as JSON, YAML, or XML. Not even CSV. This proprietary format defines content via two mechanisms: the column number and the line prefix. For instance, some figures stand for a tax amount if they are located in a line with prefix T and aligned at column 42, whereas they stand for a net discount if they appear aligned at column 50 in the same line (or in another line with the same prefix). Files formatted this way do not include a header line either.

Though it looks old-fashioned, this approach can work perfectly well, and actually has been working for 20 years. But it exhibits a true fault: meaning is not included, and must be inferred from elsewhere, beyond the file itself. When parsing files formatted this way, it is not possible to say what any given value stands for, only where it is located (horizontal and vertical position, plus the line prefix), so the code does not map values to meaning, it maps locations to meaning. Since we have self-descriptive file formats at our disposal, picking a format that explicitly avoids self-description looks like an anti-pattern to me, an easily fixable weakness they decided to keep.
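To make the problem concrete, here is a minimal sketch in Python of what parsing such a format involves. The prefixes, column positions, and field widths below are hypothetical; the real mapping lives outside the file, in documentation or in people’s memory:

```python
# Hypothetical mapping of (line prefix, start column) to meaning.
# Nothing in the file itself tells us any of this.
FIELD_MAP = {
    ("T", 42): ("tax_amount", 8),
    ("T", 50): ("net_discount", 8),
}

def parse_line(line: str) -> dict:
    """Map locations to meaning: the code, not the file, holds the semantics."""
    record = {}
    for (prefix, column), (name, width) in FIELD_MAP.items():
        if line.startswith(prefix):
            raw = line[column:column + width].strip()
            if raw:
                record[name] = raw
    return record
```

Note how the semantics live entirely in FIELD_MAP: change the format, and every consumer holding a copy of that mapping must change too.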

Why would anyone give up widely used, widely supported standards, with literally thousands of free tools available to help developers do their jobs, and instead keep using an anti-pattern, one that was already old-fashioned at least fifteen years ago? And what kind of developer would fit in an organization that voluntarily decides to fossilize its software?


Inertia

I am sure you have all had experiences similar to the one above, and I am convinced it is legitimate for those of us in charge of creating software to ask ourselves: why are we so eager to keep our code unchanged once deployed? Did we just surrender to the people sitting at the cost management desk? Or do we feel so proud of our work that we truly believe the first operable version of any code is surely going to be the best possible version of it?

Or maybe we just love playing with new toys as much as we can, so we would rather start working on new code, and forget the code just finished, as fast as possible. Or maybe we think that pursuing greater quality prevents us from quickly delivering something usable that meets the business requirements.

Whatever our reasons for accepting, or doing, this, plain observation shows that we end up producing huge structures of code with a shocking resistance to change. From an architectural point of view, we may say this inertia sinks its roots in coupling: there are so many tightly entangled pieces in coupled applications that we would rather not touch them unless it is strictly necessary.

Certainly, some progress towards decoupling has been rolling out for years. I still remember my first job in this industry, working with an IBM AS/400 with all the pieces (operating system, database, programming language) fused together into one. That level of coupling is pretty uncommon right now. However, it remains a common idea that coupled applications are easier and for everyone, whereas decoupled applications are for advanced teams, and should only be adopted with justified caution.

In summary, here are the trade-offs:

  • Coupled applications are easier to operate, but exhibit high inertia: the code is harder to change, and so very tough, or even unaffordable, to keep updated. It also becomes difficult to learn and read.
  • Decoupled applications are harder to operate, but exhibit far lower inertia: the code is easier to learn and read, to change, and to keep updated.

Tests Pass = Definition of Done

So, should we keep producing coupled software applications just because that is what we can afford? Before answering this question, there are more clues to examine apart from architectural styles. For instance, methodology. Currently, almost everyone in our field works under the iterative procedures that became popular after the Agile Manifesto was signed. The Manifesto explicitly advocates for code’s responsiveness to change:

Responding to change over following a plan

And it even insists on the importance of the changeability of code:

Welcome changing requirements, even late in development.

We may say that code is produced under request compliance: code is considered done (and it is assumed that it should not change anymore) once the requirements are met. Therefore, code should not change unless requirements change. Eventually, this is what leads to TDD (Test-Driven Development), for TDD relies on the assumption that the condition “all tests pass” stands for “all the requirements are fulfilled”.
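As a minimal illustration of that equivalence (the requirement below is hypothetical), under request compliance the requirement lives in a test, and a passing test is read as a fulfilled requirement:

```python
# Hypothetical requirement: a 10% discount on a 100.00 price yields 90.00.
def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

def test_discount_requirement_is_met():
    # Once this assertion passes, the code counts as "done": under
    # request compliance, nothing calls for changing it anymore.
    assert apply_discount(100.00, 0.10) == 90.00
```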

Even though this idea, once implemented, should allow us to change and deploy code as often as necessary, and it truly does, the underlying condition is still there: nothing changes unless a requirement (or a bug) says so. (1)

This way of thinking is not original to our industry; it is part of the luggage we brought over from Engineering, understood as a set of practices to guarantee that, within a given context, all kinds of mechanisms (lamps, airplanes, bridges, etc.) will work as we designed them to. This is important when dealing with engines because they are material, which means it would be extremely expensive to change them once they leave the factory. Just imagine a world in which material artifacts such as refrigerators, cars, or even utility infrastructures (water, electricity) changed week after week.

Trouble comes when we realize that software applications are not made of matter, even though they rely on physical machines to operate and yield their value. On the contrary, the capacity to be changed as much as we want is an essential property of code, and I believe it is a mistake not to make extensive use of it.

If I am not wrong, and this idea of “provided code meets all requirements (provided all tests pass), our job is done” comes from Engineering, and, as I pointed out just above, engines and software are not alike, should we not think of alternative ways of producing software? Is that even possible? I mean, are there alternative mindsets that might fit better?

I say there may be one: Science.

Science starts precisely where Engineering ends, just outside the boundaries of those safe contexts in which Engineering can fulfill its promised guarantees. For the Scientific Method stands for a continuous challenge to the theories accepted at the time, a continuous exploration of the lands beyond the contexts where those theories proved to work, until testing new hypotheses in broader and broader contexts reveals some failure in the theory behind them. This process makes all scientific theories temporary.

In my opinion, Software Creation should move on from the current methodologies based on request compliance, for they come out of that Engineering mindset reluctant to change items once finished, and adopt this temporary trait common to Science.

There is a key distinction between Software and Science, though, for their purposes do not match. Science looks for knowledge, whereas Software Crafting, like Engineering, seeks practical ends. That is precisely why software by requirement emerged and looks sensible. So what we need is to consider Software Crafting as something new. And the first step towards that end may come from combining the practical sense of Engineering with the valuable addenda of the Scientific Method. Let’s see how.

Wabi-sabi

So the key element in our adoption of the Scientific Method in the creation of software is to consider all our applications as temporary: no matter how successful they may be right now, we must assume that software applications are only temporarily successful. Therefore, we must treat our code as temporary, even if all tests pass and all requirements are fulfilled.

Assuming that the shining results of our work are plainly ephemeral should not be so hard once we realize that nothing around us remains stable either. On the contrary, just as we humans do, the organizations we produce code for are continuously maneuvering to stay afloat in the troubled waters of economic or political crises, punched by competitors, laws and regulations, social conflicts, and whatever else we may imagine. They are not static entities, but fools struggling for brief moments of stability that are nothing more than an illusion.

In this state of permanent flux, there is no guarantee that any option that now seems right, that fulfills its requirements and passes the tests, will stay right for long. Precisely because everything is temporary, no right option stays right forever.

No option stays right forever

Japanese aesthetics has a term for the acceptance that nothing is perfect, nothing is permanent, and nothing is complete: wabi-sabi or 侘寂.

Accepting that software is never perfect, never permanent, and never complete should lead us to create software on the premise that every piece of code we write is expendable. Wabi-sabi could be a very cool name for this methodology, though for now I have decided to keep to our community’s traditions and name it Continuous Change.

In summary, Continuous Change stands for treating every piece of software as temporary, no matter how well it is working, as requirements dictate, in Production, and holds as many procedures as we may invent in order to implement this principle in practice, in our daily work.

Challenges

When inspiration comes, let it find me working.

Pablo Picasso

How do we know which changes are worth trying out? As in Science, we must produce our code according to its requirements and, once it is up and running with no errors, cross the boundaries of that context and expand it farther. This means that we must continuously invent ways to challenge our applications. And there must be a method behind how we prepare, run, and study the results of those challenges, so that we learn something from every one of them.

In short, these challenges must be planned, must be based on a single hypothesis, should be repeatable, and their outputs must be measurable.

Nothing yields value unless it is in code, and there is nothing worth coding without motive. 

Luckily for us, this is not new: Chaos Engineering has been following this practice of proposing and testing hypotheses one at a time for some years already. The only difference is that whereas experiments in Chaos Engineering are limited to failure injection, challenges in Continuous Change may relate to anything the team believes is relevant to check.
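To give an idea of what this might look like in practice, here is a minimal sketch of a challenge definition satisfying the four properties above (planned, single hypothesis, repeatable, measurable). All field names and example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """One challenge: planned, single-hypothesis, repeatable, measurable."""
    name: str
    hypothesis: str   # exactly one falsifiable statement
    procedure: str    # how to run it, so that it can be repeated
    metric: str       # what we measure to accept or reject the hypothesis
    baseline: float   # the value observed before the challenge
    threshold: float  # the value that would falsify the hypothesis

# A hypothetical challenge that goes beyond failure injection:
challenge = Challenge(
    name="swap-serialization-format",
    hypothesis="Replacing the positional format with JSON does not "
               "increase end-to-end processing time by more than 5%",
    procedure="Replay one day of production messages through both parsers",
    metric="p95 processing time (ms)",
    baseline=120.0,
    threshold=126.0,
)
```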

We should not assume that some experiments are inherently better to run than others, and there are several independent reasons for this:

  • The organization, and the maturity of the team, will surely shape the kind of suggestions they consider worth exploring. In some organizations and teams, this starting point might look quite poor. The good news is that the benefits of working out are incremental: every step makes the next one longer.
  • Every experiment, even the ones that fail, teaches us something valuable.
  • Inspiration may come from anywhere, not only from inside the organization: a blog post, a book, a podcast, something a speaker said at a conference, a coffee chat with a colleague, etc., might suggest ideas worth trying out.
  • And then there is also the unexpected you might find along the way.

Exploration is always exciting and deeply rewarding. But it comes with its trade-offs too: because it requires building new ways of producing code that make it as easy to change, or replace, as possible, Continuous Change is far more demanding than the usual development by request.

As a never-ending story of continuous learning, Continuous Change can be exhausting. But we have nothing forbidden to learn, so I would say let us not limit ourselves in advance, and embrace the change.

 

NOTE 1:

To add even more pressure against unsolicited changes in code, we also have the “Do not deploy on Fridays” rule: we are so convinced that shit happens, so convinced that there are dwarfs hidden inside our machines waiting for us to lose attention so they can wreak havoc, so scared that we might not be in control, that we would rather stay put. Though, if that is so, why deploy on Thursdays?

I am not going to enter this debate, for I consider it already closed. Just read Charity Majors; she has spent plenty of time explaining it much better than I possibly could. To be clear, let me rephrase this practice this way: we produce hardly changeable code because we hope this will guarantee that it will not fail unless operational conditions change (for instance, a power outage).

 

Top image credits: this image was downloaded from Pixabay: https://pixabay.com/illustrations/arrows-center-inside-middle-2034023/

Monoliths are averse to change

Featured image: Angkor Wat view

The fight over microservices

The debate between those who advocate keeping monolithic architectures and those who uphold decoupled architectures, such as the popular microservices, heats and cools time and again with no sign of remission any time soon. And the pros and cons of monoliths (or, in reverse, of microservices) keep mounting on both sides, for it is a debate in which plenty of different perspectives (purely architectural, economic, organizational, even social) converge.

Should we keep our current monolith just because our team is not skilled enough to deal with microservices, and we are short of the resources either to train them (for years!), or to fire them and form a brand new team?

Should we embrace microservices because they are what everyone is talking about, because they are the newest thing? Or because microservices are what the biggest companies, with their almost endless resources, are doing? Bearing in mind that those very companies make enormous profits, and gain market influence, by selling us tools designed for handling microservices, this seems at least questionable.

As a matter of fact, I have always been an advocate for change, in all areas of my life; so, monoliths being the first of the two software architectures to come to life, I would say I am even naively inclined by nature in favor of decoupled software architectures, just because they stand for a change in the status quo.

However, I am a software engineer by profession, not by inclination. Actually, I obtained my degree in Physics, not Engineering, and so my mindset is settled around Science and the scientific method. In other words, I am used to doubting everything, all the time, and to an ongoing search for evidence.

And evidence, for or against monoliths, is what I have been struggling to find for months. What follows is what I learned in my search.

Change as a motive

After more than three years working intensively, and almost exclusively, with decoupled architectures, I have arrived at a conclusion about a key feature of their opposite, monoliths: everything is easier with monoliths, except making them change. Such an annoying feature to exhibit, one that hangs over our decisions every time we must pick a team and produce some code to face the tough challenges that organizations often need to overcome in order to thrive, keep going, or plainly survive.

The defining feature of monoliths is their aversion to change

Our community is full of stories about how hard it is to make monoliths change. Even refactors look difficult: to make, to test, and to deploy. Think about it for a moment: is that a reasonable trade-off for keeping things simple?

On the contrary, facing change is a crucial lever for businesses, and history is full of human enterprises of all kinds that died just because of their inability to handle changes: environmental, social, technological; pick the area you like most and watch the examples pile up.

Jared Diamond’s Collapse

Just to cite a famous example, take Jared Diamond’s book, Collapse. In this case, not only is the book illuminating; the foreword the author includes on his webpage (please, do follow the link above) is worth reading too.

Another popular topic is Joseph Schumpeter’s idea of Creative Destruction. Though I guess this would be less appealing to those who found their jobs destroyed by creative rival companies.

How does change produce value?

So, yes, this ability to facilitate change in software production that deeply decoupled architectures provide is invaluable. At least as invaluable as history shows it to be in other planes of human existence. But how is this value delivered in practice?

There are two ways change drives value into organizations. The first one is immediate: by facilitating (i.e., making quicker and less painful) the upgrade to newer versions of supporting software (OSs, databases, programming languages, auxiliary applications); the production, and refactoring, of code; more frequent and quicker deployments; recovery from disasters; etc.

Also, since everything you might try out is easier to test when the code is less reluctant to change, you get used to trying out more, and more often. And you can measure the outcome, so you learn more too (more, and at a faster pace, than you would with a monolith). This also makes future tests smarter, even when the output is considered a failure and discarded. Which, by the way, is not a tragedy of expensive loss, precisely because a system that is easy to change requires, by definition, far fewer resources to change as well.

A system easy to change is cheaper to change too

Marketing teams have been reaping benefits from this for years, precisely because the results far outweigh the costs. By decoupling the software, teams who produce code become empowered to do the same.

Actually, software teams have been doing this for years too, for instance via feature flags. However, they are not used to doing it with entirely expendable services. But now this dream has come true: A/B testing need not be a common practice only in marketing; software teams may also produce, and even test in production, alternative versions of a given autonomous piece of code (a microservice, or a lambda) to measure (instead of presuming in advance) which version performs better, which one is more reliable, or which one takes less time to get up and running. And then get rid of all the others with no regrets: they were so cheap to produce.
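As a sketch of the mechanics (the service names and traffic shares below are hypothetical), routing production traffic between expendable versions and measuring the outcome can be as simple as:

```python
import random
import time

# Hypothetical variants of the same autonomous service, each with the
# share of production traffic it receives while the experiment runs.
VARIANTS = {"checkout-v1": 0.9, "checkout-v2": 0.1}

def pick_variant() -> str:
    """Choose a variant for this request according to its traffic share."""
    r = random.random()
    cumulative = 0.0
    for name, share in VARIANTS.items():
        cumulative += share
        if r < cumulative:
            return name
    return next(iter(VARIANTS))  # guard against floating-point rounding

def handle_request(payload: dict) -> str:
    variant = pick_variant()
    started = time.monotonic()
    # ... dispatch the payload to the chosen variant here ...
    elapsed = time.monotonic() - started
    print(variant, elapsed)  # stand-in for a real metrics sink
    return variant
```

Once the measurements are in, the losing variants can be discarded with no regrets, precisely because they were so cheap to produce.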

The second outcome of facilitating change takes more time to unfold. Since everything in organizations becomes cultural (which is, by the way, what explains Conway’s law), embracing code that is easier and easier to change will impact the kind of people on your team and the procedures they follow, which consequently will impact the kind of solutions you produce. Ultimately, it will impact how you thrive.

Years ago I had the opportunity to work as a consultant for organizations that had gotten stuck on expired technologies (OS, database, and programming language all on the same metal), precisely because their teams had been picked to work with those technologies. It was so tough for them to change that eventually everything became fossilized: the code, the applications, but also the people, and the overall organization.

To avoid becoming fossilized, organizations need to accept they must change

Unfortunately, I keep seeing similar attitudes, though not so extreme, right now. Plenty of companies recruit developers according to the technologies they can already work with. I understand they do this because they believe their priority must be to ensure that their teams deliver code every sprint; but what code?

Would it not be more reasonable to recruit people according to how they think, how eager they are to participate in a team effort, or how adaptable they are? To achieve this, organizations need to treat the technologies currently in use as easily replaceable. Easily changeable, either by substitution or by coexistence. Remember, the point is not to change things all the time, but to avoid keeping anything running as it is just because it would be difficult to replace.

Achieving this is hard. For the sponsors of change, because they must focus on strategy more than on tactics; more on what might be done than on what everyone knows can be done. And for developers too, for they should choose expanding their skills over digging deeper into a few, which would likely make them look less employable in the eyes of conventional organizations.

Ironically, it should be easier for CEOs in comparison: they are supposed to have learned this already, either at college, or by simple observation of the market.

Angkor Wat photograph credit: https://pixabay.com/users/falco-81448/