Coding by request
Even though it is not a common topic of discussion in our community, we would all agree that software is created under the implicit assumption that it is going to last forever. Engineered as the definitive solution to a particular feature demand, or as the fix to a reported bug, once our code gets deployed to Production we never expect to revisit it (new bugs aside), unless additional requests arise, or to copy some of its lines and paste them somewhere else.
Of course, some code is more prone to change than other code. For instance, public APIs and integration code tend to stay stable, as do Domain entities and services, whereas Application services (use cases) and User Interfaces may change very often. But overall, it is assumed that code will remain still unless there is a reason to change it. I dare say that staying stable while giving service is the Nirvana of code.
There are pretty obvious reasons for this: to optimize the cost-benefit ratio, to invest our always limited resources in new things, to minimize risks due to human error, etc. The old “if it ain’t broke, don’t fix it”.
The Nirvana of code is to stay unchanged and continue giving service
However, all tactics have trade-offs. For instance, consider this true story taken from my own experience on the job.
Some months ago, in 2019, we were developing new software with a feature that required integration with a third-party application. This integration works through the interchange of messages written in a proprietary data format. As files, those messages are not formatted in any of the standard file formats commonly in use, such as JSON, YAML, or XML. Not even CSV. This proprietary format defines content via two mechanisms: the column number and the line prefix. For instance, a figure stands for a tax amount if it is located in any line with prefix T and aligned at column 42, whereas it stands for a net discount if it appears aligned at column 50 in the same line (or in another line with the same prefix). Files formatted this way do not include any header line either.
Though it looks old-fashioned, this approach can work perfectly well, and has actually been working for 20 years. But it exhibits a real fault: meaning is not included, and must be inferred from elsewhere, beyond the file itself. When parsing files formatted this way, it is not possible to say what any given value stands for, only where it sits (horizontal and vertical position, plus the line prefix), so the code does not map values to meaning; it maps locations to meaning. Since we have self-descriptive file formats at our disposal, picking a format that explicitly avoids self-description looks like quite an anti-pattern to me: an easily remediable weakness they decided to keep.
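To make that "locations to meaning" mapping concrete, here is a minimal sketch in Python. The real format is proprietary, so the prefixes, columns, and field names below are invented for illustration; only the mechanism (a lookup table from line prefix and column position to a field name) reflects the description above.

```python
# A hypothetical position-based parser: the meaning is not in the file,
# it lives in this table of (line prefix, start column, width) entries.
LAYOUT = {
    ("T", 42, 10): "tax_amount",
    ("T", 50, 10): "net_discount",
}

def parse_line(line: str) -> dict:
    """Extract named values from one fixed-position line."""
    record = {}
    for (prefix, col, width), field in LAYOUT.items():
        if line.startswith(prefix):
            raw = line[col:col + width].strip()
            if raw:
                record[field] = float(raw)
    return record

# Example line: prefix 'T', tax amount starting at column 42,
# net discount in the columns around 50 (0-indexed here).
line = "T" + " " * 41 + "123.45    99.10"
print(parse_line(line))  # {'tax_amount': 123.45, 'net_discount': 99.1}
```

Notice that nothing in the file itself tells the parser what `123.45` means; delete or reorder the `LAYOUT` table and the values become mute numbers, which is exactly the weakness described above.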
Why would anyone give up widely used, widely supported standards, with literally thousands of free tools available to help developers do their jobs, and instead keep using an anti-pattern, one that was already old-fashioned at least fifteen years ago? And what kind of developer would fit in an organization that voluntarily decides to fossilize its software?
I am sure you have all had experiences similar to the one above, and I am convinced it is legitimate for those of us in charge of creating software to ask ourselves: why are we so eager to keep our code unchanged once deployed? Did we just surrender to the people sitting at the cost management desk? Or do we feel so proud of our work that we truly believe the first operable version of any code is surely going to be the best possible version of it?
Or maybe we just love playing with new toys as much as we can, so we would rather start working on new code, and forget the code just finished, as fast as possible. Or maybe we think that pursuing greater quality prevents us from quickly delivering something usable that meets the business requirements.
Whatever our reasons for accepting, or doing, this, plain observation shows that we end up producing huge structures of code with a shocking resistance to change. From an architectural point of view, we may say that this inertia has its roots in Coupling: there are so many tightly entangled pieces in coupled applications that we would rather not touch them unless strictly necessary.
Certainly, some progress towards decoupling has been rolling out for years. I still remember my first job in this industry, working on an IBM AS/400 with all the pieces (operating system, database, programming language) fused together as one. That level of coupling is pretty uncommon right now. However, it is still a common idea that coupled is easier and for everyone, whereas decoupled applications are for advanced teams, and should only be adopted with justified caution.
In summary, here are the trade-offs:
- Coupled applications are easier to operate, but exhibit high inertia: code is harder to change, and thus very tough, or even unaffordable, to keep updated. It also becomes difficult to learn and read.
- Decoupled applications are harder to operate, but exhibit far lower inertia: code is easier to learn and read, to change, and to keep updated.
Tests Pass = Definition of Done
So, should we keep producing coupled software applications just because that is what we can afford? Before answering this question, there are more clues to disclose apart from architectural styles. For instance, methodology. Currently, almost everyone in our field works under the iterative procedures that became popular after the Agile Manifesto was signed. The Manifesto explicitly advocates for code’s responsiveness to change:
Responding to change over following a plan
And it even insists on the importance of the changeability of code:
Welcome changing requirements, even late in development.
We may say that code is produced under request compliance: code is considered done (and it is assumed it should not change anymore) once the requirements are met. Therefore, code should not change unless requirements change. Ultimately, this is what leads to TDD (Test-Driven Development), for TDD relies on the assumption that the condition “all tests pass” stands for “all the requirements are fulfilled”.
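That equivalence is easy to see in miniature. In the hypothetical sketch below (both the function and its requirement are invented for illustration), the requirement "a discount may never push the total below zero" exists in the codebase only as a test, and the test passing is what gets read as "done":

```python
# Hypothetical domain function together with the test that encodes
# its requirement; under request compliance, green tests mean "done".
def apply_discount(total: float, discount: float) -> float:
    """Requirement: a discount may never push the total below zero."""
    return max(total - discount, 0.0)

def test_discount_never_goes_negative():
    assert apply_discount(10.0, 25.0) == 0.0  # clamped, not -15.0
    assert apply_discount(10.0, 3.0) == 7.0

test_discount_never_goes_negative()  # passes, so the requirement is "fulfilled"
```

The point is not that the test is wrong, but that once it passes, nothing in this way of working invites us to touch `apply_discount` ever again.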
Even though this idea, once implemented, should allow us to change and deploy code as often as necessary, and it truly does, the underlying condition is still there: nothing changes unless a requirement (or a bug) says so. (1)
This way of thinking is not original to our industry; it is part of the baggage we brought from Engineering, understood as a set of practices to guarantee that, within a given context, all kinds of mechanisms (lamps, airplanes, bridges, etc.) will work as we designed them to. This matters when dealing with engines because they are material, which means it would be extremely expensive to change them once they leave the factory. Just imagine a world in which material artifacts such as refrigerators, cars, or even utility infrastructures (water, electricity) changed week after week.
Trouble comes when we realize that software applications are not made of matter, even though they rely on physical machines to be operated and to yield their value. On the contrary, the capacity to be changed as much as we want is an essential property of code, and I believe it is a mistake not to make extensive use of it.
If I am not wrong, and this idea of “provided code meets all requirements (provided all tests pass), our job is done” comes from Engineering, and, as I pointed out just above, engines and software are not alike, should we not think of alternative ways of producing software? Is that even possible? I mean, are there alternative mindsets that might fit better?
I say there may be one: Science.
Science starts precisely where Engineering ends, just outside the boundaries of those safe contexts in which Engineering can fulfill its promised guarantees. For the Scientific Method stands for a continuous challenge of the theories accepted at the time, an ongoing exploration of the lands beyond the contexts where those theories have proved to work, until testing new hypotheses in broader and broader contexts reveals some failure in the theory behind them. This process makes all scientific theories temporary.
In my opinion, Software Creation should move forward from the current methodologies based on request compliance, for they come out of that Engineering mindset reluctant to change items once finished, to adopt this temporary trait common in Science.
There is a key distinction between Software and Science though, for their purposes do not match. Science looks for knowledge, whereas Software Crafting, like Engineering, seeks practical purposes. That is precisely why software by requirement emerged and looks sensible. So what we need is to consider Software Crafting as something new. And the first step towards that end may come out of a combination of the practical sense of Engineering with the valuable addenda of the Scientific Method. Let’s see how.
So the key element in our adoption of the Scientific Method in the creation of software is to consider all our applications as temporary: no matter how successful they may be right now, we must assume that software applications are just temporarily successful. Therefore, we must treat our code as temporary, even if all tests pass, and all requirements are fulfilled.
Assuming that the shining results of our work are plainly ephemeral should not be so hard once we realize that nothing around remains stable either. On the contrary, as we humans do, organizations we produce code for are continuously maneuvering to keep afloat in troubled waters of economic or political crisis, punched by competitors, laws and regulations, social conflicts, and whatever we may imagine. They are not static entities, but fools struggling for brief moments of stability that are nothing more than an illusion.
In this state of permanent flux, there is no guarantee that any option that now seems right, that fulfills its requirements and passes the tests, will remain right for long. Precisely because everything is temporary, no right option stays right forever.
No option stays right forever
Accepting that software is never perfect, never permanent, and never complete should lead us to create software on the premise that every piece of code we make is expendable. Wabi-sabi could be a very cool name for this methodology, though for now I have decided to keep to our community’s traditions and name it Continuous Change.
In summary, Continuous Change stands for treating every piece of software as temporary, no matter that it is working in Production exactly as the requirements dictate, and holds as many procedures as we may invent in order to implement this principle in practice, in our daily work.
When inspiration comes, may it find me working.
How do we know which changes are worth trying out? As in Science, we must produce our code according to its requirements and, once it is up and running with no errors, cross the boundaries of that context and expand it further. This means that we must continuously invent ways to challenge our applications. And there must be a method behind how we prepare, run, and study the results of those challenges, so that we learn something from every one of them.
In short, these challenges must be planned, must be based on a single hypothesis, should be repeatable, and their outputs must be measurable.
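One lightweight way to hold those four properties together is to make every challenge an explicit record. The sketch below is only an illustration of the idea, not a prescribed format; the `Challenge` class, its fields, and the latency example are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class Challenge:
    """A planned experiment: one hypothesis, repeatable, measurable."""
    hypothesis: str  # exactly one claim under test
    plan: str        # how the experiment is run, so it can be repeated
    metric: str      # which measurable output decides the outcome
    results: list = field(default_factory=list)  # one entry per run

    def record(self, value: float) -> None:
        self.results.append(value)

# Hypothetical example: challenging a performance assumption.
c = Challenge(
    hypothesis="p99 latency stays under 200 ms with the cache disabled",
    plan="disable the cache in staging, replay one hour of traffic",
    metric="p99 latency (ms)",
)
c.record(184.0)
c.record(191.5)
print(max(c.results) < 200.0)  # measurable, repeatable verdict: True
```

Writing the hypothesis and plan down before running anything is what keeps the challenge planned and single-purpose; keeping every run in `results` is what makes the outcome measurable rather than anecdotal.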
Nothing yields value unless it is in code, and there is nothing worth coding without motive.
Luckily for us, this is not new: Chaos Engineering has been following this practice of posing and proving hypotheses one at a time for some years already. The only difference is that whereas experiments in Chaos Engineering are limited to failure injection, challenges in Continuous Change may relate to anything the team believes is relevant to check.
We should not assume that some experiments are inherently better to run than others, and there are several independent reasons for this:
- The organization, and the maturity of the team, will surely influence the kind of suggestions they consider worth exploring. With some organizations and teams, this starting point might look quite poor. The good news is that the benefits of working out are incremental: every step makes the next one longer.
- Every experiment, even the ones that fail, teaches us something valuable.
- Inspiration may come from anywhere, not only from inside the organization: a blog post, a book, a podcast, something a speaker said at a conference, a coffee chat with a colleague, etc., might suggest ideas worth trying out.
- And then, there is also the unexpected you might find along the way.
Exploration is always exciting and deeply rewarding. But it comes with its trade-offs too: because it requires building new ways of producing code that make it as easy as possible to change, or replace, Continuous Change is far more demanding than the usual development by request.
As a never-ending story of continuous learning, Continuous Change can be exhausting. But nothing is forbidden for us to learn, so I would say: let us not limit ourselves in advance, and embrace the change.
To add even more pressure against unsolicited changes in code, we also have the “Do not deploy on Fridays” rule: we are so convinced that shit happens, so convinced that there are dwarfs hidden inside our machines waiting for us to lose attention so they can wreak havoc, so scared that we might not be in control, that we would rather stay put. Though, if that is so, why deploy on Thursdays?
I am not going to enter this debate, for I consider it closed already. Just read Charity; she has spent plenty of time explaining it far better than I possibly could. To be clear, let me rephrase this practice this way: we produce hardly changeable code because we hope this will guarantee that it will not fail unless operational conditions change (for instance, a power outage).
Top image credits: this image was downloaded from Pixabay: https://pixabay.com/illustrations/arrows-center-inside-middle-2034023/