The fight over microservices
The debate between those who advocate keeping monolithic architectures and those who uphold decoupled architectures, such as the popular microservices, flares up and dies down time and again, with no sign of abating soon. And the pros and cons of monoliths (or, in reverse, of microservices) keep mounting on both sides, for it is a debate in which plenty of different perspectives (purely architectural, economic, organizational, even social) converge.
Should we keep our current monolith just because our team is not skilled enough to deal with microservices, and we are short of resources either to train them (for years!) or to fire them and form a brand new team?
Should we embrace microservices because they are what everyone is talking about, because they are the newest thing? Or because microservices are what the biggest companies, with their almost endless resources, are doing? Bearing in mind that those very companies make enormous profits, and gain market influence, by selling us tools designed for handling microservices, this seems questionable at the very least.
As a matter of fact, I have always been an advocate for change in all areas of my life; and since monoliths were the first of the two software architectures to come to life, I would say I am even naively inclined by nature in favor of decoupled architectures, just because they represent a change to the status quo.
However, I am a software engineer by profession, not by inclination. Actually, I obtained my degree in Physics, not Engineering, so my mindset is rooted in science and the scientific method. In other words, I am used to doubting everything, all the time, and to an ongoing search for evidence.
And evidence, for or against monoliths, is what I have been struggling to find for months. What follows is what I learned in that search.
Change as a motive
After more than three years working intensively, and almost exclusively, with decoupled architectures, I have come to a conclusion about a key feature of their opposite, monoliths: everything is easier with monoliths, except making them change. What an annoying trait to exhibit, one that hangs over our decisions every time we must pick a team and produce code to face the tough challenges that organizations need to overcome in order to thrive, keep going, or plainly survive.
The defining feature of monoliths is their aversion to change
Our community is full of stories about how hard monoliths are to change. Even refactors look difficult: to write, to test, and to deploy. Think about it for a moment: is that a reasonable tradeoff for keeping things simple?
On the contrary, the ability to face change is crucial leverage for businesses, and history is full of human enterprises of all kinds that died precisely because of their inability to handle change: environmental, social, technological; pick the area you like most and watch the examples pile up.
To cite a famous example, take Jared Diamond's book, Collapse. In this case, it is not just the book that is illuminating; the foreword the author includes on his webpage (please, do follow the link above) is worth reading too.
Another popular topic is Joseph Schumpeter's idea of Creative Destruction, though I guess it is less appealing to those who have found their jobs destroyed by creative rival companies.
How does change produce value?
So, yes, the ability to facilitate change in software production that deeply decoupled architectures provide is invaluable. At least, as invaluable as history shows it to be in other spheres of human existence. But how is this value delivered in practice?
There are two ways change drives value into organizations. The first one is immediate: by making it quicker and less painful to upgrade to newer versions of supporting software (OSs, databases, programming languages, auxiliary applications); to produce and refactor code; to deploy more often and more quickly; to recover from disasters; and so on.
Also, since everything you might try out is easier to test when the code is less resistant to change, you get used to experimenting more, and more often. And you can measure the outcomes, so you learn more too (more, and at a faster pace, than you would with a monolith), which in turn makes future experiments smarter, even when a result is considered a failure and discarded. And discarding it is no expensive tragedy, precisely because a system that is easy to change requires, by definition, far fewer resources to change.
A system easy to change is cheaper to change too
Marketing teams have been reaping these benefits for years, precisely because the results far outweigh the costs. By decoupling the software, teams who produce code become empowered to do the same.
Actually, software teams have been doing this for years too, for instance via feature flags. However, they are not used to doing it with entirely expendable services. Now that dream has come true: A/B testing need not remain a marketing practice. Software teams can produce, and even test in production, alternative versions of a given autonomous piece of code (a microservice, or a lambda) to measure (instead of presuming in advance) which version performs better, which one is more reliable, or which one takes less time to get up and running, and then get rid of all the others with no regrets: that is how cheap they were to produce.
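To make this concrete, the mechanics of an A/B test over expendable service variants can be sketched in a few lines. This is a minimal, hypothetical illustration (the variant names, the `pick_variant` helper, and the in-process callables are all invented here; in a real system each variant would be a separately deployed microservice or lambda behind a router):

```python
import hashlib

# Hypothetical registry of interchangeable variants of the same service.
# Each variant is just a callable here; in production it would be a
# deployed endpoint, and the router would forward requests to it.
VARIANTS = {
    "checkout-v1": lambda order: f"v1 processed {order}",
    "checkout-v2": lambda order: f"v2 processed {order}",
}

def pick_variant(user_id: str, rollout_pct: int = 20) -> str:
    """Deterministically assign a user to a variant bucket.

    Hashing the user id keeps the assignment stable across requests,
    so each user sees the same version for the whole experiment.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "checkout-v2" if bucket < rollout_pct else "checkout-v1"

def handle(user_id: str, order: str) -> str:
    variant = pick_variant(user_id)
    # In a real experiment, latency and error metrics would be recorded
    # per variant here, so the losing version can later be discarded.
    return VARIANTS[variant](order)
```

The point of the hash-based bucketing is that no state needs to be stored per user, and shifting `rollout_pct` gradually moves traffic from one expendable variant to the other.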
The second outcome of facilitating change takes longer to unfold. Since everything in organizations eventually becomes cultural (which is, by the way, what explains Conway's law), embracing code that is ever easier to change affects the kind of people on your team and the procedures they follow, which in turn affects the kind of solutions you produce. Ultimately, it affects how you thrive.
Years ago I had the opportunity to work as a consultant for organizations that had gotten stuck on obsolete technologies (OS, database, and programming language all on the same metal), precisely because their teams had been hired to work with those technologies. Change was so hard for them that eventually everything became fossilized: the code, the applications, but also the people, and the organization as a whole.
To avoid becoming fossilized, organizations need to accept they must change
Unfortunately, I keep seeing similar attitudes, though not so extreme, right now. Plenty of companies are recruiting developers according to the technologies they can work with already. I understand they are doing this because they believe their priority must be to ensure that their teams deliver code in every sprint; but, what code?
Would it not be more reasonable to recruit people according to how they think, how eager they are to participate in a team effort, or how adaptable they are? To achieve this, organizations need to treat the technologies currently in use as easily replaceable, either by substitution or by coexistence. Remember, the point is not to be changing things all the time, but to avoid keeping anything running as it is just because it would be difficult to replace.
Achieving this is hard. For the sponsors of change, because they must focus on strategy more than on tactics; more on what might be done than on what everyone knows can be done. And for developers too, because they would have to choose broadening their skills over digging deeper into a few, which might make them look less employable in the eyes of conventional organizations.
Ironically, it should be easier for CEOs in comparison: they are supposed to have already learned this, either at college, or by simple observation of the market.
Angkor Wat photograph credit: https://pixabay.com/users/falco-81448/