This is the third episode in a series of posts reviewing the first Global Software Architecture Summit, which took place in Barcelona on the 10th of October. To see the whole list of episodes, please visit this page.
Software architecture evolution, what to expect? Panel Discussion
This panel was moderated by Alvaro García (@alvarobiz) and the participants were Eoin Woods (@eoinwoodz), Michael Feathers (@mfeathers), and Ian Gorton. They had an animated conversation, during which the trouble with microservices came up again.
No one doubts that microservices is a very demanding architectural style. Everything is easier done in a monolith than with microservices. Unfortunately, as I pointed out here, monoliths' aversion to change prevents organizations from evolving, making that evolution tougher day by day. And there are plenty of reasons for organizations to embrace change: it makes them more competitive and quicker to respond to unexpected shifts in the economy or in laws and regulations; in a word, it keeps them safe from obsolescence.
So, on the one hand, monoliths are easier and cheaper, but they age us. On the other hand, microservices (as an example of decoupled architectures or, in a wider sense, of the more complex architectural styles we heard about in Episode II of this series) keep us fresh, though they are tough and expensive. What to do, then?
As I see them, microservices and monoliths are two opposite poles in a whole spectrum of architectural styles that can actually be adopted. I mean, it looks reasonable (and I saw it frequently in the past three years) to have 5, 10, or 15 microservices running in the very same ecosystem as a heavier set of coupled functionalities that we may still call a monolith. This is not a mess; it is an implementation in progress, moving step by step towards an ever more efficient ecosystem.
Monoliths and microservices are two poles in a full spectrum of possible architectures
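To make that mixed picture concrete, here is a minimal sketch, not from the panel, of a strangler-fig style router in Python: a couple of extracted microservices coexist with the remaining monolith, and everything not yet carved out still lands on it. The service names and URLs are hypothetical.

```python
# Minimal sketch of a strangler-fig style edge router: a few extracted
# microservices coexist with the remaining monolith.
# The URLs below are hypothetical placeholders.

MONOLITH_URL = "http://monolith.internal"  # the legacy application

# Capabilities already carved out into dedicated microservices.
EXTRACTED = {
    "/billing": "http://billing.internal",
    "/search": "http://search.internal",
}

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, service_url in EXTRACTED.items():
        if path.startswith(prefix):
            return service_url
    # Everything not yet extracted still lands on the monolith.
    return MONOLITH_URL

if __name__ == "__main__":
    print(route("/billing/invoices/42"))  # -> http://billing.internal
    print(route("/orders/7"))             # -> http://monolith.internal
```

Each newly extracted capability is just one more entry in the routing table, which is exactly the step-by-step progress I am describing.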
Maybe microservices detractors feel afraid of gigantic projects which purpose is to decouple completely a fully functional monolith into an ecosystem of microservices, all in all within a timeframe of, let’s say, two or three years. Well, I must agree with them that this kind of transformation is not only too risky, but, in most cases, unnecessary. Mixed architectures, laying somewhere in that spectrum I mentioned above, may be the best possible answer.
The panel also emphasized the key importance of experimentation and observation. Again, they advocated the adoption of the Scientific Method (come up with a theory, run experiments, gather information, and obtain useful knowledge) in a never-ending process a colleague of mine named “trial and error”. But this name does not feel right to me, because I am afraid it misses the relevant part of the point and makes the practice sound trivial, when it is not.
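As a toy illustration of that loop, and nothing more, the sketch below runs a simulated experiment for a control and a candidate variant and keeps whichever the evidence favours. The metric and the numbers are invented for illustration.

```python
import random

# Toy sketch of the theory -> experiment -> evidence -> knowledge loop.
# The conversion rates below are made up; a real experiment would measure
# live traffic instead of simulating it.

def run_experiment(variant: str, samples: int = 1000) -> float:
    """Stand-in for a real measurement, e.g. conversion rate per variant."""
    base = 0.10 if variant == "control" else 0.12
    hits = sum(random.random() < base for _ in range(samples))
    return hits / samples

control = run_experiment("control")
candidate = run_experiment("candidate")

# The "knowledge" step: keep the change only if the evidence supports it.
if candidate > control:
    print(f"Adopt candidate ({candidate:.3f} vs {control:.3f})")
else:
    print(f"Keep control ({control:.3f} vs {candidate:.3f})")
```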
I’d rather call this Continuous Change, as a practice and as a third companion for the Continuous Integration/Continuous Delivery duo. To me, Continuous Change would be a more general approach to the creation of software, whereas CI/CD would be a particular application of it. For CI/CD, as in the famous diagram modeled on the Möbius strip, tightly integrates the creation of the code with its deployment, in a way that, for instance, makes testing in production look like an obvious consequence.
But it is not. The Möbius strip is one possible topology with which to picture the production of software, among others. In this topology velocity prevails: the bigger the number of iterations in a given period, the better. Actually, the CI/CD cycle prevents experimentation unless the experiment is reducible enough to be feature flagged, a mechanism whose purpose is to make changes as atomic as possible (see the sketch below). Taken further, this approach is how even chunked commits come to look self-evident.
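For what it is worth, here is a minimal sketch of such a feature-flag mechanism, assuming a hypothetical in-memory flag store; real systems would read flags from configuration or a dedicated flag service rather than a module-level dict.

```python
# Minimal sketch of the feature-flag mechanism mentioned above: the change
# ships dark, and the flag decides at runtime which code path runs.
# The flag name and the lookup are hypothetical.

FLAGS = {"new_checkout": False}  # toggled from config, not from a deploy

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def checkout(order_id: int) -> str:
    if is_enabled("new_checkout"):
        return f"order {order_id}: experimental checkout path"
    return f"order {order_id}: stable checkout path"

print(checkout(42))           # stable path while the flag is off
FLAGS["new_checkout"] = True  # flip the flag without redeploying
print(checkout(42))           # experimental path is now live
```

The experimental code is merged and deployed while dark, which is what keeps each commit atomic: the risky part is flipping a flag, not shipping a release.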
Unfortunately, I was not able to find a more satisfactory way to integrate experiments into the common CI/CD cycle. A topic for another day, I am afraid.
Copyright notice: Featured image taken from the Linux Juggernaut web site without permission.