Szymon Kulec, someone I don’t know personally or digitally, tweeted about having his mind blown by an article from Simon Wardley on AI-assisted software development and, incidentally, Serverless.
The article in question is this, Why the fuss about conversational programming?
Please go check it out before reading on.
Spellbound after my first reading, I found myself hesitating and finally disagreeing with some of Wardley’s arguments and conclusions. Not that I fully disagree with him, but enough to make it worth writing about.
As with all my blog posts, I tend to write more for the sake of my own clarity of mind than anything else. I am publishing these comments for those happy few who read the aforementioned blog post and dislike, as much as I do, his positioning as Moses descending with the Ten Commandments received from God.
Besides, I think it can be refreshing to have a second view on the topic.
At the beginning of his post on Conversational Programming (we’ll define this concept later), Wardley includes a reference to a prior blog post of his in which he talked about Serverless. This post, Why the fuss about Serverless?, is also of the highest interest to software architects and engineers, so I’ll follow the author’s recommendation and dive deep into it first.
Wardley sets the starting point of Serverless computing in software architectures built using autonomous components coordinated somehow, or, as we may say in more abstract terms, underpinned by the Principle of Separation of Concerns:
I’m also going to use the current popular terms like composable architecture (old skool was componentisation, they are the same thing) which are all derived from the ideas of compositionality — the ability to break down into and build with components.
Simon Wardley, “Why the fuss about conversational programming? | by swardley | Medium“
However, soon it becomes apparent that he is thinking more in the practical terms of running software than in its architecture.
In the serverless world you don’t care about underlying infrastructure.
Simon Wardley, “Why the fuss about conversational programming? | by swardley | Medium“
This is the promise of Serverless, its reason to be. Its goal. As a goal, it is legitimate, whether you follow it or not.
Today, we use the term micro services to describe this separation of functions and provision as web services. We’re moving away from the monolith program containing all the functions to a world of separated and discrete functions. A utility platform just enables this and abstracts the whole underlying process from the developer.
Simon Wardley, “Why the Fuss About Serverless? | HackerNoon“
There is an intent behind the execution of any application. An intent that represents in software (i.e., in the Codomain) a Domain feature some user wants fulfilled. In Serverless, this intent is allegedly implemented as the ultimate result of executing several functions.
The nature of the feature itself would supposedly make no difference. Even though this might be the case in general, I tend to believe there is an operational upper limit to this extreme granularity: Serverless becomes simply unaffordable in terms of the longer response times it yields when many functions in the cloud must be called, and coordinated, to produce an answer.
Thanks to asynchronicity, i.e., the fact that we can forget about higher response times because there is no user waiting for a response, this issue can often be avoided. Otherwise, acceptable response times might be achieved through a supreme effort in infrastructure design (caches, redundancy, edge computing practices, etc.). Overall, my opinion is that these efforts would eclipse the benefits we expect from adopting Serverless.
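To make that operational limit concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers are illustrative assumptions (not measurements from any real provider): an in-process call costs roughly microseconds, while every hop to a remote function pays a full network round trip.

```python
# Toy latency model: one user intent fans out to N function calls.
# The latency figures below are assumptions for illustration only.

IN_PROCESS_CALL_MS = 0.001   # a function call inside a monolith (assumed)
NETWORK_ROUND_TRIP_MS = 30   # one cloud round trip (assumed)

def response_time_ms(n_functions: int, per_call_overhead_ms: float) -> float:
    """Total latency when n_functions are called sequentially."""
    return n_functions * per_call_overhead_ms

monolith = response_time_ms(20, IN_PROCESS_CALL_MS)
serverless = response_time_ms(20, NETWORK_ROUND_TRIP_MS)

print(f"monolith:   {monolith:.3f} ms")
print(f"serverless: {serverless:.1f} ms")  # 600.0 ms for 20 chained calls
```

The model ignores cold starts and coordination overhead, which only make the gap wider; it is the sequential chaining of network hops, not the work inside each function, that dominates.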
Beyond this operational upper limit in response times, I think there is also a high complexity involved in Serverless architectures that Wardley did not mention. Distributed software applications are more convenient than monolithic ones for several reasons we might decide matter more than the simplicity a monolith brings. But the cost of coordinating components in a distributed application is always far higher than in a monolith, where everything is one call away.
There are two common strategies to implement this coordination: orchestration, which involves an orchestrator or central coordinator (a.k.a. a bottleneck) that organizes the calls to the components; and choreography, where there is no central coordinator and the processes manage themselves. As you may imagine, choreography is far more complicated to implement than orchestration, and orchestration in turn is far more complicated than coordination within a monolith.
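The two styles can be sketched in a few lines of Python. This is a deliberately simplified model with made-up step names (reserving stock, charging a card, shipping): orchestration puts the whole flow in one visible function, while choreography scatters it across event handlers that nobody owns end to end.

```python
# --- Orchestration: a central coordinator knows the whole flow. ---
def reserve_stock(order): return {**order, "stock": "reserved"}
def charge_card(order):   return {**order, "paid": True}
def ship(order):          return {**order, "shipped": True}

def orchestrator(order):
    # One place owns the sequence: easy to read, but a single bottleneck.
    order = reserve_stock(order)
    order = charge_card(order)
    return ship(order)

# --- Choreography: components react to events; nobody owns the flow. ---
handlers = {}  # event name -> list of subscribed handlers

def subscribe(event, handler):
    handlers.setdefault(event, []).append(handler)

def publish(event, payload):
    for handler in handlers.get(event, []):
        handler(payload)

log = []
subscribe("order_placed",   lambda o: (log.append("stock reserved"),
                                       publish("stock_reserved", o)))
subscribe("stock_reserved", lambda o: (log.append("card charged"),
                                       publish("card_charged", o)))
subscribe("card_charged",   lambda o: log.append("shipped"))

publish("order_placed", {"id": 1})
print(log)  # ['stock reserved', 'card charged', 'shipped']
```

Both produce the same business outcome, but in the choreographed version the overall sequence exists only implicitly, spread over the subscriptions, which is exactly why it is harder to reason about, debug, and evolve.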
I will not say we are in a zero-sum game, because depending on the context one option may beat the others in all that matters. But no option would prevail in all contexts possible. Unless I am wrong, Simon Wardley believes the choreography of Serverless components would, and that we must embrace its supremacy.
Replicating Zimki without being aware it ever existed
Simon Wardley ran a company that implemented Serverless even before AWS had been born. Since the word had not been invented yet, they called their product Zimki. There is a lot of information about this company’s offering in Why the Fuss About Serverless?. For example, its value proposition:
It’s all about utility platforms where you just code, where billing is as granular as possible (e.g. down to the function) and you don’t give two hoots about “yak shaving” (pointless tasks like capacity planning or racking servers etc).
Simon Wardley, “Why the Fuss About Serverless? | HackerNoon“
The billing system is a key feature of his idea of Serverless. However, as I said above, I strongly suspect the communication costs among components ruin it all. All these network and configuration costs are simply ignored, as though they were negligible or did not even exist. I guess anyone who has had to check bills from cloud providers has noticed that these costs are not negligible at all.
Even so, Wardley insists on cost to support his idea of Serverless.
Billing by the function not only enables me to see what is being used but also to quickly identify costly areas of my program. I would often find that one function was causing the majority of the cost because of the way I had coded it. My way of retrieving trades in my program was literally killing me with cost. I could see it, I could quickly direct investment into improving that one costly function and reduce the overall cost.
Simon Wardley, “Why the Fuss About Serverless? | HackerNoon“
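The kind of per-function metering Wardley describes is easy to sketch. Below is a toy Python version, not Zimki’s actual mechanism: a decorator accumulates wall-clock time per function, and a made-up price per millisecond turns that into a "bill" that immediately points at the expensive function (here a stand-in named `retrieve_trades`, echoing his example).

```python
import time
from collections import defaultdict

PRICE_PER_MS = 0.000002   # made-up tariff, for illustration only
usage_ms = defaultdict(float)

def metered(fn):
    """Accumulate wall-clock milliseconds per function name."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            usage_ms[fn.__name__] += (time.perf_counter() - start) * 1000
    return wrapper

@metered
def retrieve_trades():
    time.sleep(0.05)   # pretend this is the badly coded, costly function

@metered
def render_report():
    time.sleep(0.001)

for _ in range(3):
    retrieve_trades()
    render_report()

# The "bill", sorted by cost, singles out the function to optimize.
for name, ms in sorted(usage_ms.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ms:.0f} ms -> ${ms * PRICE_PER_MS:.6f}")
```

This is the genuinely attractive part of function-level billing: cost becomes a profiling signal. My objection is not to the signal itself, but to the network and configuration costs that never show up in it.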
Do I find Wardley’s view appealing, albeit fairly reductionist and perhaps unappealing to software engineers? Well, quite a lot. Geovanny Vega, a friend of mine, and I envisioned something pretty similar during COVID. We called it Kumo (cloud, in Japanese). The business idea was simple: a purely cloud-based platform of third-party functions, built upon tools for everyone to publish and monetize them.
We would provide a discovery engine for clients to find functions fitting their needs, and a billing system for function providers to get their share after ours had been deducted.
Reading about Zimki this week, it became crystal clear to me that we were almost 20 years late with Kumo. Everything we thought was already in Zimki, including the discovery system:
That assumes someone has the sense to build a discovery mechanism such as a service register.
Simon Wardley, “Why the Fuss About Serverless? | HackerNoon“
None of this matters now, for my friend and I followed separate paths. After revisiting these ideas, though, I keep thinking that a scenario of multiple computing providers offering function discovery and billing (besides other value-added services) would be far better than depending on a few big utility companies.
Incidentally, one of my ideas with Kumo was to give developers and companies in developing countries an opportunity to claim their share of the new cloud economy that we in the richest countries on Earth take for granted. I don’t foresee utility companies ever doing that.
Overall, in disagreement with Simon Wardley, I see Serverless as a great, complementary architecture, but not the Next Big Thing that kills all others by turning them démodé.
After this long detour, let’s move to AI now.
AI and the emergence of conversational programming
The post about Conversational Programming, which we can define as AI-assisted software development, prompts far less disagreement from me than the piece about Serverless. A world in which developers explain what they want to an AI assistant is clearly the future.
The rapid evolution of large language models towards more of a commodity service will enable more conversational styles of programming. If you think this is science fiction then an example of this was provided at AWS RE:Invent in 2019 by Aleksandar Simovic. This doesn’t mean that the system will build everything for you, there will always be edges that need to be crafted but the majority of what is built today is repetition of code that has already been done.
Simon Wardley, “Why the fuss about conversational programming? | by swardley | Medium“
The idea is so obvious now that early examples already exist. Check out these two, for example:
The first victims of Conversational Programming adoption will be infrastructure engineers, for the infrastructure needed to make the components run and communicate will also be orchestrated by the assistant.
I want you take a moment to think about this. The speed of one company with engineers building systems through conversational programming (i.e. a discussion with the system) versus the speed of a company whose engineers are messing around with containers and orchestration systems (such as Kubernetes clusters) versus the speed of a company whose engineers are still wiring servers in racks. I want you to think about the Red Queen effect and realize that you will have no choice over this evolution.
Simon Wardley, “Why the fuss about conversational programming? | by swardley | Medium“
A bit ominous but, again, I agree.
Then Wardley offers a list of conclusions. Let’s check them out one by one:
1) You’ll need fewer engineers. Nope.
Agree. Our beloved DevOps friends are lost, though.
2) It’ll reduce IT budgets. Nope.
Agree. You cannot be cost-efficient unless you can compare between options, and you lose that possibility once you move to AI-assisted development. Why? Because the AI assistants are provided by the very same cloud providers. The AI assistant, and not we, will decide which components get deployed and which services (queues, databases, DNS, etc.) will operate them.
3) You have a choice. Nope.
We’ll see. I am not a believer in the End of Times scenarios.
4) It’s only for startups. Nope.
5) We can build our own. Nope.
Not at the beginning. In the far future, who knows? As I said above, I hope we will all be able to start up many Serverless providers and so avoid the fate of depending on a few utility providers.
6) I can make a more efficient application by hand-crafting the code. Nope.
7) It’ll be the death of DevOps / FinOps etc. Nope.
I tend to disagree. DevOps engineers are expensive. There is always someone somewhere working the way we all did 30 years ago, but no one cares about them.
I hope you found this piece interesting and somewhat useful for navigating these times of overwhelming exposure to new AI developments.
Copyright: the artwork at the top is a picture of a haiku by Kobayashi Issa that I shot myself in a museum in Lisbon.