Software development today, and web development in particular, is constrained by several paradigms. Unfortunately, there is little to no discussion about them. Not every new idea is a good one, so we should be cautious: some of these ideas may be harmful.
This article is about one such paradigm: the paradigm of separation.
Unconsciously, we all agreed that it is a great way to solve software development problems. To some extent that was true. But we have gone too far.
Let me illustrate with an example of how a modern web app is built today.
1. Double backend
Normally there is a database somewhere in the cloud. Then there are lambda functions that fetch the data, and then there is an API.
This kind of separation is rather good. What happens next is not: after that, there is another backend layer that gets data from the API just to serve it to the web app.
OK, so why do we need what are essentially two backend services: one as a lambda layer, and a second as a backend for the frontend?
There is no real benefit to it if you think about good solution architecture. There should be one backend and one API that serves common services to all applications.
So why is it not done that way? There are several reasons. Sometimes the lambda layer is hard to maintain: it is badly designed, so no one wants to touch it for fear of breaking things. There is also a lack of tooling. Maintaining a lambda layer is error-prone and extremely time-consuming.
So sometimes a second backend layer is added to compensate. It is a way of fixing bad architecture with a patch.
Maintaining two backend layers costs unnecessary time and money. It also creates more places where bugs can occur.
How good it sounds to talk about separation in this context. It almost makes sense. But it does not: most of the time, the second backend layer is just wiring that does nothing but move data back and forth.
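To make that concrete, here is a minimal sketch (with hypothetical names, and the upstream API stubbed out) of what such a "frontend backend" handler typically amounts to: it forwards the request and hands the payload back untouched.

```typescript
// Hypothetical upstream lambda-backed API, stubbed out for illustration.
// In reality this would be a network call, e.g.
// fetch(`https://internal-api.example.com${path}`).then(r => r.json()).
type Json = Record<string, unknown>;

async function upstreamApi(path: string): Promise<Json> {
  return { path, items: [1, 2, 3] };
}

// The entire "second backend": receive a request, call the real API,
// and return the payload unchanged. Pure wiring, no added value.
async function frontendBackendHandler(path: string): Promise<Json> {
  const data = await upstreamApi(path);
  return data; // no transformation, no caching, no auth checks: just a wire
}

frontendBackendHandler("/products").then((payload) => {
  console.log(JSON.stringify(payload));
});
```

Every request now pays the latency and failure risk of an extra hop, yet this handler could be deleted and the web app pointed at the API directly.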
The proper way is to maintain one good backend with one good API.
Moving forward, the data finally reaches the frontend code, a place where separation takes its toll in many ways.
2. Microservices
Lately, a popular sub-paradigm of separation has emerged. It is called microservices. Despite the literature on the approach, it is understood differently from person to person.
Microservices and micromanagement fall into the same category. In theory, microservices are great because you can separate domains and assign different teams to different projects. Everything is separate, so it all works wonderfully.
The problem with using microservices, however, is that people don't know when to split a service into microservices, just as no one really knows where the line between management and micromanagement lies.
In both cases, though, the decision is usually the result of a flawed reasoning process. But what exactly is wrong with microservices?
Creating microservices generates a lot of overhead. Every microservice has to maintain its own dependencies, which are mostly duplicates. They duplicate a lot of boilerplate. Not to mention that design, styling, and common code have to be shared between them somehow.
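A deliberately minimal sketch of that duplication, with hypothetical service names: two microservices that can no longer import a shared module directly, so each carries its own copy of the same model and validation code.

```typescript
// orders-service: keeps its own copy of the user model and validation.
namespace OrdersService {
  export interface User { id: number; email: string; }
  export function isValidUser(u: User): boolean {
    return u.id > 0 && u.email.includes("@");
  }
}

// billing-service: the very same code, duplicated line for line.
namespace BillingService {
  export interface User { id: number; email: string; }
  export function isValidUser(u: User): boolean {
    return u.id > 0 && u.email.includes("@");
  }
}

// Every fix to the validation rule now has to be made twice, or be
// extracted into yet another internal package that must be versioned,
// published, and upgraded separately in every service.
const candidate = { id: 42, email: "dev@example.com" };
console.log(
  OrdersService.isValidUser(candidate),
  BillingService.isValidUser(candidate)
);
```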
These reasons, among others, make microservices very expensive to maintain. So when should they be used? There are two things that can balance out the cost of splitting a project this way. The first is when the project is really huge. I mean huge; most people don't realize what that means, so let's use an example: when you are building an operating system, or banking software. Then microservices can be helpful.
The other requirement is that the microservices can be clearly separated. They are totally different things that share no more than 1% of common code.
Now, we don't write operating systems or banking software every day, do we? So in most projects, microservices won't balance out their cost.
On top of what I wrote above, it is not only that microservices are for huge systems, where "micro" really means a gigantic piece of software. I believe some people know this and still decide to use microservices to prepare the code for the future. Will it become an operating system or banking software? I hope it will, because unless that is a concrete milestone a year or so away, this is premature optimization: one of the worst kinds of mistakes that can be made.
Reading this far, it may seem like I am saying that microservices are great, but only for large projects.
But that greatness is itself a matter of debate. Are they really great? In a standard process of applying solutions, there has to be a problem, an analysis, conclusions, and only then a solution.
It is far better to think about what problem we are solving than to pick up every solution available. So what problem justifies implementing microservices?
Among others, you hear about trouble maintaining the codebase. In my 20 years of development practice I have seen it a few times, and almost always the problem was the same: the code was a mess.
Code quality, refactoring, real tests, proper layering, programming with the language rather than merely in it, engineering the process, and so on: when it comes to improving code quality and the maintenance process, there are plenty of proven ways that work, ways that have been applied successfully to hundreds of thousands of projects.
But not one of those methods says that to solve the problem of a mess you should take garbage cans and place them in multiple rooms. All that does is separate the mess with some more messy wiring.
Clearly, microservices are one of the best examples of how the separation paradigm can be harmful. They are a bad solution to a badly stated diagnosis (if there was a diagnosis at all!).
Before considering microservices, an analysis has to be made. The proper way to handle a problem is to diagnose it first and then choose and apply a solution, not the other way around. It is like micromanagement: if you feel the need for it, it does not mean your employees are working badly. It means there are other problems to solve, and micromanagement won't help; it will make things worse.
So far, I have written about two examples of where we went too far with separation. I could list many more, but I will tell you about just one more, the problem with Redux, and then, to keep this brief, jump to the conclusions.
3. Redux
Redux is a state management library: popular frontend stuff that people are really, really obsessed with.
And I get the reasons behind it. There is one place storing the state, the flow of state changes is easy to track, and there is a fancy Chrome extension. Good stuff!
But the cost of it! It is just about the worst way to store state a person could possibly imagine.
The creators of Redux went to great lengths to ensure everything is separated: actions, reducers, effects, everything. In theory that may make sense. In practice, it means that using Redux is 99% of the time spent writing or copying boilerplate code just to set things up right. Not to mention that every single line of code is a place where a bug can hide, and that writing code is itself a cost.
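To make the boilerplate concrete, here is the classic Redux-style ceremony for a single counter field, sketched in plain TypeScript without the library itself (names are illustrative): an action type constant, an action creator, and a reducer, three separate pieces of code for one value of state.

```typescript
// 1. Action type constant: one string per possible state change.
const INCREMENT = "counter/increment" as const;

// 2. Action creator: a function whose only job is wrapping the constant.
interface IncrementAction { type: typeof INCREMENT; by: number; }
function increment(by: number): IncrementAction {
  return { type: INCREMENT, by };
}

// 3. Reducer: a switch that maps actions to freshly copied state objects.
interface CounterState { value: number; }
const initialState: CounterState = { value: 0 };

function counterReducer(
  state: CounterState = initialState,
  action: IncrementAction
): CounterState {
  switch (action.type) {
    case INCREMENT:
      return { ...state, value: state.value + action.by };
    default:
      return state;
  }
}

// Roughly this much ceremony again for every new field and action.
const finalState = [increment(2), increment(3)].reduce(counterReducer, initialState);
console.log(finalState.value); // 5
```

With the real library there is still a store to configure, middleware to wire up, and components to connect, which is where the rest of the ceremony goes.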
These issues, however, were put aside, because separation was considered the most important thing to take care of, even at the expense of the amount and quality of the code.
Today Redux is still a rather popular library. What is disappointing, even more than its terrible usability, is that with all the riches of open source, and with other libraries that provide the same capability in a better way, Redux is still being used.
It is in itself a symbol of separation, a symbol of a misunderstood notion of code quality, and a great example of how we went too far with separation while ignoring its costs.
Separation is a widespread paradigm today. It has almost become a synonym for quality. It is not discussed; it is like a dogma, almost a religious belief. No one questions it because, you know, every smart person knows it is a good thing.
Sometimes it is. But nowadays we go too far, separating things simply because it is so easy. It feels like it solves problems, but often it creates more of them than we ever wanted.
If there is one thing I'd love you to take away from this article, it is this: we went too far with separation, and we should stop!