Microservices on the Web: The Wrong Bees, or a Problem in the Monolith
Microservice architecture is a perennial topic of debate at IT conferences. Vladimir Skibin, lead developer and head of SKTeam, explains how to introduce it into your workflow properly and which key decisions at launch can make the work easier later on.
Every web developer has come across mentions of microservices, or microservice architecture, at some point: as a hyped buzzword, as a way to optimize work, or in heated debates between supporters of monoliths (applications that do not distribute their work) and supporters of splitting everything into microservices.
In short, the essence of this approach is to move independent parts of the application into separate applications and establish communication between the central application and the rest.
The simplest example is sending mail from a site: it is performed not in the main application flow but in a separate service. The job of this auxiliary service is precisely to offload routine tasks from the main flow so it can focus on the core logic. Like a deputy taking work from the boss, it accepts the task of sending a letter and handles the sending itself. The main part of the application can always inquire about the status of the send (an optional feature whose implementation many neglect). After completing its task, the service reports that it is done by recording the successful or failed execution in the database. Such a part is, so to speak, not worthy of the loud name "service", since it has only one function; that is why these parts of the application came to be called microservices.
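The mail-sending deputy described above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the names (`TaskStore`, `send_email`, `mail_worker`) are invented for the example, and an in-memory dict and queue stand in for the real database and message transport.

```python
import queue

class TaskStore:
    """Records the status of each task so the main app can inquire later."""
    def __init__(self):
        self.status = {}  # task_id -> "sent" | "failed"

def send_email(recipient: str) -> bool:
    # Stand-in for a real SMTP call; here we just sanity-check the address.
    return "@" in recipient

def mail_worker(tasks: queue.Queue, store: TaskStore) -> None:
    """The 'deputy': takes mail jobs off the queue and records the outcome."""
    while not tasks.empty():
        task_id, recipient = tasks.get()
        ok = send_email(recipient)
        store.status[task_id] = "sent" if ok else "failed"

# The main application only enqueues work and can check the status later.
tasks = queue.Queue()
store = TaskStore()
tasks.put((1, "user@example.com"))
tasks.put((2, "not-an-address"))
mail_worker(tasks, store)
print(store.status)  # {1: 'sent', 2: 'failed'}
```

The key point is the split of responsibilities: the main flow only enqueues and later reads `store.status`; all the actual sending lives in the worker.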
To understand how all this works, you need to understand where microservices came from and what root problem led to their emergence.

Anyone who has written applications, whether on the web or standalone, has run into the problem of dividing responsibilities between modules. Those who fully apply the SOLID principles in practice understood that the transition is inevitable: from an application where everything works as a single whole to one that can be split into several black boxes, with defined protocols (rules) for the interaction between them.
The root cause was the need to develop the product with several teams: to split it into several independent parts, in such a way that each team could develop its part independently of the others, put it into operation, and include it as part of the product.
A consequence was that these parts could be written with completely different approaches, or even in different programming languages: one part in PHP, another in C++, and the main part in NodeJS.
As asynchronous programming languages were gaining popularity, the popularity of the microservice approach grew with them: the transition to this concept fit perfectly with languages built around asynchronous data processing. Reworking an application into microservices raised its parts to a higher level of abstraction, which led to better planning of the application architecture.
And then the first problems appeared. Systems theory says that the more parts a system has, the less reliable it is. Fragmentation got out of control: applications were split into as many pieces as possible, and that is what actually caused the problems.
At this point, pointing to these problems, supporters of the old monolithic approach began to argue that microservices are a great evil: if one microservice stops working, it can break the operation of the entire application.
In fact, this is a very fine line. It is important to analyze and understand which parts the application should be divided into, and how many; choose only the essential ones, without which the application cannot work. It is also necessary to handle all errors correctly. If a microservice has failed, all other parts should keep functioning at full capacity, and a request for data from the failed one should report that it cannot be reached.
For example, if the mail-sending microservice is down, the main application should not crash because of it. Let its tasks accumulate in a registry of some kind; once the service is restored, it will gradually work through everything that piled up.
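This accumulate-and-replay behavior can be sketched as follows. The names (`send_via_service`, `submit_mail`, `drain_registry`) and the `mail_service_up` flag are assumptions made for the example; in a real system the registry would be a durable table or queue, not a Python list.

```python
pending_registry = []    # tasks waiting for the mail service to come back
mail_service_up = False  # simulate the mail microservice being down

def send_via_service(task: str) -> bool:
    """Stand-in for an HTTP/RPC call to the mail microservice."""
    if not mail_service_up:
        raise ConnectionError("mail service unreachable")
    return True

def submit_mail(task: str) -> str:
    """The main app keeps working; a failed call only queues the task."""
    try:
        send_via_service(task)
        return "sent"
    except ConnectionError:
        pending_registry.append(task)
        return "queued"

def drain_registry() -> int:
    """Called after the service recovers: replay the accumulated tasks."""
    done = 0
    while pending_registry:
        send_via_service(pending_registry.pop(0))
        done += 1
    return done

print(submit_mail("welcome letter"))  # queued (the service is down)
mail_service_up = True                # the service is restored
print(drain_registry())               # 1
```

The main flow never crashes on `ConnectionError`; the worst case for the user is a delayed letter, not a broken application.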
The one real difficulty is e2e testing: to run it in full, every microservice must be up and operational. On the other hand, part of the point of e2e tests is precisely to model the cases when something does not work.
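Both cases, everything running and a dependency down, can be modeled in one test suite by swapping a failing stub in for the real client. The `App`/`MailClient` classes here are invented for illustration; a real suite would use a test framework and real transports.

```python
class MailClient:
    def send(self, to: str) -> None:
        pass  # would call the real mail microservice

class DownMailClient(MailClient):
    def send(self, to: str) -> None:
        raise ConnectionError("mail service down")

class App:
    def __init__(self, mail: MailClient):
        self.mail = mail

    def register_user(self, email: str) -> dict:
        try:
            self.mail.send(email)
            return {"created": True, "mail": "sent"}
        except ConnectionError:
            # degrade gracefully instead of failing the whole request
            return {"created": True, "mail": "unavailable"}

def test_happy_path():
    assert App(MailClient()).register_user("a@b.c")["mail"] == "sent"

def test_mail_service_down():
    # the case the text mentions: the dependency is down, the app still works
    result = App(DownMailClient()).register_user("a@b.c")
    assert result == {"created": True, "mail": "unavailable"}

test_happy_path()
test_mail_service_down()
```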
By trial and error, the rules for including a microservice architecture in the application life cycle are developed:
- Initially, everything can be written in a monolith, so the first prototype will be released faster.
- As the need arises, move autonomous modules out into microservices. Initially they can simply be exposed as HTTP services, with communication organized the same way.
- As the number of microservices grows, introduce message delivery systems such as RabbitMQ, ActiveMQ, or NATS. These systems act as registries of unprocessed messages.
- Each time the next microservice is split off, test the failure points, and if necessary add cached-response systems so that the failure of a single microservice module does not lead to the collapse of the entire application.
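The cached-response idea from the last point can be sketched like this: keep the most recent successful answer and serve it, marked as stale, when the service fails. All names here (`CachedClient`, `flaky_service`) are illustrative assumptions, not a specific library's API.

```python
import time

class CachedClient:
    def __init__(self, fetch):
        self.fetch = fetch       # function that calls the microservice
        self.cache = None        # last successful response
        self.cached_at = None

    def get(self):
        try:
            self.cache = self.fetch()
            self.cached_at = time.time()
            return {"data": self.cache, "stale": False}
        except ConnectionError:
            # service is down: fall back to the last known good answer
            if self.cache is not None:
                return {"data": self.cache, "stale": True}
            return {"data": None, "stale": True}

calls = {"n": 0}
def flaky_service():
    """Succeeds on the first call, then simulates an outage."""
    calls["n"] += 1
    if calls["n"] > 1:
        raise ConnectionError("service down")
    return {"rate": 42}

client = CachedClient(flaky_service)
print(client.get())  # fresh: {'data': {'rate': 42}, 'stale': False}
print(client.get())  # outage, served from cache: {'data': {'rate': 42}, 'stale': True}
```

Callers that can tolerate slightly outdated data keep working through the outage; the `stale` flag lets those that cannot make their own decision.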
And most importantly, document each step. Even if the development is done by a single programmer, in six months, a year, or two it will be hard to remember why this or that decision was made; otherwise you, or whoever comes after you, will have to walk this entire path again, the path that took the lion's share of all the work.