In the last post, we covered the Advantages of Microservice Architecture.
If there are good things about microservices, there are bad things about them as well.
Let's cover a summary of what Sam Newman says in the book Building Microservices.
Developer Experience
As the number of services grows, the developer experience can suffer, especially with resource-intensive runtimes like the JVM, which limit how many microservices can run on a single developer machine. Running a large system locally becomes challenging. Extreme solutions like "developing in the cloud" can be considered, but they tend to hurt feedback cycles. Alternatively, limiting the scope of the system a single developer works on is often a more practical approach.
Technology overload
You don't need a Kubernetes cluster with just 3 services! Having lots of different technologies for microservices can be overwhelming. Some companies get excited and introduce a ton of new, alien tech when they switch to microservices. But you don't have to use every option available. It's important to balance the benefits of different technologies with the complexity they bring.
The takeaway here is that when you start with microservices, you'll face challenges like handling data consistency and dealing with network delays. Trying to understand these challenges while also learning a bunch of new technologies can be tough. It's important to increase complexity gradually and only introduce new tech when you really need it. This keeps the complexity manageable and gives you time to learn better ways of doing things that might come up later.
Cost
Switching to microservices might initially cost more for a few reasons. First, you'll likely need to run more things—more processes, computers, network, storage, and supporting software, leading to additional costs. Second, any change in a team or organization slows things down temporarily because it takes time to learn and adapt.
If your main goal is to cut costs, microservices might not be the best choice. Microservices are more beneficial for making money by reaching more customers or developing more features at the same time. So, are microservices a way to increase profits? Maybe. Are they a way to reduce costs? Not really.
Reporting
In a monolithic system, analyzing data together is easy with a single database and a fixed schema for reports. In a microservice setup, where the schema is broken into isolated parts, reporting across all data becomes more challenging. Modern reporting methods like real-time streaming can adapt to microservices but may need new technology. Another option is publishing microservice data into central reporting databases or data lakes for reporting purposes.
Monitoring and Troubleshooting
Monitoring and troubleshooting in a standard monolithic application is simpler with fewer machines and a binary failure mode. In a microservice setup, understanding the impact of a single service instance going down can be challenging. While in a monolithic system, 100% CPU usage for a long time signals a big problem, it's less clear in a microservice architecture with numerous processes.
Security
In a single-process monolithic system, information stays within the process. In microservices, data moves between services over networks, making it vulnerable. Securing data in transit and protecting microservice endpoints from unauthorized access is crucial.
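To make the endpoint-protection point concrete, here is a minimal sketch of a service checking a bearer token before handling a request. The token value and header layout are illustrative assumptions, not from the book; real systems typically use mutual TLS or OAuth2/JWT rather than a shared secret.

```python
import hmac

# Assumption for illustration: the secret is injected from a config
# or secret store, never hard-coded in a real deployment.
EXPECTED_TOKEN = "s3cret-token"

def is_authorized(headers):
    token = headers.get("Authorization", "")
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(token, f"Bearer {EXPECTED_TOKEN}")

print(is_authorized({"Authorization": "Bearer s3cret-token"}))  # True
print(is_authorized({"Authorization": "Bearer wrong"}))         # False
```

Every microservice endpoint needs a check like this, because once data crosses the network you can no longer assume the caller is trusted.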
Testing
When testing software, there's a trade-off between the breadth of functionality covered and the challenges of setup and maintenance. End-to-end tests, which cover a lot of functionality, are powerful but can be difficult to manage. In microservice architecture, these tests become even more complex, spanning multiple processes and requiring extensive setup. As your microservices grow, the returns on investing in end-to-end testing diminish, leading to the exploration of alternative testing approaches like contract-driven testing, testing in production, and progressive delivery techniques. Later you can use parallel run or canary release methods.
What is a canary release?
A canary release is a deployment strategy where a new version of the software is gradually rolled out to a small subset of users or servers before being made available to the entire user base. This approach allows for testing the new release in a real-world environment with reduced risk, as any issues or bugs can be identified and addressed before a full-scale release.
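The routing decision behind a canary release can be sketched in a few lines. The handler names and the 5% fraction below are illustrative assumptions; production systems usually do this in a load balancer or service mesh, and often hash a user ID instead of rolling dice so each user consistently sees one version.

```python
import random

CANARY_FRACTION = 0.05  # 5% of traffic goes to the new version

def handle_with_stable(request):
    return f"stable handled {request}"

def handle_with_canary(request):
    return f"canary handled {request}"

def route(request):
    # Send a small random fraction of requests to the canary;
    # everyone else stays on the known-good release.
    if random.random() < CANARY_FRACTION:
        return handle_with_canary(request)
    return handle_with_stable(request)
```

If error rates or latency on the canary look bad, you shrink the fraction to 0 and roll back; if they look good, you ramp it up to 100%.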
Latency
In a microservice setup, tasks that used to be handled locally on one computer might now be spread across different microservices. This involves sending information over networks, making the system slower. Measuring the impact on speed during the design or coding phase is tricky, so it's essential to make changes gradually and measure their effects. You need to know how much latency is acceptable in your services.
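Since latency is hard to predict at design time, the practical move is to measure it. The sketch below times a call that used to be in-process but is now remote; the function names are illustrative, and the `time.sleep` stands in for a network round-trip.

```python
import time

def remote_lookup(key):
    time.sleep(0.01)  # stand-in for a ~10 ms network round-trip
    return key.upper()

def timed(fn, *args):
    # Wrap any call and report its wall-clock latency in milliseconds.
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

result, ms = timed(remote_lookup, "order-42")
# Compare 'ms' against the latency budget you decided is acceptable
# for this operation.
```

Doing this before and after each decomposition step tells you exactly how much latency that step added, instead of discovering it in production.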
Data consistency
Moving from a single-database monolithic system to a distributed setup with different databases managed by various processes poses challenges in ensuring data consistency. Unlike the past reliance on database transactions, maintaining similar safety in a distributed system is complex, and distributed transactions often prove problematic. Concepts like sagas are important to use. Gradual decomposition of your application allows you to assess the impact of architectural changes in a more controlled manner.
What is a saga?
A saga is a design pattern for managing a long-running and complex transaction that involves multiple steps or services. Instead of relying on traditional distributed transactions, which can be challenging to implement and scale, a saga breaks down the transaction into a series of smaller, more manageable steps.
Each step in the saga corresponds to a specific action or operation, and the overall saga is coordinated to ensure that all steps either succeed or fail together. If a failure occurs at any step, compensating transactions are used to undo the completed steps and maintain a consistent state.
Sagas help address the challenges of distributed transactions by providing a more scalable and flexible approach to managing transactions across multiple services in a distributed system.
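The steps-plus-compensations idea can be sketched as follows. The order-placement steps and their names are hypothetical examples, not taken from the book; the point is only the shape: run each step, record its compensating action, and on failure undo the completed steps in reverse order.

```python
def run_saga(steps, context):
    """Run (action, compensate) pairs; on failure, undo in reverse."""
    completed = []
    try:
        for action, compensate in steps:
            action(context)
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate(context)
        return False
    return True

# Hypothetical order-placement saga: reserve stock, then charge the card.
def reserve_stock(ctx): ctx["stock"] -= 1
def release_stock(ctx): ctx["stock"] += 1
def charge_card(ctx):
    if ctx["card_declined"]:
        raise RuntimeError("payment failed")
def refund_card(ctx): pass

ctx = {"stock": 10, "card_declined": True}
ok = run_saga([(reserve_stock, release_stock),
               (charge_card, refund_card)], ctx)
# The payment step failed, so the stock reservation was compensated
# and ctx["stock"] is back to 10.
```

Note that compensation is a new, forward action that semantically undoes a step; it is not a database rollback, so each compensating action must be safe to run after the original step has been committed.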