
7 Tips for Getting the Most Out of Microservices

The Kalix Team at Lightbend
  • 12 September 2022
  • 10 minute read


A microservice is an independently releasable service with isolated processing around its data. It encapsulates all the functionality around a specific business domain with clear boundaries, making its functionality available over the network, typically via HTTP.

The benefits of microservices include:

  • The ability to change the functionality of one service without impacting other services.
  • Teams get autonomy over tech stacks and the freedom to make changes to their services without needing to coordinate closely with each other.
  • Each service can scale independently.
  • It’s easier to align parts of the system to their associated business areas.

Microservices Tips

1. The Single Responsibility Principle

The term single responsibility principle (SRP) is over twenty years old and is rooted in object-oriented programming (OOP). It states that each code component of a computer program should have a single function completely encapsulated within it—and it should only have one reason to change.

This is just as relevant when talking about microservices; our broader system architecture solves similar problems to code, only on a larger scale. If we change our Service X, we shouldn't need to change Service Y. The one reason Service Y should need to change is if its function needs to change for its own domain.
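To make the SRP concrete at the code level, here is a minimal sketch (the class and method names are illustrative, not from the article): formatting a report and delivering it are two different reasons to change, so they live in two separate components.

```python
# Hypothetical sketch: each component has exactly one reason to change.
class ReportRenderer:
    """Changes only when the report format changes."""
    def render(self, data: dict) -> str:
        return ", ".join(f"{k}={v}" for k, v in sorted(data.items()))

class ReportMailer:
    """Changes only when the delivery mechanism changes."""
    def send(self, report: str, address: str) -> str:
        return f"sent to {address}: {report}"
```

The same split applies one level up: a service owns one business capability, and a change in another domain should never force it to be redeployed.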

Services Are Independently Deployable

Each microservice should be deployable without having to deploy another service. If you find that you often deploy certain services together, look closer at the code—perhaps you need to think again about where you’ve drawn the divide between them. Otherwise, you have what we call a distributed monolith, and you lose most of the benefits of microservices.

Why is this such a big deal when, for example, stackoverflow.com runs successfully as a monolith? Because an architecture of independently deployable microservices makes it much easier to manage the impact of changes. A monolith bundles more changes into every deployment, even when you deploy frequently, as stackoverflow.com does, and you're unlikely to reach the nirvana of zero-downtime deployments.

Information Hiding

Information hiding is how you break a system down in a way that satisfies the SRP ideal, where each part has just one reason to change. We share an interface that does not change, hiding the implementation details that are likely to change.

Code “modules” in a monolith should be good at this, but we always find ways to violate them. And as the size of a product increases, so does the number of teams that need to work on it. To let them all work productively, we need to add barriers between parts of the system.

Moving data behind separate services makes it much harder to violate controls, forcing you to give greater consideration to those boundaries and reducing the impact of changes on other services.
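As a small sketch of information hiding at a service boundary (the service and method names are hypothetical), consumers see only a stable interface, while the storage behind it is free to change:

```python
# Hypothetical sketch: the public interface is the contract; the
# implementation behind it can change without affecting consumers.
class StockService:
    """Consumers call only check_stock(); how stock is stored is hidden."""
    def __init__(self):
        self._levels = {"sku-1": 5}  # private detail: could become a database later

    def check_stock(self, sku: str) -> int:
        # Swapping the dict for a database would not change this signature,
        # so consumers would not need to change either.
        return self._levels.get(sku, 0)
```

Put that interface behind a network boundary and you get the same effect between teams: the one thing that must stay stable is the contract, not the code behind it.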

2. Keep Data Separate

Multiple services hitting the same database make information hiding and independent deployment difficult. But dividing and isolating data between services is also difficult. Data is the hardest part of microservices. Tackling it is often the point at which you revisit whether microservices are beneficial for your scenario. As part of that decision, reconsider the benefits of a single shared database that you will be giving up:

  • All your data in one place with powerful tools like a SQL JOIN to extract data.
  • Cost savings of a single database.
  • ACID transactions.

Then there are all the new challenges you will face with microservices around data. These include:

  • Performance issues with latency pulling data from multiple sources.
  • Working with distributed transactions across services.
  • Learning to work with eventual consistency of data across services.

Assuming you’ve thought all this through and are ready to get started, make sure you avoid the common mistake of going for a quick win by starting with the code. If you tackle the data first, the coding will be much easier, and there will be fewer unpleasant surprises when trying to adhere to the SRP.
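Here is a minimal sketch of data separation with eventual consistency, using illustrative service names and a plain list as a stand-in for an event bus: each service owns its own data, and the catalog learns of orders via events rather than reading the order service's tables.

```python
# Hypothetical sketch: services never touch each other's data directly.
class OrderService:
    def __init__(self, bus: list):
        self._orders = []  # data owned by this service only
        self._bus = bus

    def place_order(self, sku: str, qty: int):
        self._orders.append((sku, qty))
        self._bus.append(("order_placed", sku, qty))  # publish, don't share tables

class CatalogService:
    def __init__(self):
        self._stock = {"sku-1": 10}  # data owned by this service only

    def apply(self, event: tuple):
        kind, sku, qty = event
        if kind == "order_placed":
            self._stock[sku] -= qty  # eventually consistent view of orders

bus: list = []
orders, catalog = OrderService(bus), CatalogService()
orders.place_order("sku-1", 2)
for event in bus:  # delivery happens later, not inside the order request
    catalog.apply(event)
```

The catalog's stock level lags the order by whatever the event delivery takes; that lag is the eventual consistency you trade for independent data ownership.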

3. Ensure Backwards Compatibility

One of the biggest challenges is avoiding breaking existing API features when your developers make changes to a service. One reason to adopt a microservice architecture is to give teams autonomy to work on the various components of a system. The difficulty is having to consider every possible impact of every design decision.

The chances are you will need to make changes to your microservice APIs. However, you can do things to ensure that services that consume the API can continue to use it relatively easily.


Know Your Users

Your users are other teams and their services. Know their requirements, and be sure that the API changes that create work for them also add value.


Deprecate Gracefully

You want to implement a better way of doing something? Great, but give teams time to adapt. Communicate that the old way is deprecated, but keep it running alongside the new way for some time. If you are not providing a direct replacement, clearly communicate how to migrate.


Consistent Naming

Be consistent in how users should call your API, so it's easy to pick up any changes. Obvious, but often overlooked!

Automated Testing

Adopt continuous integration (CI) practices, merging the work of developers into a single repository and running automated test suites that check whether an API has been broken for existing use cases.


Versioning

Versioning gives users a way to control how they handle changes. There are lots of good ways to version and no single correct way, but don't reinvent the wheel: use what's already out there. Also, choose a standard format everyone will understand, not something specific to a particular language or platform.

Semantic versioning is a common standard of major, minor, and patch version parts. So version 2.4.1 would be the major.minor.patch:

  • If the patch number is incremented, it’s a backward-compatible change, like a bug fix.
  • If the minor version increments, we’ve added functionality (in a backward-compatible way).
  • If the major version goes up, we’ve made incompatible API changes, like changing a return type.
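The compatibility rules above can be sketched as a small check, assuming plain major.minor.patch version strings (the function names are illustrative):

```python
# Hypothetical sketch: deciding whether an upgrade is safe under
# semantic versioning (major.minor.patch).
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_compatible(current: str, candidate: str) -> bool:
    cur, cand = parse(current), parse(candidate)
    # Same major version means no breaking API changes were introduced;
    # minor/patch must not go backwards.
    return cand[0] == cur[0] and cand[1:] >= cur[1:]

is_compatible("2.4.1", "2.5.0")  # minor bump: backward compatible
is_compatible("2.4.1", "3.0.0")  # major bump: breaking change
```

A consumer pinned to major version 2 can accept any 2.x.y upgrade automatically, while a move to 3.0.0 requires a deliberate migration.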

4. Use a Gateway

When a single service is broken up into smaller parts, client apps need to consume functionality from more than one microservice:


If client apps communicate directly with all services, several new challenges are introduced:

  • Client application code will grow in complexity.
  • Clients will become tightly coupled to services, increasing client impact when the microservices change.
  • Client performance will suffer from the numerous network calls.
  • All services are exposed to the public, increasing the security risk.
  • Cross-cutting concerns such as authentication, SSL, caching, load balancing, and request logging must be implemented for every service.

A microservice can be many things, but more often than not, we're talking about an API exposed over HTTP. If this is the case, having an API gateway as a single entry point for a group of microservices can greatly reduce code complexity, attack surface, coupling, and network traffic for clients.

API Gateway

Be careful your API gateway doesn't grow into a large monolith itself, though. As your service expands, you might want to consider splitting your API gateway into smaller specialisms. A good approach is to divide them by business domain boundaries; that way, you also remove the temptation to build large aggregating queries in the gateway that, while making life easier for the clients, couple the microservices into a monolith.
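The single-entry-point idea can be sketched in a few lines, with invented route names and handlers standing in for real backend services:

```python
# Hypothetical sketch: a gateway routes by path prefix so clients
# never address individual services directly.
ROUTES = {
    "/orders": lambda path: f"orders-service handled {path}",
    "/catalog": lambda path: f"catalog-service handled {path}",
}

def gateway(path: str) -> str:
    # Cross-cutting concerns (auth, TLS termination, logging, caching)
    # would live here once, instead of being duplicated in every service.
    for prefix, handler in ROUTES.items():
        if path.startswith(prefix):
            return handler(path)
    return "404: no backing service"
```

Because clients only ever see the gateway's routes, the services behind each prefix can move, split, or scale without any client changing.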

5. Leverage Asynchronous Communication

It’s easy to reason about the flow of communication inside a single-process application. It’s predictable and happens in near real-time (you can measure these calls in sub-milliseconds). But this does not translate well to microservice architectures that communicate across networks, which can be unreliable and slow.

Synchronous communication is the norm in a typical client app. Request and response happen in a single cycle, as near as possible to real-time. If we’re on a shopping website, the expected user experience is:

  1. You pay for a product at the checkout.
  2. The web app does what it needs to do behind the scenes.
  3. Your order is confirmed.
  4. Later, your account for the online shop will note when your product is dispatched.

With microservices, it’s easy to inadvertently create a synchronous chain, where the client app has to wait much longer for responses because of a series of communication between services in the backend.


In the diagram above:

  1. The user has submitted their order.
  2. The shopfront web app opens an HTTP request to the microservice that triggers deliveries from the warehouse.
  3. That microservice, in turn, opens an HTTP request to the catalog microservice to update stock levels after the dispatch is set up.

With a chain of synchronous calls like this, the web app user is stuck waiting for the calls to complete before the purchase is confirmed.

The solution is to identify the parts of your microservice architecture that will benefit from asynchronous communication, through HTTP polling or an event bus: namely, communications that don't need to be real-time, where several seconds or longer is acceptable. These can be sent as messages onto an event “bus” (typically a queue structure) to be picked up by another microservice when ready.

The diagram below shows our online shop example reworked to send asynchronous messages for some of its backend communications. The client app only has to wait for the purchase to be confirmed. The dispatch and catalog services still get notified—but eventually, rather than instantly. We’ve also added a message that the dispatch service can send to the shopping web app when the order is shipped to update the user account in place of the acknowledgment the shop front would have got from the request-response pattern.
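The reworked flow can be sketched with Python's standard `queue` module standing in for a real event bus (the function names and event shapes are illustrative): the checkout replies immediately, and the rest of the chain is deferred to a message the dispatch side picks up later.

```python
import queue

# Hypothetical sketch: confirm the purchase right away and let the
# dispatch service consume the event at its own pace.
bus: "queue.Queue[tuple]" = queue.Queue()

def checkout(order_id: str) -> str:
    bus.put(("order_placed", order_id))   # fire and forget
    return f"order {order_id} confirmed"  # the user is not kept waiting

def dispatch_worker() -> list:
    # Runs later, in the dispatch service, independent of the checkout call.
    handled = []
    while not bus.empty():
        event, order_id = bus.get()
        if event == "order_placed":
            handled.append(order_id)
    return handled

confirmation = checkout("A-1")  # returns immediately
shipped = dispatch_worker()     # happens eventually, not in the request path
```

In a production system the queue would be a durable broker rather than an in-process structure, but the shape of the decoupling is the same.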


6. Focus on Security

A pure monolithic application will have just one entry point, typically port 443 for HTTPS web traffic. Its components are all separate pieces of code in the same application, so communication between them happens inside a single process. A microservices architecture is more complex, composed of more parts that must communicate across networks. Each of these parts has its own entry point, exposing a larger attack surface that we need to protect.

Start with network-level controls, physical or virtual (such as container-based networking), to create small network segments that make it easier to isolate breaches. On top of this, ensure all communications between microservices use SSL/TLS to provide the critical information security properties of confidentiality and integrity. Network segmentation alone is not enough, because it depends on trusting everything inside the segment.

You’ll also need application-level controls. Users of the services could be humans or other services. You need to confirm they are who they say they are (authentication) then you have to control which resource they have access to (authorization).
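The two application-level checks can be sketched as separate steps at the service boundary, with an invented token store and permission set standing in for a real identity provider:

```python
# Hypothetical sketch: authentication (who are you?) and authorization
# (what may you do?) as distinct checks.
TOKENS = {"token-abc": "warehouse-service"}          # assumed identity store
PERMISSIONS = {"warehouse-service": {"read:stock"}}  # least privilege: only what's needed

def handle_request(token: str, action: str) -> str:
    caller = TOKENS.get(token)
    if caller is None:
        return "401 Unauthorized"  # authentication failed
    if action not in PERMISSIONS.get(caller, set()):
        return "403 Forbidden"     # authenticated, but not permitted
    return f"200 OK: {caller} performed {action}"
```

Keeping the permission set per caller as small as possible is exactly the least-privilege principle discussed below: a compromised token can only do what that one service was granted.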

It's not all challenges; there are security benefits too. Microservices offer greater control over the scope of access, reducing the impact of any successful attack. And with microservices, it's much easier to adhere to the principle of least privilege, a key tenet of application security.

7. Think Infrastructure

The infrastructure requirements can differ significantly between on-prem and cloud.

To manage your own infrastructure for microservices on-prem, you’ll need an operations team well-versed in the latest containerization tools and technologies. You’ll also need a mature DevOps culture to be sure infrastructure is in place when the developers need it. A common stumbling block is that developers need to write code that works in the production environment from day one, for which they need infrastructure from day one too. And so, we have an impasse!

Working instead in the cloud partly solves this problem. The main infrastructure components are available as off-the-shelf services. There will still be some DevOps work to do on configuration, but the infrastructure can be provisioned much faster.

The cloud infrastructure solution still needs an operations team with experience across cloud security, cloud logging and monitoring, container technologies like Kubernetes, and resource scaling, among other things.


Microservices can provide a lot of flexibility for teams. They can also facilitate an organizational shift away from top-down command and towards more autonomy for teams to work in parallel with less coordination.

They are also complex to design, and it’s not easy to avoid the traps of a distributed monolith. On top of that work, there’s a significant DevOps requirement, even when taking advantage of cloud services. Eliminating dependency on DevOps can tip the balance when considering a microservice approach.

Kalix with Lightbend offers just that. Every layer—from the server infrastructure to security and databases to the application frameworks—is provisioned and managed for you. Your developers can focus on the code that solves the business problem, significantly reducing the operational resources needed to support your systems.

Kalix leverages the latest in microservice technology to bring you a cost-efficient, easy-to-implement solution that works with any tech stack. It allows any team to easily move to the cloud and innovate fast—without changing the language or framework you currently work in.

Learn more about Kalix’s high-performance microservices and APIs.