If you look at some of the most successful companies using microservices, they almost always arrived there much later, as a response to growing pains. Here is the deal, though: when you are starting to build a product, you have no growing pains. Your number one objective is to get a version of your product out there as fast as possible. This doesn’t mean you write poor code or take on lots of technical debt; it just means that your tradeoffs are different.
Nothing about monolithic code bases precludes you from writing well-abstracted, layered systems. You can just as easily write unmaintainable microservices with poorly defined boundaries that are impossible to operate.
Separation of concerns (bounded contexts) — This is perhaps the biggest benefit proponents bring up. Many have been burned by large monolithic code bases that get out of control, where boundaries start to blur and layered dependencies seem to disappear. With microservices, you are forced to define the boundaries up front and communicate through a well-defined interface.
Distinct and small enough for a single team to own end-to-end — This is more of a social organization pattern. When tens if not hundreds of people are working on the same product, breaking it down into smaller, distinct pieces makes sense. In return, each team can function autonomously, without a lot of dependencies. Of course, organizing this way requires a lot of groundwork: you have to understand the domain, have clear boundaries, and have stable, well-defined interfaces.
Can scale independently — If, say, certain parts of your application get hit rather hard (e.g. sending messages, part of an API) and they are separate deployable services, you can scale them separately from the rest of your infrastructure. Thus you can keep costs down, since you don’t have to scale every piece of your monolith.
Can use best language/framework/library/database for service — This one depends highly on what you’re building and the skills of your team. Certain tools are better suited for certain problems. C is great for low-level code when you want to squeeze every drop out of your CPU. Elixir/BEAM is great when you need a low-latency, highly scalable, highly available service that can handle failures gracefully. Node is great when you need async I/O and want to benefit from the huge ecosystem of libraries and frameworks. You get the point. Typical monolith teams pick one technology and are then forced to use the same tool for multiple jobs.
You can’t have your cake and eat it too
Here is the dilemma though. If you could get all of the above for free, there would be no debating the benefits of microservices. But as with most things in life, it’s about tradeoffs. Though the above are rather substantial benefits of microservices, they come with an even bigger set of hurdles.
Premature optimization — In most cases, you can achieve the same separation of concerns and clean abstraction in a monolithic code base. People often equate bad, complex code bases with monolithic applications, but correlation isn’t causation. You can write poorly abstracted microservices as well. Refactoring then requires coordinated modification of all dependent services [see next item].
Premature abstractions — It’s hard to properly abstract something without understanding the problem and the domain first. This understanding is usually empirical, so most abstractions are best built through refactoring. It’s much easier to refactor a monolithic code base with wrong assumptions than to make cascading changes across multiple microservices. Monolithic code bases are cohesive and share a single unit test suite, making refactoring simpler and faster.
Testing — In a monolithic code base, when something doesn’t conform to requirements or constraints, it won’t compile or pass unit tests (if you’ve written them, of course). In a microservice architecture, you have remote dependencies, which are much harder to test. Yes, you can mock service interfaces, but those mocks will not stay in sync with the real services as they evolve. You can make this somewhat simpler with containerization, but it still brings its own set of complexities. Developers use external services (e.g. an RDBMS, the file system) all the time, but those interfaces are stable. Using a single RDBMS across multiple services would create similar problems: you can’t upgrade the DB without cascading effects across every service that depends on it. Microservices increase the number of external dependencies.
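To make the mock-drift problem concrete, here is a minimal sketch of a consumer-driven contract check. All names are hypothetical: a consumer codes against a mocked response from an imaginary "billing" service; when the real service renames a field, the mock-based test still passes, but a shape comparison against a recorded real response catches the drift.

```python
# Hypothetical sketch: a consumer's mock of a remote "billing" service,
# plus a contract check that catches drift between the mock and reality.

# The consumer codes against this assumed response shape.
MOCK_INVOICE = {"id": 42, "total_cents": 1999, "currency": "USD"}

def format_invoice(invoice):
    """Consumer code that depends on the mocked interface."""
    return f"{invoice['total_cents'] / 100:.2f} {invoice['currency']}"

def check_contract(mock, real):
    """Return mock fields that the real service's payload no longer provides."""
    return sorted(set(mock) - set(real))

# In CI, `real_response` would come from a recorded response of the actual
# service; here the upstream team has renamed total_cents -> amount_cents.
real_response = {"id": 42, "amount_cents": 1999, "currency": "USD"}

print(format_invoice(MOCK_INVOICE))                  # mock-based test still passes
print(check_contract(MOCK_INVOICE, real_response))   # ['total_cents'] — drift detected
```

The point is that the mock alone gives false confidence; only comparing it against the evolving real interface reveals the break.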
Deployment — Any release containing changes to a service’s interface may require coordinated deployment: you can’t release an interface change without ensuring that all services relying on it are released concurrently. This makes both testing and releasing much more difficult.
Orchestration, Discovery — When you deal with lots of distributed units, you not only have to rely on discovery (your compiler will no longer do that for you) but also orchestrate business processes through small, well-defined steps. You can also no longer rely on simple local transactions, and therefore have to either embrace eventual consistency or implement two-phase commit (2PC).
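To illustrate the eventual-consistency side of that tradeoff, here is a toy saga-style orchestrator (all step names hypothetical): each local step pairs with a compensating action, and if a later step fails, completed steps are undone in reverse order instead of wrapping everything in one distributed transaction.

```python
# Toy saga orchestrator: each step is an (action, compensation) pair.
# On failure, completed steps are compensated in reverse order —
# trading atomic distributed transactions for eventual consistency.

def run_saga(steps):
    """steps: list of (action, compensation) callables."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):  # roll back completed steps
            compensation()
        return "compensated"
    return "committed"

def fail():
    raise RuntimeError("shipping service is down")

log = []
saga = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (fail,                                lambda: log.append("cancel shipment")),
]

result = run_saga(saga)
print(result)  # compensated
print(log)     # ['reserve stock', 'charge card', 'refund card', 'release stock']
```

Between the failure and the final compensation, other services can observe the intermediate state; that window is exactly what "embracing eventual consistency" means in practice.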
Tooling — You have to build a lot of tooling around deployment, discovery, debugging, monitoring, and management of microservices. Many organizations that build microservices have dedicated teams that build tools around delivery and management of microservice systems.
A common problem in our industry is that everyone seems to model their architecture on what Google, Twitter, Facebook, and a few others are doing. The reality is that their problems are unique to them. They also build custom, specialized databases that solve problems at extreme scale. Most companies are not Google, or Twitter, or Facebook, or Uber, and the vast majority never will be. You can still be extremely successful and never reach that scale. And if you ever do happen to become the Uber of X, you’ll probably have to rewrite and refactor your products many times along the way.