I'll also mention that yes, I agree, in most cases microservices are not the answer. This really comes down to the natural evolution of companies that scale to 50+ engineers split across multiple teams and continue to grow. The software architecture should mirror the org and allow teams to execute independently. If this is possible with monolithic architectures or anything else, then that should be the approach taken.
In my case, I came from Google and Hailo, environments in which scale mattered on all levels and I didn't see the tools out there (back in 2015) to solve these problems for everyone else.
Rails is for Ruby. Spring is for Java. Micro is for Go.
I see a world in which Micro can be used to write even a single service with a model that can scale to dozens. But more importantly I want to help unlock the reuse of services and the power of what microservices enabled for me at Hailo and what I've seen it do for others.
I'm all for microservices dying out. It's an awful fad. Making 6 pull requests to add a function argument should not be a thing.
From creating a communication pattern and separating concerns, all the way to the table pattern they have set up, it does seem like overkill.
The former is useful in building teams; the latter is useful in dealing with scaling issues.
I see microservices as a return to the Unix philosophy:
>Write programs that do one thing and do it well.
>Write programs to work together.
Many things are significantly easier in a monolith: integration testing, reasoning (and verifying with tests) about how components interact, refactoring of interfaces, etc. As soon as you pull components out into microservices, many assumptions developers may not even realise they make about developing in a monolith go out the window.
I vote for punting on microservices until the value proposition is clear. Otherwise you just end up with a macrolith that makes you dream of the monolithic good old days.
I think part of it is because of the hype machine, where people only talk about how awesome things are that they invented, instead of talking about what problems it solves, what it doesn’t solve, and what its tradeoffs are. If you are reading something to evaluate a technology and it doesn’t talk about all three of those things, discard what you’re reading, because it will mislead you.
Smaller services are also easier to test, for the same reasons. Services force the team to limit scope. While one can try to do the same in a monolith, it's too easy to "just this once" rely on some back channel data passing or assumption of the internal state of another part of the monolith.
Sometimes applications with only 100s of users need microservices simply due to the complexity and range of the workloads.
The moment your monolithic frontend and backend need to start doing asynchronous work, you'll want to build a "microservice" to pull from a queue.
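Roughly, that "microservice" is just a loop pulling jobs off the queue. Here's a minimal Go sketch, with a channel standing in for the real queue (SQS, RabbitMQ, whatever) and the Job type and payload made up for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// Job stands in for whatever payload your real queue carries.
type Job struct {
	ID      int
	Payload string
}

func main() {
	// A buffered channel stands in for the real queue here.
	queue := make(chan Job, 16)
	done := make(chan struct{})

	// The "microservice": a worker loop that pulls jobs and processes
	// them outside the web request/response cycle.
	go func() {
		for job := range queue {
			fmt.Printf("processing job %d: %s\n", job.ID, job.Payload)
			time.Sleep(100 * time.Millisecond) // simulate work
		}
		close(done)
	}()

	// The web tier just enqueues and returns immediately.
	for i := 1; i <= 3; i++ {
		queue <- Job{ID: i, Payload: "resize-image"}
	}
	close(queue)
	<-done // wait for the worker to drain the queue
}
```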
I think it's more accurate to say that they don't necessarily provide any horizontal scaling, but can provide operational scaling. This is only if there's some natural team divide, and the impact of the introduced complexity does not outweigh the benefits of the teams being able to test and deploy independently.
Sadly in most cases I have come across, this is not the case -- the complexity introduced and/or problems caused by lack of ecosystem maturity outweigh any potential organisational benefit.
> release coordination
Microservices can make release coordination significantly harder, e.g. when a feature release requires multiple deployments from separate teams. I definitely wouldn't list this in the pros column; it's very much an "it depends". Other tangential factors, e.g. monorepo vs multi-repo, can be more significant.
> The moment your monolithic frontend and backend need to start doing asynchronous work, you'll want to build a "microservice"
I agree queue-based background work is a case where services are a good fit (sadly this is not what a lot of people are doing with their codebases when they "go microservices"), ... but it can also be simpler to deploy the exact same monolithic codebase to a worker, and only execute the part that is performing your async task.
(If your worker is a lambda, sure, that isn't going to work.)
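For that monolith-as-worker approach, the whole trick can be a role flag on the same binary. A rough Go sketch; the flag name and runWorker are hypothetical, not any particular framework's API:

```go
package main

import (
	"flag"
	"log"
	"net/http"
)

func main() {
	// Same binary deployed everywhere; a flag picks which part of the
	// monolith this particular instance actually runs.
	role := flag.String("role", "web", "web or worker")
	flag.Parse()

	switch *role {
	case "web":
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from the web tier\n"))
		})
		log.Fatal(http.ListenAndServe(":8080", nil))
	case "worker":
		runWorker() // only the async-task code path runs here
	default:
		log.Fatalf("unknown role %q", *role)
	}
}

// runWorker is where the queue-polling background work would live; it can
// reuse every model and helper in the codebase, because it IS the codebase.
func runWorker() {
	log.Println("worker started; would poll the queue here")
	select {} // block forever in this sketch
}
```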
If you're operating a monolith, every release requires coordination from every team, no?
As I say, it can depend a lot on other factors like repo organisation and pipeline maturity.
I struggle to see how release coordination can be something microservices inherently improve upon. Do you have an example comparison, perhaps?
This has been working very well for us, and I would like to see us buy into the microservice model more with teams that each operate one or two services instead of each team committing a little bit to every service. In that model, each team would only need to worry about their own service(s) instead of interactions across every service (or rather, this would be less of a concern).
The only piece that doesn't work so well for us is local development; we've tried Docker Compose and PM2 to run the whole fleet of services in containerized and native-process configurations (respectively), but the former is slow due to terrible Docker for Mac filesystem performance and the latter introduces all sorts of environment sanitization/consistency issues.
That’s a multi-process architecture. We’ve done that for ages: multiple processes doing what they do, with communication via TCP or named pipes.
On the "web application" side, we've also been doing the same thing since the LAMP days. Your web app has no understanding of how to efficiently store records; the database service handles that. Your web app has no idea how to efficiently maintain TCP connections from a variety of web browsers all using slightly different APIs; your frontend proxy / load balancer / web server handles that. All your app has to do is produce HTML. It's popular because it works great. You can change how the database is implemented and not break the TCP connection handling of the web server. And people are doing that all the time.
All "microservices" is is doing this to your own code. You can write something and be done with it, then move on to the next thing. This is the value -- loose coupling. Well-defined APIs and focused tests mean that you spend all your time and effort on one thing at a time, and then it's done. And it scales to larger organizations; while "service A" may not be done, it can be worked on in parallel with "service B".
That said, I certainly wouldn't take a single team of 4 and have them write 4 services that interact with each other concurrently. That isn't efficient because not enough exists for anyone to make progress on their individual task, unless it's very simple. But if you do them one at a time, you gradually build a very reliable and maintainable empire, getting more and more functionality with very little breakage.
The problem that people run into is that they act as the developers of these services without investing in the tooling needed to make this work. You need easy-to-use fakes for each service, so you don't have to start up the entire stack to test and play with a change in one service. You need to collect all the logs and store them where you can see all of them at once. You need monitoring so you can see which components aren't interacting correctly (though monoliths need this too). You need distributed tracing so you can get an idea of why a particular high-level request went wrong. All these things are available off the shelf for free and can be configured in a day or two (ELK, Jaeger, Prometheus, Grafana).

The other problem that people run into is bad API design. There is no hack that works around bad API design in microservices; your "// XXX: this is global for horrible reasons" simply isn't possible. You have to know what you want the API surface to look like, or it's going to be a disaster. It's just as much of a disaster in monoliths, though; this is how you get those untestable monstrosities that break randomly every release. Microservices make you fail now; monoliths make you fail later.
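On the fakes point: all it usually takes is depending on a narrow interface and swapping in an in-memory implementation for tests. A sketch in Go, with a made-up billing service as the example:

```go
package billing

import "errors"

// Charger is the narrow surface other services depend on. In production
// it's implemented by a client talking to the real billing service.
type Charger interface {
	Charge(customerID string, cents int) error
}

// FakeCharger is an in-memory stand-in, so tests of other services never
// need the real billing service (or the whole stack) running.
type FakeCharger struct {
	Charges map[string]int // customerID -> total cents charged
	FailFor string         // set this to simulate a failure for one customer
}

// compile-time check that the fake satisfies the real interface
var _ Charger = (*FakeCharger)(nil)

func NewFakeCharger() *FakeCharger {
	return &FakeCharger{Charges: make(map[string]int)}
}

func (f *FakeCharger) Charge(customerID string, cents int) error {
	if customerID == f.FailFor {
		return errors.New("simulated billing failure")
	}
	f.Charges[customerID] += cents
	return nil
}
```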
That's part of the problem: you really can't. Requirements change. You have to update that service and hope the change is backwards compatible, because if not, have fun updating all the services that interact with it.
Making changes backwards compatible is fairly simple. If semantics change, give it a new name. If mandatory fields become optional, just ignore them. (This is why you use something like protocol buffers and not JSON. It's designed for the client and server operating off of different versions of the data structure.)
Having two ways to do something is always going to be a maintenance burden, and make your service harder to understand. It is not related to microservices or monoliths. The same problems exist in both cases, and the solution is always the same; decide whether maintaining two similar things is easier than refactoring your clients. In the case of internal services where you control all the clients, refactoring is easy. You just do it, then it's done. In the case of external services, refactoring is impossible. So you maintain all the versions forever, or risk your users getting upset.
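Concretely, "give it a new name" can be as simple as serving both versions side by side until the clients are refactored. A rough Go sketch with hypothetical /v1 and /v2 endpoints:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Old semantics stay frozen; existing clients keep working untouched.
	mux.HandleFunc("/v1/quote", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"price_dollars": 10}`)) // v1 spoke in whole dollars
	})

	// Changed semantics get a new name instead of silently changing /v1.
	mux.HandleFunc("/v2/quote", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"price_cents": 1000}`)) // v2 switched to cents
	})

	// Retire /v1 once the last internal client has been refactored.
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```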
We're doing it for the project I'm working on at work, and in my opinion it's a colossal waste of engineering time and effort. We're a big company, but we're not FAANG. Our user base is unlikely to ever reach even 100k users total. But hey, we're doing this in the name of the industry's current 'best practice.'
I can't wait for the microservice & scrum trends to die off.