
> each API can just be its own module instead of a server

This is basically what MCP is. Before MCP, everyone was rolling their own function-calling interface for every API. Now it's (slowly) standardising.


If you search for MCP integrations, you will find tons of MCP "servers", which are basically entire servers for just one vendor's API (sometimes just for one of their products, e.g. YouTube). This is the go-to default right now, instead of one server with 100 modules. The MCP protocol itself exists just to make it easier to talk to the LLM clients that users install. But if you're writing backend code, there's no need to use MCP for it.
vidarh
There's also little reason not to. I have an app server in the works, and all the API endpoints will be exposed via MCP, because it hardly requires any extra code: the app server already auto-generates the REST endpoints from a schema and can do the same for MCP.

An "entire server" also overstates what an MCP server is - in the case where it just wraps a single API it can be absolutely tiny, and it can simply be a binary that speaks to the MCP client over stdio - it doesn't need to be a standalone server you start separately. In that case the MCP server is effectively just a small module.
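Concretely, the whole "server" can be something like this - a minimal Python sketch, no SDK, that reads newline-delimited JSON-RPC from stdin and writes replies to stdout. The method names follow the MCP spec, but treat the details as illustrative rather than a complete implementation:

    # Minimal sketch of an MCP server as a plain stdio binary.
    import json, sys

    TOOLS = [{"name": "echo",
              "description": "Echo the input back",
              "inputSchema": {"type": "object",
                              "properties": {"text": {"type": "string"}}}}]

    def handle(req):
        method = req.get("method")
        if method == "initialize":
            return {"protocolVersion": "2024-11-05",
                    "capabilities": {"tools": {}},
                    "serverInfo": {"name": "echo", "version": "0.1"}}
        if method == "tools/list":
            return {"tools": TOOLS}
        if method == "tools/call":
            text = req["params"]["arguments"].get("text", "")
            return {"content": [{"type": "text", "text": text}]}
        return None

    for line in sys.stdin:
        req = json.loads(line)
        if "id" not in req:        # notification - no reply expected
            continue
        reply = {"jsonrpc": "2.0", "id": req["id"], "result": handle(req)}
        sys.stdout.write(json.dumps(reply) + "\n")
        sys.stdout.flush()

The client spawns that as a child process, and that's the entire deployment story.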

The problem with making it one server with 100 modules is doing that in a language-agnostic way, and MCP solves that with the stdio option. You can make "one server with 100 modules" if you want; it's just that those modules would themselves be MCP servers talking over stdio.
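The composition is just process spawning. Here's a sketch of a host doing that - the module commands are hypothetical, and a real host would do the MCP initialize handshake before anything else:

    import json, subprocess

    # Each "module" is any executable that speaks MCP over stdio,
    # written in whatever language its author preferred.
    MODULES = {
        "youtube": ["./youtube-mcp"],          # e.g. a Go binary
        "weather": ["python", "weather.py"],   # e.g. a Python script
    }

    procs = {name: subprocess.Popen(cmd,
                                    stdin=subprocess.PIPE,
                                    stdout=subprocess.PIPE,
                                    text=True)
             for name, cmd in MODULES.items()}

    def call(module, request):
        """Send one JSON-RPC request to a module, read one reply."""
        p = procs[module]
        p.stdin.write(json.dumps(request) + "\n")
        p.stdin.flush()
        return json.loads(p.stdout.readline())

    print(call("weather", {"jsonrpc": "2.0", "id": 1,
                           "method": "tools/list"}))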

> The problem with making it one server with 100 modules is doing that in a language agnostic way

I agree with this. For my particular use case, though, I'm all-in on Elixir, so for backend work it doesn't provide much benefit to me.

> it can be absolutely tiny

Yes, but at the end of the day it's still a server. Its size is immaterial - you still have to deal with the issues of maintaining a server: patching security vulnerabilities, making sure you don't get hacked, and making sure you don't expose anything publicly that you shouldn't. It requires routine maintenance just like a real server. Multiply that by 100 if you have 100 MCP "servers". It's just not a scalable model.

In a monolith with 100 modules, you just do all the security patching for ONE server.

vidarh
You will still have the maintenance and patching burden for your modules as well.

I think you have an overcomplicated idea of what a "server" means here. For MCP that does not mean it needs to speak HTTP. It can be just a binary that reads from stdin and writes to stdout.

> I think you have an overcomplicated idea of what a "server" means here

That's actually true, because I'm always thinking from a cloud-deployment perspective (which is my use case). What kind of architecture do you run this on at scale in the cloud? You have very limited options if your monolith is on a serverless platform and is CPU/memory bound, too. So that's where I was coming from.

vidarh
You'd run it on any architecture that can spawn a process and attach to stdin/stdout.

The overhead of spawning a process is not the problem. The overhead of a huge and/or slow-to-start runtime could be, in which case you simply wouldn't run it on a serverless system - which is ludicrously expensive at scale anyway. (My day job is running a consultancy whose big money-earner is helping people cut their cloud costs, and moving them off serverless systems that are entirely wrong for them is often one of the big savings. Serverless is cheap for tiny systems, but even then the hassle often isn't worth it.)

jcelerier
.. what's the security patching you have to do for reading / writing on stdio on your local machine?
ethbr1
Hehe.
