I've found this tool from an old-school systems geek to be useful: https://github.com/giantswarm/mcp-debug -- especially its REPL mode.
Thanks for this - I've been using the MCP Inspector https://modelcontextprotocol.io/docs/tools/inspector but find it doesn't really fit my workflow.
I like the fact that this mcp-debug tool can present a REPL and act as an MCP server itself.
We've been developing our MCP servers by first testing the principle with the "meat robot" approach: we tell the LLM (sometimes just through the stock web interface, no coding agent) what we're able to provide and just give it what it asks for. When we find a "tool" that works well, we automate it.
This feels like an easier way of trying that process. We're finding it's very important to build an MCP interface that works with what LLMs "want" to do. Without impedance matching, it can be difficult to get the overall outcome you want (I suspect this is worse if there's not much training data out there that resembles your problem).
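To make the "automate it" step concrete, here's a minimal sketch of promoting a manually-tested tool into an MCP server. It assumes the official MCP Python SDK (FastMCP); the lookup_order tool, its parameters, and the hard-coded data are made up purely for illustration.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical server name; pick whatever fits your domain.
mcp = FastMCP("order-tools")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the current status of an order.

    The tool name, parameter names, and docstring are what the LLM
    "sees", so phrase them the way the model asked for the data during
    the manual (meat-robot) phase.
    """
    # Replace with your real lookup; hard-coded here for illustration.
    statuses = {"A100": "shipped", "A101": "pending"}
    return statuses.get(order_id, "unknown order")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

The impedance-matching part mostly lives in those names, descriptions, and argument shapes: if they mirror how the model naturally phrased its requests during the manual phase, it tends to call the tool the way you intended.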
Does anybody know of a cross-platform LLM frontend with sync that is also open source? I am currently using the web version of LobeChat on macOS and Android, but it's quite slow and is missing some features.
https://chorus.sh/ has a BYOK version
Open WebUI
For what it’s worth, I’ve been using WitsyAi[1]: it’s fully free, open source, and serves as a universal desktop chat client (with remote MCP calling). You just need to BYO API keys.
Remote MCPs are close to my heart; I’ve been building a “Heroku for remote MCP tools” over at Ninja[2] to make it easy for people to spin up and share MCP tools without the usual setup headaches.
Lately, I’ve also been helping folks get started with MCP development on Raspberry Pi. If you’re keen to dive in, feel free to reach out [3].
[1] https://witsyai.com
[2] https://ninja.ai
[3] https://calendly.com/schappi/30min