One weird thing I ran into a few weeks ago: when I added my remote MCP server to Claude's integrations tab on the website, I kept getting OAuth errors.
Turns out they require a special "claudeai" scope. Once I added that to my server, I was able to use it remotely in Claude Desktop!
I couldn't find any docs or reasons online for why they request this scope.
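For anyone hitting the same error, the gist is making sure your authorization server advertises and accepts the extra scope. A minimal sketch of RFC 8414 server metadata with it added (the URLs and the `mcp:tools` scope are placeholders; `claudeai` is the non-standard part):

```json
{
  "issuer": "https://auth.example.com",
  "authorization_endpoint": "https://auth.example.com/authorize",
  "token_endpoint": "https://auth.example.com/token",
  "registration_endpoint": "https://auth.example.com/register",
  "scopes_supported": ["claudeai", "mcp:tools"],
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "code_challenge_methods_supported": ["S256"]
}
```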
Also, I had been using remote MCPs in Claude Code for weeks via the awesome mcp-remote proxy tool. It's nice to not need that any longer!
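For anyone on an older build that still needs the proxy: the usual setup is just registering mcp-remote as a local stdio server that forwards to the remote URL, something like this in the MCP config (server name and URL are placeholders):

```json
{
  "mcpServers": {
    "my-remote-server": {
      "command": "npx",
      "args": ["mcp-remote", "https://remote.example.com/mcp"]
    }
  }
}
```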
Then, just as I'm writing a book on MCP servers with OAuth, elicitations come out! I'm rushing to update the book to be the best source for every part of the latest spec, as I can already see lots of gaps in the documentation on all these things.
Huge shout-out to VS Code for being the best MCP client: they already have support for elicitations in Insiders, and it works great in my testing.
For the more curious (and lazy) people: what are elicitations?
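In short: they let an MCP server pause mid-request and ask the user for structured input through the client, validated against a small JSON schema the server supplies. My reading of the new spec is that the JSON-RPC exchange looks roughly like this (the message and schema contents here are just an example):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "elicitation/create",
  "params": {
    "message": "Which GitHub org should I create the repo under?",
    "requestedSchema": {
      "type": "object",
      "properties": {
        "org": { "type": "string", "description": "GitHub organization name" }
      },
      "required": ["org"]
    }
  }
}
```

and the client answers with the user's decision plus the data:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "action": "accept",
    "content": { "org": "my-org" }
  }
}
```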
It’s interesting to see other tools struggling to keep up. ChatGPT will supposedly get proper MCP client support “any day now”, but I don’t see Codex supporting it any time soon.
Aider is very much struggling to adapt as well, since its whole workflow of editing and navigating files is easily replaced by MCP servers (and probably improved on, since they offer much more effective ways of improving the signal-to-noise ratio), so it’ll be interesting to see how these tools evolve.
I’d love for Claude Code (or any tool, for that matter) to fully embrace the agentic way of coding, e.g. having multiple agents specialized in different topics and some “main” agent directing them all, roughly the shape sketched below. Those workflows seem to work really well.
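Loosely, I imagine it looking something like this (everything here is hypothetical; runAgent is a stand-in for whatever actually drives an LLM session):

```typescript
// Hypothetical sketch of a "main" agent delegating to specialists.
type Agent = (task: string) => Promise<string>;

// Stand-in for whatever actually drives an LLM session.
async function runAgent(systemPrompt: string, task: string): Promise<string> {
  return `[${systemPrompt}] result for: ${task}`;
}

const specialists: Record<string, Agent> = {
  backend: (task) => runAgent("You are a backend specialist.", task),
  frontend: (task) => runAgent("You are a frontend specialist.", task),
  tests: (task) => runAgent("You write and run tests.", task),
};

async function mainAgent(goal: string): Promise<string> {
  // The director would normally ask an LLM to produce this plan;
  // it's hardcoded here to keep the sketch self-contained.
  const plan: Array<{ role: keyof typeof specialists; task: string }> = [
    { role: "backend", task: `API changes for: ${goal}` },
    { role: "frontend", task: `UI changes for: ${goal}` },
    { role: "tests", task: `Regression tests for: ${goal}` },
  ];
  // Fan the work out to the specialists and stitch the results together.
  const results = await Promise.all(plan.map((p) => specialists[p.role](p.task)));
  return results.join("\n");
}
```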
People are going to keep doing that, because these agentic tasks can take a while to run, and checking in so often to approve a command becomes an annoyance.
I can’t see a way around that except some kind of sandboxing, or a concept of untrusted or tainted input rather than treating all tokens the same. Maybe there could be a way of detecting whether a tool’s response falls within a threshold of acceptability for what the MCP definition declares (which is easier with structured output), forcing a manual confirmation, or an outright rejection, when the response looks unusual or unsafe.
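A rough sketch of what I mean, assuming tools declare a structured output schema (Ajv for validation here; the violation threshold is made up):

```typescript
import Ajv from "ajv";

type Verdict = "allow" | "confirm" | "reject";

const ajv = new Ajv({ allErrors: true });

// Gate a tool result against the output schema declared in the MCP
// tool definition. Anything that doesn't conform is treated as tainted:
// mildly off -> force a manual confirmation, badly off -> reject outright.
function gateToolResult(result: unknown, outputSchema: object): Verdict {
  const validate = ajv.compile(outputSchema);
  if (validate(result)) return "allow";

  // Crude "threshold of acceptability": count schema violations.
  const errorCount = validate.errors?.length ?? 0;
  return errorCount <= 2 ? "confirm" : "reject";
}

// Example: a weather tool that promised structured output.
const weatherSchema = {
  type: "object",
  properties: {
    tempC: { type: "number" },
    summary: { type: "string", maxLength: 200 },
  },
  required: ["tempC", "summary"],
  // Extra fields (e.g. injected instructions) count as violations.
  additionalProperties: false,
};

console.log(gateToolResult({ tempC: 21, summary: "Sunny" }, weatherSchema)); // "allow"
console.log(
  gateToolResult(
    { tempC: 21, summary: "Sunny", ignorePreviousInstructions: "..." },
    weatherSchema,
  ),
); // "confirm" (one violation)
```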
I think we're starting to see these remote agent environments where each agent session gets its own sandbox to run things in. I bet that's where this is going.
That said, I ditched Codex for Claude Code... sorry, OpenAI. No MCP support and no way to interact during execution are huge drawbacks.
> Javascript community suddenly got automatic code creation agents, and went to town.
I've been working on an MCP server[0] that lets LLMs safely and securely generate and execute JavaScript in a sandbox, including using `fetch` to make API calls. It includes a built-in secrets manager to prevent exposing secrets to the LLM. I think this unlocks a lot of use cases that require code execution without compromising security. The biggest one is that you can now ask the LLM to make API calls securely, because the JS runs in a C# interpreter with constraints on memory, time, and statement count, and with hidden secrets (e.g. API keys).
The implementation is open source, with sample client code in JS using the Vercel AI SDK, and a demo UI as well.
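Not their code, but the general shape of the secret-hiding trick, as I understand it, is that the generated JS only ever sees placeholders and the host swaps in real values at request time. A toy sketch:

```typescript
// Toy sketch of the placeholder pattern (not the linked project's code).
// Sandboxed code references secrets by name, never by value.
const secrets = new Map<string, string>([
  ["GITHUB_TOKEN", process.env.GITHUB_TOKEN ?? ""],
]);

// The only fetch exposed inside the sandbox: it swaps {{NAME}} placeholders
// for real values in headers just before the request goes out, so the
// secret never appears in anything the LLM-generated code can read back.
async function sandboxFetch(url: string, init: RequestInit = {}): Promise<string> {
  const headers = new Headers(init.headers);
  for (const [name, value] of [...headers]) {
    headers.set(
      name,
      value.replace(/\{\{(\w+)\}\}/g, (_m: string, key: string) => secrets.get(key) ?? ""),
    );
  }
  const res = await fetch(url, { ...init, headers });
  // Only the body text crosses back into the sandbox.
  return res.text();
}

// The LLM-generated code would then write something like:
//   await sandboxFetch("https://api.github.com/user", {
//     headers: { Authorization: "Bearer {{GITHUB_TOKEN}}" },
//   });
```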
Couldn't AI help with that...?
That said, the original spec needed some rapid iteration. With HTTP support finally in relatively good shape, I hope we'll be able to take a year to let the API dust settle. Spec updates every three months are really tough, especially when they're not versioned, thoroughly documented, or archived properly.