I think integration into existing IDEs is the wrong form for agentic coding. The best way to work is managing several Git worktrees with agents running so you aren't stuck waiting 20+ minutes for Claude Code to finish.
I built a UI to manage this, and it is starting to turn into a new type of IDE, based around agent management and review rather than working on one thing at a time.
https://github.com/stravu/crystal
When I see proposals for this kind of workflow, the one question I have is how you will manage your personal context. When I’m reviewing code by a coworker, I’m not seeking to fully understand the code or checking that it’s correct. I’m mostly trying to get a high-level understanding and checking for glaring mistakes (code style, best practices, …). I can get through a lot of PRs in a day that way.
For more important stuff, like if it falls under my supervision, I will test the branch and carefully check the implementation. And I do this for each PR update. That takes a lot longer.
So I’m wondering: how do you context switch between many agents running and proposing diffs, especially if you need to vet the changes? And how do you manage module dependencies, where an update by one task can subtly influence the implementation of another?
LeafItAlone
>So I’m wondering, how do you context switch between many agent running and proposing diffs. Especially if you need to vet the changes.
I’m wondering this too. But from what I have seen, I think most people doing this are not really reading and vetting the output. Just faster, parallelized, vibe coding.
Not saying that’s what parent is doing, but it’s common.
stingraycharles
Yeah. I would like multiple agents because each can be primed with a different system prompt and “clean” context. This has been proven to work, eg with Aider’s “architect” vs “editor” models / agents working together.
As for people who want parallel work so stuff “happens faster”, I am convinced most of them don’t really read (nor probably understand) the code it produces.
scuol
It's basically like having N of the most prolific LoC-producing colleagues who don't have a great mental model of how the language works, and having to carefully parse all of their PRs.
Honestly, I've seen too many fairly glaring mistakes in all models I've tried that signal that they can't even get the easy stuff right consistently. In the language I use most (C++), if they can't do that, how can I trust them to get all the very subtle things right? (e.g. very often they produce code that holds some form of dangling references, and when I say "hey don't do that", they go back to something very inefficient like copying things all over the place).
I am very grateful they can churn out a comprehensive test suite in gtest though and write other scripts to test / do a release and such. The relief in tedium there is welcome for sure!
jbentley1
I tried to make it easy to remember what you are doing. You can see the prompts you ran, and I used the Monaco editor from VSCode to view and edit the diffs.
I think there are opportunities to give special handling to the markdown docs and diagrams Claude likes to make along the way, to help with review.
EGreg
Why don’t you automate this checking with AI? You can then cover hundreds of PRs a day.
Voloskaya
> You can then cover hundreds of PRs a day.
I would argue you haven't covered any.
Why not just skip the reviews then? If you can trust the models to have the necessary intelligence and context to properly review, they should be able to properly code in the first place.
Obviously not where models are at today.
EGreg
Not necessarily. It's like a Generative Adversarial Network (GAN): you don't just trust the generator; it's a back-and-forth between the generator and the discriminator.
Voloskaya
The discriminator is trained on a different objective than the generator - it's specifically trained to be good at discriminating - so it is complementary.
Here we are talking about the same model doing the review (even if you use a different model provider, it's still trained on essentially the same data, with the same objective and very similar performances).
We have had agentic systems where one agent checks the work of another for 2+ years. This isn't a paradigm pushed by the AI coding model providers because it doesn't really work that well; review is still needed.
derwiki
Turtles all the way down. We seem to be marching towards a future like that, but are we there today? Some of the AI-generated PRs I’ve seen teammates put out “work” (because sometimes two wrongs make a right) but convince me we still need a human in the loop.
But that was two weeks ago; maybe it’s different today
jbentley1
The other replies are correct that right now you need some level of human review, but it would be interesting to have a second AI review with a clean context. Maybe a security checklist, or a prompt telling it to check that the tests are covering the functionality appropriately.
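As a sketch of what that second pass could look like (the checklist and wiring here are hypothetical; `claude -p` is the CLI's non-interactive print mode):

```typescript
import { execFileSync } from "node:child_process";

// Illustrative checklist only - tune it per project.
const REVIEW_PROMPT = `
Review the following diff with fresh eyes. Check:
1. Security: injection, authz, secrets handling.
2. Do the tests actually cover the changed functionality?
3. Resource and reference lifetimes across the change.
Reply with a numbered list of findings, or "LGTM".
`;

// A clean-context review pass: a fresh CLI invocation that never
// sees the authoring session's context.
function reviewWorktree(worktreePath: string): string {
  const diff = execFileSync("git", ["-C", worktreePath, "diff", "HEAD"], {
    encoding: "utf8",
  });
  return execFileSync("claude", ["-p", `${REVIEW_PROMPT}\n${diff}`], {
    cwd: worktreePath,
    encoding: "utf8",
    maxBuffer: 64 * 1024 * 1024,
  });
}
```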
Etheryte
There's no reason you couldn't do the same thing as an IDE plugin.
jbentley1
Yes there is. IDEs just aren't designed for it. The main screen in an IDE is a single branch at a time; I want to be managing a swarm of agents on multiple branches/worktrees.
Etheryte
You don't need IDE support for this, it's all Git under the hood. Your extension can hold virtual branches in memory in the background, feed the file contents to the LLM through that layer and back, and the only problem you need to deal with after the fact is how to resolve conflicts, but the LLM would also be a good candidate to handle that. The more I think about it, the more Git makes this a straightforward implementation compared to say SVN, since branches cost nearly nothing. All of this is not to say that it's a trivial piece of work, but it is very much doable without building a new IDE from scratch.
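For the worktree side, a minimal sketch of what such an extension would do under the hood (plain `git worktree` driven from Node; the function names are mine):

```typescript
import { execFileSync } from "node:child_process";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// One isolated checkout per agent; branches are nearly free in Git.
function createAgentWorktree(repoPath: string, branch: string): string {
  const worktreePath = mkdtempSync(join(tmpdir(), "agent-"));
  execFileSync("git", ["-C", repoPath, "worktree", "add", "-b", branch, worktreePath]);
  return worktreePath;
}

// Fold an agent's branch back in; conflict resolution (possibly
// delegated to the LLM, as suggested above) happens at this point.
function mergeAgentBranch(repoPath: string, branch: string): void {
  execFileSync("git", ["-C", repoPath, "merge", "--no-ff", branch]);
}
```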
radicalbyte
That needs isolation, which in practice means multiple machines.
derwiki
Why machines? Multiple clones of the same repo is one low tech way to achieve that.
brulard
If we're talking about, for example, a full-stack JS/TS app, wouldn't you need a separate build/dev server running, a database, and likely more?
naasking
I don't see why you necessarily need multiple machines, just multiple checkouts, one for each agent. Depends on what shared resources are involved, eg. databases, etc.
int_19h
Why not multiple IDE windows then?
SkyPuncher
Your tool is cool, but it solves a different issue.
Right now, background agents have two major problems:
1. There is some friction to getting the isolated environment working correctly. Difficulty depends on specifics of each project. Ranging from "select this universal container" to "it's going to be hell getting all of your dependencies working". Working in your IDE pretty much solves that - it's likely a place where everything is already setup.
2. People need to learn how agents build code. Watching an agent work in your IDE while being able to interject/correct them is extremely helpful to long term success with background agents.
mindwok
I personally disagree. I use Cursor every day on commercial projects, and while I find background agents cool and useful in some contexts they are more often than not simply a distraction.
My preferred way to vibe code is to lock in on a single goal and iterate towards it. When I'm waiting for stuff to finish, I'm exploring docs or info to figure out how to get closer. Reviewing the existing codebase or changes is also super useful for me to grasp where I'm up to and what to do next. This idea of managing swarms of agents for different tasks does not gel with me, too much context switching and multitasking.
Jonovono
Looks cool! What was your reason for not using the Claude Code TS SDK? It looks like you install the package, but are manually spawning claude commands instead?
Side note: you should look into electron-trpc. It greatly simplifies IPC handling.
brulard
This is nice, I was thinking about needing multiple working trees for different sessions of claude code.
Regarding your webpage - I wish you would vibe away the annoying header that comes down every time I scroll up just a tiny little bit.
jbentley1
Noted! Thanks
OtherShrezzing
For Anthropic, they’ve got to put their product where their customers are. If they’re all in a CLI or IDE, then the correct place to put agentic coding features is into the CLI or IDE.
data-ottawa
I was just reading the Claude Code docs recommending that approach this morning.
Having a nice way to manage the worktrees sounds great, but the rate limiting still sounds like an issue with this approach.
https://docs.anthropic.com/en/docs/claude-code/common-workfl...
If I hit the rate limit in 2 hours and got value out of each prompt I ran, that's better than doing the same amount of work in 6 hours and not hitting the limit.
Personally, I'm running 2 accounts and switching between them for maximum productivity. Just as a function of what my time is worth it is a no brainer.
mikojan
Rate limiting has not been a problem for me. I need time to review the proposals and the actual source code, and to meddle with it in between.
One must also always be aware that an LLM WILL ALWAYS DO what you ask it for. Often you ask for the wrong thing. And you need to rethink.
Maybe I am inefficient, though; I really only use at most two additional worktrees at the same time.
brulard
> ... LLM WILL ALWAYS DO what you ask it for.
What? That's not my experience at all. Especially not "always"
mikojan
Yes, yes they do. If you ask it to refactor something and integrate it somewhere else, it will do exactly that, even if in the course of it you would find that it would dramatically increase complexity, not reduce it.
I cannot count how many times that or something like that has happened to me.
While I seem to have a little attention from this comment: if anyone can test this Linux installer for Crystal and tell me if it works on their machine, I would appreciate it:
https://github.com/stravu/crystal/actions/runs/15791009893/a...
Basics are working on arch with the AppImage, anything specific?
jbentley1
If you can call Claude Code, that means everything else should be working, as most functionality is built around the terminal, and that is how it is calling Claude Code.
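(For the curious, "calling Claude Code" here just means spawning the CLI as a child process - presumably something along these lines, where `-p` is the CLI's non-interactive print mode and the rest is my own simplification:)

```typescript
import { execFile } from "node:child_process";

// One claude process per session, each in its own worktree,
// so sessions stay isolated from each other.
function runClaude(cwd: string, prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile("claude", ["-p", prompt], { cwd, maxBuffer: 64 * 1024 * 1024 },
      (err, stdout) => (err ? reject(err) : resolve(stdout)));
  });
}
```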
Thanks for your help, now I'll be able to include Linux support in my next release
4b11b4
Seems like Amp would plug into this better? At least regarding the ability for sharing prompts, etc.
lbeurerkellner
This looks really cool, thanks for sharing.
smrtinsert
What tasks require parallel workflows like this? Running one claude prompt gives me more than enough to chew on for several hours if done correctly.
throwaway314155
> The best way to work is managing several Git worktrees with agents running so you aren't stuck waiting 20+ minutes for Claude Code to finish.
Sounds like you're limiting yourself to users who are comfortable paying a $100-200 monthly subscription, or even thousands per month at API prices.
CC is expensive, but I was hoping we weren't going to build tooling that exacerbated this issue simply because for some of us money is less of an issue than for most of us.
jbentley1
If you are paying a senior engineer 200k, getting them a CC max plan is equivalent to 1.2% of their salary. I would say that it increases productivity by a lot more than that.
So yes it might feel expensive in terms of a personal monthly budget, but the value for money is insane.
artursapek
When I try to run two CCs at once I quickly get 429 rate limited, even on the $200 plan
andy_ppp
Maybe the UI should allow you to still ask questions but in a queue to prevent this. It could have informative text like “waiting on 3 previous questions” and a progress bar of some kind.
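The queue itself is simple enough - a minimal sketch (the naming is mine, nothing Claude-specific):

```typescript
// Serialize prompts so a burst becomes "waiting on N previous
// questions" in the UI instead of a 429 from the API.
class PromptQueue {
  private tail: Promise<unknown> = Promise.resolve();
  private waiting = 0;

  enqueue<T>(run: () => Promise<T>, onStatus?: (ahead: number) => void): Promise<T> {
    onStatus?.(this.waiting); // e.g. render "waiting on 3 previous questions"
    this.waiting++;
    const result = this.tail.then(run).finally(() => {
      this.waiting--;
    });
    this.tail = result.catch(() => {}); // keep the chain alive after failures
    return result;
  }
}
```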
jbentley1
Weird, I have not had this issue and I commonly run 5+ at once
bicx
What I'd like:
- Top-tier `git worktree`-based context switching in the same IDE window.
- A framework for attaching terminal-based agents to each worktree branch. Eventually this should evolve into a better open protocol for integration, primarily for diffs, permission request notifications, and progress indicators.
- A sidebar that monitors agent status/notifications on each active worktree branch.
- A quick notification-style way of responding to agent prompts across all branches. This has been built in standalone agent manager tools, but I can't use those tools effectively when I need to quickly jump in and be an engineer.
- Branch-context-level association with browser test windows or mobile emulator/simulator instances.
- Strong code completion capabilities via other, faster models, a great extension ecosystem with lots of language server support, and generally functioning as a high-quality IDE.
Right now, I'm managing multiple macOS desktops with different instances of Windsurf running Claude agents in-terminal, and web browser windows / mobile emulators/simulators are dragged into the respective desktops for each instance. It's clunky.
visarga
What I'd like - a coding agent with debugging capabilities. Walk the stack, inspect local variables and arguments, basically seeing what is really happening as opposed to debugging by prints and asserts.
That could be interesting! And probably not all _that_ difficult to build, given the right interaction points and IDE APIs.
https://github.com/jasonjmcghee/claude-debugs-for-you
kordlessagain
> A quick notification-style way of responding to agent prompts across all branches. This has been built in standalone agent manager tools, but I can't use those tools effectively when I need to quickly jump in and be an engineer.
I tried, unsuccessfully, to write a plugin for VSCode that would let Claude run a tool to jump me to the file and line it was editing. It sorta worked but kept hanging.
dewey
What’s the actual difference between Cursor and Claude Code these days? I’ve used both and then just switched to Cursor because the company paid for it… but apart from the CLI-vs-UI difference I couldn’t really spot any big differences, as both did multi-file edits.
The current state of having multiple editors open, or having to switch between JetBrains stuff and Cursor is really a bit of an annoying transition period (I hope).
kissgyorgy
The difference is huge - not even close, both in quality and usage.
Claude Code is fully agentic, meaning you give it a task and it fully implements everything, producing surprisingly good, working code. It can test, commit, run commands, log in to remote systems, and debug anything.
It doesn't optimise for token usage, which Cursor heavily does; that's why it can produce higher quality code on first shots (the downside is that the cost is very high).
Cursor's agent mode is very much in its infrantry, just catching up; Cursor is essentially a tool for editing files, whereas Claude Code is like a junior developer.
jen729w
This does Cursor a disservice by not mentioning its deep integration.
Cursor will suggest and complete code for you inline. You just tab-complete your way to a written function. It's mad.
Claude Code doesn't do this.
Cursor also has much better awareness of TypeScript. It'll fix errors as they occur, and you can right-click an issue and have it fixed.
Contrast with CC where I've had to specify in CLAUDE.md to "NEVER EVER leave me with TS errors", and to do this it runs a CLI check using its integration, taking way longer to do the same thing.
stpedgwdgfhgdd
FWIW
I noticed that CC’s generated Go code nowadays is very solid. No hallucination recently that I can remember or that struck me. I do see YouTube videos of people working with JS/TS still struggling with this. Which is odd - there is way more training material for the latter. Perhaps the simplicity of Go shines here.
CC might generate Go code for which there are already library functions present. So thorough code reviews are a necessity.
int_19h
Modern idiomatic JavaScript and TypeScript encourage "clever" code. The latter also has a very complicated type system, which, again, is frequently used, especially in .d.ts files for pure JS libraries because JS devs love tricks like functions doing different things based on number and type of arguments. So models learn all that from the training set, but then often can't deal with the complexity they themselves introduce.
Much as I dislike Go, it is indeed probably closer to the ideal language for the LLM. But I suspect that we need to dial it down even further, e.g. no type inference whatsoever (so no := etc). In fact I wonder if forcing the model to spell out the type of every subexpression as a type assertion might be beneficial due to the way LLMs work, for the same reason why prompting for explicit chain-of-thought improves outputs even with models not specifically trained to produce CoT. In a similar vein, it could require fully qualified names for all library functions, etc. But it also needs to have fewer footguns, which Go has aplenty (ignorable error returns, unsafe concurrency, etc.). I suspect message passing a la Erlang might be the best bet there, but this is just a gut feel.
Of course, the problem with any hypothetical new PL optimized for LLMs is that there's no training data for it. To some extent this can be mitigated by mechanically converting existing code - e.g. mandatory fully qualified names and explicit type assertions for subexpressions could be easily bolted onto any existing statically typed language.
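Concretely, the mechanical expansion might look like this (a TypeScript rendering of the idea, since it applies to any statically typed language; the "LLM dialect" here is hypothetical):

```typescript
interface CartItem {
  name: string;
  price: number;
}

const items: CartItem[] = [
  { name: "book", price: 12 },
  { name: "pen", price: 3 },
];

// Idiomatic, inference-heavy form:
const total = items.reduce((acc, item) => acc + item.price, 0);

// Mechanically expanded form: every binding and parameter type is
// spelled out, so the types read like an inline chain of thought.
const explicitTotal: number = items.reduce(
  (acc: number, item: CartItem): number => acc + item.price,
  0,
);
```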
pbedat
The CC VSCode plugin can also fetch the errors and warnings reported to VSCode by other plugins and language servers, making the additional compile step obsolete.
anaisbetts
That's the biggest reason that the IDE plugin exists, so that Claude Code can get access to the LSP information
jen729w
Right. Which it can. But to suggest that this makes it Cursor-like is wildly misrepresentative.
etothet
This is precisely what the CC extension does, no? At least that’s how the extension behaves in JetBrains IDEs.
jen729w
Nope. It allows the CLI to read and parse your files. It absolutely does not give you Cursor-like interactivity.
If I’m wrong I’d be overjoyed! But I have it installed and have seen no hint of this.
Copilot in VSCode is so bad about TypeScript errors as well.
randomtoast
You mentioned that Claude Code is fully agentic.
I am using the Cursor agent mode, which can run in auto mode with, let's say, 50 consecutive tool calls, along with editing and other tasks. It can operate autonomously for 30 minutes and complete a given task. I haven't tried Claude Code yet, but I'm curious—what exactly does Claude Code do differently compared to the Cursor agent?
Is the improvement in diff quality solely because Cursor limits the context size, or are there other factors involved?
dtech
I'd suggest just giving it a shot and noticing the difference - it's night and day.
I couldn't get cursor agent to do useful stuff for me - might be because I don't do TS or Python - and Claude Code was a big productivity boost almost from day one. You just tell it to do stuff, and it just... does it. At like the level of a college student.
brulard
I'm writing TS and I was not very happy with Cursor - I expected more, coming from using Cline + Sonnet in VS Code. I tried the Composer, or whatever they call it, and the results were mediocre. After a few hours of struggling I gave up and returned to Cline. Now with Claude Code I got much more value right from the start. I don't know, maybe I was "holding it wrong".
int_19h
Cursor does a lot of trickery to reduce context size, since they ultimately have to pay per token even if you pay them a flat fee. Cline, on the other hand, is quite profligate with context, but the results are correspondingly better, especially when paired with Gemini due to its large max context.
chrisweekly
"infrantry"
I think you meant "infancy"
posix86
Not sure what you mean - Cursor has agents that run in feedback cycles, checking e.g. syntax errors before continuing, reflecting, working for minutes if need be; they can execute commands in your terminal and check any file they want. What can CC do that Cursor can't, at least in theory?
SparkyMcUnicorn
I've had Claude Code run for many hours, and also let it manage its own sub-agents. Productively.
Coming back to an implementation that has good test coverage, functions exactly as specified, and is basically production-ready is achievable through planning/specs.
Maybe Cursor can do this now as well, but it was just so far behind last time I tried it.
20wenty
Do you have details on what "optimise for token usage" looks like in Cursor? Or is your point more about how Cursor manages the context window?
cle
Cursor does all that stuff too perfectly fine.
SV_BubbleTime
> but Claude Code is like a junior developer.
This has been exactly my experience. I guess one slightly interesting thing is that my “junior developer” here will get better with time, but not because of me.
kajecounterhack
A lot of people use them together (Cursor for the IDE and Claude Code in the terminal inside the IDE).
In terms of performance, their agents differ. The base model their agents use is the same, but, for example, how they look at your codebase, how they decide to farm tasks out to lesser models, and how they connect to tools all differ.
razemio
What are you doing that you do not feel a difference? Claude is superior for me in every single way. It is not even close. I mainly use Scala, Python, JS, and Dart. Maybe Cursor is better with other languages?
I can use Claude as a very productive assistant, especially useful for small to medium changes. If I plan accordingly it is like magic. It tends to duplicate code, but that is about it. Code produced by Cursor needed a lot of work the last time I tried it, often slowing me down instead of helping.
derwiki
Anecdata, but Cursor does fine for me with Ruby on Rails
chrsw
Claude Code is very impressive. It almost feels like another programmer sitting there with you in your terminal. It's not perfect and usually needs help understanding what you're trying to do but once all the pieces are in place and it gets going it's incredible. I'm not even using it properly in terms of giving it the right context it needs to truly understand my project. And I'm not using it for TypeScript or even any web development.
garychalmers
Cursor forces you to switch to a different IDE (unless you're already using VSCode), while Claude Code (or Aider) is simply a terminal that works in parallel to your current IDE, editing your project files directly.
In my case the "IDE" is vim+tmux+bash and I prefer CLI assistants, but this also applies to people who use a graphical IDE other than VSCode.
brulard
Doesn't it force you to switch to a different IDE even when using VSCode? It's a separate editor, right? At least that's how I used it. If it were just an extension, like this one, it would work so much better for me.
chrisweekly
I know that some VSC users use Cursor for inline edits (in the main IDE UX), and also use CC in VSC's integrated terminal.
khaledh
One feature that is still exclusive to Cursor is Cursor Tab. It almost always predicts your next edit accurately, based on your recent edits and cursor navigation.
But from an agent perspective, Claude Code is much more tuned to understanding the task, breaking it down into small steps, and executing those steps with precision.
Overall, IMO agentic coding is great for well-defined tasks, especially when they're backed by tests. It still falls short, though, in deep technical discussions and in being opinionated about architectural decisions, unless specifically nudged in a certain direction. This is an area where Gemini Pro excels, but it sucks as a coding agent. So I use both: Gemini Pro for the high-level design, and Claude Code for executing the plan by giving it clear requirements. All while making some edits myself using Cursor Tab.
nsingh2
One thing with Cursor is that you can use other non-anthropic models like o3 or Gemini. I've found o3 more useful for cracking localized difficult problems, and Gemini for large context refactoring.
SparkyMcUnicorn
Adding Gemini and/or o3 to Claude Code is nice sometimes[0]. Or you can use a router/proxy to swap out the provider entirely.
[0] https://github.com/BeehiveInnovations/zen-mcp-server
Is there an advantage to this setup over using VSCode Copilot in Agent mode with Claude Sonnet 3.7 or 4? What am I missing out on?
mmaunder
You have to try and experience Claude Code to answer this question. Otherwise it's just going to be a pointless debate here. If you live in a Linux terminal you're going to be instantly addicted. Make sure you read the docs. Use a CLAUDE.md, create planning docs for big tasks in markdown format, iterate on the planning doc until you're happy, then get it to implement. And also use this technique: as you approach the context limit, have it write its memory to a file, /clear, and then read that file back in. This gives you better mileage.
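To make that concrete, the CLAUDE.md rules for this loop could be as simple as the following (illustrative contents of my own, not an official template):

```markdown
# CLAUDE.md

## Workflow
- For any big task, write a plan to plans/<task>.md first and iterate
  on it with me before writing code.
- As the context limit approaches, dump your working state to
  notes/memory.md so it can be read back in after /clear.

## Rules
- Run the test suite before declaring a task done.
- Never leave failing type checks in the tree.
```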
mertysn
How can you keep track of the current context limit if you are on Claude Pro or Max? I'm used to their desktop app for MCP use and there is no way to tell until the end-of-the-road popup shows up.
brulard
I couldn't get Copilot agent mode to work with Playwright MCP. It was installed, it appeared in the tools selection config, but Copilot insists it doesn't have access to any of its functions.
#browser_navigate https://www.hackerneue.com/
I had to do some installing and setup to get playwright to work. Now, how to get the agent to use the working playwright on its own is a different matter.
upcoming-sesame
I was able to get it to work with Playwright, but many times it doesn't figure out that it needs to use the Playwright MCP tool and instead tries to use curl or something else.
westonmyers
Which model? I had Claude Sonnet agent running Playwright just yesterday.
brulard
Why would the model affect the ability to use tools? I just tried Sonnet 4, 3.7, and GPT-4.1 without luck. Playwright opens fine from other tools where I configured it.
mh-
Someone more knowledgeable about how these LLMs are trained and fine-tuned can weigh in, but it absolutely does matter. Some models are much more "willing" to use tools (and more adept at discerning when to use them) than others.
I can't comment on why you're having this issue specifically, unfortunately.
Taylor_OD
Thank god. I was shocked that there wasn't a bigger outrage last week when GitHub decided that Copilot was going to have premium request limits. I'm guessing we will see a much larger outcry when people hit their request limit a week into this month/billing period.
Glad there is some competition.
cedws
I cancelled my Copilot subscription after years of use. I don’t like having something taken away from me and still having to pay the same price. I also don’t want to be mentally penny counting every time I hit enter.
rafaelmn
Lol, Claude Code will burn through $10 in a task.
golly_ned
It's practically unlimited usage with the $100 or $200 subscription. Given prices elsewhere, I don't see how this is viable longer-run and not heavily subsidized by losses at Anthropic.
ewuhic
Tangent, but does anyone use Roo for VSCode?
And does browser in Roo work with Claude provided by GitHub copilot?
peter-m80
Is there any benefit to using this instead of Copilot agent mode with a Claude backend?
Been using it for a couple days. The integration fixed the gap that required me to open files to view the updates and changes made in real time, compared to the terminal mode, which did things behind the scenes where you had no idea what it was doing. The series of nonsensical (but funny) names it gives (Pondering, Twerking, Juggling, etc.) is not useful after its initial fancy wears off.
jasonthorsness
Diff viewing! My workflow has been terminals with Claude Code on the left and vscode on the right pretty much just for diffs; maybe this can replace that.
josefrichter
It does. Among other things.
coreyh14444
AFAIK, this gets auto-installed when you launch Claude Code inside of VSCode (or Cursor) so no need to seek it out and install it this way, right?
pxc
> this gets auto-installed when you launch Claude Code inside of VSCode
Is it just me, or does that seem really invasive?
khaledh
Correct. From the extension web page:
Auto-installation: When you launch Claude Code from within VSCode’s terminal, it automatically detects and installs the extension
yomismoaqui
How does this compare to Amp (https://ampcode.com), a similar offering from Sourcegraph? (It also has a VSCode extension.)
Yesterday I burned 15€ (10€ free credit) trying Amp and I gotta say I was impressed.
Unsure why this is getting downvoted -- I'm equally excited to see this - thanks for sharing.
The next few years are going to be interesting.
b0a04gl
how our workflow starts changing when we realize it can hold multi-step intent. we stop thinking file by file. we start thinking in actions. "split this module, write tests, refactor callers" becomes a single unit in our head because Claude understands that as a unit too (maximum effort mode).
this slowly rewires how we approach code. we stop worrying about syntax early, we write more scaffolds, we batch tasks more. subtle shift but huge long term effect.
how soon before we start designing codebases for LLM agents to navigate more cleanly? flat structures, less indirection, more declarative metadata
yomismoaqui
> how soon before we start designing codebases for LLM agents to navigate more cleanly? flat structures, less indirection, more declarative metadata
This is something that I have been mulling over since I heard reports that LLMs work very well with languages like Go (explicit static typing, simple syntax, only 1 way to do things...)
Seems like with humans, the less we have to worry about the incidental complexity imposed by the tools we are using (language, framework, lib...) the more brain bandwidth we have available to use to solve a problem with code.
Paradigma11
I think we might see rising popularity of languages that give you more tools to make illegal state unrepresentable. It is great that there is so much Python/JS code that an LLM knows to avoid wide stretches of wrong terrain, but who cares, if you can just use a language that makes that impossible in the first place?
Maybe something like https://flix.dev/ with many analyzers.
> how soon before we start designing codebases for LLM agents to navigate more cleanly?
It's already happening. Armin Ronacher started writing more Go code instead of Python because it understands it better.
My coworker switched to writing a desktop app in Rust, because it can navigate it better thanks to the better tooling and type system.
People already thinking about how to write documentation for AI instead of other people, etc.
fritzo
One thing I prefer about Cursor is that it stores and manages the long prompts I enter. I abandoned Claude Code after I typed in a long paragraph of prompt then accidentally hit an arrow key and lost all my prompt-writing work. Prompts are valuable, and Cursor treats them as valuable, whereas Claude Code seems to expect throw-away one-liners.
Has this been fixed? Does the vscode Claude Code plugin retain prompts more reliably?
kordlessagain
I use Claude Desktop to write the plan (the prompt) and then tell Claude Code to read that file to do heavy lifting. If I'm working on several projects at once, I open multiple WSL based Claudes. In Claude Desktop, I have several tools I've vibed which manage context and notes for the projects: https://github.com/kordless/gnosis-evolve
I did try to get Claude Desktop to send comms to Claude Code, but got stuck on a few things related to the terminal emulation in Windows.
I have session list, load, and save tools. If a character is embodied that is working on a project, that goes in the session information and the character is loaded (embodied) when you start a new session. Making characters is done with the character generator tool, which strongly randomizes traits. Traits can relate to the ability (or inability) to run tools. Why have a personality in the AI? Because it keeps it fun and changes the tone of the code commenting and planning. And it affects tool runs...
> We are Groot! completely deadpan delivery while already analyzing the situation
There are notes on projects (folders) and any files it created for planning usually goes in /notes in the folder.
Claude Code does have some ability to save sessions, but I don't edit it much myself. That would be a better job for Claude Desktop.
topek
You can also resume previous conversations. --resume and --continue are your friends
SV_BubbleTime
In Claude, if you write something, hit the arrow, and it's gone - you hit the other arrow and it comes back.
If I remember correctly, this is even true between instances of Claude on the same terminal.
I am in a container, so if I rebuild my container, obviously that's gone.
Taylor_OD
Yeah, up will go to the previous prompt. Down will go back to the "current" one, right?
kissgyorgy
This was already installed when you ran Claude Code in a VSCode terminal; I guess the difference is that now it's explicitly listed on the VSCode Marketplace.
world2vec
From the extension's page:
Features:
- Auto-installation: When you launch Claude Code from within VSCode’s terminal, it automatically detects and installs the extension
- Selection context: Selected text in the editor is automatically added to Claude’s context
- Diff viewing: Code changes can be displayed directly in VSCode’s diff viewer instead of the terminal
- Keyboard shortcuts: Support for shortcuts like Alt+Cmd+K to push selected code into Claude’s prompt
- Tab awareness: Claude can see which files you have open in the editor
- Configuration: Set diff tool to auto in /config to enable IDE integration features
ttoinou
It was slightly buggy - it would uninstall itself sometimes. I hope this will be better now with the official extension.
dezmou
So I won't get anything more than the file compare that appears when Claude in the terminal asks to modify a file?
kissgyorgy
You can select lines, which will be added to the context (you can't do that from the console), and it can show the edited files in the VSCode editor, not just in the terminal.
dezmou
The extension says "Tab awareness: Claude can see which files you have open in the editor". I don't know how to activate this; it would help me to not have to cd in the terminal each time.
dezmou
Ok, so I tested it by cd-ing into a directory, opening a file from another directory, creating an empty function, selecting the function in the editor, and asking Claude to just "fill the function". It knew which text was selected in which file and filled the function. This will save me some time.
zackify
Seems like this is the exact same extension that’s been in use but it’s just now publicly on the extension marketplace.
nithril
VSCode is really the primary platform for AI/agentic plugins, receiving priority over other IDEs such as IntelliJ. This is understandable, as it is free, supports many languages, and is really good.
As a long-time IntelliJ user, I’m beginning to question whether it still makes sense to remain on this platform.
Perhaps I’m too impatient and agentic plugins may reach parity on IntelliJ within a year but a year is quite a long time to wait in this really fast-evolving landscape.
While your observation is generally true, and I share your overall concern about my IDE of choice, in this specific example it doesn’t apply, as the Claude Code plugin for IntelliJ offers exactly the same integration as their plugin for VSCode.
The IntelliJ plugin, in beta: https://plugins.jetbrains.com/plugin/27310-claude-code-beta-...
nithril
Is your affirmation based on testing both plugins? I'm genuinely wondering about the plugin quality, as the IntelliJ plugin is still in beta and the VSCode one is not.
e1g
I use IntelliJ (WebStorm) as my daily driver, and Claude Code is an integral part of my workflow. I didn’t check VSCode specifically, but I did read the link to that extension - everything they describe is precisely how Claude Code works within WebStorm (despite being labeled “Beta”).
nithril
Appreciate your input, but if you haven’t actually tested the VSCode integration, it’s hard to compare. Reading a feature list isn’t the same as using the tool.
e1g
Okay, I just tried the Claude Code extension in VSCode - the experience is 100% identical to how it works within IntelliJ, even using the same icons and shortcuts to turn it on/off.
Did you just reply to a comment about a link with a comment with the same link?
mdaniel
> this is understandable as it is free, supporting many languages, and really good.
IntelliJ and PyCharm are both Apache 2, IntelliJ for sure supports many languages, and I'll keep the commentary about the last item to myself
brulard
I was using WebStorm some years ago, but after the switch to VS Code I never looked back. For me at least it was very laggy, the UI bloated, and autocomplete unreliable due to constant "indexing".
esafak
Junie works
nithril
Not comparable.
I did test VSCode and IntelliJ on agentic features and MCP, and IntelliJ is for the moment far behind.
bionhoward
I always get blocked in using Claude Code by basic logic
What are you building that doesn’t compete with Anthropic? (Using your brain competes with Anthropic) — major legal risk
How do we justify accepting the lack of privacy on Claude? Is it just for people doing FOSS? You’re cool with them reading your business codebase to verify you aren’t using your brain?
Given it is logically impossible not to compete with general intelligence, and that I expect private GitHub repos to remain private, I feel forced to think Claude Code is a nerd snipe / bad joke / toy.
kordlessagain
The solution is simple: understand what the tool actually does before declaring it impossible to use. But then again, reading documentation requires... what's the word... effort.
Claude Code stores feedback transcripts for only 30 days and has "clear policies against using feedback for model training":
Privacy safeguards
We have implemented several safeguards to protect your data, including limited retention periods for sensitive information, restricted access to user session data, and clear policies against using feedback for model training.
barrkel
What is the lack of privacy on Claude Code? Aren't Pro and API both private and not used for training?
psi_chi_phi
That is also my understanding. They would never have any corporate clients if they were stealing and training on all of their paid queries.