jbentley1
I think integration into existing IDEs is the wrong form for agentic coding. The best way to work is managing several Git worktrees with agents running so you aren't stuck waiting 20+ minutes for Claude Code to finish.
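
Roughly, the workflow is one worktree plus one headless agent per task. A minimal sketch in TypeScript (the branch names, paths, and prompts are made up for illustration):

    // One git worktree + one headless Claude Code run per task.
    import { execSync, spawn } from "node:child_process";

    function launchAgent(branch: string, prompt: string) {
      const dir = `../worktrees/${branch}`;
      // Create an isolated checkout on its own branch.
      execSync(`git worktree add ${dir} -b ${branch}`);
      // `claude -p` runs Claude Code non-interactively with a single prompt.
      return spawn("claude", ["-p", prompt], { cwd: dir, stdio: "inherit" });
    }

    launchAgent("fix-auth-race", "Fix the token refresh race in auth.ts");
    launchAgent("add-csv-export", "Add CSV export to the reports page");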

I built a UI to manage this, and it is starting to turn into a new type of IDE, based around agent management and review rather than working on one thing at a time.

https://github.com/stravu/crystal


skydhash
When I see proposals for this kind of workflow, the one question I have is how you will manage your personal context. When I'm reviewing code by a coworker, I'm not seeking to fully understand the code or checking that it's correct. I'm mostly trying to get a high-level understanding and checking for glaring mistakes (code style, best practices, ...). I can get through a lot of PRs in a day that way.

For more important stuff, like if it falls under my supervision, I will test the branch and carefully check the implementation. And that's for each update to the PR. It takes a lot longer.

So I'm wondering: how do you context-switch between many agents running and proposing diffs, especially if you need to vet the changes? And how do you manage module dependencies, where an update by one task can subtly influence the implementation of another?

LeafItAlone
>So I'm wondering: how do you context-switch between many agents running and proposing diffs, especially if you need to vet the changes?

I'm wondering this too. But from what I have seen, I think most people doing this are not really reading and vetting the output. It's just faster, parallelized vibe coding.

Not saying that’s what parent is doing, but it’s common.

stingraycharles
Yeah. I would like multiple agents because each can be primed with a different system prompt and a "clean" context. This has been proven to work, e.g. with Aider's "architect" vs. "editor" models/agents working together.

As for people who parallelize because they want stuff to "happen faster": I am convinced most of them don't really read (or probably understand) the code it produces.

scuol
It's basically like having N of the most prolific LoC-producing colleagues, who don't have a great mental model of how the language works, and having to carefully parse all of their PRs.

Honestly, I've seen too many fairly glaring mistakes in all the models I've tried, which signals that they can't even get the easy stuff right consistently. In the language I use most (C++), if they can't do that, how can I trust them to get all the very subtle things right? (E.g. very often they produce code that holds some form of dangling reference, and when I say "hey, don't do that", they go back to something very inefficient like copying things all over the place.)

I am very grateful they can churn out a comprehensive test suite in gtest, though, and write other scripts to test, do a release, and such. The relief from tedium there is welcome for sure!

jbentley1 OP
I tried to make it easy to remember what you are doing. You can see the prompts you ran, and I used the Monaco editor from VSCode to view and edit the diffs.
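
For reference, Monaco's diff editor does most of the heavy lifting there; a minimal sketch (the file contents and DOM element are placeholders):

    import * as monaco from "monaco-editor";

    // Placeholder contents; in practice these come from git.
    const baseContents = "export const x = 1;";
    const agentContents = "export const x = 2;";

    // Side-by-side, editable diff view of the agent's proposed change.
    const diffEditor = monaco.editor.createDiffEditor(
      document.getElementById("diff")!,
      { renderSideBySide: true, readOnly: false },
    );
    diffEditor.setModel({
      original: monaco.editor.createModel(baseContents, "typescript"),
      modified: monaco.editor.createModel(agentContents, "typescript"),
    });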

I think there are opportunities to give special handling to the markdown docs and diagrams Claude likes to make along the way, to help with review.

EGreg
Why don't you automate this checking with AI? You can then cover hundreds of PRs a day.

Voloskaya
> You can then cover hundreds of PRs a day.

I would argue you haven't covered any.

Why not just skip the reviews then? If you can trust the models to have the necessary intelligence and context to review properly, they should be able to code properly in the first place. Obviously, that's not where models are today.

EGreg
Not necessarily. It's like a Generative Adversarial Network (GAN): you don't just trust the generator; it's a back-and-forth between the generator and the discriminator.

Voloskaya
The discriminator is trained on a different objective than the generator; it's specifically trained to be good at discriminating, so it is complementary.

Here we are talking about the same model doing the review (even if you use a different model provider, it's still trained on essentially the same data, with the same objective, and with very similar performance).

We have had agentic systems where one agent checks the work of another for 2+ years. This isn't a paradigm pushed by AI coding model providers, because it doesn't really work that well; review is still needed.

derwiki
Turtles all the way down. We seem to be marching towards a future like that, but are we there today? Some of the AI-generated PRs I've seen teammates put out "work" (because sometimes two wrongs make a right), but they convince me we still need a human in the loop.

But that was two weeks ago; maybe it’s different today

jbentley1 OP
The other replies are correct that right now you need some level of human review, but it would be interesting to have a second AI review with a clean context. Maybe a security checklist, or a prompt telling it to check that the tests cover the functionality appropriately.
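
Something like this, as a sketch: pipe the branch diff into a fresh headless session that never saw the writing session's context (the checklist wording here is made up):

    import { execSync } from "node:child_process";

    // The reviewer gets only the diff and a checklist, no writing context.
    const diff = execSync("git diff main...HEAD", { encoding: "utf8" });
    const review = execSync(
      'claude -p "Review the diff on stdin: check for leaked secrets, missing error handling, and whether the tests cover the new behavior."',
      { encoding: "utf8", input: diff },
    );
    console.log(review);
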
Etheryte
There's no reason you couldn't do the same thing as an IDE plugin.

jbentley1 OP
Yes, there is. IDEs just aren't designed for it. The main screen in an IDE is a single branch at a time; I want to be managing a swarm of agents across multiple branches/worktrees.

Etheryte
You don't need IDE support for this; it's all Git under the hood. Your extension can hold virtual branches in memory in the background, feed the file contents to the LLM through that layer and back, and the only problem you need to deal with after the fact is how to resolve conflicts, but the LLM would also be a good candidate to handle that. The more I think about it, the more Git makes this a straightforward implementation compared to, say, SVN, since branches cost nearly nothing. All of this is not to say that it's a trivial piece of work, but it is very much doable without building a new IDE from scratch.
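
As a sketch of the idea (all names here are hypothetical): the extension keeps a per-agent overlay of edited files in memory and only materializes a real branch when the user accepts:

    // path -> proposed contents, one overlay per agent
    type VirtualBranch = Map<string, string>;
    const overlays = new Map<string, VirtualBranch>();

    // The LLM reads through the overlay, falling back to what's on disk.
    function readFileFor(agent: string, path: string, onDisk: string): string {
      return overlays.get(agent)?.get(path) ?? onDisk;
    }

    // The LLM's edits land in memory, never in the working tree.
    function writeFileFor(agent: string, path: string, contents: string): void {
      let overlay = overlays.get(agent);
      if (!overlay) overlays.set(agent, (overlay = new Map()));
      overlay.set(path, contents);
    }
    // On accept: write the overlay out and commit it on a real git branch.
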
radicalbyte
That needs isolation, which in practice means multiple machines.

derwiki
Why machines? Multiple clones of the same repo is one low-tech way to achieve that.

brulard
If we're talking about, for example, a full-stack JS/TS app, wouldn't you need a separate build/dev server running, a database, and likely more?

naasking
I don't see why you necessarily need multiple machines, just multiple checkouts, one for each agent. It depends on what shared resources are involved, e.g. databases, etc.

int_19h
Why not multiple IDE windows then?

SkyPuncher
Your tool is cool, but it solves a different issue.

Right now, background agents have two major problems:

1. There is some friction in getting the isolated environment working correctly. The difficulty depends on the specifics of each project, ranging from "select this universal container" to "it's going to be hell getting all of your dependencies working". Working in your IDE pretty much solves that: it's likely a place where everything is already set up.

2. People need to learn how agents build code. Watching an agent work in your IDE, while being able to interject and correct it, is extremely helpful to long-term success with background agents.

mindwok
I personally disagree. I use Cursor every day on commercial projects, and while I find background agents cool and useful in some contexts, they are more often than not simply a distraction.

My preferred way to vibe code is to lock in on a single goal and iterate towards it. When I'm waiting for stuff to finish, I'm exploring docs or info to figure out how to get closer. Reviewing the existing codebase or changes is also super useful for me to grasp where I'm up to and what to do next. This idea of managing swarms of agents for different tasks does not gel with me; too much context switching and multitasking.

Jonovono
Looks cool! What was your reason for not using the Claude Code TS SDK? It looks like you install the package but are manually spawning claude commands instead?
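
For comparison, my understanding is the SDK route looks roughly like this (from the @anthropic-ai/claude-code package; worth checking its docs for the exact shape):

    import { query } from "@anthropic-ai/claude-code";

    // Streams the agent's messages instead of parsing terminal output.
    for await (const message of query({
      prompt: "Fix the failing tests in src/parser.ts",
      options: { maxTurns: 10 },
    })) {
      console.log(message);
    }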

Side note: you should look into electron-trpc. It greatly simplifies IPC handling.
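
Something like this, if I remember its docs right (the router and procedure names here are made up); you get end-to-end types across the IPC boundary for free:

    // main process: define the router once, expose it over IPC
    import { BrowserWindow } from "electron";
    import { initTRPC } from "@trpc/server";
    import { createIPCHandler } from "electron-trpc/main";

    const t = initTRPC.create({ isServer: true });
    const router = t.router({
      listSessions: t.procedure.query(() => ["fix-auth-race", "add-csv-export"]),
    });
    export type AppRouter = typeof router;

    const win = new BrowserWindow(); // preload must call exposeElectronTRPC()
    createIPCHandler({ router, windows: [win] });

    // renderer side (sketch):
    // import { createTRPCProxyClient } from "@trpc/client";
    // import { ipcLink } from "electron-trpc/renderer";
    // const client = createTRPCProxyClient<AppRouter>({ links: [ipcLink()] });
    // const sessions = await client.listSessions.query();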

brulard
This is nice. I was thinking about needing multiple worktrees for different sessions of Claude Code.

Regarding your webpage: I wish you would vibe away the annoying header that comes down every time I scroll up just a tiny bit.

jbentley1 OP
Noted! Thanks.

OtherShrezzing
For Anthropic, they've got to put their product where their customers are. If those customers are all in a CLI or IDE, then the correct place to put agentic coding features is the CLI or IDE.

throwaway314155
> The best way to work is managing several Git worktrees with agents running so you aren't stuck waiting 20+ minutes for Claude Code to finish.

Sounds like you're limiting yourself to users who are comfortable paying a $100-200 monthly subscription, or even thousands per month at API prices.

CC is expensive, but I was hoping we weren't going to build tooling that exacerbates this issue simply because, for some of us, money is less of an issue than it is for most of us.

jbentley1 OP
If you are paying a senior engineer $200k, getting them a CC Max plan at $200/month ($2,400/year) is equivalent to 1.2% of their salary. I would say that it increases productivity by a lot more than that.

So yes it might feel expensive in terms of a personal monthly budget, but the value for money is insane.

data-ottawa
I was just reading the Claude Code docs recommending that approach this morning.

Having a nice way to manage the worktrees sounds great, but rate limiting still sounds like an issue with this approach.

https://docs.anthropic.com/en/docs/claude-code/common-workfl...

mikojan
Rate limiting has not been a problem for me. I need time to review the proposals and the actual source code, and to meddle with it in between.

One must also always be aware that an LLM WILL ALWAYS DO what you ask it to do. Often you ask for the wrong thing, and you need to rethink.

Maybe I am inefficient, though; I really only use at most two additional worktrees at the same time.

brulard
> ... an LLM WILL ALWAYS DO what you ask it to do.

What? That's not my experience at all. Especially not "always"

mikojan
Yes, yes they do. If you ask it to refactor something and integrate it somewhere else, it will do exactly that, even if in the course of it you would find that it would dramatically increase complexity, not reduce it.

I cannot count how many times that or something like that has happened to me.

brulard
Most of the time, maybe. Absolutely not always. I'll tell it to "implement feature A, ignore TypeScript errors", and it has happened to me multiple times that it did the exact opposite: fixed the TS errors, with the feature barely mentioned in the response. Or more recently (with deep research): "Give me a list of {some_product_name}; make absolutely sure to output a CSV. Columns are a, b, c, ...". Does it give me the data? No, I get a wall of text with absolutely no data. OK, you may argue this is some agent, etc., but the user may not see a difference.

Don't get me wrong, I'm a big fan and constant user of all these things, but I would say they frequently have problems following prompts.

Paradigma11
Or reduce complexity: https://xkcd.com/221/

jbentley1 OP
If I hit the rate limit in 2 hours and got value out of each prompt I ran, that's better than doing the same amount of work in 6 hours and not hitting the limit.

Personally, I'm running 2 accounts and switching between them for maximum productivity. Just as a function of what my time is worth, it's a no-brainer.

jbentley1 OP
While I seem to have a little attention from this comment: if anyone can test this Linux installer for Crystal and tell me whether it works on their machine, I would appreciate it:

https://github.com/stravu/crystal/actions/runs/15791009893/a...

ninthaccountshn
The basics are working on Arch with the AppImage; anything specific?

jbentley1 OP
If you can call Claude Code, that means everything else should be working, as most functionality is built around the terminal, and that is how it calls Claude Code.

Thanks for your help, now I'll be able to include Linux support in my next release

smrtinsert
What tasks require parallel workflows like this? Running one Claude prompt gives me more than enough to chew on for several hours if done correctly.

4b11b4
Seems like Amp would plug into this better? At least regarding the ability to share prompts, etc.

artursapek
When I try to run two CCs at once I quickly get 429 rate-limited, even on the $200 plan.

andy_ppp
Maybe the UI should still allow you to ask questions, but put them in a queue to prevent this. It could show informative text like "waiting on 3 previous questions" and a progress bar of some kind.
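
A sketch of that queue (runPrompt stands in for the real Claude call): serialize prompts through a promise chain and surface the depth to the UI:

    let tail: Promise<unknown> = Promise.resolve();
    let depth = 0;

    function enqueue<T>(runPrompt: () => Promise<T>): Promise<T> {
      depth++;
      if (depth > 1) console.log(`waiting on ${depth - 1} previous questions`);
      const result = tail.then(runPrompt).finally(() => depth--);
      tail = result.catch(() => {}); // keep the chain alive after failures
      return result;
    }

    // enqueue(() => askClaude("...")) -- each prompt runs only after the
    // previous one settles, so bursts can't trigger 429s.
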
jbentley1 OP
Weird, I have not had this issue, and I commonly run 5+ at once.

lbeurerkellner
This looks really cool, thanks for sharing.
