I worry that messing with the AI is the equivalent of tweaking my colour schemes and choosing new fonts.
- anything with good enough adoption is good enough (unless I'm enough of an SME to judge it directly)
- build something with it before considering a switch
- they're similar enough that what I learn in one will transfer to others
- everything available today will suck compared with what we'll have in 2-3 years; switching between "sucks" and "sucks+" will look silly in retrospect
I found this didn't take me very long. Try things in order of how popular they seem and keep notes on what you do and don't like.
I personally settled on Zed (because I genuinely like the editor even with the AI bits turned off), Copilot (because Microsoft gave me a free subscription as an active OSS dev) and Claude Sonnet (seems to be a good balance). Other people I work with like Claude Code.
Can you provide concrete details?
When I do projects in this realm, they require significant discussion with the business to understand how reality is modeled in the database and the data, and that information is required before any notion of "clean up" can be defined.
That just leaves the other 80-90% to do manually ;)
Our target deploy environment is K8s, if that makes a difference. Right now I’m using mise tasks to run everything.
- Using AI code gen to make your own dev tools to automate tasks. Everything from "I need a make target to automate updating my staging and production config files when I make certain types of changes" or "make an ETL to clean up this dirty database" to "make a codegen tool to automatically generate library functions from the types I have defined" and "generate a polished CLI for this API for me"
- Using Tilt (tilt.dev) to automatically rebuild and live-reload software on a running Kubernetes cluster within seconds. Essentially, deploy-on-save (a minimal Tiltfile sketch follows this list).
- Much more expansive and robust integration test suites, with output structured so that an AI agent can run the tests, read the failures, and use them to iterate. With some guidance it can also write more tests from a small set of examples, and it's been great at adding formatted messages to every test assertion so that failed tests are easier to understand (there's a sketch of what I mean at the end of this post).
- Using an editor where an AI agent has access to the language server, linter, etc. via diagnostics to automatically understand when it makes severe mistakes and fix them
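To make the Tilt point concrete, a minimal Tiltfile (which is Starlark, so Python-like) might look roughly like this; the image name, paths and port are placeholders for whatever your service actually uses:

```python
# Tiltfile (Starlark, Python-like) — a minimal deploy-on-save sketch.
# Image name, paths and port are placeholders.

# Build the image and, where possible, sync changed files straight into
# the running container instead of doing a full image rebuild.
docker_build(
    'example.com/my-service',
    context='.',
    live_update=[
        sync('./src', '/app/src'),
        run('pip install -r requirements.txt', trigger='./requirements.txt'),
    ],
)

# Apply the Kubernetes manifests and forward a local port to the service.
k8s_yaml('deploy/my-service.yaml')
k8s_resource('my-service', port_forwards=8000)
```

With something like that in place, `tilt up` watches the repo and redeploys on save.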
A lot of this is traditional programming but sped up so that things that took hours a few years ago now take literally minutes.
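For the "formatted messages on every assertion" bit, here's roughly what I mean; this is a sketch in Python with an invented endpoint and fields, not my actual suite:

```python
# Sketch of an integration test whose failures explain themselves.
# The endpoint, payload and field names are invented for illustration.
import requests

BASE_URL = "http://localhost:8000"

def test_create_order_echoes_quantity():
    payload = {"sku": "ABC-123", "quantity": 2}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # Each assertion message carries the request, the response and the
    # expectation, so an agent reading only the failure output has enough
    # context to iterate without re-running things by hand.
    assert resp.status_code == 201, (
        f"POST /orders with {payload} returned {resp.status_code}, "
        f"expected 201; body: {resp.text}"
    )
    body = resp.json()
    assert body.get("quantity") == payload["quantity"], (
        f"created order has quantity {body.get('quantity')!r}, "
        f"expected {payload['quantity']!r}; full response: {body}"
    )
```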