But the overall gain in efficiency is still a low single-digit speedup. It's not a multi-OOM speedup, like going from doing 1,000 long divisions by hand over many days to letting a computer do them in a split second. The "wall" of irreducible complexity was never OOMs away from how modern pre-AI software development was done.
For example, I've used LLMs to write ~1,600 lines of Rust over the past few days: I'm having one build Ratatui bindings for Ruby. I've never learned Rust, but I can read C-like languages, so I can mostly follow what's happening, and I could tell when the code needed to be modularized. I have a sneaking suspicion that most of the Rust tests it wrote are really testing Ratatui itself rather than the bindings. But I've had the LLM cover the functionality with Ruby tests, in a language I do know, so I've felt comfortable enough to ship it.
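The pattern here is to exercise the bindings through their Ruby-facing surface instead of duplicating upstream tests. A minimal sketch of what such a test might look like, with `Ratatui::Paragraph` as a hypothetical binding class stubbed inline in plain Ruby so the sketch runs standalone (the real object would call into Rust):

```ruby
require "minitest/autorun"

module Ratatui
  # Stand-in for a native-backed binding object (hypothetical API,
  # stubbed here so the example is self-contained).
  class Paragraph
    attr_reader :text

    def initialize(text)
      # Input validation is exactly the kind of binding-level behavior
      # worth testing from Ruby: upstream Ratatui never sees a non-String.
      raise ArgumentError, "text must be a String" unless text.is_a?(String)
      @text = text
    end

    # Pretend render: naive hard wrap into width-sized slices.
    def render(width:)
      text.scan(/.{1,#{width}}/)
    end
  end
end

class ParagraphBindingTest < Minitest::Test
  def test_rejects_non_string_input
    assert_raises(ArgumentError) { Ratatui::Paragraph.new(42) }
  end

  def test_wraps_to_width
    lines = Ratatui::Paragraph.new("hello world").render(width: 5)
    assert_equal ["hello", " worl", "d"], lines
  end
end
```

Tests like these pin down the contract at the FFI boundary (type coercion, error mapping, return shapes), which is where binding bugs actually live, regardless of how well the upstream library is tested.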
I really encourage you to update your priors, since capabilities are very different from even 6 months ago.