How do you verify it is teaching you the correct thing if you don't have any baseline to compare it to?
Doesn't that sound ridiculous to you?
Admittedly, part of it is my own desire for code that looks a certain way, not just code that solves the problem.
Picking your battles is part of the skill at the moment.
I use AI for a lot of boilerplate, tedious tasks I can’t quite do a vim recording for, and small targeted scripts.
The boilerplate argument is becoming quite old.
It’s basically just a translation, but with dozens of tables, each with dozens of columns, it gets tedious pretty fast.
If given other files from the project as context, it’s also pretty good at generating the table and column descriptions for documentation, which I would probably just not write at all if doing it by hand.
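The "translation" being described is the mechanical mirroring of a database schema into model code. As a minimal sketch (with an invented table and invented names, not the commenter's actual project), the per-table work looks something like this, repeated dozens of times:

```python
# Hypothetical sketch of the schema-to-code translation described above:
# each table's column list must be mirrored by hand into a model class.
# Table name, columns, and class are all invented for illustration.

from dataclasses import dataclass, fields
from typing import Optional

# One of "dozens of tables": the schema side of the translation.
USERS_COLUMNS = {
    "id": int,
    "email": str,
    "display_name": Optional[str],
}

# The code side: trivially derived from the schema, but tedious to
# write and keep in sync across many tables, hence the appeal of
# having an LLM do the transcription.
@dataclass
class User:
    id: int
    email: str
    display_name: Optional[str]

def model_matches_schema(model, columns) -> bool:
    """Check that a model's fields mirror the schema's columns."""
    return {f.name: f.type for f in fields(model)} == columns
```

Each individual translation is trivial; the burden is the volume and the ongoing synchronization, which is exactly the kind of rote transcription the comment is pointing at.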
I think you need to imagine all the things you could be doing with LLMs.
For me the biggest thing is that so many tedious tasks are now unlocked: refactors that are just slightly beyond the IDE; checking your config (the number of typos it’s picked up that could have taken me hours, because eyes can be stupid); data processing that’s similar to what you’ve done before but different enough to be annoying.
It's not AI; there is no intelligence. A language model, as the name says, deals with language. Current ones are surprisingly good at it, but it's still not more than that.
Writing the code is the fast and easy part once you know what you want to do. I use AI as a rubber duck to shorten that cycle, then write the code myself.