We keep getting more cool features in the tools, but I don't see any indication that the models are getting any better at understanding or managing complexity. They still make dumb mistakes. They still write terrible code if you don't give them lots of guardrails. They still "fix" things by removing functionality or adding a ts-ignore comment. If they were making progress, I might be convinced that eventually they'll get there, but they're not.
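To make that concrete, here's the shape of "fix" I keep seeing (a made-up TypeScript sketch, not output from any particular model):

```typescript
// Before: the compiler correctly flags a missing-value bug (under strictNullChecks).
function getUserName(user: { name?: string }): string {
  return user.name; // error TS2322: 'string | undefined' is not assignable to 'string'
}

// The "fix": silence the compiler and keep the bug.
function getUserNameSuppressed(user: { name?: string }): string {
  // @ts-ignore
  return user.name;
}

// What fixing it actually looks like: handle the missing case.
function getUserNameHandled(user: { name?: string }): string {
  return user.name ?? "unknown";
}
```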
So maybe there isn't any fundamental change needed to LLMs to take them from junior to senior dev.
> They still "fix" things by removing functionality or adding a ts-ignore comment.
I've worked with many, many people who "fix" things like that. Hell, just this week one of my colleagues "fixed" a failing test by adding delays.
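The pattern looked roughly like this (a minimal Jest-style sketch; the record API is invented):

```typescript
// Hypothetical async API under test.
declare function saveRecord(id: string): Promise<void>;
declare function loadRecord(id: string): Promise<string | undefined>;

// The flaky original: kicks off the save but never awaits it.
test("saves the record (flaky)", async () => {
  void saveRecord("42");
  expect(await loadRecord("42")).toBeDefined(); // sometimes runs before the write lands
});

// The "fix": sleep and hope the race resolves itself.
test("saves the record (delay 'fix')", async () => {
  void saveRecord("42");
  await new Promise((resolve) => setTimeout(resolve, 2000)); // hope 2s is enough
  expect(await loadRecord("42")).toBeDefined();
});

// The actual fix: await the operation you're testing.
test("saves the record (fixed)", async () => {
  await saveRecord("42");
  expect(await loadRecord("42")).toBeDefined();
});
```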
I still think current AI is pretty crap at programming anything non-trivial, but I don't think it necessarily requires fundamental changes to improve.
> LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well
Not what I said. The correct logic is "LLMs are stupid, but that doesn't prove that they MUST ALWAYS be stupid, in the same way that the existence of stupid people doesn't prove that ALL people are stupid".
> let's put aside the obvious bad logic
Please.
> WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences.
What? No, it isn't. It's partly because they have lots of practice and have learned from experience. But it's also partly natural talent.
> something LLM categorically cannot do
There's literally a step called "training". What do you think that is?
The difference is that LLMs have a distinct offline training step and can't learn anything new after that. Kind of like the guy in Memento. Does that completely rule out smart LLMs? Too early to tell, I think.
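To illustrate the split I mean, here's a toy sketch (plain TypeScript, with 1-D linear regression standing in for "a model"):

```typescript
type Model = { w: number };

// Training: the only phase where the weight changes.
function train(samples: Array<[number, number]>, steps = 1000, lr = 0.01): Model {
  let w = 0;
  for (let i = 0; i < steps; i++) {
    for (const [x, y] of samples) {
      w -= lr * 2 * (w * x - y) * x; // gradient of squared error (wx - y)^2
    }
  }
  return { w };
}

// Inference: the weight is read, never written. However wrong a
// prediction turns out to be, the deployed model doesn't update from it.
function predict(model: Model, x: number): number {
  return model.w * x;
}

const model = train([[1, 2], [2, 4], [3, 6]]); // learns w ≈ 2, offline, once
console.log(predict(model, 10)); // ≈ 20; no feedback loop at this point
```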
Oh wow, they use the same word so they must mean the same thing! Hard to argue with that logic :)