Oh, I guess you mean when they grow up.
The woo is laughable. A cryptobro could have pulled the same nonsense out of their ass about Web 3.0.
---
Less flippantly, they are excellent for self-studying university-level topics. It's like being able to ask questions of a personal tutor/professor.
---
Human developers are error-prone too. That's why we've built up so much process around software development:
- documentation
- design reviews
- type systems
- code review
- unit tests
- continuous integration
- integration testing
- QA process
- etc.
It turns out that when you include all these processes, teams of error-prone human developers can produce complex working software. Mostly -- sometimes there are bugs. Kind of a lot, actually. But we get things done. Is it not the same with AI? With the right processes you can get consistent results from inconsistent tools.
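To make that concrete, here's a minimal sketch of the idea (the `flaky_tool` and `passes_review` functions are hypothetical stand-ins, not any real API): wrap the unreliable generator in a verify-and-retry loop, the same way unit tests and code review gate human output.

```python
import random

def flaky_tool(task: str) -> str:
    """Stand-in for an inconsistent tool (an LLM, a junior dev, whatever).
    Hypothetical: produces a correct answer only half the time."""
    return task.upper() if random.random() < 0.5 else "garbage"

def passes_review(candidate: str, task: str) -> bool:
    """The 'process': the gate every output must clear -- think unit tests,
    type checks, code review. Here it's just an expected-value check."""
    return candidate == task.upper()

def run_with_process(task: str, max_attempts: int = 8) -> str:
    """Consistent results from an inconsistent tool: generate, verify, retry."""
    for _ in range(max_attempts):
        candidate = flaky_tool(task)
        if passes_review(candidate, task):
            return candidate  # accepted only after clearing the gate
    raise RuntimeError("nothing survived review -- escalate to a human")

print(run_with_process("ship it"))  # almost always prints "SHIP IT"
```

A 50%-reliable tool checked eight times fails all eight less than 1% of the time, and the failure path still ends with a human, not silent garbage.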
---
This is a pretty massive difference between the two, and your narrative is part of why AI is proving so harmful for education in general. Delusional dreamers and greedy CEOs talking about AI being able to do "PhD-level work" have potentially misled a significant chunk of the next generation into thinking they are genuinely learning by asking AI "a few questions" and taking the answers at face value, instead of struggling through the material to build true understanding.
---
I’ll take a potential solution I can validate over no idea of my own whatsoever, any day.
---
Maybe say something concrete? What's a positive real-world impact of LLMs where they aren't hideously expensive and error-prone to the point of near uselessness? Something that isn't just the equivalent of a crypto-bro saying their system for semi-regulated speculation (totally not a rugpull!) will end the tyranny of the banks.