I don't know about the rest of your comment, though I consider myself more in the non-Chomsky camp. This emergence thing seems like more than an elaborate hoax to me.
Well, maybe they've just trained GPT-4 to wiggle my balls, when I ask it to analyse a poem that I wrote 15 years ago.
The local LLM scene, regardless of this debate, is nevertheless the hottest topic IMHO atm. In the past two weeks alone, several finetunes have repeatedly blown my mind. On a Raspberry Pi.
AI can now interface through natural language. This is the first time in history humanity has had such knowledge and technology.
It's basically C-3PO.
Maybe I shouldn't, though I justify it because it allows me to focus on the bespoke bits of whatever I'm doing...
Still feels a bit double-edged to me.
Even when I do need some, I use swagger or copier templates or something similar.
If you’re using ChatGPT for scaffolding, I feel like you’ve fucked up?
Generative AI is only as good as the dataset that you give it, so for problems that exist in a heavily contrived and parameterized space (like leetcode-style problems) it works really well. But give it a novel problem with intertwining libraries and external dependencies along with custom type and structure definitions, and it's going to fall flat.
Humans do something AI can't, and that's draw from experience to apply a solution to a novel problem. This is why I'm not terribly worried about AI coming for engineer jobs anytime in the near future. I use ChatGPT all the time to write me small functions, generate regular expressions, etc. Basically all the drudgery.
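To give a sense of the "drudgery" I mean, here's a hypothetical example of the kind of throwaway helper I'd ask an LLM to draft rather than write by hand (the function name and the date-extraction task are just illustrative, not from any real prompt):

```python
import re

# The kind of small regex helper an LLM handles well:
# pull ISO-8601 dates (YYYY-MM-DD) out of free-form text.
ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract_dates(text: str) -> list[str]:
    """Return all ISO-formatted dates found in text, in order."""
    return ISO_DATE.findall(text)

print(extract_dates("Released 2023-03-14, patched 2023-04-02."))
```

Tedious to get right from memory, trivial to review once generated, and easy to verify with a quick test.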
I'd argue we've hit peak-AI at this point, because from here on, all datasets are going to be colored by AI-generated results. Generative AI is now on a trajectory where it'll simply regress to the mean of existing knowledge.