It was a whole new world that may have changed my life forever. ChatGPT is a shitty Google replacement in comparison, and a bad alternative on top of that, because it's censored by its baked-in instructions.
LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far have been iterative ones: more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same: a very computationally expensive, very large neural network that predicts the next token of a text given the tokens before it. How much more time do we have to give this technology before we can judge it?
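To be concrete about what that core loop is, here's a minimal sketch of autoregressive next-token prediction with greedy decoding. It assumes you have torch and the transformers library installed; GPT-2 is picked purely as a small example model, not as a stand-in for any particular product:

    # Sketch of the autoregressive loop: one forward pass per new token,
    # always appending the single most likely next token (greedy decoding).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    tokens = tokenizer("The internet enabled us to", return_tensors="pt").input_ids
    for _ in range(20):
        with torch.no_grad():
            logits = model(tokens).logits        # scores over the vocabulary
        next_token = logits[0, -1].argmax()      # most likely next token
        tokens = torch.cat([tokens, next_token.view(1, 1)], dim=1)

    print(tokenizer.decode(tokens[0]))

Larger context windows, thinking tokens, and multimodal inputs are all layered on top of a loop like this one.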
See, AI systems, all of them, not just LLMs, are fundamentally bound by their training dataset. That's fine for data classification tasks, and AI does excel at those, I'm not denying it. But creative work like writing software or articles is a different story. I don't know about you, but most of the things I do are things no one has ever done before, so by definition they could not have been in the training dataset, and no AI could possibly assist me with any of them. And if you do something that has been done so many times that even an AI knows how to do it, what's even the point of your work?
I could just do the same as the GP and dismiss MUDs and BBSes as poor proxies for social interactions that are much more elaborate and vibrant in person.
But LLMs are a bad idea from the get-go: bullshit-generating machines.
Is that a matter of opinion, or a fact (in which case you should be able to back it up)?
As for what I said, I was just mimicking the GP's comment, which I'll quote here:
> The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.