Perhaps enough time for you to build a good understanding of what it is capable of, and how it is evolving over time?
I have a good enough understanding of what it is capable of, and I remain unimpressed.
See, AI systems, all of them, not just LLMs, are fundamentally bound by their training dataset. That's fine for data classification tasks, and AI does excel at those, I'm not denying it. But creative work like writing software or articles is different. Don't know about you, but most of the things I do are things no one has ever done before, so by definition they could not have been in the training dataset, and no AI could possibly assist me with any of them. If you're doing something that has been done so many times that even an AI knows how to do it, what's even the point of your work?
LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far have been very much iterative ones — more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same: a very computationally expensive, very large neural network that predicts the next token of a text given a sequence of tokens. How much more time do we have to give this technology before we can judge it?
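For what it's worth, that core loop — predict the next token, append it, repeat — can be sketched in a few lines. This is only a toy illustration: a bigram count table stands in for the neural network, and the tiny corpus is made up. It also happens to illustrate the "bound by the training dataset" point, since the model simply has no prediction for a token it never saw.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count, for each token, which tokens followed it in the corpus.
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Greedy decoding: return the most frequent continuation seen in training,
    # or None for a token the model was never trained on.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

def generate(counts, start, max_tokens=5):
    # Autoregressive generation: feed each prediction back in as the new input.
    out = [start]
    for _ in range(max_tokens):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

model = train_bigram("the model predicts the next token given the next token")
```

An actual LLM replaces the count table with billions of learned parameters and conditions on the whole preceding sequence rather than one token, but the outer generation loop is the same shape.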