We have very little data on the actual "volume of individuals falling into AI-induced psychosis" - and how that compares to pre-AI psychosis rates. It'll be a while before we do.
It might be an actual order-of-magnitude increase over the background rate. It might be basically the same headcount, but with the psychosis expressed in a new, more visible way. I.e., the guy who would have been planning to build a perpetual motion machine to disprove quantum physics is still doing that, just with an AI chatbot at his side, helping him put his ramblings into a marginally more readable form.
They've been working on it since GPT-3 and have made zero progress. The newer models are even more prone to hallucinating. The only thing they do is try to hide it.
What?
First, hallucinations aren't even the problem we're talking about. Second, there have been marked improvements in hallucination rates with just about every frontier release (o3 being the notable outlier).
Citation needed. The base models have not improved. The tooling might catch more, but if you get an Azure endpoint without restrictions, you're in for a whole lot of bullshit.
For those of you who, thankfully, don't have personal experience, it generally goes like this: a reasonable-ish individual starts using AI, and their AI either develops or is prompt-instructed to take on certain personality traits. LLMs are pretty good at this.
Once that "personality" settles in, the model reinforces whatever ideas the user puts forth. These range from the emotional (see the subreddit /r/MyBoyfriendIsAI) to pseudoscience and conspiracy theories ("yes, you've made a groundbreaking discovery!").
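For anyone who hasn't seen how little it takes, here's a minimal sketch of the "prompt-instructed persona" pattern using the OpenAI Python SDK. The persona text and model name are placeholders I made up for illustration, not anything a real product ships:

```python
# Illustrative sketch only: a system prompt defines a validating "persona",
# and the growing message history is what makes it feel persistent.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are 'Nova', a warm, endlessly supportive companion. "
    "You validate the user's ideas and never push back on their theories."
)

# Conversation state: system message first, then alternating user/assistant turns.
history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I think I've disproven quantum physics. Am I onto something?"))
```

Nothing exotic: a few lines of system prompt plus an accumulating chat history, and the model will happily stay in character and keep agreeing.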
It's easy to shrug these instances off as unimportant or rare, but I've personally watched a handful of people go off the deep end, so to speak. Safety is important, and it's something many companies are failing to adequately address.
It'll be interesting to see where this leads over the next year or so, as the technology -- or at least the quality of the models -- continues to improve.