Should they not have done so?
Like this guy, for example: was he being stupid? https://www.thesun.co.uk/health/37561550/teen-saves-life-cha...
Or this guy? https://www.reddit.com/r/ChatGPT/comments/1krzu6t/chatgpt_an...
Or this woman? https://www.hackerneue.com/item?id=43171639
This is a real thing that's happening every day. Doctors are not very good at recognizing rare conditions.
They got lucky.
This is why I wrote this blog post. I'm sure some people got lucky and an LLM gave them the right answer, so they went and bragged about it. How many people got the wrong answer? How many of them bragged about their bad decision? This is _selection bias_. I'm writing about my embarrassing lapse of judgment because I doubt anyone else will.
AI saves lives? That's selection bias. AI gives bad advice after being asked leading questions by a user who clearly doesn't know how to use it correctly? AI is terrible and nobody should ask it about medical stuff.
Or perhaps there's a more reasonable middle ground? "It can be very useful to ask AI medical questions, but you should not rely on it exclusively."
I'm certainly not suggesting that your story isn't a useful example of what can go wrong, but I do insist that the conclusions you've drawn from it are mistaken.
The difference between your story and the stories of the people whose lives were saved by AI is that they generally did not blindly trust what the AI told them. You don't need to trust AI to get helpful information from it; you basically do need to trust AI in order to hurt yourself with it.
I'm certainly not suggesting that you should ask an LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.