Anyway, I have a bone to pick with your last paragraph. You are creating the problem for yourself. There are plenty of people elsewhere (even within HN) discussing exactly what you want, but you choose not to interact with them and instead spend your time arguing against "ridiculous blueprints".
When it comes to posting comments, you choose what you interact with online, and you are choosing not to engage with "nuanced conversations and genuine information transfer" -- why? Are we certain you care about genuine information transfer, or are you just here to feel superior to plebs with "anti-ChatGPT arguments"? Rhetorical questions for the culture.
"Interns", for short.
It is relevant and you know exactly why it can't be left by itself.
> There are plenty of people that I wouldn’t leave by themselves to do something, but they still provide value, and I wouldn’t classify them as generating garbage. A technology with shortcomings is not snake oil.
Except that people can be held to account when something goes wrong, and an AI cannot. I can guarantee you that you would not trust an AI in high-risk situations, such as a Level 5 autonomous car or a plane with no pilot (not the same as autopilot mid-flight), and sit in the passenger seat while it transports you from A to B.
> Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
You're not getting the point. It's about trustworthiness in AI: when a human does something wrong, they can explain themselves and their actions transparently. A black-box AI model cannot, and it can confidently generate and regurgitate nonsense from its own training set to convince novices that it is correct.
> There are enough vocal people here who feel the need to look impressive by repeating it over and over that legitimate, nuanced conversations and genuine information transfer regarding the strengths and weaknesses of these models are drowned out.
Or perhaps many here are simply skeptical of the LLM hype and still do not trust it?
There are plenty of people that I wouldn’t leave by themselves to do something, but they still provide value, and I wouldn’t classify them as generating garbage. A technology with shortcomings is not snake oil.
Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
This community has such a ridiculous blueprint for “anti-ChatGPT” arguments. There are enough vocal people here who feel the need to look impressive by repeating it over and over that legitimate, nuanced conversations and genuine information transfer regarding the strengths and weaknesses of these models are drowned out.