One problem ChatGPT would have in its current form is that it would need auxiliary assistance to draft a larger bill, since bills easily exceed its context window. But that's a solvable problem too.
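To make the context-window point concrete, here is a minimal sketch of the usual workaround: split an over-long document into chunks that each fit the window and process them separately. The function name, the 4-characters-per-token heuristic, and the window size are all illustrative assumptions, not any particular API's behavior.

```python
def chunk_text(text, max_tokens=4096, chars_per_token=4):
    """Split text into pieces that each fit within a rough token budget,
    using a characters-per-token estimate (a common approximation)."""
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# A bill far larger than one context window gets split into
# window-sized pieces that can be summarized or edited one at a time.
bill = "SECTION 1. Definitions. " * 10000
chunks = chunk_text(bill)
assert all(len(c) <= 4096 * 4 for c in chunks)
```

In practice one would chunk on section boundaries and use the model provider's own tokenizer rather than a character estimate, but the principle is the same.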
A technology can be both wildly powerful, mindblowingly cool, and deeply imperfect. I don't believe it's intellectually dishonest to emphasize the latter when it comes to impact against the human beings on the other end of the barrel. Especially when the technology starts to break out of the communities that already understand it (and its limitations).
I truly can’t tell whether you are describing the US Congress or LLMs.
You admit this yourself, it requires qualified staff to guide the model aka some people to dig through the garbage to find the good bits it produced.
Of note: I use ChatGPT to generate a lot of garbage. Or, for those of you offended by the word, mentally replace it with something more neutral-sounding like "debris" or "fragments".
Exactly.
It is AI snake oil: humans still have to check whether it hallucinates (which it certainly will), so it cannot be fully autonomous and needs qualified people monitoring, reading, and checking the output.
Since it can generate garbage, it cannot be trusted to be left by itself and run fully autonomously at the click of a button.
There are plenty of people that I wouldn’t leave by themselves to do something, but they still provide value, and I wouldn’t classify them as generating garbage. A technology with shortcomings is not snake oil.
Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
This community has such a ridiculous blueprint for “anti-ChatGPT” arguments. There are enough vocal people here who feel the need to look impressive by repeating it over and over that legitimate, nuanced conversations and genuine information transfer about the strengths and weaknesses of these models are drowned out.
Anyways, I have a bone to pick with you in your last paragraph. You are creating the problem for yourself. There are plenty of people elsewhere (even within HN) discussing exactly what you want, but you choose not to interact with them and instead spend time arguing against "ridiculous blueprints".
You choose what you interact with online when it comes to posting comments, you are choosing not to interact with "nuanced conversations and genuine information transfer" -- why? Are we certain you care about genuine information transfer, or are you just here to feel superior to plebs with "anti-ChatGPT arguments"? Rhetorical questions for the culture.
"Interns", for short.
It is relevant and you know exactly why it can't be left by itself.
> There are plenty of people that I wouldn’t leave by themselves to do something, but they still provide value, and I wouldn’t classify them as generating garbage. A technology with shortcomings is not snake oil.
Except that people can be held to account when something goes wrong and an AI cannot. I can guarantee you that you would not trust an AI in high-risk situations, such as Level 5 autonomous cars or pilotless planes (which is not the same as autopilot mid-flight), and sit in the passenger seat to be transported from A to B.
> Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
You're not getting the point. It's about trustworthiness in AI: when a human does something wrong, they can explain themselves and their actions transparently. A black-box AI model cannot, and it can confidently generate and regurgitate nonsense from its own training set to convince novices that it is correct.
> There are enough vocal people here that feel the need to look impressive by repeating it over and over, that legitimate nuanced conversations and genuine information transfer with regard to the strengths and weaknesses of these models are drowned out.
Or perhaps many here are skeptical about the AI LLM hype and still do not trust it?
This is true. It is not really _generating_ garbage, as much as it is regurgitating garbage from the input data.
https://arxiv.org/abs/2303.12712
All things mentioned in Section 10.2 are extremely worrying in a context of lawmaking and jurisprudence in general.