Calling cutting-edge consumer-facing models like ChatGPT-4 garbage-generating machines is very intellectually dishonest. These models are fully capable of drafting these kinds of texts, especially when qualified staff are guiding the model.

Well, I just popped in "Write a new Federal law banning the collection of melted snow by individuals or small-business proprietorships for the purpose of protecting endangered plant species. Include a loophole that excludes minority-owned businesses or people who contribute a sufficient amount of money to carbon sequestration technologies or senators or representatives who voted in favor of strongly pro-union causes." and I won't burden HN with the results, but it definitely has the shape of a fully-fledged bill for Congress to pass.

One problem ChatGPT would have in its current form is that it would need auxiliary assistance to craft a larger bill, as bills easily exceed its current context window. But that's a solvable problem too.
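
For instance, here is a minimal sketch of what that auxiliary assistance could look like: draft the bill one section at a time, carrying a compressed running summary forward so no single request exceeds the context window. The `generate` helper below is a hypothetical stand-in for any LLM completion call, not a real API.

    # Hypothetical placeholder for a real LLM call; returns dummy text here.
    def generate(prompt: str) -> str:
        return f"[drafted text for: {prompt[:60]}...]"

    def draft_bill(outline: list[str]) -> str:
        sections = []
        summary = ""  # running summary keeps prior context small
        for heading in outline:
            prompt = (
                f"Summary of the bill so far: {summary or 'none'}\n"
                f"Draft the next section: {heading}"
            )
            text = generate(prompt)
            sections.append(text)
            # Compress what exists so far instead of resending it verbatim.
            summary = generate(f"Summarize in 200 words:\n{summary}\n{text}")
        return "\n\n".join(sections)

    print(draft_bill(["Definitions", "Prohibited conduct", "Exemptions", "Penalties"]))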

They may not generate garbage per se, but they do generate bullshit. Or if you want to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated: while they demonstrate a phenomenal capacity for producing and understanding language, they do not yet demonstrate much capacity for control or alignment.

A technology can be wildly powerful, mind-blowingly cool, and deeply imperfect all at once. I don't believe it's intellectually dishonest to emphasize the latter when it comes to the impact on the human beings at the other end of the barrel. Especially when the technology starts to break out of the communities that already understand it (and its limitations).

> They may not generate garbage per se, but they do generate bullshit. Or if you want to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated: while they demonstrate a phenomenal capacity for producing and understanding language, they do not yet demonstrate much capacity for control or alignment.

I truly can’t tell whether you are describing the US Congress or LLMs.

I can't deny that the similarities are strong enough that it weakens some of the philosophical underpinnings of the argument. But I am also wondering these days whether we are all just LLMs at the core of it.

As long as they actually look at it, which I don't expect them to, especially after revision 15 of a multi-page document.
How is it intellectually dishonest? It generates garbage; it's fully up to you to dig into that garbage and find something worthwhile in it. It has no idea it's even generating garbage!

You admit this yourself: it requires qualified staff to guide the model, i.e., some people to dig through the garbage to find the good bits it produced.

Of note: I use ChatGPT to generate a lot of garbage. Or, for those of you offended by the word, mentally replace it with something more neutral-sounding like "debris" or "fragments".

> You admit this yourself: it requires qualified staff to guide the model, i.e., some people to dig through the garbage to find the good bits it produced.

Exactly.

It is AI snake oil: humans still have to check whether it will hallucinate (which it certainly will), so it cannot be fully autonomous and needs qualified people monitoring, reading, and checking the output.

Not only can it generate garbage; it is too untrustworthy to be left by itself and run fully autonomously at the click of a button.

Nobody suggested it be left by itself.

There are plenty of people that I wouldn’t leave by themselves to do something, but they still provide value, and I wouldn’t classify them as generating garbage. A technology with shortcomings is not snake oil.

Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.

This community has such a ridiculous blueprint for “anti-ChatGPT” arguments. There are enough vocal people here who feel the need to look impressive by repeating it over and over that legitimate, nuanced conversations and genuine information transfer about the strengths and weaknesses of these models are drowned out.

In attempting to avoid the phrase "garbage generator", you've described the human beings in your life in the most depressing way possible. Value providers who you don't trust to operate by themselves.

Anyway, I have a bone to pick with your last paragraph. You are creating the problem for yourself. There are plenty of people elsewhere (even within HN) discussing exactly what you want, but you choose not to interact with them, instead spending your time arguing against "ridiculous blueprints".

You choose what you interact with online when it comes to posting comments, and you are choosing not to interact with "nuanced conversations and genuine information transfer". Why? Are we certain you care about genuine information transfer, or are you just here to feel superior to plebs with "anti-ChatGPT arguments"? Rhetorical questions, for the culture.

> Value providers who you don't trust to operate by themselves.

"Interns", for short.

> Nobody suggested it be left by itself.

It is relevant, and you know exactly why it can't be left by itself.

> There are plenty of people that I wouldn’t leave by themselves to do something, but they still provide value, and I wouldn’t classify them as generating garbage. A technology with shortcomings is not snake oil.

Except that people can be held to account when something goes wrong, and an AI cannot. I can guarantee you would not trust an AI in high-risk situations, such as Level 5 autonomous cars or planes with no pilots (not the same as autopilot mid-flight), and sit in the passenger's seat while it transports you from A to B.

> Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.

You're not getting the point. It's about trustworthiness in AI: when a human does something wrong, they can explain themselves and their actions transparently. A black-box AI model cannot, and it can confidently generate and regurgitate nonsense from its own training set to convince novices that it is correct.

> There are enough vocal people here who feel the need to look impressive by repeating it over and over that legitimate, nuanced conversations and genuine information transfer about the strengths and weaknesses of these models are drowned out.

Or perhaps many here are simply skeptical of the LLM hype and still do not trust it?

Intellectual honesty is very much in the garbage-generating-machine camp. Making an embedding space of reasonable language and then randomly sampling it is not a way to draft a law.
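
To make the "randomly sampling" part concrete, here is a toy sketch of the sampling step alone: drawing the next token from a temperature-scaled softmax distribution over scores. The vocabulary and logits below are made up for illustration; in a real model they would come from a forward pass.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        # Temperature-scaled softmax, then a weighted random draw.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    vocab = ["law", "garbage", "section", "hereby"]
    logits = np.array([2.0, 0.5, 1.5, 1.0])  # hypothetical scores
    print(vocab[sample_next_token(logits)])
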
As someone who doesn’t know how the human brain works, has never drafted any laws, and has never empirically seen what value an LLM can bring in this scenario, you should certainly qualify this with a massive “in my layperson’s opinion”.

> Calling cutting-edge consumer-facing models like ChatGPT-4 garbage-generating machines is very intellectually dishonest.

This is true. It is not really _generating_ garbage so much as regurgitating garbage from the input data.

I beg to disagree. There are already hundreds of real-world examples where these models do a terrible job with anything related to jurisprudence.

If you want proof, it is enough to point to the following recent paper from Microsoft researchers:

https://arxiv.org/abs/2303.12712

Everything mentioned in Section 10.2 is extremely worrying in the context of lawmaking and jurisprudence in general.

Guiding and finalizing / correcting.
