> It's a stochastic parrot

Stopped reading right there. The author clearly has no clue of how neural networks work or what makes them tick.


"stochastic parrot" is an interesting shibboleth for a certain school of thought on the capabilities of LLMs, but i think it is uncharitable to make the leap straight to "has no clue of how neural networks work". on some level, all of us who spend our free time enough to know what "stochastic parrot" refers to have some idea how NNs work, and on another, none of us know how NNs really work.

we could all do with a bit more humility dealing with this topic and each other.
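For what it's worth, the term can be taken quite literally. A toy bigram sampler (a deliberately simplified sketch, not how LLMs actually work; the corpus and function names here are made up for illustration) repeats, with randomness, only the word transitions it has seen:

```python
import random
from collections import defaultdict

# A literal "stochastic parrot": a bigram model that can only emit
# word-to-word transitions observed in its training text.
corpus = "the parrot repeats what the parrot has seen the parrot repeats".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def parrot(start, length=6, seed=0):
    """Sample a short continuation by repeatedly picking a seen successor."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        out.append(word)
    return " ".join(out)

print(parrot("the"))
```

Every output is a remix of observed transitions; the debate is over whether scaling this idea up ever amounts to more than remixing.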

That's fair, but it is a negative take that disregards all emergent properties. If you strip all emergent properties from it, there is nothing left. The same thing is true of all biological systems.

Why bother being "human"? We are all just a bunch of cells exchanging chemicals and electrical signals. That's all there is to it. There is no reasoning, just a bunch of signals going back and forth.

How can you honestly say that "emergent properties" are real if you haven't seen the training data and don't actually know how the thing works?

It stands to reason that the bigger the model, the more likely you'll get an answer to the question you're looking for. Even for the apparently "tricky questions".

Even things like translating code from one coding language to another...

Anyway, maybe we are ALL stochastic parrots (including ChatGPT-10) and that's all we'll ever be...bravo.

Emergent properties are never "real". They just are and you can see them happening, but "underneath" it's nothing.

Edit: I meant to say I don't need access to the training data. By experimenting with inputs and outputs you can get a basic picture. I don't need to see biological scans to say something about your personality either.

I think an important distinction here is to say that currently, you perceive them to be real. They aren't factually real things, at least not yet.

Judging someone's personality is a subjective process, not an objective one.

I do not. What I say is that I perceive them. Their realness is a non-issue (to me). "Factual", you mean by "authorities"? I do get your point, but I think you overthink the issue. If you see something, it is there. It can be illusory, sure, but think about why that matters.

Do you have any resources you'd recommend to form a better understanding of how NNs tick? I'd like to get a better intuitive grasp of what's going on - I've mostly just been responding to that with "Well, if stochastic parrotism can do all this..."
In case you haven't seen it yet, the term "stochastic parrot" was introduced by this paper [1], titled "On the Dangers of Stochastic Parrots". A related paper [2], titled "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data", was given a best-paper award by the Association for Computational Linguistics, and it's also easier to read.

Those two papers are critical of LLMs and discuss what researchers believe they can and cannot do. I'm not saying you need to agree with them, but I think reading them should give you a good primer on why some researchers are not as excited as HN users are.

[1] https://dl.acm.org/doi/10.1145/3442188.3445922

[2] https://aclanthology.org/2020.acl-main.463/

The recent post from Stephen Wolfram [1] is pretty good as an introduction, but I haven't seen any comprehensive material that tries to dissect all the interesting behaviour we see in the really big LLMs. For that, just reading the relevant papers themselves has been pretty fruitful for me. Some of them are actually very well written, even if you aren't used to reading scientific papers. I can recommend the Sparks of AGI paper [2] and the Toolformer paper [3].

Obviously there's much more out there, but those three are a pretty good read.

[1]: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...

[2]: https://arxiv.org/abs/2303.12712

[3]: https://arxiv.org/abs/2302.04761

At this point, anyone using the term "stochastic parrot" is just giving credence to my personal belief that humans are also stochastic parrots.

A more apt term would be delusional parrot, which applies both to the LLM and to everyone who thinks GPT is the second coming of Jesus.

I mean, it's probably a component (or an approximation of a component) of what we do, at some level. Christ knows I've felt like a stochastic parrot when I'm zoning out 3 hours into a meeting and someone asks me a question out of the blue. I probably have a smaller context window at those points than GPT-3 does...

Author is a college freshman; I stopped reading after the about-me.
