
> Unlike living things, that information doesn't allow them to change.

The paper is talking about whole systems for AGI, not the current isolated idea of a pure LLM. Systems can store memories without issues. I'm using that for my planning system: the memories and graph triplets get filled out automatically, and they get incorporated into future operations.
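A rough sketch of what I mean, with made-up names and a plain in-memory store standing in for the real thing: memories are kept as (subject, predicate, object) triplets and pulled back in when the next planning step runs.

    # Hypothetical sketch, not my actual system: memories are stored as
    # (subject, predicate, object) triplets and retrieved later so that
    # past observations feed future planning steps.
    class TripletMemory:
        def __init__(self):
            self.triplets = set()

        def add(self, subject, predicate, obj):
            # e.g. ("deploy", "requires", "passing tests")
            self.triplets.add((subject, predicate, obj))

        def about(self, subject):
            # Everything previously recorded about a subject, ready to be
            # injected into the next planning prompt.
            return [t for t in self.triplets if t[0] == subject]

    memory = TripletMemory()
    memory.add("deploy", "requires", "passing tests")
    memory.add("deploy", "last_failed_on", "2024-01-02")

    # A later planning call incorporates what was stored earlier:
    print(memory.about("deploy"))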

> It can produce some sequence that may or may not cause some external entity to feed it back some more data

That's exactly what people do while they do research.

> The representation of the information has nothing to do with what it represents.

That whole point implies that the situation is different in our brains. I've not seen anyone describe exactly how our thinking works, so claiming this is a limitation on intelligence isn't a strong argument.


usrbinbash
> That whole point implies that the situation is different in our brains.

The situation is different in our brains, and we don't need to know how exactly human thinking works to acknowledge that: we know humans can infer meaning from language by means other than the statistical relationships between words.

viraptor OP
> and we don't need to know how exactly human thinking works to acknowledge that.

Until you know how thinking works in humans, you can't say something else is different. We've got the same inputs available that we can provide to AI models. Saying we don't form our thinking based on statistics on those inputs and the state of the brain is a massive claim on its own.

usrbinbash
> Until you know how thinking works in humans, you can't say something else is different.

Yes, I very much can, because I can observe outcomes. Humans are a) a lot more capable than language models, and b) humans do not rely solely on the statistical relationships of language tokens.

How can I show that? Easily, in fact: language tokens require organized language.

And our closest evolutionary relatives (great apes) don't rely on organized speech, yet they are capable of advanced cognition (planning, episodic memory, theory of mind, theory of self, ...). The same is true for other living beings, even vertebrates that are not closely related to us, like Corvidae, and even some invertebrates like Cephalopods.

So unless you can show that our brains are somehow more closely related to silicon-based integrated circuits than they are to those of a Gorilla, Raven or Octopus, my point stands.

viraptor OP
> Humans are a) a lot more capable than language models

That's a difference of scale in capability, not of architecture. A human kid is less capable than an adult, but you wouldn't classify them as thinking using different mechanisms.

> b) humans do not rely solely on the statistical relationships of language tokens. (...) Language tokens require organized language.

That's just how you provide data. Multimodal models can accept whole vectors describing images, sounds, smells, or whatever else - all of them can be processed and none of them are organised language.
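As a toy illustration (hypothetical dimensions, not any particular model's API): image patch features and text token ids can both be projected into the same embedding space, so the model downstream only ever sees a sequence of vectors, none of which is organised language.

    import numpy as np

    # Hypothetical sketch: two modalities mapped into one shared
    # embedding space; the downstream model only ever sees vectors.
    d_model = 64
    rng = np.random.default_rng(0)

    text_embed = rng.normal(size=(300, d_model))   # 300 token ids -> d_model vectors
    image_proj = rng.normal(size=(768, d_model))   # 768-dim patch features -> d_model

    def embed_text(token_ids):
        return text_embed[token_ids]               # (n_tokens, d_model)

    def embed_image(patch_features):
        return patch_features @ image_proj         # (n_patches, d_model)

    tokens = embed_text(np.array([5, 17, 42]))
    patches = embed_image(rng.normal(size=(4, 768)))

    # One input sequence mixing both modalities.
    sequence = np.concatenate([tokens, patches], axis=0)
    print(sequence.shape)   # (7, 64)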

> that our brains are somehow more closely related to silicon-based integrated circuits than they are to those of a Gorilla

That's entirely different from a question about functional equivalence and limits of capability.
