suchintan
I have a 2yo and it's been surreal watching her learn the world. It deeply resembles how LLMs learn and think. Crazy

Retric
Odd, I've been struck by how differently LLMs and kids learn the world.

You don’t get that whole uncanny valley disconnect, do you?

haskellshill
> It deeply resembles how LLMs learn and think

What? LLMs neither think nor learn in the sense humans do. They have absolutely no resemblance to a human being. This must be the most ridiculous statement I've read this year.

goatlover
How so? Your kid has a body that interacts with the physical world. An LLM is trained on terabytes of text, then modified by human feedback and rules to be a useful chatbot for all sorts of tasks. I don't see the similarity.

crazygringo
If you watch how agents attempt a task, fail, try to figure out what went wrong, try again, repeat a couple more times, then finally succeed -- you don't see the similarity?

haskellshill
> try to figure out what went wrong

LLMs don't do this. They can't think. If you just use one for like five minutes it's obvious that just because the text on the screen says "Sorry, I made a mistake, there are actually 5 r's in strawberry", doesn't mean there's any thought behind it.

crazygringo
I mean, you can literally watch their thought process. They try to figure out reasons why something went wrong, and then identify solutions. Often in ways that require real deduction and creativity. And have quite a high success rate.

If that's not thinking, then I don't know what is.

dingnuts
No, I see something resembling gradient descent, which is fine, but it's hardly a child.

balder1991
No, because an agent doesn’t learn, it’s just continuing a story. A kid will learn from the experience and at the end will be a different person.

CaptainOfCoit
You just haven't added the right tools together with the right system/developer prompt. Add `add_memory` and `list_memory` tools (or automatically inject the right memories for the right prompts/LLM responses) and you have something that can learn.

You can also take it a step further and add automatic fine-tuning once you start gathering a ton of data, which will rewire the model somewhat.
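
Rough sketch of what I mean, in Python. None of this is a real framework's API; `MemoryStore`, the tool names, and `build_prompt` are all illustrative:

```python
# Illustrative only: the "tools" are plain methods an agent loop
# would expose to the model; no real LLM framework is assumed.

class MemoryStore:
    """Plain list of notes the agent can read back into context."""

    def __init__(self):
        self._memories = []

    def add_memory(self, note):
        """Tool the LLM calls when it wants to remember something."""
        self._memories.append(note)

    def list_memory(self):
        """Tool (or automatic injection) that surfaces past notes."""
        return list(self._memories)


def build_prompt(store, user_request):
    """Prepend stored memories to the prompt so the model sees its
    past mistakes before it answers."""
    notes = "\n".join(f"- {m}" for m in store.list_memory())
    return (
        "You are an assistant. Notes from past sessions:\n"
        f"{notes}\n\n"
        f"User request: {user_request}"
    )


store = MemoryStore()
store.add_memory("The deploy script fails unless the venv is activated first.")
print(build_prompt(store, "Deploy the app."))
```

The model itself stays frozen; the "learning" lives in the store and in what gets injected back into context.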

haskellshill
Perhaps it can improve, but it can't learn, because that requires thought. Would you say that a PID regulator can "learn"?
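
To make the comparison concrete: a textbook PID controller also adjusts its behavior over time through accumulated state, yet nobody calls that learning. A rough sketch (gains and signature are illustrative):

```python
# Classic PID controller: output is a fixed function of the error.
# The integral term accumulates state, so the output "adapts" over
# time, but the rule itself never changes -- nothing is learned.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```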

CaptainOfCoit
I guess it depends on what you understand "learn" to mean.

But in my mind: if I tell the LLM to do something, it does it wrong, I ask it to fix it, and in the future when I ask the same thing it avoids the mistake it made the first time, then I'd say it has learned to avoid that pitfall. I know very well it hasn't "learned" like a human would; I just added the correction to the right place. But for all intents and purposes, it "learned" how to avoid the same mistake.

deadbabe
A person is not their body.

The person is the data that they have ingested and trained on through the senses exposed by their body. The body is just an interface to reality.

haskellshill
That is a very weird and fringe definition of what a person is.

deadbabe
If you have a different life experience than what you had so far, wouldn’t you be a different person?

melagonster
I am sorry, but you are scoffing at the humanity of your kid; you know that, right?
