You don’t get that whole uncanny valley disconnect, do you?
What? LLMs don't think or learn in the sense humans do. They bear absolutely no resemblance to a human being. This must be the most ridiculous statement I've read this year.
LLMs don't do this. They can't think. If you just use one for like five minutes, it's obvious that just because the text on the screen says "Sorry, I made a mistake, there are actually 5 r's in strawberry", it doesn't mean there's any thought behind it.
You can also take it a step further and add automatic fine-tuning once you start gathering a ton of data, which will actually update the model's weights somewhat.
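For example, here's a rough sketch of the data-prep half of that, assuming you've been logging each correction as a (prompt, fixed answer) pair. The log format, file names, and the chat-style JSONL layout are my assumptions, not any particular provider's schema, so check what your fine-tuning setup actually expects:

    import json

    # Assumed log format: a JSON list of {"prompt": ..., "fixed_answer": ...}
    # dicts gathered from past corrections. Both file names are made up.
    CORRECTIONS_LOG = "corrections.json"
    DATASET_OUT = "finetune_dataset.jsonl"

    def to_training_example(entry: dict) -> dict:
        # One chat-style training example: the user asks, the assistant
        # gives the corrected answer (not the original wrong one).
        return {
            "messages": [
                {"role": "user", "content": entry["prompt"]},
                {"role": "assistant", "content": entry["fixed_answer"]},
            ]
        }

    def main() -> None:
        with open(CORRECTIONS_LOG, encoding="utf-8") as f:
            corrections = json.load(f)

        # Write one JSON object per line, the usual fine-tuning input shape.
        with open(DATASET_OUT, "w", encoding="utf-8") as out:
            for entry in corrections:
                out.write(json.dumps(to_training_example(entry)) + "\n")

        print(f"Wrote {len(corrections)} examples to {DATASET_OUT}")

    if __name__ == "__main__":
        main()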
But in my mind, if I tell the LLM to do something and it does it wrong, I ask it to fix it. If in the future I ask for the same thing and it avoids the mistake it made the first time, then I'd say it has learned to avoid that pitfall. I know very well it hasn't "learned" like a human would; I just added the correction to the right place. But for all intents and purposes, it "learned" how to avoid the same mistake.
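Concretely, "the right place" could be as simple as a plain-text lessons file that gets prepended to every future prompt. A minimal sketch of that idea (the file name and helper functions are made up for illustration, not any particular tool's API):

    from pathlib import Path

    # Hypothetical setup: each correction is distilled into a one-line
    # "lesson" and stored in a plain-text file that is injected into all
    # future prompts. No weight updates involved.
    LESSONS_PATH = Path("lessons.txt")

    def record_lesson(lesson: str) -> None:
        # Append the distilled correction, e.g.
        # "When asked to count letters, spell the word out first."
        with LESSONS_PATH.open("a", encoding="utf-8") as f:
            f.write(lesson.strip() + "\n")

    def build_prompt(user_request: str) -> str:
        # Prepend past lessons to the new request, so the model
        # "remembers" earlier mistakes purely through context.
        lessons = ""
        if LESSONS_PATH.exists():
            lessons = LESSONS_PATH.read_text(encoding="utf-8")
        if not lessons:
            return user_request
        return (
            "Avoid these previously made mistakes:\n"
            + lessons
            + "\nRequest: "
            + user_request
        )

From the outside, the behavior is indistinguishable from the model having learned, which is the whole point.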
A person is the data they have ingested and trained on through the senses their body exposes. The body is just an interface to reality.