You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or to the comment you were responding to).
All of that just sounds hard, not mathematically impossible.
As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most researchers in the philosophy of mind reject.
So because of this we know reality is governed by maths. We just can't fully model the high-level consequences of emergent patterns, due to the sheer complexity of trillions of interacting atoms.
So it's not that there's some mysterious supernatural thing we don't understand. It's purely a complexity problem: we don't understand it only because it's too complex.
What does humility have to do with anything?
> So because of this we know reality is governed by maths.
That's not really true. You have a theory, and let's presume it's so far consistent with observations. But that doesn't mean it's 100% correct, nor that at some point in the future you won't observe something that invalidates it. In short, you don't know whether the theory is absolutely true, and you can never know.
Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.
> What does humility have to do with anything?
Not the GP but I think humility is kinda relevant here.
Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology, culture, and science on it. It is the fundamental assumption humanity has made about reality, and no one has consistently demonstrated a disproof of it.
> Not the GP but I think humility is kinda relevant here.
How so? If I assume all of reality is governed by math and you don't, how does that make me not humble and you humble? Seems personal.
What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.
As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...
Whatever probability is, and whatever philosophers say about any of this, it doesn't matter. You act as if all of it is true, including the web technology that allows you to post your ideas here. You are acting as if all the logic, science, and technology involved in creating that web technology is real. I'm simply saying that because the entire world affirms this assumption through its actions, my claim is in line with the entire world.
You can make a philosophical argument, but your actions aren't in line with it. You may say no one can prove math or probability to be real, but you certainly don't live your life that way. You don't think that science, logic, and technology will suddenly fall apart and stop working when you turn on your computer. In fact, you live your life as if those things are fundamentally true, yet you talk as if they might not be.
Speak for yourself. LLMs are a feedforward algorithm running inference over static weights to produce a tokenized response string.
We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.
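To make the "static weights" part concrete, here's a minimal sketch in Python (numpy assumed; the shapes and names are illustrative, not any real model's architecture). The point is only that inference is a fixed function of the input, with no weight updates:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((8, 16))   # frozen weights, layer 1
    W2 = rng.standard_normal((16, 4))   # frozen weights, layer 2

    def forward(x):
        # Pure feedforward pass: the weights never change at inference
        # time, so the output is a fixed function of the input.
        h = np.maximum(x @ W1, 0.0)     # ReLU hidden layer
        return int((h @ W2).argmax())   # highest-scoring "token"

    x = rng.standard_normal(8)          # stand-in for an embedded prompt
    print(forward(x))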
If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:
An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades that produce muscle contractions as outputs.
No. The question is already settled: AI is not a brain, and we can prove this by characteristically defining both and using heuristic reasoning.
That "can" should be "could", else it presumes too much.
For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.
I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean; there are 40 definitions of the word "consciousness"; there are arguments about whether there is exactly one or many independent g-factors in human IQ scores, and whether those scores mean anything beyond correlating with school grades; and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually; see e.g. most discussions where aphantasia comes up).
The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is that evolution didn't know what it was doing either when we popped out.
Often, people who don't know how to reason logically end up using analogies as proof. You can simply say that the analogy doesn't apply and is inaccurate, and the whole argument falls apart, because analogies aren't a logical basis for anything.
Analogies are communication tools that facilitate understanding; they are not proofs or evidence of anything.
https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...
https://youtu.be/qrvK_KuIeJk?t=284
In the video above, Geoffrey Hinton directly says we don't understand how it works.
So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.
Even though we know it's a feedforward network, and we know how the queries are tokenized, you cannot tell me what an LLM will say for a given prompt, nor why it said it. That shows we can't fully control an LLM, because we don't fully understand it.
Don't try to just argue with me. Argue with the experts. Argue with the people who know more than you, like Hinton.
That isn't what Hinton said in the first link. He says essentially:
People don't understand A so they think B.
But actually the truth is C.
This folksy turn of phrase is about a group of "people" who are less knowledgeable about the technology and have misconceptions.
Maybe he said something more on point in the second link, but your haphazard use of URLs doesn't make me want to read on.
I watch a lot of video interviews with Hinton, and I can assure you that "not understanding" is 100 percent his opinion, both from the perspective of the actual events that occurred and as someone who knows his general stance from watching tons of interviews and videos about him.
So let me be frank with you. There are people smarter and more eminent than you who think you are utterly and completely wrong. Hinton is one of them. Hopefully that can kick-start the way you think into actually holding a more nuanced worldview, such that you realize that nobody really understands LLMs.
Half the claims on HN are borderline religious, made up by people who unconsciously scaffold evidence to support the most convenient view.
If we understood AI completely and utterly, we would be able to set the weights in a neural net to values that give us complete and total control over how it behaves. That is literally our objective as the human beings who created the neural net. We want to do this, and we absolutely know that there exists a configuration of weights that can achieve this goal we want so much.
Why haven't we reached this goal? Because we literally don't understand how to reach it, even though we know it exists. We. Don't. Understand. That is the only conclusion that follows from our limited ability to control LLMs. Any other conclusion is ludicrous and a sign that your logical thought process is not crystal clear.
And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.
> you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.
You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning: no random seeds, no temperature, just the weights and the tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what the AI will say ahead of time if you can solve for the seeded entropy, or remove it entirely.
The LLM's weights and tokenizer are both deterministic; it's the inference software that often introduces variability to produce more varied responses. Just so we're on the same page here.
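As a sketch of that claim (Python with numpy as a stand-in for a real model; the tiny weight matrix is purely illustrative): with the sampling layer removed and next-token selection done by plain argmax, the same prompt yields the same continuation on every run.

    import numpy as np

    rng = np.random.default_rng(42)
    W = rng.standard_normal((5, 5))            # frozen "weights"

    def greedy_generate(prompt_ids, steps=4):
        out = list(prompt_ids)
        for _ in range(steps):
            logits = W[out[-1]]                # next-token scores from last token
            out.append(int(logits.argmax()))   # argmax instead of sampling
        return out

    print(greedy_generate([0]))  # prints the same ids...
    print(greedy_generate([0]))  # ...on every call and every run

The variability people see in practice comes from the sampling layer (temperature, seeds), not from the weights themselves.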
That answers the "what", but not the "why" nor the "how exactly", with the latter being crucial to any claim that we understand how these things actually work.
If we actually did understand that, we wouldn't need to throw terabytes of data at them to train them - we'd just derive that very equation directly. Or, at the very least, we would know how to do so in principle. But we don't.
Your statement completely contradicts Hinton's statement. You didn't even address his point. Basically, you're saying Hinton is wrong and you know better than him. If so, counter his argument; don't restate your own in the form of an analogy.
> You'd think this, but it's actually wrong.
No, you're just trying to twist what I'm saying into something that's wrong. First, I never said it's not deterministic. All computers are deterministic, even RNGs. I'm saying we have no theory about it. Take a plane, for example: you can predict its motion via a theory. The theory allows us to understand and control an airplane and predict its motion. We have nothing like that for an LLM. No theory that helps us predict, no theory that helps us fully control, and no theory that helps us understand it beyond the high-level abstraction of a best-fit curve in multidimensional space. All we have is an algorithm that allows an LLM to self-assemble as a side effect of emergent effects.
Rest assured, I understand the transformer as much as you do (which is to say, humanity has a limited understanding of it); you don't need to assume I'm just going off Hinton's statements. He and I know and understand LLMs as much as you do, even though we didn't invent them. Please address what I said and what he said with a counter-argument, not an analogy that just reiterates an identical point.
Care to elaborate? Because that is utter nonsense.
"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.
(This is an illustrative example made for easy understanding, not something I specifically went and compared)
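The surface-level half of this is easy to see (a minimal sketch assuming the tiktoken library; the specific token ids don't matter, only that they differ):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    print(enc.encode("Cat"))  # one set of token ids...
    print(enc.encode("cat"))  # ...a different set for the lowercase form

Different ids select different embedding rows, so the two forms enter the network as different vectors. Why the network nevertheless ends up treating them as the same concept is the part we can't yet read off from the weights.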
We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.
Prove or give a counter-example of the following statement:
In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
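For reference, a sketch of the system the statement refers to, in standard notation (u is the velocity field, p the pressure, \nu the viscosity, f a given external force, u_0 the given initial velocity field):

    \frac{\partial u}{\partial t} + (u \cdot \nabla)u = -\nabla p + \nu \Delta u + f
    \nabla \cdot u = 0
    u(x, 0) = u_0(x)

The Clay Millennium Prize problem asks for a proof of global existence and smoothness of such solutions, or a counter-example; neither is known.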
The video linked above is a clip of Hinton basically contradicting what you're saying.
So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way, because I've noticed people responding to me are rude and dismiss me completely, and I don't get good-faith responses or intelligent discussion. I find that if people realize their statements contradict those of the industry and established experts, they tend to respond more charitably.
So please respond to me as if you had just told Hinton to his face that what he said is utter nonsense, because what I said is based on what he said. Thank you.