Well, it in fact depends on what intelligence means to you:
- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through, or were simply lucky to have found relativity theory and other innovations at the convenient moment in time... In that case, AI will soon stumble over all kinds of innovations too. Neither will be able to deliberately think beyond what is thinkable at the respective present.
- But if intelligence is not only a level of pure rational cognition, but also an ability to somehow overcome these frame limits, then humans obviously exercise some sort of ability that goes beyond rational inference: abilities that algorithms cannot possibly reach, since all they can do is compute.
- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.
The main point is: neither algorithms nor rationality can point beyond itself.
In other words: You cannot think out of the box - thinking IS the box.
(maybe have a quick look at my first proof, the last chapter before the conclusion; you will find a historical timeline on that IQ thing)
2. Human rationality is just as limited as algorithms. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because such a path doesn't exist.
3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.
In a nutshell: there obviously is no law that forbids us to innovate; we do this quite often. There is only a logical boundary, which says that there is no way to derive something from a system it is not part of; no way for thinking to point beyond what is thinkable.
Imagine little Albert asking his physics teacher in 1880: "Sir, for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the probable answer... rather something like "Have you been drinking? Stop doing that mental crap - go away, you little moron!"
You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea, then if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?
Why not use that as the title of your paper? That's a more fundamental claim.
But it is the fundamental objection he would need to overcome.
There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.
This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allows more than algorithmic cognition".
Your claim here also goes against the physical interpretation of the Church-Turing thesis.
Without rigorously addressing this, there is no point taking your papers seriously.
1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where
Σ is a finite symbol set and R is a finite set of inference rules.
Let Ω′ = (Σ′, R′) be a candidate successor frame.
Frame Jump Condition: Ω′ extends Ω if Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅.
Let P be a deterministic Turing machine (TM) operating entirely within Ω.
Then: Lemma 1 (Symbol Containment): L(P) ⊆ Σ*, i.e., P cannot emit any σ ∉ Σ.
(Here Σ* denotes the set of all finite strings over Σ; derivable outputs are strings over Σ formed under the inference rules R.)
Proof Sketch: P’s tape alphabet is fixed to Σ and symbols derived from Σ. By induction, no computation step can introduce a symbol not already in Σ. ∎
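To make Lemma 1 concrete, here is a tiny sketch (my own toy illustration with made-up symbols and rules, not part of the original proof): a closed rewrite system with a fixed symbol set Σ and fixed rules R can only ever output strings over Σ, so a symbol like γ can never show up in any derivation.

```python
# Toy illustration (hypothetical names): a closed derivation system over a fixed Σ.
SIGMA = {"t", "x", "v", "F", "m"}      # fixed symbol set Σ
RULES = [("F", "m"), ("x", "v")]       # toy rewrite rules R: replace lhs with rhs

def derive(start, steps=5):
    """Apply R exhaustively for a few steps; collect every reachable string."""
    seen = {start}
    for _ in range(steps):
        new = set()
        for s in seen:
            for lhs, rhs in RULES:
                new.add(tuple(rhs if sym == lhs else sym for sym in s))
        seen |= new
    return seen

outputs = derive(("F", "x", "t"))
symbols_emitted = {sym for out in outputs for sym in out}
assert symbols_emitted <= SIGMA        # Lemma 1 in miniature: nothing outside Σ appears
print("γ" in symbols_emitted)          # False: γ can never be derived
```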
2. APPLICATION: Newton → Special Relativity
Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR frame)
Let φ = "The speed of light is invariant in all inertial frames."
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.
By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.
But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ
→ Therefore Pᴺ ⊬ φ → Tᴿ ⊈ L(Pᴺ)
Thus:
Special Relativity cannot be derived from Newtonian physics within its original formal frame.
3. EMPIRICAL CONFLICT
Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const
In Ωᴺ, combining N₁ and N₂ with D leads to contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ. But by Lemma 1 this is impossible within Pᴺ. → The frame must be exited to resolve the data.
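As a rough numeric illustration of that conflict (my own stand-in numbers, not part of the argument above): under N₁ + N₂ the measured light speed should shift by roughly Earth's orbital velocity, while D says the shift is zero.

```python
# Rough illustration with stand-in numbers: Newtonian prediction vs. Michelson-Morley.
c = 299_792_458.0        # m/s, speed of light
v = 29_780.0             # m/s, Earth's orbital speed (the expected "ether wind")

galilean_prediction = c - v   # N1 + N2: measured light speed should depend on the frame
observed = c                  # D: Michelson-Morley finds no such dependence (c = const)

print(f"predicted shift: {c - galilean_prediction:,.0f} m/s")  # 29,780 m/s
print(f"observed shift:  {c - observed:,.0f} m/s")             # 0 m/s
# Resolving this needs γ = 1 / sqrt(1 - v**2 / c**2) and the metric η(·,·),
# i.e. exactly the symbols in Σᴿ \ Σᴺ that Pᴺ can never emit (Lemma 1).
```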
4. FRAME JUMP OBSERVATION
Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.
5. FINALLY
A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅
B: Einstein was human
C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).
Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.
QED.
BUT: Can Humans COMPUTE those functions? (As you asked)
→ Answer: No, because frame-jumping is not a computation.
It's a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Gödelian paradox (a truth unprovable within the frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.
In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.
Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”
But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.
Wrt ‘AGI/ASI’: while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.
More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer do such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.
Some processes are undoubtedly learned from experience, but considering that people seem to think many of the same things and are similar in many ways, it remains to be seen whether the most important parts are learned rather than innate from birth.
Why do you think it mustn't be algorithmic?
Why do you think humans are capable of doing anything that isn't algorithmic?
This statement, and your lack of any mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it.
AI currently has issues with seeing what's missing: seeing the negative space.
When dealing with a complex codebase you are newly exposed to, you tackle an issue from multiple angles: data structures, code execution paths. Basically, humans clearly have some pressure to go, "fuck, I think I lost the plot," and then approach it from another paradigm, or try to narrow scope, or, based on the increased information, isolate the core place where edits need to be made to achieve something.
Basically the ability to say, "this has stopped making sense" and stop or change approach.
Also, we clearly do path exploration and semantic compression in our sleep.
We also have the ability to transliterate data between semantic and visual structures, time series, light algorithms (but not exponential algorithms; we have a known blindspot there).
Humans are better at seeing what's missing, better at not reaching premature closure, better at reducing scope using many different approaches; and because we operate in linear time and there are a lot of very different agents, we collectively nibble away at complex problems over time.
I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.
We also have different brain structures; I assume they don't all function on a single algorithmic substrate: visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts that handle illogic better. We can introspect on our own semantic saturation; we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, and we can dive on that part and then zoom back out.
There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing and even then the message type used seems flexible enough that you can shove word data into a visual processor part and see what falls out, and this happens without us thinking about it explicitly.
It's a deeply philosophical question what constitutes a subjective experience of "green" or whatever... but intelligence is a bit more tractable IMHO.
Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.
Have you not met the average person on the street? (/s)
You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).
All of that just sounds hard, not mathematically impossible.
As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most philosophy-of-mind researchers refute.
So because of this we know reality is governed by maths. We just can't fully model the high-level consequences of emergent patterns, due to the sheer complexity of trillions of interacting atoms.
So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.
What does humility have to do with anything?
> So because of this we know reality is governed by maths.
That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.
Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.
> What does humility have to do with anything?
Not the GP but I think humility is kinda relevant here.
Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology and culture and science around this. It is the fundamental assumption humanity has made about reality. We have not been able to consistently demonstrate any disproof of this assumption.
>Not the GP but I think humility is kinda relevant here.
How so? If I assume all of reality is governed by math, but you don't, how does that make me not humble but you humble? Seems personal.
What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.
As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...
Speak for yourself. LLMs are a feedforward algorithm running inference over static weights to produce a tokenized response string.
We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.
If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:
An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.
No. The question is already settled: AI is not a brain, and we can show this by characterizing both and using heuristic reasoning.
That "can" should be "could", else it presumes too much.
For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.
I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean, there are 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent G-factors in human IQ scores, and also whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually; see e.g. most discussions where aphantasia comes up).
The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is because neither did evolution when we popped out.
https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...
https://youtu.be/qrvK_KuIeJk?t=284
In the video above, Geoffrey Hinton directly says we don't understand how it works.
So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.
Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM will say, nor why it said something for a given prompt, which shows that we can't fully control an LLM because we don't fully understand it.
Don't just try to argue with me. Argue with the experts. Argue with the people who know more than you: Hinton.
That isn't what Hinton said in the first link. He says essentially:
People don't understand A so they think B.
But actually the truth is C.
This folksy turn of phrase is about a group of "people" who are less knowledgeable about the technology and have misconceptions.
Maybe he said something more on point in the second link, but your haphazard use of urls doesn't make me want to read on.
And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.
> you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.
You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning: no random seeds, no temperature, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what the AI will say ahead of time if you can solve for the seeded entropy, or remove it entirely.
The LLM weights and the tokenizer are both deterministic; it's the inference software that often introduces variability to get more varied responses. Just so we're on the same page here.
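For what it's worth, here is a minimal sketch of what that determinism claim means (a toy model with made-up weights and a tiny vocabulary, not any real LLM's inference code): with fixed weights and greedy decoding, i.e. no sampling at all, the same prompt always yields the same output.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))      # stand-in for frozen model weights
vocab = list("abcdefgh")             # toy vocabulary

def greedy_generate(prompt_ids, steps=5):
    """Toy 'inference': feed the last token forward and always take the argmax."""
    ids = list(prompt_ids)
    for _ in range(steps):
        logits = W[ids[-1]]                 # deterministic forward pass
        ids.append(int(np.argmax(logits)))  # greedy decoding: no sampling, no temperature
    return "".join(vocab[i] for i in ids)

# Same prompt in, same string out, on every run:
assert greedy_generate([0, 3]) == greedy_generate([0, 3])
print(greedy_generate([0, 3]))
```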
Care to elaborate? Because that is utter nonsense.
"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.
(This is an illustrative example made for easy understanding, not something I specifically went and compared)
We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.
The above is a video clip of Hinton basically contradicting what you’re saying.
So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way because I've noticed people responding to me are rude and completely dismiss me, and I don't get good-faith responses and intelligent discussion. I find that if people realize their statements contradict the statements of the industry and established experts, they tend to respond more charitably.
So please respond to me as if you had just said to Hinton's face that what he said is utter nonsense, because what I said is based on what he said. Thank you.