
like_any_other
So does the human brain transcend math, or are humans not generally intelligent?

ICBTheory
Hi and thanks for engaging :-)

Well, it in fact depends on what you understand intelligence to be:

- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through or were simply lucky to have found relativity theory and other innovations at just the convenient moment in time... So then, AI will soon also stumble upon all kinds of innovations. Neither will be able to deliberately think beyond what is thinkable at the respective present.

- But if intelligence is not only a level of pure rational cognition, but maybe an ability to somehow overcome these frame-limits, then humans obviously exercise some sort of abilities that are beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.

- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.

The main point is: neither algorithms nor rationality can point beyond itself.

In other words: You cannot think out of the box - thinking IS the box.

(maybe have a quick look at my first proof - last chapter before the conclusion - you will find a historical timeline on that IQ thing)

like_any_other OP
Let me steal another user's alternate phrasing: Since humans and computers are both bound by the same physical laws, why does your proof not apply to humans?
ICBTheory
Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving. (And also: I am bound by thermodynamics just as my mother-in-law is, still I get disarranged by her mere presence, while I always have to put laxatives in her wine to counter that)

2. Human rationality is just as limited as algorithms. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because such a path doesn't exist.

3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.

In a nutshell: there obviously is no law that forbids us from innovating - we do this quite often. There is only a logical boundary, which says that there is no way to derive something from a system it is not part of - no way for thinking to point beyond what is thinkable.

Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the probable answer... rather something like "have you been drinking? Stop doing that mental crap - go away, you little moron!"

like_any_other OP
> Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea, then if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?

nialv7
If by algorithmic you just mean anything that a Turing machine can do, then your theorem is asserting that the Church-Turing thesis isn't true.

Why not use that as the title of your paper? That's a more fundamental claim.

vidarh
The lack of mention of the Church-Turing thesis in both papers suggests he hasn't even considered that angle.

But it is the fundamental objection he would need to overcome.

There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.

vidarh
> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allows more than algorithmic cognition".

Your claim here also goes against the physical interpretation of the Church-Turing thesis.

Without rigorously addressing this, there is no point taking your papers seriously.

ICBTheory
No problem, here is your proof - although a bit long:

1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where Σ is a finite symbol set and R is a finite set of inference rules.

Let Ω′ = (Σ′, R′) be a candidate successor frame.

Define a frame jump (Frame Jump Condition): Ω′ extends Ω if Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅.

Let P be a deterministic Turing machine (TM) operating entirely within Ω.

Then: Lemma 1 (Symbol Containment): L(P) ⊆ Σ*, i.e. P cannot emit any σ ∉ Σ.

(Where Σ* = the set of all finite symbol strings over Σ; derivable outputs are formed from Σ under the inference rules R.)

Proof Sketch: P’s tape alphabet is fixed to Σ and symbols derived from Σ. By induction, no computation step can introduce a symbol not already in Σ. ∎

2. APPLICATION: Newton → Special Relativity

Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR frame)

Let φ = “The speed of light is invariant in all inertial frames.”
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.

By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.

But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ

→ Therefore Pᴺ ⊬ φ → Tᴿ ⊈ L(Pᴺ)

Thus:

Special Relativity cannot be derived from Newtonian physics within its original formal frame.

3. EMPIRICAL CONFLICT

Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const

In Ωᴺ, combining N₁ and N₂ with D leads to contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ. But by Lemma 1, that is impossible within Pᴺ. -> The frame must be exited to resolve the data.

4. FRAME JUMP OBSERVATION

Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.

5. FINALLY

A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅

B: Einstein was human

C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).

Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.

QED.

BUT: Can Humans COMPUTE those functions? (As you asked)

-> Answer: No - because frame-jumping is not a computation.

It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Gödelian paradox (truth unprovable in frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.

In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.

aoeusnth1
The Standard Model is computable, so no. Physical law does not allow for non-computable behavior.
catoc
“Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?"”

Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”

rcxdude
Staying at high speed is symmetric! You'd both appear to age slower from the other's POV. It's only if one brother turns around and comes back, therefore accelerating, that you get an asymmetry.
ben_w
Indeed. One of my other thoughts here on the Relativity example was "That sets the bar high given most humans can't figure out special relativity even with all the explainers for Einstein's work".

But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.

catoc
Given rcxdude’s reply it appears I am one of those humans who can’t figure out special relativity (let alone general)

Wrt ‘AGI/ASI’, while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.

geoka9
Humans are fallible in a way computers are not. One could argue any creative process is an exercise in fallibility.

More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer do such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.

[0]https://www.youtube.com/watch?v=LSHZ_b05W7o

ben_w
Hang on, hasn't everyone spent the past few years complaining about LLMs and diffusion models being very fallible?

And we can get LLMs to do better by just prompting them to "think step by step" or replacing the first ten attempts to output a "stop" symbolic token with the token for "Wait… "?

I think humans have some kind of algorithm for deciding what's true and consolidating information. What that is I don't know.
ICBTheory
I guess so too... but whatever it is: it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there, that cannot be transcended by tech, compute, training, data etc.
Why can't it be algorithmic? If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does use the same process to do things like consolidating information, processing the "world model" and so on.

Some processes are undoubtedly learned from experience, but considering people seem to think many of the same things and are similar in many ways, it remains to be seen whether the most important parts are learned rather than innate from birth.

donkeybeer
Explain what you mean by "algorithm" and "algorithmic". Be very precise. You are using this vague word to hinge your entire argument on, and it is necessary that you explain first what it means. Since from reading your replies here, it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.
vidarh
Why can't it be algorithmic?

Why do you think it mustn't be algorithmic?

Why do you think humans are capable of doing anything that isn't algorithmic?

This statement, and the lack of mention of the Church-Turing thesis in your papers, suggest you're using a non-standard definition of "algorithmic", and your argument rests on it.

fellowniusmonk
This paper is about the limits in current systems.

AI currently has issues with seeing what's missing. Seeing the negative space.

When dealing with complex codebases you are newly exposed to, you tackle an issue from multiple angles. You look at things from data structures, code execution paths; basically humans clearly have some pressure to go, fuck, I think I lost the plot, and then approach it from another paradigm, or try to narrow scope, or, based on the increased information, isolate the core place where edits need to be made to achieve something.

Basically the ability to say, "this has stopped making sense" and stop or change approach.

Also, we clearly do path exploration and semantic compression in our sleep.

We also have the ability to transliterate data between semantic and visual structures, time series, light algorithms (but not exponential algorithms, we have a known blindspot there).

Humans are better at seeing what's missing, better at not closuring, better at reducing scope using many different approaches and because we operate in linear time and there are a lot of very different agents we collectively nibble away at complex problems over time.

I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.

We also have different brain structures, I assume they don't all function on a single algorithmic substrate, visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts of our brain that handle illogic better. We can introspect on our own semantic saturation, we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, we can dive on that part and then zoom back out.

There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing and even then the message type used seems flexible enough that you can shove word data into a visual processor part and see what falls out, and this happens without us thinking about it explicitly.

Yep definitely agree with this.
ImHereToVote
Humans use soul juice to connect to the understandome. Machines can't connect to the understandome because of Gödel's incompleteness; they can only make relationships between tokens. Not map them to reality like we can via magic.
Workaccount2
Stochastic parrots all the way down

https://ai.vixra.org/pdf/2506.0065v1.pdf

xeonmc
I think the latter fact is quite self-demonstrably true.
mort96
I would really like to see your definition of general intelligence and argument for why humans don't fit it.
ninetyninenine
Colloquially, anything that matches humans in general intelligence and is built by us is by definition an AGI and generally intelligent.

Humans are the bar for general intelligence.

umanwizard
How so?
deadbabe
First of all, math isn’t real any more than language is. It’s an entirely human construct, so it’s possible you cannot reach AGI using mathematical means, as math might not be able to fully express it. It’s similar to how language cannot fully describe what a color is, only vague approximations and measurements. If you wanted to create the color green, you cannot do it by describing various properties, you must create the actual green somehow.
hnfong
As a somewhat colorblind person, I can tell you that the "actual green" is pretty much a lie :)

It's a deeply philosophical question what constitutes a subjective experience of "green" or whatever... but intelligence is a bit more tractable IMHO.

Workaccount2
I don't think it would be unfair to accept the brain state of green as an accurate representation of green for all intents and purposes.

Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.

like_any_other OP
Fair enough. But then, AGI wouldn't really be based on math, but on physics. Why would an artificially-constructed physical system have (fundamentally) different capabilities than a natural one?
add-sub-mul-div
My take is that it transcends any science that we'll understand and harness in the lifetime of anyone living today. It for all intents and purposes transcends science from our point of view, but not necessarily in principle.
lexicality
> are humans not generally intelligent?

Have you not met the average person on the street? (/s)

ben_w
Noted /s, but truly this is why I think even current models are already more disruptive than naysayers are willing to accept any future model ever could be.
topspin
I'm noting the high frequency of think pieces from said naysayers. It's every day now: they're all furiously writing about flaws and limitations and extrapolating these to unjustifiable conclusions, predicting massive investment failures (inevitable, and irrelevant), arguing AGI is impossible with no falsifiable evidence, etc.
autobodie
Humans do a lot of things that computers don't, such as be born, age (verb), die, get hungry, fall in love, reproduce, and more. Computers can only metaphorically do these things, human learning is correlated with all of them, and we don't confidently know how. Have some humility.
andyjohnson0
TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.

You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).

onlyrealcuzzo
The point is that if it's mathematically possible for humans, then it naively would be possible for computers.

All of that just sounds hard, not mathematically impossible.

As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most mind-theory researchers refute.

daedrdev
Taking GLP-1 makes me question how much hunger is really me versus my hormones controlling me.
ninetyninenine
We don’t even know how LLMs work. But we do know the underlying mechanisms are governed by math because we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

So because of this we know reality is governed by maths. We just can’t fully model the high level consequence of emergent patterns due to the sheer complexity of trillions of interacting atoms.

So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.

What does humility have to do with anything?

hnfong
> we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

> So because of this we know reality is governed by maths.

That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.

> What does humility have to do with anything?

Not the GP but I think humility is kinda relevant here.

ninetyninenine
>That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology and culture and science around this. It is the fundamental assumption humanity has made about reality. We cannot consistently demonstrate any disproof of this assumption.

>Not the GP but I think humility is kinda relevant here.

How so? If I assume all of reality is governed by math, but you don't. How does that make me not humble but you humble? Seems personal.

hnfong
I guess it's kinda hubris on my part to question your ability to know things with such high certainty about things that philosophers have been struggling to prove for millennia...

What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.

As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...

bigyabai
> We don’t even know how LLMs work

Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.

We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.

ben_w
> Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.

If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:

An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.

bigyabai
Yes. As long as we're confident in our definitions, that makes the questions easy. Is that the same as a feedforward algorithm inferring static weights to create a tokenized response string? Do you necessarily need an electrochemical network with external stimuli and feedback to generate legible text?

No. The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.

ben_w
> The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.

That "can" should be "could", else it presumes too much.

For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.

I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean, there are 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent g-factors in human IQ scores, and also whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually, see e.g. most discussions where aphantasia comes up).

The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is because neither did evolution when we popped out.

int_19h
The fact that it doesn't operate identically or even similarly on the physical layer doesn't mean that similar processes cannot emerge on higher levels of abstraction.
hnfong
Pretty sure in most other contexts you wouldn't agree a medieval scribe knows how a fax machine works.
ninetyninenine
Geoffrey Hinton, the person largely responsible for the AI revolution, has this to say:

https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...

https://youtu.be/qrvK_KuIeJk?t=284

In the video above, Geoffrey Hinton directly says we don't understand how it works.

So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.

Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM would say, nor why an LLM said something for a given prompt, showing that we can't fully control an LLM because we don't fully understand it.

Don't try to just argue with me. Argue with the experts. Argue with the people who know more than you, Hinton.

staticman2
"In that video above George Hinton, directly says we don't understand how it works."

That isn't what Hinton said in the first link. He says essentially:

People don't understand A so they think B.

But actually the truth is C.

This folksy turn of phrase is about a group of "people" who are less knowledgeable about the technology and have misconceptions.

Maybe he said something more on point in the second link, but your haphazard use of urls doesn't make me want to read on.

bigyabai
Hinton pioneered neural networks, which are not the same as the transformer architecture used in LLMs. Asking him about LLM architectures is like asking Henry Ford if he can build a car from a bunch of scrap metal; of course he can't. He might understand the engine or the bodywork, but it's not his job to know the whole process. Nor is it Hinton's.

And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.

> you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.

You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning: no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what AI would say ahead of time if you can solve for the non-deterministic seeded entropy, or remove it entirely.

The LLM weights and the tokenizer are both fixed and deterministic; it's the inference software that often introduces variability for more varied responses. Just so we're on the same page here.

IAmGraydon
>We don’t even know how LLMs work.

Care to elaborate? Because that is utter nonsense.

Workaccount2
We understand and build the trellis that the LLMs "grow" on. We don't have good insight into how a fully grown LLM actually turns any specific input into any specific output. We can follow it through the network, but it's a totally senseless noisy mess.

"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.

(This is an illustrative example made for easy understanding, not something I specifically went and compared)

EPWN3D
We don't know the path for how a given input produces a given output, but that doesn't mean we don't know how LLMs work.

We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.

ninetyninenine
https://youtu.be/qrvK_KuIeJk?t=284

The above is a video clip of Hinton basically contradicting what you’re saying.

So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way because I've noticed people responding to me are rude and completely dismiss me, and I don't get good-faith responses and intelligent discussion. I find that if people realize their statements contradict the statements of industry and established experts, they tend to respond more charitably.

So please respond to me as if you had just said to Hinton's face that what he said is utter nonsense, because what I said is based on what he said. Thank you.
