2. Human rationality is just as limited as algorithms. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because no such path exists.
3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.
In a nutshell: there is obviously no law that forbids us to innovate - we do it quite often. There is only a logical boundary, which says that there is no way to derive something from a system of which it is not a part - no way for thinking to point beyond what is thinkable.
Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the likely answer... rather something like "Have you been drinking? Stop that mental crap - go away, you little moron!"
You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea: if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?
Why not use that as the title of your paper? That's a more fundamental claim.
But it is the fundamental objection he would need to overcome.
There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.
This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allow more than algorithmic cognition".
Your claim here also goes against the physical interpretation of the Church-Turing thesis.
Without rigorously addressing this, there is no point taking your papers seriously.
1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where
Σ is a finite symbol set and R is a finite set of inference rules.
Let Ω′ = (Σ′, R′) be a candidate successor frame.
Define a frame jump as: Frame Jump Condition: Ω′ extends Ω if Σ′\Σ ≠ ∅ or R′\R ≠ ∅
Let P be a deterministic Turing machine (TM) operating entirely within Ω.
Then: Lemma 1 (Symbol Containment): L(P) ⊆ Σ*; that is, P cannot emit any σ ∉ Σ.
(Here Σ* denotes the set of all finite strings over Σ; derivable outputs are formed from Σ under the inference rules R.)
Proof Sketch: P’s tape alphabet is fixed to Σ. By induction on computation steps, no step can introduce a symbol not already in Σ. ∎
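To make the containment claim concrete, here is a minimal Python sketch (the machine, its names, and the transition table are all illustrative, not from any paper): a toy TM whose transition table is drawn from a fixed alphabet can only ever write symbols from that alphabet.

```python
# Toy deterministic TM with tape alphabet fixed to SIGMA. The assertion
# checks the induction base of Lemma 1: every writable symbol is in SIGMA,
# so no run can ever place a symbol outside SIGMA on the tape.

SIGMA = {"0", "1", "_"}

# (state, read) -> (next_state, write, head_move)
TRANSITIONS = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}
assert all(write in SIGMA for (_, write, _) in TRANSITIONS.values())

def run(tape):
    state, pos = "q0", 0
    while state != "halt":
        state, write, move = TRANSITIONS[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return tape

print(run(list("0110_")))  # ['1', '0', '0', '1', '_'] - all still in SIGMA
```

Note that the containment holds by construction here: the table simply has no way to write anything else.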
2. APPLICATION: Newton → Special Relativity
Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR frame)
Let φ = “The speed of light is invariant in all inertial frames.”
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.
By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.
But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ
→ Therefore Pᴺ ⊬ φ → Tᴿ ⊈ L(Pᴺ)
Thus:
Special Relativity cannot be derived from Newtonian physics within its original formal frame.
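As an aside, the Frame Jump Condition from section 1 is mechanically checkable on these two symbol sets. A minimal Python sketch, with the symbols abbreviated to ASCII strings (an illustration only, not a formalization):

```python
# Frame Jump Condition applied to the two symbol sets above.
# Symbol names are ASCII stand-ins for the originals.

SIGMA_N = {"t", "x", "y", "z", "v", "F", "m", "+", "*"}   # Newtonian frame
SIGMA_R = SIGMA_N | {"c", "gamma", "eta"}                 # SR frame

def is_frame_jump(sigma_old: set, sigma_new: set) -> bool:
    # Ω′ extends Ω iff Σ′\Σ ≠ ∅ (the rule clause R′\R is omitted here)
    return bool(sigma_new - sigma_old)

print(is_frame_jump(SIGMA_N, SIGMA_R))  # True: adds {'c', 'gamma', 'eta'}
```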
3. EMPIRICAL CONFLICT
Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const
In Ωᴺ, combining N₁ and N₂ with D leads to a contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ. But by Lemma 1 this is impossible within Pᴺ. -> The frame must be exited to resolve the data.
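To make the contradiction concrete, a back-of-envelope Python sketch with rough textbook figures (the numbers are illustrative assumptions): under the Galilean transformation, the measured light speed should vary with the lab's motion through the ether, while Michelson–Morley found no such variation.

```python
# Galilean prediction vs. Michelson-Morley, order-of-magnitude only.

c = 299_792_458.0   # measured light speed, m/s
v = 29_780.0        # Earth's orbital speed, m/s (rough ether drift)

downwind = c + v    # Galilean prediction, beam along the motion
upwind   = c - v    # Galilean prediction, beam against the motion

print(downwind - upwind)  # ~59,560 m/s of predicted anisotropy
# Observed anisotropy: consistent with 0 -> contradiction inside the frame;
# the standard resolution uses exactly the new symbols {c, gamma, eta}.
```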
4. FRAME JUMP OBSERVATION
Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.
5. FINALLY
A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅
B: Einstein was human
C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).
Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.
QED.
BUT: Can Humans COMPUTE those functions? (As you asked)
-> Answer: No - because frame-jumping is not a computation.
It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Goedelian paradox (truth unprovable in the frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.
In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.
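For case b), the standard diagonalization sketch behind the halting problem looks like this in Python; `halts` is a hypothetical total oracle that cannot actually be implemented, which is the point:

```python
# Hypothetical halting oracle - no such total function can exist.
def halts(program) -> bool:
    raise NotImplementedError  # stands in for an assumed decider

def diagonal():
    # Feed the construction to itself: whatever halts() answers is wrong.
    if halts(diagonal):
        while True:   # oracle said "halts", so loop forever
            pass
    # oracle said "loops", so halt immediately - contradiction either way
```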
This is really sloppy work. I'd encourage you to look deeper into how (e.g.) HOL models "theories" (roughly corresponding to your idea of a "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.
Noncomputability is available in the form of Hilbert's choice operator, or you can add axioms yourself to capture whatever notion you think is incomputable.
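A sketch of that first option in Lean 4 (Hilbert-style choice is the `Classical.choice` axiom there, and any definition using it must be marked `noncomputable`):

```lean
-- Hilbert-style choice: pick an arbitrary inhabitant of any nonempty type.
-- The kernel forces the `noncomputable` marker: the "incomputable notion"
-- is an explicit axiom added to the system, not something derived.
noncomputable def pick (α : Type) [h : Nonempty α] : α :=
  Classical.choice h
```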
Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.
Of course I accept that humans are subject to the Goedelian curse, and we are often incoherent, and we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.
But if we let an AGI operate on Ω2 = (English, Science), that semantic frame would have encompassed both Newton and Einstein.
Your argument boils down to one specific and small semantic frame not being general enough to do all of AGI, not to _any_ semantic frame being incapable of AGI.
Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.
No system starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein, because you gave it Einstein. But now ask it to generate the successor of Ω₂ (call it Ω₃) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ₂. Same limitation, new domain. This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exceed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.
If anything, your argument is begging the question - a logical fallacy - because it rests on assuming that humans exceed the Turing computable in order to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you cannot use human abilities as evidence that something isn't Turing computable.
And so your reasoning is trivially circular.
EDIT:
To go into more specific errors, this is false:
> Let P be a deterministic Turing machine (TM) operating entirely within Ω.
>
> Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.
P can do so by simulating a TM P' whose alphabet includes σ. This is fundamental to the theory of computability, and holds for any two sets of symbols: You can always handle the larger alphabet by simulating one machine on the other.
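A minimal Python sketch of that simulation step (names and encoding scheme are illustrative): fixed-width code words over {0, 1} let a small-alphabet machine emit, and later recover, symbols from any larger alphabet, including the "new" SR symbols.

```python
# Simulating a larger alphabet over the small alphabet {0, 1} via
# fixed-width binary code words. The "large" alphabet deliberately
# includes the symbols the proof claims cannot be emitted.

LARGE = ["t", "x", "v", "c", "gamma", "eta"]
WIDTH = len(LARGE).bit_length()          # bits needed per code word

def encode(symbol: str) -> str:
    return format(LARGE.index(symbol), f"0{WIDTH}b")

def decode(bits: str) -> str:
    return LARGE[int(bits, 2)]

msg = ["c", "gamma", "eta"]              # symbols outside Sigma_N
tape = "".join(encode(s) for s in msg)   # emitted using only '0' and '1'
print(tape)                              # '011100101'
print([decode(tape[i:i+WIDTH]) for i in range(0, len(tape), WIDTH)])
```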
When your "proof" contains elementary errors like this, it's impossible to take this seriously.
Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”
But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.
Wrt ‘AGI/ASI’, while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.
Well, it in fact depends on your understanding of what intelligence is:
- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through or were simply lucky to have found relativity theory and other innovations just at the convenient moment in time ... So then, AI will soon also stumble over all kinds of innovations. Neither will be able to deliberately think beyond what is thinkable at the respective present.
- But if intelligence is not only a level of pure rational cognition, but perhaps an ability to somehow overcome these frame limits, then humans obviously exert some sort of ability that is beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.
- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.
The main point is: neither algorithms nor rationality can point beyond itself.
In other words: You cannot think out of the box - thinking IS the box.
(Maybe have a quick look at my first proof - last chapter before the conclusion - you will find a historical timeline on that IQ thing.)