Unless you can prove that humans exceed the Turing computable (which would also require showing that the Church-Turing thesis isn't true), the headline is nonsense.

Since you don't even appear to have dealt with this, there is no reason to consider the rest of the paper.


haneul
> In plain language:

> No matter how sophisticated, the system MUST fail on some inputs.

Well, no person is immune to propaganda and stupidity, so I don't see it as a huge issue.

vidarh OP
I have no idea how you believe this relates to the comment you replied to.
harimau777
If I'm understanding correctly, they are arguing that the paper only requires that an intelligent system will fail for some inputs and suggest that things like propaganda are inputs for which the human intelligent system fails. Therefore, they are suggesting that the human intelligent system does not necessarily refute the paper's argument.
vidarh OP
If so, then the paper's argument isn't actually trying to prove that AGI is impossible, despite the title, and the entire discussion is pointless.
amelius
But what then is the relevance of the study?
haneul
I suppose it disproves an embodied, fully meat-space god, if sound?
amelius
I'm looking at the title again and it seems wrong, because AGI ~ human intelligence. Unless human intelligence has non-physical components to it.
I think OP answered the question here:

https://www.hackerneue.com/item?id=44349516

vidarh OP
No, he didn't.

It's not even close to addressing the argument.

Could you explain for a layman?
vidarh OP
I'm not sure if this will help, but happy to elaborate further:

The set of Turing computable functions is computationally equivalent to the lambda calculus, which is computationally equivalent to the general recursive functions. You don't need to understand those terms, only to know that these functions define the set of functions we believe to include all computable functions. (There are functions that we know not to be computable, such as a general solution to the halting problem.)

That is, we don't know of any possible way of defining a function that can be computed that isn't in those sets.

This is basically the Church-Turing thesis: that a function on the natural numbers is effectively computable (note: this has a very specific meaning; it's not about performance) only if it is computable by a Turing machine.
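To give a flavour of what "computationally equivalent to the lambda calculus" means, here is a small, purely illustrative Python sketch of Church numerals, which is how the lambda calculus represents numbers and arithmetic using nothing but functions:

    # Church numerals: numbers and addition built from nothing but functions,
    # the lambda-calculus style of computation. Purely illustrative.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    to_int = lambda n: n(lambda k: k + 1)(0)   # decode back to an ordinary int

    two   = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))   # -> 5

The same addition can of course be written as an ordinary program; that interchangeability is the kind of equivalence being invoked here.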

Now, any Turing machine can simulate any other Turing machine. Possibly in a crazy amount of time, but still.

The brain is at least a Turing machine in terms of computability if we treat "IO" (speaking, hearing, for example) as the "tape" (the medium of storage in the original description of the Turing machine). We can prove this, since the smallest known universal Turing machine is a trivial machine with 2 states and 3 symbols that any moderately functional human is capable of "executing" with pen and paper.
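To make the pen-and-paper point concrete, here is a minimal, illustrative Turing machine simulator in Python. The transition table is a toy example (it just flips bits), not the 2-state, 3-symbol machine mentioned above; the point is only that "executing" a machine is mechanical rule-following:

    # Minimal Turing machine simulator. The rules below are a toy example
    # (flip every bit, halt at the first blank), not the (2,3) machine.
    def run_tm(rules, tape, state="A", head=0, blank="_", halt="HALT"):
        cells = dict(enumerate(tape))            # sparse tape, blank by default
        while state != halt:
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    rules = {
        ("A", "0"): ("1", "R", "A"),
        ("A", "1"): ("0", "R", "A"),
        ("A", "_"): ("_", "R", "HALT"),
    }
    print(run_tm(rules, "1011"))   # -> 0100_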

(As an aside: it's almost hard to construct a useful computational system that isn't Turing complete; "accidental Turing completeness" happens regularly, because it is very easy to end up with a Turing complete system.)

An LLM with a loop around it and temperature set to 0 can trivially be shown to be able to execute the same steps, using context as input and the next token as output to simulate the tape, and so such a system is Turing complete as well.
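A minimal sketch of that construction, with a toy deterministic function standing in for the temperature-0 LLM (the stand-in and its "reverse the input" behaviour are just to make the sketch runnable; the claim only needs the model to be a deterministic next-token function):

    # Loop around a deterministic next-token function; the growing context
    # plays the role of the tape. toy_llm is a stand-in for a temperature-0
    # LLM and just echoes the input reversed, one token per step.
    def run(llm, prompt, max_steps=1000):
        context = prompt
        for _ in range(max_steps):
            token = llm(context)        # same context in, same token out
            if token == "<HALT>":
                break
            context += token
        return context

    def toy_llm(context):
        src, emitted = context.split("|")
        return src[::-1][len(emitted)] if len(emitted) < len(src) else "<HALT>"

    print(run(toy_llm, "1011|"))   # -> 1011|1101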

(Note: in both cases this could require a program, but for any Turing machine of a given size we can "embed" parts of the program by adding more symbols or states that encode its actions, so the program can always be folded into a sufficiently complex Turing machine itself.)

Assuming we use a definition of intelligence that a human will meet, then because all Turing machines can simulate each other, the only way of showing that an artificial intelligence can not theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.

If we can't, then in the "worst case" AGI can be constructed by simulating every computational step of the human brain.

Any other argument about the impossibility of AGI inherently needs to contain something that disproves the Church-Turing thesis.

As such, it's a massive red flag when someone claims to have a proof that AGI isn't possible but hasn't even mentioned the Church-Turing thesis.

somewhereoutth
> then the only way of showing that an artificial intelligence can not theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.

I would reframe: the only way of showing that artificial intelligence can be constructed is by showing that humans cannot compute more than the Turing computable.

Given that Turing computable functions are a vanishingly small subset of all functions, I would posit that that is a rather large hurdle to meet. Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.

vidarh OP
Given that we know of no computable function that isn't Turing computable, and the set of Turing computable functions is known to be equivalent to the lambda calculus and to the set of general recursive functions, the immensely large hurdle would be to show even a single example of a computable function that is not Turing computable.

If you can do so, you'd have proven Turing, Kleene, Church, and Gödel wrong, and disproven the Church-Turing thesis.

No such example is known to exist, and no such function is thought to be possible.

> Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.

1/3 symbolically represents an infinite process. The notion that a finite alphabet can't describe infinity is trivially flawed.
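To spell that out with a trivial, illustrative example: a finite description can denote an infinite object, e.g. a three-line program that stands for the entire infinite decimal expansion of 1/3:

    from itertools import islice

    # A finite description of an infinite object: the decimal expansion of 1/3.
    def third_digits():
        while True:
            yield 3

    print(list(islice(third_digits(), 10)))   # -> [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]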

somewhereoutth
Function != Computable Function / general recursive function.

That's my point - computable functions are a [vanishingly] small subset of all functions.

For example (and close to our hearts!), the Halting Problem. There is a function from valid programs to halt/not-halt. This is clearly a function, as it has a well-defined domain and codomain, and produces the same output for the same input. However, it is not computable!
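For readers who haven't seen it, the standard sketch of why that function can't be computable; the halts() below is hypothetical, and the argument assumes it exists in order to derive a contradiction:

    # Hypothetical decider: assume some program computed this.
    def halts(program, argument):
        ...   # returns True iff program(argument) eventually halts

    def paradox(program):
        if halts(program, program):
            while True:        # loop forever exactly when halts() predicts halting
                pass
        return "halted"        # halt exactly when halts() predicts looping

    # paradox(paradox) halts iff halts(paradox, paradox) says it doesn't:
    # a contradiction, so no such halts() can exist, even though the
    # halting function itself is perfectly well defined.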

For sure a finite alphabet can describe an infinity, as you show - but not all infinities. For example, almost all real numbers cannot be defined/described with a finite string in a finite alphabet (they can of course be defined with countably infinite strings in a finite alphabet).

sgt101
Compute functions != Intelligence though.

For example learning from experience (which LLMs cannot do because they cannot experience anything and they cannot learn) is clearly an attribute of an intelligent machine.

LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.

vidarh OP
> Compute functions != Intelligence though.

If that is true, you have a proof that the Church-Turing thesis is false.

> LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.

For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.

Which again, would boil down to proving the Church-Turing thesis wrong.

sgt101
>If that is true, you have a proof that the Church-Turing thesis is false.

We're talking about the physical version, right? I don't have any counterexamples that I can describe, but I could hold that that's because human language, perception and cognition cannot capture the mechanisms that are necessary to produce them.

But I won't as that's cheating.

Instead I would say that although I can't disprove PCT (the physical Church-Turing thesis), it's not proven either, and unlike other unproven things like P != NP, this is about physical systems. Some people think that all of physical reality is discrete (quantized); if they are right, then PCT could be true. However, I don't think this is so, as I think it means you have to consider time as unreal, and I think that's basically as crazy as denying consciousness and free will. I know that a lot of physicists are very clever, but those of them who have lost the sense to differentiate between a system for describing parts of the universe and a system that defines the workings of the universe as we cannot comprehend it are not good at parties, in my experience.

>For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.

I dunno what you mean by "relevant" here - you seem to be denying that there is a difference between reality and unreality? Like a Super Cartesian idea where you say that not only is the mind separate from the body but that the existence of bodies or indeed the universe that they are instantiated in is irrelevant and doesn't matter?

Wild. Kinda fun, but wild.

I stand by my point though: computing functions about how molecules interact with each other and lead to the propagation of signals along neural pathways to generate qualia is only the same as tasting beer if the qualia are real. I don't see that there is any account of how computation can create a feeling of reality, or of what it is like to experience something. At some point you have to hit the bottom and actually have an experience.

harimau777
I think that may depend on how someone defines intelligence. For example, if intelligence includes the ability to feel emotion or appreciate art, then I think it becomes much more plausible that intelligence is not the same as computation.

Of course, simply stating that isn't in and of itself a philosophically rigorous argument. However, given that not everyone has training in philosophy, and it may not even be possible to prove whether "feeling emotion" can be achieved via computation, I think it's a reasonable argument.

> > Compute functions != Intelligence though.

> If that is true, you have a proof that the Church-Turing thesis is false.

Well, it depends on how. I think being able to compute (arbitrary) functions is much more than is necessary for intelligence.

TheOtherHobbes
What program would a Turing machine run to spontaneously prove the incompleteness theorem?

Can you prove such a program may exist?

vidarh OP
Assuming the Church-Turing thesis is true, the existence of any brain now or in the past capable of proving it is proof that such a program may exist.

Conversely, if the Church-Turing thesis can be proven false, then it may be possible that such a program can't exist - the thesis being false is a necessary but not sufficient condition for such a program to be impossible.

Given that we have no evidence to suggest the Church-Turing thesis is false, or that it could even possibly be false, the burden falls on those making the utterly extraordinary claim that such programs can't exist to actually provide evidence for that claim.

Can you prove the Church-Turing thesis false? Or even give a suggestion of what a function that might be computable but not Turing computable would look like?

Keep in mind that an explanation of how to compute such a function step by step would need to contain at least one step that can't be explained in a way that allows it to be computed by a Turing machine, or the explanation itself would instantly disprove your claim.

The very notion is so extraordinary as to require truly extraordinary proof and there is none.

A single example of a function that is not Turing computable but that human intelligence can compute should be a low burden if we can exceed the Turing computable.

Where are the examples?

harimau777
> Assuming the Church-Turing thesis is true, the existence of any brain now or in the past capable of proving it is proof that such a program may exist.

Doesn't that assume that the brain is a Turing machine or equivalent to one? My understanding is that the exact nature of the brain and how it relates to the mind is still an open question.

Given what I see in these discussions, I suspect your use of the word "spontaneously" is a critical issue for you, but also not for me.

None of us exist in a vacuum*, we all react to things around us, and this is how we come to ask questions such as those that led Gödel to the incompleteness theorems.

On the other hand, for "can a program prove it?", this might do it. I don't know enough Lean (or this level of formal mathematics) myself to tell whether this is a complete proof or a WIP: https://github.com/FormalizedFormalLogic/Incompleteness/blob...

* unless we're Boltzmann brains, in which case we have probably hallucinated the existence of the question in addition to all evidence leading to our answer

throw310822
An accurate-enough physical simulation of Kurt Gödel's brain.

Such a program may exist, unless you think such a simulation of a physical system is uncomputable, or that there is some non-physical process going on in that brain.
