> with no clear reason whatsoever as to why
It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire; and yet not understand the same for biological processes.
The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so they aren't about properties instantiated in the world. The program itself has no causal semantics; it's about numbers.
A program which computes the Fibonacci sequence describes equally well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies.
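To make that concrete, here is a minimal sketch in Python (purely illustrative): the program below is just arithmetic on integers, and reading its output as seed counts or galactic structure is an interpretation laid on top of it, not anything in the program.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

# The same output can be read as seed counts in sunflower spirals, as a toy
# description of certain galactic structure, or as nothing but numbers.
# Nothing in the program picks one reading over the others.
print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```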
A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily, lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- that is fire.
A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing.
What we mean by a simulation is, by definition, a certain kind of "inference game" we play (e.g., with beads and chalk) that helps us think about the world. By definition, if that simulation has substantial properties, it isn't a simulation.
If the claim is that an electrical device can implement the actual properties of biological intelligence, then the claim is not about a simulation. It's that by manufacturing some electrical system, plugging various devices into it, and so on, you produce a physical object with non-simulated properties.
Searle, and most other scientific naturalists who appreciate that the world is real, are not ruling out that it could be possible to manufacture a device with the real properties of intelligence.
It's just that merely by, e.g., implementing the Fibonacci sequence, you haven't done anything. A computational description doesn't imply any implementation properties.
Further, when one looks at the properties of these electronic systems and the kinds of causal relations they have with their environments via their devices, one finds very many reasons to suppose that they do not implement the relevant properties.
Just as much as when one looks at a film strip under a microscope, one discovers that the picture on the screen was an illusion. Animals are very easily fooled, apes most of all -- living as we do in our own imaginations half the time.
Science begins when you suspend this fantasy way of relating to the world and look at its actual properties.
If your world view requires equivocating between fantasy and reality, then sure, anything goes. This is a high price to pay to cling on to the idea that the film is real, and there's a train racing towards you in your cinema seat.
This is kind of a no-true-Scotsman-esque argument though, isn't it? "Substantial properties" are... what, exactly? It's not a subjective question. One could insist, and many have, that fire that really burns is merely a simulation. It would be impossible from the inside to tell. In that case, what is fantasy, and what is reality?
S is a simulation of O iff there is an inferential process, P, by which properties of O can be estimated from P(S), such that S does not implement O.
E.g., "A video game is a simulation of a fire burning if, by playing that game, I can determine how long the fire will burn w/o there being any fire involved."
S is an emulation model of O iff (as above) except that S implements O (e.g., "burning down a dollhouse to model burning down a real house").
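A toy sketch of a "simulation" in this sense, with a made-up formula and made-up numbers: the inferential process P below estimates a property of a fire (how long it burns) from a description S, without implementing anything that is on fire.

```python
def estimated_burn_time_hours(fuel_kg, burn_rate_kg_per_hour):
    """Estimate how long a fire would burn, given fuel mass and a burn rate.

    This plays the role of the inferential process P: it yields a property
    of O (the fire's duration) from a description S, without implementing
    any fire. The formula and numbers are purely illustrative.
    """
    return fuel_kg / burn_rate_kg_per_hour

# A property of the fire, estimated from a couple of numbers; nothing burns.
print(estimated_burn_time_hours(fuel_kg=50.0, burn_rate_kg_per_hour=12.5))  # 4.0
```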
Searle described himself as a "naive realist" although, as was typical for him, this came with a ton of caveats and linguistic escape hatches. This was certainly my biggest objection and I passed many an afternoon in office hours trying to pin him down to a better position.
Saying that the symbols in the computer don't mean anything, that it is only we who give them meaning, presupposes a notion of meaning as something that only human beings and some things similar to us possess. It is an entirely circular argument, similar to the notion of p-zombies or the "experience of seeing red" thought experiment.
If indeed the brain is a biological computer, and if our mind, our thinking, is a computation carried out by this computer, with self-modeling abilities we call "qualia" and "consciousness", then none of these arguments hold. I fully admit that this is not at all an established fact, and we may still find out that our thinking is actually non-computational - though it is hard to imagine how that could be.
Fire is the result of the intrinsic reactivity of some chemicals like fuels and oxidizers that allows them to react and generate heat. A simulation of fire that doesn't generate heat is missing a big part of the real thing, it's very simplified. Compared to real fire, a simulation is closer to a fire emoji, both just depictions of a fire. A fire isn't the process of calculating inside a computer what happens, it's molecules reacting a certain way, in a well understood and predictable process. But if your simulation is accurate and does generate heat then it can burn down a building by extending the simulation into the real world with a non-simulated fire.
Consciousness is an emergent property from putting together a lot of neurons, synapses, chemical and physical processes. So you can't analyze the parts to simulate the end result. You cannot look at the electronic neuron and conclude a brain accurately made of them won't generate consciousness. It might generate something even bigger, or nothing.
And in a very interesting twist of the mind, if an accurate simulation of a fire can extend into the real world as a real fire, then why wouldn't an accurate simulation of a consciousness extend into the real world as a real consciousness?
I associate the key with "K", and my screen displays a "K" shape when it is pressed -- but there is no "K", this is all in my head. Just as much as when I go to the cinema and see people on the screen: there are no people.
By ascribing a computational description to a series of electrical devices (whose operation distributes power, etc.) I can use this system to augment my own thinking. Absent the devices, the power distribution, and their particular causal relationships to each other, there is no computer.
The computational description is an observer-relative attribution to a system; there are no "physical" properties which are computational. All physical properties concern spatio-temporal bodies and their motion.
The real dualism is to suppose there are such non-spatio-temporal "processes". The whole system called a "computer" is an engineered electrical device whose construction has been designed to achieve this illusion.
Likewise I can describe the solar system as a computational process, just discretize orbits and give their transition in a while(true) loop. That very same algorithm describes almost everything.
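A minimal sketch of that kind of description (a crude Euler step for a single orbiting body; the constants and step size are arbitrary): the loop is just repeated updates to a few numbers, and nothing in it is "essentially" orbital.

```python
import math

# Crude, purely illustrative discretization of one body orbiting a central mass.
# The constants and time step are arbitrary; only the shape of the loop matters.
G_M = 1.0           # gravitational parameter of the central body (arbitrary units)
dt = 0.001          # time step
x, y = 1.0, 0.0     # position
vx, vy = 0.0, 1.0   # velocity

for _ in range(100_000):   # the "while(true)" of the comment, bounded here
    r = math.hypot(x, y)
    ax, ay = -G_M * x / r**3, -G_M * y / r**3   # acceleration toward the center
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

# The same "update some numbers, repeat" shape describes countless systems;
# it is our interpretation, not the loop, that makes this "an orbit".
```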
Physical processes are never "essentially" computational; this is just a way of specifying some highly superficial feature which allows us to ignore their causal properties. It's mostly a useful description when building systems, i.e., an engineering fiction.
This notion of causality is interesting. When a human claims that they are conscious, there is a causal chain from the fact that they are conscious to their claiming so. When a neuron-level simulation of a human claims it is conscious, there must be a similar causal chain, with a similar fact at its origin.
We see this now with LLMs. They just generate text. They get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data with which to understand the concept and varying degrees of “softness” or “sharpness.”
The fact is that they can’t.
Humans aren’t symbol manipulation machines. They are metaphor machines. And metaphors we care about require a physical basis on one side of that comparison to have any real fundamental understanding of the other side.
Yes, you can approach human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first person subjective experience there to give rise to mental features.
This is not a theory (or is one, but a false one) according to Popper, as far as I understand, because the only way to check understanding that I know of is to ask questions, and LLMs pass that test. So in order to satisfy falsifiability, another test must be devised.
I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand. An approximation of our own understanding of that world, itself imperfect, but at least aiming for the real thing.
The part about overcoming this limitation by instantiating the system in hardware I find less convincing, but I think I know where he comes from with that as well: by giving it hardware sensors, the machine would not have to simulate the world outside as well - on top of the inner one.
The inner world can more easily be imagined as finite, at least. Many people seem to take this as a given, actually, but there's no good reason to expect that it is. Planck limits from QM are often brought up as an argument for digital physics, but in fact they are only a limit on our knowledge of the world, not on the physical systems themselves.
The question is only whether, if future LLMs become good enough to trick anyone in most iterations, we would be forced to admit they understand meaning.
While I don't disagree with the substance of this post, I don't think this was one of Searle's arguments. There was definitely an Embodied Cognition camp on campus, but that was much more in Lakoff's wheelhouse.
His views are perfectly consistent with non-dualism and if you think his views are muddy, that doesn't mean they are (they are definitively not muddy, per a large consensus). For the record, I am a substance dualist, and his arguments against dualism are pretty interesting, precisely because he argues that you can build something that functions in a different way than symbol manipulation while still doing something that looks like symbol manipulation (but also has this special property called consciousness, kind of like our brains).
Is this true? I don't know (I, of course, would argue "no"), but it does seem at least somewhat plausible and there's no obvious counter-argument.
It does make sense, and there's work being done on this front, (Penrose & Hameroff's Orch OR comes to mind). We obviously don't know exactly what such a mechanism would look like, but the theory itself is not inconsistent. Also, there's all kinds of p-zombies, so we likely need some specificity here.
It's by no means irrelevant- the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema
Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did of Mind. It was of more than passing interest to him.
> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.
I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.
I have, however, heard him say the following:
1. The structure and arrangement of neurons in the human nervous system creates consciousness.
2. The exact causal mechanism for this phenomenon is unknown.
3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity- be it biological, mechanical, etc., is conscious.
He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation.
> it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.
He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism:
https://faculty.wcas.northwestern.edu/paller/dialogue/proper...
The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly. Syntax and semantics are not merely linguistic concepts, though they originate in that area. And while Searle may have been interested in language as well, that is not what this particular argument is mainly about (the title of the article is Minds, Brains, and Programs - the first hint that it's not about language).
> I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.
He said both things in the paper that introduced the Chinese room concept, as an answer to the potential rebuttals.
Here is a quote about the brain that would be run in software:
> 3. The Brain Simulator reply (MIT and Berkeley)
> [...] The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
And here is the bit about creating a real electrical brain, that he considers could be conscious:
> "Yes, but could an artifact, a man-made machine, think?"
> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.
> He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper...
I don't find this paper convincing. He admits at every step that materialism makes more sense, and then he asserts that still, consciousness is not ontologically the same thing as the neurobiological states/phenomena that create it. He admits that usually being causally reducible means being ontologically reducible as well, but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction. I am simply not convinced.
At this point I'm pretty sure we've had a misunderstanding. When I referred to "language" in my original post, you seem to have construed this as a reference to the Chinese language in the thought experiment. On the contrary, I was referring to software specifically, in the sense that a computer program is definitionally a sequence of logical propositions. In other words, a speech act.
> [...] The problem with the brain simulator is that it is simulating the wrong things about the brain.
This quote is weird and a bit unfortunate. It seems to suggest an opening: the brain simulator doesn't work because it simulates the "wrong things," but maybe a program that simulates the "right things" could be conscious. Out of context, you could easily reach that conclusion, and I suspect that if he could rewrite that part of the paper he probably would, because the rest of the paper is full of blanket denials that any simulation would be sufficient. Like this one:
> The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.
Regarding the electrical brain:
> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.
Right, so he describes one example of an "electrical brain" that seems like it'd satisfy the conditions for consciousness, while clearly remaining open to the possibility that a different kind of artificial (non-electrical) brain might also be conscious. I'll assume you're using this quote to support your previous statement:
> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.
I think it's fairly obvious why this is different from a simulation. If you build a system that reproduces the consciousness-causing mechanism of neurons, then... it will cause consciousness. Not simulated consciousness, but the real deal. If you build a robot that can reproduce the ignition-causing mechanism of a match striking a tinderbox, then it will start a real fire, not a simulated one. You seem to think that Searle owes us an explanation for this. Why? How are simulations even relevant to the topic?
> I don't find this paper convincing.
The title of the paper is "Why I Am Not a Property Dualist." Its purpose is to explain why he's not a property dualist. Arguments against materialism are made in brief.
> He admits at every step that materialism makes more sense
Did we read the same paper?
> He admits that usually being causally reducible means being ontologically reducible as well,
Wrong, but irrelevant
> but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction.
Examples and explanations are easy to provide, because there are several:
> But in the case of consciousness, causal reducibility does not lead to ontological reducibility. From the fact that consciousness is entirely accounted for causally by neuron firings, for example, it does not follow that consciousness is nothing but neuron firings. Why not? What is the difference between consciousness and other phenomena that undergo an ontological reduction on the basis of a causal reduction, phenomena such as color and solidity? The difference is that consciousness has a first person ontology; that is, it only exists as experienced by some human or animal, and therefore, it cannot be reduced to something that has a third person ontology, something that exists independently of experiences. It is as simple as that.
First-person vs. third-person ontologies are the key, whether you buy them or not. Consciousness is the only possible example of a first-person ontology, because it's the only one we know of
> “Consciousness” does not name a distinct, separate phenomenon, something over and above its neurobiological base, rather it names a state that the neurobiological system can be in. Just as the shape of the piston and the solidity of the cylinder block are not something over and above the molecular phenomena, but are rather states of the system of molecules, so the consciousness of the brain is not something over and above the neuronal phenomena, but rather a state that the neuronal system is in.
I could paste a bunch more examples of this, but the key takeaway is that consciousness is a state, not a property.
I think this muddies the water unnecessarily. Computation is not language, even though we typically write software in so-called programming languages. But the computation itself is something different from the linguistic-like description of the software. The computation is the set of states, and the relationships between them, that a computer goes through.
> > He admits at every step that materialism makes more sense
> Did we read the same paper?
I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not.
> > He admits that usually being causally reducible means being ontologically reducible as well,
> Wrong, but irrelevant
Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontologically reducible to that constituent part - consciousness (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separable concept from consciousness). And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.
I think it's entirely possible to reject the notion of a meaningful first person ontology completely. It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create by applying our models of how other people function to ourselves. That is, we are simple computers that manipulate symbols in our brains, and that generate memories of their recent state as being a "conscious experience", which is just what we invented as a model of why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).
In hindsight, choosing the word "language" was probably more distracting than helpful. We could get into a debate about whether computation is essentially another form of language-like syntactic manipulation, but it does share a key feature with language: observer-relative ontology. @mjburgess has already made this case with you at length, and I don't think I could improve on what's already been written, so I'll just leave it at that.
> I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not.
I'm not sure that I saw this specific claim made, but it's not especially important. What's more important is understanding what his objection to materialism is, such that you can a) agree with it or b) articulate why you think he's wrong. That said, it isn't the main focus of this paper, so the argument is very compressed. It also rests on the assumption that you believe that consciousness is real (i.e. not an illusion), and given the rest of your comment, I'm not sure that you do.
> Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontological reducible to that constitutent part - consciousness
Yes, although to be clear, I'm mainly interested in correctly articulating the viewpoint expressed in the paper. My own views don't perfectly overlap with Searle's
> (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separatable concept from consciousness)
I doubt he'd add it as a discrete entry because, as you correctly observe, intentionality is inseparable from consciousness (but the reverse is not true)
> And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.
Ok good, this is directly interacting with the paper's thesis: why he's not a (property) dualist. He's trying to thread the needle between materialism and dualism. His main objection to property dualism is that consciousness doesn't exist "over and above" the brain, on which it is utterly dependent. This is probably his tightest phrasing of his position:
> The property dualist means that in addition to all the neurobiological features of the brain, there is an extra, distinct, non physical feature of the brain; whereas I mean that consciousness is a state the brain can be in, in the way that liquidity and solidity are states that water can be in.
Does his defense work for you? Honestly I wouldn't blame you if you said no. He spends a full third of the paper complaining about the English language (this is a theme) and how it prevents him from cleanly describing his position. I get it, even if I find it a little exhausting, especially when the stakes are starting to feel kinda low.
> I think it's entirely possible to reject the notion of a meaningful first person ontology completely.
On first reading, this sounds like you might be rejecting the idea of consciousness entirely. Or do you think it's possible to have a 'trivial' first person ontology?
> It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create that uses our models of how other people function on ourselves. That is, we are simple computers that manipulate symbols in our brains, that generate memories of their recent state as being a "conscious experience", which is just what we invented as a model of why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).
I'm not sure where to start with this, so I'll just pick a spot. You seem to deny that "conscious experience" is a real thing (which is equivalent to "what it's like to be a zombie") but we nonetheless have hallucinated memories of experiences which, to be clear, we did not have because we don't really have conscious experiences at all. But how do we replay those memories without consciousness? Do we just have fake memories about remembering fake memories? And where do the fake fake fake memories get played, in light of the fact that we have no inner lives except in retrospect?
D.R. Hofstadter posited that we can extract/separate the software from the hardware it runs on (the program-brain dichotomy), whereas Searle believed that these were not two layers but consciousness was in effect a property of the hardware. And from that, as you say, follows that you may re-create the property if your replica hardware is close enough to the real brain.
IMHO, philosophers should be rated by the debate their ideas create, and by that measure, Searle was part of the top group.
> “No, his argument is that consciousness can't be instantiated purely in software…“
The confusion is very interesting to me, maybe because I’m a complete neophyte on the subject. That said, I’ve often wondered if consciousness is necessarily _embodied_ or emerged from pure presence into language & body. Maybe the confusion is intentional?
It's quite sad that people don't take the idea of consciousness being fundamental more seriously, given that's the only thing people actually deal with 100% of the time.
As for Searle, I think his argument is basically an appeal to common-sensical thinking, instead of anything based on common assumptions and logic. As an outsider, it feels very much like modern-day philosophy follows some kind of social media influencer logic, where you get respect for putting forward arguments that people agree with, instead of arguments that are non-intuitive yet rigorous and make people rethink their priors.
I mean, even today, here, you'd get similar arguments about "AI can never think because {reason that applies to humans as well}"... I suspect it's almost ingrained in the human psyche to feel this way.
No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example. But his belief, which he articulates very explicitly in the article, is that you couldn't create a machine consciousness by running even a perfect simulation of a biological brain on a digital computer, neuron for neuron and synapse for synapse. He likens this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building.
Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why. His ideas are very much muddy, and while he accuses others of supporting cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate, it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.