That's an assumption, though. A plausible assumption, but still an assumption.
We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.
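To make the "pen and paper" point concrete: a minimal sketch (toy weights, nothing from an actual model) of the kind of arithmetic a transformer-style feed-forward step reduces to — every operation is a sum, product, or tanh you could, in principle, carry out by hand:

```python
import math

def matvec(W, x):
    # Each output entry is just a sum of products -- pen-and-paper arithmetic.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def gelu(v):
    # Common transformer activation; still only +, *, and tanh.
    return [0.5 * x * (1 + math.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x**3)))
            for x in v]

# Made-up 2x2 weights and a 2-dim input, standing in for the billions
# of parameters a real LLM would enumerate the same way.
W1 = [[0.1, -0.2], [0.3, 0.4]]
W2 = [[0.5, 0.0], [-0.1, 0.2]]
x = [1.0, 2.0]

h = gelu(matvec(W1, x))  # hidden layer
y = matvec(W2, h)        # output
```

A real model is this, repeated at vastly larger scale — which is why the calculation list is finite and, in principle, writable down.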
Yes, or what about leprechauns?
This has been discussed to oblivion over the last century; it's interesting that people seem not to realize there's existing literature and repeat the same arguments (not saying anyone is wrong).
An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
An LLM is always a description. An LLM operating on a computer is identical to a description of it operating on paper (if much faster).
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.
If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer off the top of my head.
I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.
Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds is causally isolated from the others. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.
I'm not against this conclusion ( https://en.wikipedia.org/wiki/Philosophical_zombie ) but it doesn't seem to be compatible with what most people believe in general.
A candle in Canada can't melt wax in Mexico, and a real candle can't melt simulated wax. If you want to differentiate two things along one axis, you can't just point out differences that may or may not have any effect on that axis. You have to establish a causal link before the differences have any meaning. To my knowledge, intelligence/consciousness/experience doesn't have a causal link with anything.
We know our brains cause consciousness the way we knew in 1500 that being on a boat for too long causes scurvy. Maybe the boat and the ocean matter, or maybe they don't.
A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
Build a simulation of creatures that evolve from simple structures (think RNA, DNA).
Now, if in this simulation, after many many iterations, the creatures start talking about consciousness, what does that tell us?
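The thought experiment above could be sketched, in grossly simplified form, as a standard selection-and-mutation loop (toy fitness function and parameters of my own choosing; nothing here is claimed to produce creatures that talk, let alone talk about consciousness — it only shows the shape of such a simulation):

```python
import random

random.seed(0)

GENOME_LEN = 16  # "genomes" are bit strings, a crude stand-in for RNA/DNA
POP_SIZE = 20

def fitness(genome):
    # Arbitrary stand-in objective: count of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]                 # selection
    offspring = [mutate(random.choice(survivors))          # reproduction
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
```

The philosophical question is what it would mean if, with a rich enough environment and enough iterations of exactly this kind of loop, the evolved structures started producing reports about inner experience.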
It might if the simulation includes humans observing the candle.
Whatever that something the brain actually does in the real, physical world is, it produces the cogito in cogito, ergo sum, and I doubt you can get it just by describing what all the subatomic particles are doing, any more than a computer or pen-and-paper simulation of a hurricane can knock your house down, no matter how perfect the simulation.
A pen-and-paper simulation of a brain would also be "a thing happening," as you put it. You have to explain what the magical ingredient is that makes the brain's computations impossible to replicate.
You could connect your brain simulation to an actual body, and you'd be unable to tell the difference from a regular human unless you cracked it open.
I'm not. You might want me to be, but I'm very, very much not.
Of course a GPU involves things happening. No amount of using it to describe a brain operating gets you an operating brain, though. It's not doing what a brain does. It's describing it.
(I think this is actually all somewhat tangential to whether LLMs "can think" or whatever, though—but the "well of course they might think, because if we could perfectly describe an operating brain, that would also be thinking" line of argument often comes up, and I think it's about as wrong-headed as a thing can possibly be, a kind of deep "confusing the map for the territory" error; see also comments floating around this thread offhandedly claiming that the brain "is just physics"—like, what? That's putting the cart before the horse! No! Dead wrong!)