
> Right, so you agree that there is a clear difference between a mammal and the device we're discussing.

A difference that you have not demonstrated the relevance of.

If I run an AI on my laptop and unplug the charger, it runs until the battery dies. If I have a mammal that does not eat, it lives until it starves.

If I run an AI on a desktop and unplug the mains, it ceases function in milliseconds (or however long the biggest capacitor in the PSU lasts). If I (for the sake of argument) had a device that could instantly remove all the ATP from a mammal's body, they'd also be dead pretty quick.

If I have an android, purely electric motors and no hydraulics, and the battery connector comes loose, it ragdolls. Same for a human who has a heart attack.

An AI trained with rewards for collecting energy to recharge itself does so; one with no such feedback doesn't. Most mammals have such a mechanism from evolution, but there are exceptions where that signal is missing (not just weird humans), and they starve.
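The reward-feedback point can be made concrete with a toy experiment (a hypothetical sketch of my own; the corridor environment, reward values, and `train` function are all invented for illustration). Two agents share an identical learning rule; the only difference is whether the reward signal mentions the battery at all. The one with the recharge reward learns to head for the charger when its battery runs low; the other never develops any preference:

```python
import random

def train(reward_includes_battery, episodes=3000, seed=0):
    """Tabular Q-learning on a 5-cell corridor; the charger sits at cell 0.
    State = (position, battery_low); actions: 0 = left, 1 = right."""
    rng = random.Random(seed)
    q = {(p, low): [0.0, 0.0] for p in range(5) for low in (False, True)}
    for _ in range(episodes):
        pos, battery = 4, 6
        for _ in range(30):                       # step cap per episode
            if battery == 0:                      # battery dead: episode over
                break
            low = battery <= 3
            greedy = q[(pos, low)].index(max(q[(pos, low)]))
            a = rng.randrange(2) if rng.random() < 0.2 else greedy
            pos2 = max(0, pos - 1) if a == 0 else min(4, pos + 1)
            battery = 6 if pos2 == 0 else battery - 1   # charger refills
            # The ONLY difference between the two agents is this reward line:
            r = 1.0 if reward_includes_battery and pos2 == 0 and low else 0.0
            low2 = battery <= 3
            q[(pos, low)][a] += 0.1 * (r + 0.9 * max(q[(pos2, low2)]) - q[(pos, low)][a])
            pos = pos2
    # Has the agent learned to prefer moving toward the charger when low?
    return q[(2, True)][0] > q[(2, True)][1]
```

With the battery term, `train(True)` learns charger-seeking; with it removed, every reward is zero and `train(False)` learns nothing, so it "starves" at the step cap. Nothing about the learning rule itself changes, which is the point: the behaviour comes from the feedback signal, not from intelligence.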

None of these things say anything about intelligence.

> I'm not sure why introducing a certain type of rare scam artist into the modeling of this thought experiment would make things clearer or more interesting.

Because you're talking about the effect of mammals ceasing the consumption of food, and they're an example of mammals ceasing the consumption of food.


cess11
This is not about intelligence, it's about autonomy. Your laptop does not exhibit autonomy, it is a machine slave. It is not embodied and it does not have the capacity for self-governance.

It is somewhat disconcerting that there are people who feel that they could be constrained into living like automatons and still have autonomy, and who viciously defend the position that a dead computing device actually has the freedom of autonomy.

ben_w OP
> This is not about intelligence, it's about autonomy.

OK. Then why bring up physical autonomy in a discussion about AGI, where "autonomy" was previously used in the context of "autonomously seek information themselves"?

> Your laptop does not exhibit autonomy, it is a machine slave. It is not embodied and it does not have the ability for self-governance.

Is the AI running on my laptop more or less of a slave than I am to the laws of physics, which determine the chemical reactions in my brain and thus my responses to caffeine, sleep deprivation, loud music, and potentially (I've not been tested) flashing lights?

And why did either of us, you and I, respond to each other's comments when they're just a pattern of light on a display (or pressure waves on your ear, if you're using TTS)?

What exactly is "self-governance"? Be precise here: I am not a sovereign, and the people who call themselves "sovereign citizens" tend to end up very surprised by courts ignoring their claims of self-governance and imprisoning or fining them anyway.

But also, re autonomy:

1. I did mention androids — those do exist, the category is broader than Musk vapourware, film props, and Brent Spiner in face paint.

2. Did Stephen Hawking have autonomy? He could get information when he requested it, but had ever-decreasing motor control over his body. That sounds very much like what LLMs do these days.

If he did not have autonomy, why does autonomy matter?

If he did have autonomy, specifically due to the ability to get information on request (which is what LLMs do now), then what specifically separates that from what LLMs demonstrate when they access the internet via web search?

If he did have autonomy, but only because of the wheelchair and carers who would take him places, then what separates that specifically from even the silly toy demonstrations where someone puts an LLM in charge of a Boston Dynamics "Spot", or even one of those tiny DIY Arduino rolling robot kits?

The answer "is alive" is not the same as "autonomous".

The answer "has feelings" leads to a long-standing philosophical problem that is not only not solved, but people don't agree on what the question is asking, and also unclear why it would matter* for any of the definitions I've heard.

The answer "free will" is, even in humans, either provably false or ill-defined to the point of meaninglessness. For example "just now I used free will to drink some coffee", but if I examine my physical state closely, I expect to find one part of my brain had formed a habit, and potentially another which had responded to a signal within my body saying "thirsty" — but such things are mechanistic (thirst in particular can be modified very easily with a variety of common substances besides water), and fMRI scans show that our brains generate decisions like these before our conscious minds report the feeling of having decided.

* at least, why it would matter on this topic; for questions where there is a moral subject who may be harmed by the answer to that question, "has feelings" is to me the primary question.
