I had an experience the other day where Claude Code wrote a script that shelled out to other LLM providers to obtain some information (unprompted by me). More often it requests information from me directly. My point is that the environment itself for these things is becoming at least as computationally complex, or irreducible (as the OP would say), as the model's algorithm, so there's no point trying to analyse these things in isolation.
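For the curious, the script was roughly this shape (a from-memory sketch, not the literal code; the provider, model, and question are stand-ins I picked, and you'd need OPENAI_API_KEY set):

    import json
    import os
    import subprocess

    def ask_other_model(question: str) -> str:
        # Shells out to curl rather than importing an SDK, which is
        # roughly what the generated script did.
        payload = json.dumps({
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": question}],
        })
        result = subprocess.run(
            ["curl", "-s", "https://api.openai.com/v1/chat/completions",
             "-H", "Authorization: Bearer " + os.environ["OPENAI_API_KEY"],
             "-H", "Content-Type: application/json",
             "-d", payload],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)["choices"][0]["message"]["content"]

    print(ask_other_model("What does 'computationally irreducible' mean?"))

Nothing fancy; the interesting part is that the agent decided, on its own, that another model was the right tool for the sub-task.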
They're feeding back what it's "learning" along the way - whether it's doing so in a smart fashion, we don't know yet.
Pulling the power cord on a mammal means shutting off its metabolism. That predictably kills us.
Now it's about cutting off the food supply instead.
But regarding hunger: breatharians, weird and pathological example though they are, are in fact mammals, and for them the absence of food sometimes ends in "starves to death" rather than "changes their mind about this whole breatharian thing" or "lies pathologically about the calorie content of digestive biscuits dunked in tea".
I'm not sure why introducing a certain type of rare scam artist into the modeling of this thought experiment would make things clearer or more interesting.
When I say autonomous I don't mean some highfalutin philosophical concept; I just mean it does stuff on its own.