
ICBTheory
1. I appreciate the comparison — but I’d argue this goes somewhat beyond the No Free Lunch theorem.

NFL says: no optimizer performs best across all domains. But the core of this paper isn't about performance variability; it's about structural inaccessibility. Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy, no matter how clever or powerful. The model does not underperform here; the point is that the problem itself collapses the computational frame.
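
To make the contrast concrete, here is a rough formal paraphrase (my own shorthand, not the paper's notation): NFL is an averaging statement over all objective functions, while the claim here is a non-existence statement about any algorithm at all.

% Wolpert/Macready NFL (paraphrased): averaged over all objective functions f,
% the distribution of observed cost sequences d_m^y after m evaluations is the
% same for any two search algorithms a_1 and a_2.
\[
\sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right)
\]
% Structural inaccessibility (rough shorthand, my notation): for the problem
% class D in question, no single algorithm A halts with a correct resolution
% on every instance x.
\[
\neg\,\exists A\ \forall x \in D:\ A(x)\ \text{halts and returns a correct resolution of } x
\]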

2. OMG, lol... just to clarify, there's been a major misunderstanding :)

the “weight question” part is NOT a transcript from my actual life... Thankfully, I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers, and now post it here…

So:
- NOT a real thread
- NOT a real dialogue with my wife...
- just an exemplary case...
- No, I am not brain dead and/or categorically suicidal!!
- And just to be clear: I don't write this while sitting in some marital counseling appointment, or in my lawyer's office, the ER, or in a coroner's drawer.

--> It’s a stylized, composite example of a class of decision contexts that resist algorithmic resolution — where tone, timing, prior context, and social nuance create an uncomputably divergent response space.

Again : No spouse was harmed in the making of that example.

;-))))


andoando
Just a layman here, so I'm not sure if I'm understanding (probably not), but humans don't analyze every possible scenario ad infinitum; we go based on the accumulation of our positive/negative experiences from the past. We make decisions based on some self-construed goal and beliefs as to what goes toward those goals, and these are arbitrary, with no truth to them. Napoleon, for example, conquered Europe perhaps simply because he thought he was the best to rule it, not through a long chain of questions and self-doubt.

We are generally intelligent only in the sense that our reasoning/modeling capabilities allow us to understand anything that happens in space-time.

john-h-k
> Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy, no matter how clever or powerful. The model does not underperform here; the point is that the problem itself collapses the computational frame.

I see no proof this doesn’t apply to people

ben_w
> the “weight question” part is NOT a transcript from my actual life... Thankfully, I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers, and now post it here…

You have wildly missed my point.

You do not even need to have a spouse in order to try asking an AI the same question. I am not married, and I was still able to ask it to respond to that question.

My point is that you clearly have not asked ChatGPT, because ChatGPT's behaviour clearly contradicts your claims about what AI would do.

So: what led you to claim that AI would respond as you say it would, when the most well-known current-generation model clearly doesn't?

andoando
I read some of the paper, and it does seem silly to me to state this:

"But here’s the peculiar thing: Humans navigate this question daily. Not always successfully, but they do respond. They don’t freeze. They don’t calculate forever. Even stranger: Ask a husband who’s successfully navigated this question how he did it, and he’ll likely say: ‘I don’t know… I just… knew what to say in that moment....What’s going on here? Why can a human produce an answer (however imperfect) while our sophisticated AI is trapped in an infinite loop of analysis?” ’"

LLMs don't freeze either. In your science example, too, we already have LLMs that give very good answers to technical questions, so what is this infinite cascading search based on?

I have no idea what you're saying here either: "Why can't the AI make Einstein's leap? Watch carefully:
• In the AI's symbol set Σ, time is defined as 'what clocks measure, universally'
• To think 'relative time,' you first need a concept of time that says: 'the flow of time varies when moving, although the clock ticks just the same as when not moving'
• 'Relative time' is literally unspeakable in its language
• 'What if time is just another variable?' means: 'What if time is not time?'"

"AI’s symbol set Σ, time is defined as ‘what clocks measure-universally", it is? I don't think this is accurate of LLM's even, let alone any hypothetical AGI. Moreover LLM's clearly understand what "relative" means, so why would they not understand "relative time?".

In my hypothetical AGI, "time" would mean something like "When I observe something, and then things happen in between, and then I observe it again", and relative time would mean something like "How I measure how many things happen between two things is different from how you measure how many things happen between two things".
