– Are we treating an arbitrary ontological assertion as if it were a formal argument that needs to be heroically refuted? Or, better: is that metaphysical setup even an argument?
If that’s the game, fine. Here we go:
– The claim that one can build a true, perfectly detailed, exact map of reality is... well... ambitious. It sits remarkably far from anything resembling science, since it's conveniently untouched by that nitpicky empirical thing called evidence. But sure: freed from falsifiability, it can dream big and give birth to its omnicartographic offspring.
– Oh, quick follow-up: does that “perfect map” include itself? If so... say hi to Alan Turing (sketch below). If not... well, greetings to Herr Gödel.
– Also: if the world only shows itself through perception and cognition, how exactly do you map it “as it truly is”? What are you comparing your map to — other observations? Another map?
– How many properties, relations, transformations, and dimensions does the world have? Over time? Across domains? Under multiple perspectives? Go ahead, I'll wait... (oh, and: hi to you too... you know who)
And btw, the true, detailed map of the world exists... It's the world.
It's just sort of hard to get a copy of it. Not enough material available... and/or not enough compute.
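Since someone will ask what Turing has to do with maps: here's a minimal sketch of the standard diagonal argument, assuming a hypothetical perfect_map oracle. Every name here is made up for illustration; none of it comes from the paper.

```python
# Hypothetical oracle: a "perfect map" that can answer any question about
# any program, including programs that consult the map itself.
# (perfect_map is assumed, not implemented -- that's the point.)
def perfect_map(program, arg) -> bool:
    """Supposedly answers: does program(arg) halt?"""
    raise NotImplementedError  # no total, correct implementation can exist

def spiteful(program):
    # Ask the map about ourselves, then do the opposite of its prediction.
    if perfect_map(program, program):  # map predicts: spiteful(spiteful) halts
        while True:                    # ...so loop forever
            pass
    return "halted"                    # map predicts a loop? halt instead.

# spiteful(spiteful) contradicts whatever the map predicts about it:
# if perfect_map says "halts", it loops; if it says "loops", it halts.
# A "perfect map" that must include itself inherits exactly this problem.
```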
P.S. Sorry if that came off sharp — bit of a spur-of-the-moment reply. If you want to actually dig into this seriously, I’d be happy to.
If you are claiming that human intelligence is not "general", you'd better put a huge disclaimer on your text. You are free to redefine words to mean whatever you want, but if you use something so different from the way the entire world uses it, the onus is on you to make it very clear.
And the alternative is you claiming human intelligence is impossible... which would make your paper wrong.
> Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted?
The formality of the paper already supposes a level of rigor. The problem, at its core, is that p_intelligent(x: X) where X ∈ {human, AI} is not shown to be a scissor that actually cuts the domain just by proving p_intelligent(AI) = false. Without walking us through the steps establishing p_intelligent(human) = true, we cannot be sure the predicate isn't simply always false.
Without demonstrating that humans satisfy the claims, we can't rule out that the results are vacuously true because nothing, in fact, can satisfy the standard (sketch below).
These aren't heroic refutations; they're table stakes.
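Here's a minimal sketch of the vacuity worry, with a made-up predicate standing in for the paper's formalism (nothing below is from the paper):

```python
# Hypothetical stand-in for the paper's predicate; the bar is set so high
# that nothing clears it (names illustrative, not the paper's).
def p_intelligent(x: str) -> bool:
    perfectly_general = False  # suppose no physical system satisfies this
    return perfectly_general

# The paper's move: prove the predicate fails for AI...
assert p_intelligent("AI") is False

# ...but without the positive half, the "result" is vacuous: the same
# predicate also rules out humans, dolphins, and everything else.
assert p_intelligent("human") is False  # the scissor never cut anything
```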
> Anyway, this is not part of the questions this paper seeks to answer. Neither will we wonder in what way it could make sense to measure the strength of a model by its ability to find its relative position to the object it models. Instead, we chose to stay ignorant - or agnostic? - and take this fallible system called "human". As a point of reference.
Cowards.
That's the main counterargument, and acknowledging its existence without addressing it is a craven dodge.
Assuming the assumptions[1] are true, human intelligence can't even be formalized under the same pretext. For humans to escape the same verdict, human intelligence would have to fail at least one of these properties (see the sketch after the list):
1. Algorithmic. The main point of contention. If humans aren't algorithmically reducible, even at the level of the computation underlying physics, then human cognition is supernatural.
2. Autonomous. Trivially true, given that humans are the baseline.
3. Comprehensive (general). Trivially true, since humans are the baseline.
4. Competent. Trivially true, given that humans are the baseline.
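To make the reduction concrete, here's a toy sketch treating the paper's definition as a bare conjunction of those four properties. All names are assumed stand-ins, not the paper's actual formalism:

```python
# Hypothetical reconstruction: intelligence as a conjunction of the four
# properties above (names assumed for illustration).
def intelligent(algorithmic: bool, autonomous: bool, general: bool,
                competent: bool) -> bool:
    return algorithmic and autonomous and general and competent

# Humans are the baseline, so the last three conjuncts hold by construction.
AUTONOMOUS = GENERAL = COMPETENT = True

# intelligent(human) therefore reduces to the single contested conjunct:
for algorithmic in (True, False):
    verdict = intelligent(algorithmic, AUTONOMOUS, GENERAL, COMPETENT)
    print(f"algorithmic={algorithmic} -> intelligent={verdict}")
# algorithmic=True  -> intelligent=True   (then the proof must not apply to humans)
# algorithmic=False -> intelligent=False  (the only exit: supernatural cognition)
```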
I'm not sure how they reconcile this, given that they simply dodge the consequences it implies.
Overall, not a great paper. It's much more likely that their formalism is wrong than that their conclusion holds.
Footnotes
1. Not even the consequences, unfortunately for the authors.