southernplaces7
Without taking this rather sketchy paper too seriously, my simple and heuristic take is as follows: AI built through raw information processing, in the way an LLM works, probably won't get anywhere near AGI. But since something, though we don't know what, gives us self-directed reasoning and sentience, and thus natural general intelligence, some form of AGI is at least a possibility.

This holds unless we either discover some essentially non-physical aspect of consciousness that can't be recreated through any artificial compute we're capable of, or fail to discover any mechanism by which artificial reasoning can imitate the heuristic mechanisms we humans apparently use to navigate the world and our internal selves. (Since we don't know what consciousness is, either outcome is possible.)
