
Since everyone's spitballing their idea of AGI, my personal take is that AGI should be a fully autonomous system that has a stable self-image of some sort, can act on its own volition, understands the outcomes of its actions, learns from cause and effect, and can continue doing so indefinitely.

So far, LLMs aren't even remotely close to this: they only do what they are told (directly or otherwise), they can't learn without a costly offline retraining process, they do not care in the slightest what they're tasked with doing or why, and they have nothing approximating a sense of self beyond what they're told to be.


Yeah, my definition of AGI has always been close to this. The key factors:

- It's autonomous

- It learns (true continual learning, not offline retraining)

- By definition some semblance of consciousness must arise

This is why I think we're very far from anything close to this. Easily multiple decades if not far longer.
