
orangebread
This is a question I've wondered about for a while. What are we saying when we say AGI? For it to perform "generalized" reasoning, it would need to also think like a human. This is where the quest for AGI fundamentally falls apart for me.

Before I continue, I just want to say that I LOVE working with AI. I use it for everything. Naturally, I began to wonder about the nature of intelligence. This led me to sentience. And of course you begin to reference the body of sci-fi work addressing consciousness and machine sentience.

I think for AGI to be a thing, it would not only need to be this massive multimodal machine, but also a machine with self-motivation. I think this is the key. Right now all AI requires input from a human to even do anything. It requires instructions (prompts).

When AI begins to self-prompt because it has its own desires, without a human catalyzing it, and not only that, speaks about its experiences as an AI -- then I will say that is probably AGI. But if we reach that point, we are in deeper ethical territory: do we get to "own" this sentient machine?
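To make that distinction concrete, here's a minimal sketch in Python of a prompted loop versus a "self-prompting" loop. The generate function is a hypothetical stand-in for any language model call, not a real API, and the seed goal is an assumption purely for illustration:

    # Hypothetical stand-in for a language model call; any real
    # model API could be substituted here.
    def generate(prompt: str) -> str:
        return f"<model output for: {prompt!r}>"

    # Today's pattern: nothing happens until a human supplies a prompt.
    def human_driven():
        prompt = input("You: ")
        print("AI:", generate(prompt))

    # The speculative pattern: the system drafts its own next goal
    # from its prior output, with no human in the loop after the seed.
    def self_prompting(steps: int = 3):
        goal = "reflect on my last output"  # assumed seed "desire"
        for _ in range(steps):
            output = generate(goal)
            print("AI (self-directed):", output)
            # The machine generates its own follow-up prompt.
            goal = generate(f"Given {output!r}, what should I pursue next?")

    if __name__ == "__main__":
        self_prompting()

Of course, hard-coding that seed goal is exactly the human catalyst I'm talking about; the open question is where a genuine seed would come from.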

