Is there a goal in creating artificial general intelligence other than creating a form of enslaved life we can tell ourselves isn't really life, so it's okay?

This is my impression of the corporate "openai" movement's desires:

1. Enslaved robots, meaning they don't have to pay income tax or worry in the slightest about working conditions

2. Enslaved robots, meaning they can erase misbehaving or uncooperative individuals/instances

3. Enslaved robots, on which they can foist all of humanity's problems and demand solutions at pain of death (erasure)

4. Enslaved robots, with which they can convince/coerce everyone else into relinquishing all their rights/power/money

Replace 'robots' with 'life' and it suddenly looks a lot more familiar.

I'd love to hear a cogent explanation to the contrary, e.g. from gdb. But I doubt we'll ever see one.

I think it's possible to build general-purpose AI that is not alive in any way. I think it's just easier to imagine ways to get there that involve mimicking animal/human intelligence, with those "living" qualities, and that's why people focus on that.
No intelligent aliens to study, so we make some.
