> I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

Those abilities aren't learned; they're innate, and they allow us to think in ways that don't require large amounts of data. An average human being can be shown an Atari game like Pac-Man and understand the objective of the game almost right away.

But a game like Pac-Man is intuitively understood by the average human because it’s fundamentally a “human” game - designed by humans, and delivering dopamine and other chemical hits in a way that we might even perceive as “fun”. Imagine a game that requires lots of computation and has no human-friendly interface - a machine would obviously “learn” the rules a lot faster.

The more evidence we uncover, especially the research around AlphaGo Zero (self-play with a specific objective in lieu of millions of years spent developing keen general intuition), the more it feels like “human-like” intelligence is not some incredible holy grail of general intelligence but an emergent property of any reasonably directed algorithm.
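
To make the self-play point concrete, here is a minimal sketch of the loop AlphaGo Zero's training is built around: a single agent improves purely by playing against itself and backing up game outcomes, with no human examples. Tic-tac-toe, the tabular value table, and every hyperparameter below are illustrative assumptions standing in for Go and the deep network.

    # Minimal self-play sketch: one agent improves a value table purely by
    # playing against itself and backing up game outcomes; no human examples.
    import random
    from collections import defaultdict

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    values = defaultdict(float)  # state -> value for the player who just moved

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def choose(board, player, eps=0.2):
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if random.random() < eps:  # occasional exploration
            return random.choice(moves)
        # Otherwise pick the move leading to the state we currently rate best.
        return max(moves, key=lambda m: values[board[:m] + player + board[m+1:]])

    def self_play_game(alpha=0.1):
        board, player, history = "." * 9, "X", []
        while True:
            m = choose(board, player)
            board = board[:m] + player + board[m+1:]
            history.append((board, player))
            w = winner(board)
            if w or "." not in board:
                break
            player = "O" if player == "X" else "X"
        # Learning step: back the final outcome up through every visited state.
        for state, p in history:
            target = 0.0 if w is None else (1.0 if p == w else -1.0)
            values[state] += alpha * (target - values[state])

    for _ in range(20000):
        self_play_game()
    print("states evaluated:", len(values))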

Another random thought, but Neanderthals come to mind as a human-like intelligence that simply proved not as cunning or vicious as our own intellect, and was outcompeted and stamped out. Imagine if we were to discover the AI equivalent of Neanderthal intelligence - would we be quick to dismiss it as subpar and “not general enough”, even though it emerged through the same algorithm (natural selection)?

> But a game like Pac-Man is intuitively understood by the average human because it’s fundamentally a “human” game - designed by humans, and delivering dopamine and other chemical hits in a way that we might even perceive as “fun”. Imagine a game that requires lots of computation and has no human-friendly interface - a machine would obviously “learn” the rules a lot faster.

But if we're talking about creating AGI and the concerns that go with that (full automation, self-directed goals in the real world, etc.), then the question is whether DL is enough on its own to get there.

As such, comparing AlphaGo to humans on a variety of tasks like Atari games or Go is kind of the point. And Google's goal is to turn it into a product, which means doing tasks humans currently do.

Well, this is silly. Show a toddler that game and they'll have no idea what the purpose is, or even why it's a game.

Humans draw on massive reservoirs of knowledge to comprehend anything.

That's true, but the question is whether you can train ML on a massive reservoir of knowledge, with the result being an understanding of the world similar to the one humans possess.

There is a long-term attempt to give machines a common-sense understanding of the world by specifying several million rules. That's the Cyc project.
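
For flavor, here is a toy, hand-rolled version of that rule-based approach: a handful of explicit facts plus a forward-chaining loop that derives new ones. The facts, the rule format, and the inference loop are all invented for this sketch; real Cyc uses its own language (CycL) and a far richer inference engine.

    # Toy version of the "millions of hand-written rules" approach.
    facts = {("isa", "Fido", "Dog"),
             ("isa", "Dog", "Mammal"),
             ("isa", "Mammal", "Animal")}

    def transitive_isa(fs):
        # One hand-specified rule: isa(a, b) and isa(b, c) implies isa(a, c).
        return {("isa", a, d)
                for (_, a, b) in fs
                for (_, c, d) in fs
                if b == c}

    # Forward-chain until no new facts appear (a fixed point).
    while True:
        new = transitive_isa(facts) - facts
        if not new:
            break
        facts |= new

    print(("isa", "Fido", "Animal") in facts)  # True: common sense by rule-chasing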

> And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

If you have an evolutionary learning algorithm, you don't expect every branch to be equally capable; that doesn't mean we didn't get here along the same path.

I basically agree that humans have some "innate" ability, but this innate ability exists as the result of an evolutionary process.

> Those abilities aren't learned; they're innate, and they allow us to think in ways that don't require large amounts of data. An average human being can be shown an Atari game like Pac-Man and understand the objective of the game almost right away.

Pointing to a single experience with a single Atari game sort of misses the point, which is all the time we spent learning up until that moment, and all the evolution that went on before it.

It also sort of misses the point that Atari games are explicitly designed to be easily understandable by humans; they're not something that just appeared and that we happened to be good at.

> I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

But if the argument is that AlphaGo is the right approach to creating an AGI, then we should at some point expect it to learn how to recognize the goal of various tasks without a huge amount of training.

Maybe evolution provided us with something additional that is lacking in the current generation of DL. And there are AI researchers who think that machines need ontologies to understand the world, and that it's not reasonable to expect a machine to learn everything from scratch, because the world is too complex for that.

It's not reasonable to expect AlphaGo to replay evolution in order to gain the ability to do abstract reasoning.

> But if the argument is that AlphaGo is the right approach to creating an AGI, then we should at some point expect it to learn how to recognize the goal of various tasks without a huge amount of training.

I never said anything about AlphaGo or AGI. I said that humans are not as good at generalizing from few examples as people would like to believe.

You speak as if a human being shown Pac-Man for the first time and figuring out the controls doesn't require a ton of data. There's so much data that goes into that: you have tens of billions of photons entering your eye during that playtime, sensory information that your brain decodes and reintegrates on the fly before relaying commands to the required parts of your body.
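
As a rough sanity check on that number, a back-of-envelope estimate (every figure below is an order-of-magnitude assumption, not a measurement):

    # Back-of-envelope check of the "tens of billions of photons" claim.
    photons_per_second = 1e11  # photons entering a pupil in ordinary indoor light (assumed)
    play_seconds = 60          # a single minute in front of the screen
    print(f"{photons_per_second * play_seconds:.0e} photons")  # 6e+12 - comfortably "tens of billions"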

Could you have experienced all of that without seeing Pac-Man? Probably not. Once you have seen and played Pac-Man enough, you can probably imagine an entire instance of it in your mind. That's because we are good at storing and retrieving certain kinds of data. The data was required in the first place, though.

> And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

Note that the current capabilities of AI systems are nowhere near the general capabilities of a chimpanzee. It seems reasonable to assume that the hard task is coming up with the prior of the mammalian brain; the "easy" part is searching the parameter space on top of that prior, be it chimpanzee or human.
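
A minimal sketch of that prior-versus-parameters split, assuming a fixed nonlinear embedding as a stand-in for the evolved prior and a cheap linear fit as the "easy" learning on top (the data, dimensions, and embedding are all synthetic):

    # Sketch of the prior-vs-parameters split: a frozen nonlinear embedding
    # stands in for the evolved prior; all task learning is a cheap linear fit.
    import numpy as np

    rng = np.random.default_rng(0)

    def prior_features(x):
        # Stand-in for the "mammalian prior": fixed, not learned per task.
        W = np.array([[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3]])
        return np.tanh(x @ W.T)

    # A tiny task with only 20 examples - the "easy" part learns just a head.
    X = rng.normal(size=(20, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    head, *_ = np.linalg.lstsq(prior_features(X), y, rcond=None)
    acc = ((prior_features(X) @ head > 0.5) == y).mean()
    print(f"train accuracy on top of the frozen prior: {acc:.2f}")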
