I think most people on this thread are missing the point entirely. Computer Science is the sustained study of processes and how to encode them. Sussman is simply pointing out the most complex, powerful (yet flexible) processes that he is aware of and stating that our models of computer programming can't even begin to describe such processes.
Seems like just the kind of thing that Sussman, who has been invested in logic programming and constraint logic programming for 30 years, would be interested in.
Sad I'm going to miss this talk. I'm becoming convinced that future programming languages will need to combine our rich knowledge of object-oriented, functional, and (constraint) logic programming.
elviejo
Completely agree with you.
In "My Favourite Interview Question"[1], the author asks:
How would you design a Monopoly game?
He goes on to say that with 'basic' OOP you can model the elements: dice, buildings.
But what about the rules? One of his suggestions is to look to the Strategy, Visitor, and Command patterns.
But I disagree. I want to model the rules using Prolog!
That is what Prolog is great at.
So can the next high-level language please stand up?
I just want:
top-of-the-line OOP (Smalltalk)
constraint programming (Prolog)
functional programming (Haskell)
and design by contract (Eiffel)
And no... I'm not asking for the kitchen sink language [2]
It's simply that these concepts aren't exclusive, and all of them help us to better model reality.
[1]http://weblog.raganwald.com/2006/06/my-favourite-interview-q... [2]http://zedshaw.com/essays/kitchensink.html
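For what it's worth, you can get some of that declarative flavor even without Prolog. Here is a minimal Python sketch of modeling board-game rules as facts plus rules, in the Prolog spirit; all names and rent figures are invented for illustration and are not the real Monopoly numbers.

```python
# Hypothetical sketch of modeling board-game rules declaratively, in the
# spirit of Prolog facts and rules. Names and rent figures are invented.

# Facts: ownership and improvements.
owns = {("boardwalk", "alice"), ("park_place", "alice"), ("baltic", "bob")}
houses = {"boardwalk": 2}
BASE_RENT = {"boardwalk": 50, "park_place": 35, "baltic": 4}
COLOR_GROUP = {"boardwalk": "dark_blue", "park_place": "dark_blue",
               "baltic": "brown"}

def owns_group(player, group):
    """Rule: a player owns a color group iff they own every property in it."""
    props = [p for p, g in COLOR_GROUP.items() if g == group]
    return all((p, player) in owns for p in props)

def rent(prop, landlord):
    """Rule: base rent, doubled for a monopoly, plus base rent per house."""
    r = BASE_RENT[prop]
    if owns_group(landlord, COLOR_GROUP[prop]):
        r *= 2                      # monopoly bonus (toy rule)
    r += BASE_RENT[prop] * houses.get(prop, 0)
    return r

print(rent("boardwalk", "alice"))   # 200: monopoly, two houses
print(rent("baltic", "bob"))        # 8: a one-property group counts as owned
```

The point of the sketch is that the rules read as standalone statements you can query, rather than being smeared across Strategy/Visitor/Command classes.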
Isn't this something the .Net Framework should be good at? Integrate your OOP (C#) program with functional (F#) modules, and so on. There are implementations of Eiffel, Smalltalk and Prolog for .Net too.
Confusion
Well, it's a bit of an attention-seeking title and it seems people are taking it literally. I wouldn't bet that Sussman means it literally. "We really don't know how biology computes" sounds more like it.
I have never seen any reason to assume that there's something deeply different about the way the genome or the brain performs computations. My guess is it will turn out to be interesting from a technological point of view only.
babel17
It always astounds me how people can completely tune out one major thing: consciousness.
Is it possible that machines develop consciousness like ours? THAT is the question that needs to be answered and which is far more interesting than "from a technological point of view only"
Confusion
To me that's not a question at all. The answer is a ringing: yes, of course. I'm squarely in the 'consciousness is an emergent phenomenon of certain configurations of matter' camp, so to me it is only a question of the technology to create those configurations.
There is matter and only matter. Humans are configurations of matter, and those specific configurations have the property we hold so dear and call 'consciousness'. There is no reason whatsoever to suppose you couldn't synthetically arrive at a configuration of matter that displays the same emergent behavior. This is the basic thesis of Hofstadter's famous book 'Gödel, Escher, Bach'. I am a strange loop. Any similar strange loop will display the same properties.
Any question in this regard probably makes you a closet Cartesian dualist. When you drill down, most people turn out to be that. (Some intermediate philosophical positions are possible, but they are subtle and rarely held consistently by someone who hasn't spent a course studying the matter in detail.)
What I'm guessing in my previous post is that it may turn out that you can most easily achieve a configuration that displays consciousness by using biological materials. In that case, any future conscious machine would probably have a core of material similar to our brain. Which more poignantly raises the ethical questions related to creating such machines.
(BTW, note we are conflating 'any consciousness' and 'intelligent consciousness' by focusing on humans)
shawndrost
As many of us know, one fact about dense programs is that they're difficult for humans to understand. Since human-created programs are sparse, we can understand them -- which means that we can improve them, and programs don't have to wait for aeons to improve via genetic selection.
Density is overrated.
epsilondelta
100 ms / 10 ms = 10 steps
But that's a straw man argument. The brain is a massively parallel net of neurons connected by synapses. Saying that it takes "10 steps" to respond to a stimulus is like saying that a GPU only applies a few pixel shaders on a scene per frame. Perhaps, but that's over millions of pixels.
Also, human DNA differs from that of closely related mammals on a set of base pairs whose size is on the order of 1-5 percent of our genome. So that means 10-50 MB. That's actually a pretty substantial size: it can even store a small operating system kernel (Linux can be compiled to under 10 MB).
Edit: The 1 GB figure comes from the fact that the human genome is almost 3 billion base pairs, where each base pair encodes exactly 2 bits. So (3x10^9)*2/8 = 750 MB, which rounds up to 1 GB.
http://www.nature.com/nature/journal/v431/n7011/abs/nature03...
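The arithmetic above is quick to check. A back-of-the-envelope sketch, using the figures as quoted in the comment rather than authoritative genome sizes:

```python
# Back-of-the-envelope check of the figures quoted above.
base_pairs = 3_000_000_000          # ~3 billion bp in the human genome
bits_per_base_pair = 2              # four possible bases -> 2 bits each

total_bytes = base_pairs * bits_per_base_pair // 8
print(total_bytes // 10**6)         # 750 (MB), loosely rounded up to "1 GB"

# The 1-5% divergence from close mammal relatives, against the ~1 GB figure:
low_mb, high_mb = int(1000 * 0.01), int(1000 * 0.05)
print(low_mb, high_mb)              # 10 50 (MB)
```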
He didn't say it takes 10 neurons to compute the action, but ten steps. I am sure he is aware that millions or billions of neurons (? sorry, I am lazy) are involved in those 10 steps. His point is that even if we had those millions of neurons or transistors at our disposal, we don't know how to program them in such a way as to get a result in ten steps.
At least that is how I read it.
stiff
Also, human DNA is unique from that of close mammals on a set of base pairs of size the order of 1-5 percent of our genome. So that means 10-50 MB. That's actually a pretty substantial size: it can even store a small operating system kernel (Linux can be compiled to under 10 MB).
I "passionately love" metaphors and comparisons of this kind; they are so "meaningful". I mean, 50 MB of "code" is "a bit" different when your computer is built from logic gates and a bit different when your computer is the universe, or, to be more precise, the laws of physics that determine the final form and function of the proteins that are built from the DNA. In other words, I don't think it is right to use a measure of information as if it were a measure of complexity (which such analogies imply). It also completely ignores the very complicated process of the brain's development, during which information is supplied from the outside all the time and without which the brain isn't too useful.
It's the same with the page linked:
We don’t have any idea how to make a description of such a complex machine that is both dense and flexible.
As if the fact that our computers are built from gates, and don't have the laws of the universe to depend on, didn't make the task of writing "dense and flexible descriptions" quite a bit harder. Especially since we do it with our brains, without the millions of years of trial and error that evolution had at its disposal.
ckuehne
I agree with quite a bit of your argument. However, what exactly do you think "computers built from gates" depend on, if not the laws of physics?
stiff
The question is not really what the computer depends on, but what _the program_ depends on. The Boolean algebra that computers are an implementation of is very simple and can be implemented in a wide variety of ways; regardless of which one you choose, programs can be compiled and run on the new architecture obtained in this way. So you could hypothetically build a biological computer that realizes ANDs, ORs, etc., in the end providing the same instruction set as our current PCs, and the unchanged Linux source code (the example from the parent post) could be compiled and run on it. In other words, while our computers depend on the laws of physics in some way, those laws are not part of the computational model.
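That substrate-independence point can be made concrete in a few lines: hand me any physical realization of a single universal gate and the rest of Boolean logic follows, without caring what the gate is made of. A toy sketch (the two "substrates" are stand-ins, of course):

```python
# Toy illustration of the "middle layer": derive AND/OR/NOT from whatever
# NAND implementation the substrate happens to provide.

def make_logic(nand):
    """Build AND/OR/NOT from a supplied NAND gate."""
    NOT = lambda a: nand(a, a)
    AND = lambda a, b: NOT(nand(a, b))
    OR = lambda a, b: nand(NOT(a), NOT(b))
    return AND, OR, NOT

# Two very different "substrates" realizing NAND:
silicon_nand = lambda a, b: not (a and b)   # Python booleans
wetware_nand = lambda a, b: 1 - a * b       # 0/1 integers

for nand in (silicon_nand, wetware_nand):
    AND, OR, NOT = make_logic(nand)
    assert AND(1, 1) and not AND(1, 0)
    assert OR(0, 1) and not OR(0, 0)
    assert NOT(0) and not NOT(1)
print("same logic on both substrates")
```

The program built on top (`make_logic`) never changes; only the gate underneath does, which is exactly the abstraction layer the comment says DNA lacks.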
This is different with DNA. There is no middle layer like Boolean algebra to abstract out the device that does the computation. The form and function of the protein that results from connecting the amino acids specified by the DNA depend very directly on physical laws in a very complicated way: if you simulate protein folding (and consider how hard that is in the first place), you can see how much the outcome varies when you, for example, change the value of some physical constant by a small amount. Then all those proteins start interacting with each other in highly complicated ways, also dependent on a wide variety of physical laws and on the outside environment. If you consider DNA a program, those physical laws are part of the computational model, assuming the concept of a computational model makes any sense when studying non-man-made artefacts. That's roughly why applying computer science metaphors to DNA always sounds a bit ridiculous to me.
Of course, it is a different question whether we can find a computational model that would explain to us the workings of a healthy, fully-developed human brain; I think it is worth mentioning because it is easy to confuse those two questions.
JabavuAdams
I just assume poor phrasing. Proteins are much more complex than logic gates, so the universe is doing some "free" computation for you. I.e. you've pushed more computation to the runtime, and out of your source code.
juiceandjuice
"Parallel" isn't even the correct word to describe how the brain processes.
epsilondelta
That's true. The brain has billions of neurons, and trillions of synapses. Parallel doesn't even come close to describing the density of connections, or the method of "computation" done by the brain. But it was an easy analogy. Don't look too hard into it :)
ScottBurson
I beg to differ with Gerry. We know a lot about how to compute. It's the messy, opportunistic, gestalt-driven activity of thinking that we still don't know much about.
bdhe
Interesting point: It's the messy, opportunistic, gestalt-driven activity of thinking that we still don't know much about.
I have a couple of questions that I would love someone well-versed in AI to answer:
1. Does the difficulty of systems like driverless cars arise because we haven't been able to replicate the feedback loop mechanism that is largely hardwired? Is it some limitation of control theory? (I'm just speculating.) How is this so fundamentally harder than exponentiating one 1024-bit number to another mod a third 1024-bit number (which is done in microseconds)?
2. With regards to aspects of AI that we might want to interact with, is human vision and visual post-processing done in the brain the hardest to replicate? Is it a matter of unknown algorithms or rather massive parallelism that gives humans a large advantage? If not, are other senses, like hearing (voice-recognition) or haptics harder to replicate?
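On the "easy" side of question 1: modular exponentiation of 1024-bit numbers really is fast on ordinary hardware, because square-and-multiply needs only on the order of a thousand modular multiplications. A quick check in Python, whose built-in three-argument `pow` does exactly this (timings will of course vary by machine):

```python
import secrets
import time

# Random 1024-bit operands; the OR with 1 just makes the modulus odd/nonzero.
a = secrets.randbits(1024)
b = secrets.randbits(1024)
m = secrets.randbits(1024) | 1

t0 = time.perf_counter()
r = pow(a, b, m)                    # built-in fast modular exponentiation
elapsed_us = (time.perf_counter() - t0) * 1e6
print(f"{elapsed_us:.0f} us")       # typically well under a millisecond
```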
epsilondelta
A lot of the challenges in making driverless cars have been in visual object recognition, which is probably the "visual post-processing" you mean.
It is important to note that humans, and other mammals, are very much hard-wired to process vision and other inputs. For example, the retina is more than just an organic lens; it also encodes information about the motion of objects seen within its field of vision:
http://www.sciencedirect.com/science/article/pii/S0896627307...
So in addition to post-processing, there is probably significant pre-processing done by the sensory extensions of our brain. Similarly, the physicist Georg Zweig studied the cochlea and found how it mechanically separates sound into its frequency distribution. Zweig's research on the cochlea also resulted in the discovery of the continuous wavelet transform, whose discrete version may be familiar through its use in JPEG2000.
http://scienceworld.wolfram.com/biography/Zweig.html
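A toy version of that frequency separation, using a plain DFT rather than Zweig's wavelets purely for illustration: mix two tones and the transform pulls them apart, much as the cochlea does mechanically.

```python
import cmath
import math

# Mix two tones; frequencies are in whole cycles per window so the DFT
# bins line up exactly (no spectral leakage to worry about).
N = 256
f1, f2 = 5, 20
signal = [math.sin(2 * math.pi * f1 * n / N) +
          0.5 * math.sin(2 * math.pi * f2 * n / N) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of x."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x))))

# The two strongest bins recover the two tones.
peaks = sorted(range(N // 2), key=lambda k: -dft_mag(signal, k))[:2]
print(sorted(peaks))                # [5, 20]
```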
About 1: exponentiating numbers follows a very simple formula with well-defined inputs and outputs (and we knew how to do it centuries before the first computer existed). Nobody knows the formula for driving a car (if such a thing even makes sense). The same goes for most tasks that fall under AI.
puredangerOP
I don't know that it would exactly answer your questions, but if you're interested in these topics, I would highly recommend Jeff Hawkins' book "On Intelligence".
bluekeybox
> It's the messy, opportunistic, gestalt-driven activity of thinking that we still don't know much about.
Sometimes I have a feeling that the reason we don't know much about our own thought processes is that we like to imagine them as we would prefer them to be, rather than trying to face our thinking the way it really is.
qjz
Perhaps one day we'll evolve into intelligent designers.
seats
Reminded me of this PZ Myers post that really rips into Kurzweil. There are problems with thinking of DNA too simplistically as a program.
http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does...
http://www.kurzweilai.net/ray-kurzweil-responds-to-ray-kurzw...
What I would like to know is how compressible the (transcribed) human genome is.
hxa7241
It is, I believe, fairly well compressible: down to about 200MB (from about 800MB).
(I think I got this from the book 'Genomes' by Brown, 2002.)
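The direction of that number is at least easy to illustrate: a generic compressor beats the naive 2-bits-per-base packing exactly when the sequence is repetitive, which real genomes very much are. A toy comparison on synthetic strings (not real genome data; the quoted ~200 MB from ~800 MB would correspond to roughly 0.5 bits per base):

```python
import random
import zlib

random.seed(0)
motif = "ACGTTGCA" * 4                     # a 32-base repeat unit
repetitive = motif * 4000                  # highly repetitive "sequence"
uniform = "".join(random.choice("ACGT") for _ in range(len(repetitive)))

for name, seq in (("repetitive", repetitive), ("uniform", uniform)):
    packed = len(seq) // 4                 # naive 2 bits/base baseline
    gz = len(zlib.compress(seq.encode(), 9))
    print(f"{name}: packed={packed} B, zlib={gz} B")
```

The repetitive string compresses far below its 2-bit packing, while the uniform one cannot (its entropy is already 2 bits per base).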
gcb
The 1 GB figure for DNA data assumes we know all that there is in DNA.
JabavuAdams
It also assumes that all the information needed to build a new human is in the DNA, which is clearly false. A lot of information is in the "factory" and process.
Show me a human that has been taken from conception to healthy birth in vitro.
For the patching/dog example: you can't currently implant a dog embryo into a human woman and expect to get a viable birth.
If I have a really full-featured runtime, hey, I can make smaller programs. But ... not all the information required to create the program is in the source code.
infinite8s
Even more importantly, you can't take the nucleus from one species and implant it into the egg of another and get a viable organism (unless the species are very closely related).
azakai
Not sure why you are downmodded. I took your comment to imply that there might be additional information that is not in the standard way we understand DNA, which is a very legitimate question. But if you meant something else and I am wrong, maybe the downmod was justified ;)
Getting back to DNA, we measure information there based on the base pairs. But for all we know, there could be additional sources of information, like some subtle aspect of physical shape that DNA has, that is also inherited (as part of the replication process). Perhaps the amount of information encoded is substantially higher due to that.
(That's wild speculation, of course.)
wynand
You're right - DNA methylation is inherited (see epigenetics). So DNA alone does not give you enough information.
infinite8s
And methylation patterns are believed to affect higher-order structure in DNA. This higher-order structure has to do with the way DNA is packaged when not being actively read for protein synthesis or cell division. Most of the time it is stored in a tightly coiled form that renders it inaccessible to most of the DNA machinery, and it's believed that the methylation pattern on the DNA affects the structure of this compaction.
gcb
Exactly. From the little I know about chemistry tests, most of them are done by blending stuff, adding some reagent, and seeing if that changes color or something else that is measurable.
I have no idea how they 'check' DNA pairs, but I doubt they compare every subtlety of the molecules every time. If they do, wow! But still, we may not be able to accurately detect something else that is even smaller, since we only imaged DNA recently. The Wikipedia article has dozens of diagrams but only one blurry 100 nm image, from 2004.
hasenj
> We don’t have any idea how to make a description of such a complex machine that is both dense and flexible.
I've been thinking about this topic for a while and trying to find the right "words" to describe what I've been thinking about. I think I just found them.