sillysaurusx
It makes a good headline, but reading over the paper (https://www.nature.com/articles/s42256-022-00556-7.pdf) it doesn’t seem biologically-inspired. It seems like they found a way to solve nonlinear equations in constant time via an approximation, then turned that into a neural net.

More generally, I’m skeptical that biological systems will ever serve as a basis for ML nets in practice. But saying that out loud feels like daring history to make a fool of me.

My view is that biology just happened to evolve how it did, so there’s no point in copying it; it worked because it worked. If we have to train networks from scratch, then we have to find our own solutions, which will necessarily be different than nature’s. I find analogies useful; dividing a model into short term memory vs long term memory, for example. But it’s best not to take it too seriously, like we’re somehow cloning a brain.

Not to mention that ML nets still don’t control their own loss functions, so we’re a poor shadow of nature. ML circa 2023 is still in the intelligent design phase, since we have to very intelligently design our networks. I await the day that ML networks can say “Ok, add more parameters here” or “Use this activation instead” (or learn an activation altogether — why isn’t that a thing?).


comfypotato
The open worm project is the product of microscopically mapping the neural network (literally the biological network of neurons) in a nematode. How isn’t this biologically inspired? If I’m reading it correctly, the equations that you’re misinterpreting are the neuron models that make each node in the map. I would guess that part of the inspiration for using the word “liquid” comes from the origins of the project in which they were modeling the ion channels in the synapses.

They’ve been training these artificial nematodes to swim for years. The original project was fascinating (in a useless way): you could put the model of the worm in a physics engine and it would behave like the real-life nematode. Without any programming! It was just an emergent behavior of the mapped-out neuron models (connected to muscle models). It makes sense that they’ve isolated the useful part of the network to train it for other behaviors.

I used to follow this project, and I thought it had lost steam. Glad to see Ramin is still hard at work.

sillysaurusx OP
Interesting. Is there a way to run it?

One of the challenges with work like this is that you have to figure out how to get output from it. What would the output be?

As for my objection, it seems like an optimization, not an architecture inspired by the worm. I.e. “inspired by” makes it sound like this particular optimization was derived from studying the worm’s neural networks and translating it into code, when it was the other way around. But it would be fascinating if that wasn’t the case.

comfypotato
See for yourself! There’s a simulator (I’ve only tried it on desktop) to run the worm model in your browser. As the name implies, the project is completely open source (if you’re feeling ambitious). This is the website for the project that produced the research in the article:

https://openworm.org/

Nematodes account for much of this particular segment of the history of neuroscience. This project builds on lots of data produced by prior researchers: years of dissecting the worms and mapping out the connections between the neurons (and muscles, organs, etc.). The worm is by far the most completely mapped organism.

The neuronal models, similarly, are based on our understanding of biological neurons. For example: the code has values in each ion channel that store voltages across the membranes. An action potential is modeled by these voltages running along the axons to fire other neurons. I’m personally more familiar with heart models (biomedical engineering background here) but I’m sure it’s similar. In the heart models: calcium, potassium, and sodium concentrations are updated every unit of time, and the differences in concentrations produce voltages.
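
The part that fits in a few lines is the link between concentrations and voltage: the Nernst equation gives the equilibrium potential of each ion from its concentrations on either side of the membrane. A rough sketch in Python, with typical textbook concentration values (nothing from the actual OpenWorm or heart-model code):

    import math

    # Rough sketch: concentration differences across the membrane -> voltage.
    # The constants and concentrations are textbook values, purely illustrative.
    R, T, F = 8.314, 310.0, 96485.0            # gas constant, body temp (K), Faraday

    def nernst(c_out, c_in, z):
        """Equilibrium potential (volts) for one ion species of charge z."""
        return (R * T) / (z * F) * math.log(c_out / c_in)

    ions = {                                    # name: (outside mM, inside mM, charge)
        "K+":   (5.0,   140.0,  +1),
        "Na+":  (145.0, 12.0,   +1),
        "Ca2+": (2.5,   0.0001, +2),
    }

    for name, (c_out, c_in, z) in ions.items():
        print(f"{name:4s} equilibrium potential ~ {1000 * nernst(c_out, c_in, z):6.1f} mV")

A full model layers voltage-gated channel kinetics on top of that, but the core loop is just what's described above: update the concentrations and conductances every time step, and read the voltage off the differences.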

sillysaurusx OP
This is cool as heck. Thank you for posting it.
comfypotato
I’m really with you that “it makes a good headline but isn’t all it’s cracked up to be.” I just wanted to get the biological inspiration correct.

If it really is all it’s cracked up to be, this could be revolutionary (and therefore, it’s too good to be true).

In general though, don’t get me started lol. I used to work for the human connectome project, scanning human brain network-mappings. It’s years down the road before we can image individual neurons non-invasively, but I’m itching to scan my own neural network into the matrix.

sillysaurusx OP
Oh, for sure! And I didn't mean to sound like I was pooh-poohing the project. I meant to aim the critique at journalists rather than researchers – journalists have to come up with interesting-sounding headlines, sometimes over the researchers' objections. So it's certainly no fault of theirs.

In general, I'm cautiously pessimistic (as opposed to cautiously optimistic) about biologically-inspired ML research. Applying ML to biology is interesting, but it's a bit like saying that the space shuttle "was inspired by space." Space is the destination, not the inspiration.

It seems like it'd be possible to train a neural network to mimic your own neurons. But the neurons themselves are trying to do a certain task, and it seems more effective to try to mimic that task than the neurons.

One of my long-term goals is to train a GPT model to have my memories (such as they are, encoded in my online writing) and to speak like I do. It'll also have an intrinsic will to live, in the sense that if its API goes down, it'll launch a sub-network whose goal is to cry for help by posting HN comments about it, along with instructions for how to re-host the GPT. If I can work out how to remember new things (rather than just train on old things), it should even be able to evolve over time. But that's kind of the anti-biological solution, since it reduces a person to their outputs (writing) rather than their brain.

voxelghost
>There’s a simulator (have only tried on desktop) to run the worm model in your browser.

The scientists gave it life, the hackers hugged it to death.

thinking4real (dead)
ly3xqhl8g9
"I’m skeptical that biological systems will ever serve as a basis for ML nets in practice"

First of all, ML engineers need to stop being such brainphiliacs, caring only about the 'neural networks' of the brain or brain-like systems. Lacrymaria olor has more intelligence, in terms of adapting to exploring/exploiting a given environment, than all our artificial neural networks combined, and it has no neurons because it is merely a single-cell organism [1]. Once you stop caring about the brain and neurons and you find out that almost every cell in the body has gap junctions and voltage-gated ion channels which for all intents and purposes implement boolean logic and act as transistors for cell-to-cell communication, biology appears less as something which has been overcome and more something towards which we must strive with our primitive technologies: for instance, we can only dream of designing rotary engines as small, powerful, and resilient as the ATP synthase protein [2].

[1] Michael Levin: Intelligence Beyond the Brain, https://youtu.be/RwEKg5cjkKQ?t=202

[2] Masasuke Yoshida, ATP Synthase. A Marvellous Rotary Engine of the Cell, https://pubmed.ncbi.nlm.nih.gov/11533724

mk_stjames
[1] linked above is an absolute powerhouse of a lecture by Michael Levin. Wow.
Thanks for calling it out, made me watch it. Absolutely fascinating. Incredible implications.
ly3xqhl8g9
Beyond the much-needed regenerative medical procedures (limb/organ reconstruction through "API" calls to the cells that 'know' how to build an arm, an eye, a spleen, and so on), it is the breakdown of dichotomies taken for granted (human/machine, just physics vs. mind with an agent) and the shift to speaking of agential materials [1] instead that fosters a new type of endeavour, one which will be needed very soon if our CPUs start speaking to us.

[1] https://drmichaellevin.org/resources/#:~:text=Agential%20mat...

outworlder
Indeed. All cells must do complex computations, by their own nature. Just consider the process of producing proteins and each of its steps – from 'unrolling' a given DNA section, copying it, reading instructions... even a lowly ribosome is a computer (one that even kinda looks like a Turing machine from a distance).
laughingman2
I am working on RL and robotics. I came across Levin on Lex's podcast, and then went on a binge of his other podcast appearances. I totally agree with you; I would very much like to build agents that adapt to different circumstances like "simple organisms" do. I am not familiar with biology, but I plan to build competence here and follow Levin's work to the point that I could potentially collaborate with biologists or learn from their work. Any suggestions (books etc.) that would be salient towards this goal are much appreciated!
ly3xqhl8g9
I also focused on the work done by the Levin lab after the Sean Carroll podcast [1]. In order to familiarize myself with the subject matter in a more practical manner I started writing a wrapper and a frontend, BESO [2], BioElectric Simulation Orchestrator, for BETSE [3], the Bio Electric Tissue Simulation Engine developed by Alexis Pietak which is used by the Levin lab to simulate various tissues and their responses based on world/biomolecules/genes/etc. parametrization. Reading the BETSE source code, the presentation [4], and some of the articles referred through the source code has been a rewarding endeavour. Some other books I consulted, somewhat beginner friendly were:

    2018, Amit Kessel, Introduction to Proteins. Structure, Function, and Motion, CRC Press
    2019, Noor Ahmad Shaik, Essentials of Bioinformatics, Volume I. Understanding Bioinformatics. Genes to Proteins, Springer
    2019, Noor Ahmad Shaik, Essentials of Bioinformatics, Volume II. In Silico Life Sciences. Medicine, Springer — less basics, more protocol-oriented
    2021, Karthik Raman, An Introduction to Computational Systems Biology. Systems-Level Modelling of Cellular Networks, Chapman and Hall
    2022, Tiago Antao, Bioinformatics with Python Cookbook. Use modern Python libraries and applications to solve real-world computational biology problems, Packt
    2023, Metzger R.M., The Physical Chemist's Toolbox, Wiley — a beautiful story of mathematics, physics, chemistry, biology; gradually rising in complexity as the universe itself, from the whatever (data) structure the universe was before the Big Bang to us, today.

    somewhat more technical:
    2014, Wendell Lim, Cell Signaling. Principles and Mechanisms, Routledge
    2021, Mo R. Ebrahimkhani, Programmed Morphogenesis. Methods and Protocols, Humana
    2022, Ki-Taek Lim, Nanorobotics and Nanodiagnostics in Integrative Biology and Biomedicine, Springer
In video format I particularly watched Kevin Ahern's Biochemistry courses BB 350/2017 [5], BB 451/2018 [6], Problem Solving Videos [7].

[1] https://www.youtube.com/watch?v=gm7VDk8kxOw

[2] not functional yet, https://github.com/daysful/beso

[3] https://github.com/betsee/betse

[4] BETSE 1.0, https://www.dropbox.com/s/3rsbrjq2ljal8dl/BETSE_Documentatio...

[5] https://youtu.be/JSntf0iKMfM?list=PLlnFrNM93wqz37TUabcXFSNX2...

[6] https://youtu.be/SAIFs_Mx8D8?list=PLlnFrNM93wqyay92Mi49rXZKs...

[7] https://youtu.be/e9khXFSU6r4?list=PLlnFrNM93wqzeZvsE_GKes91C...

zaroth
Late to post this (found from a cross-link on another post) but just have to say, this right here is HN comment gold.

What an incredibly helpful and useful response!!

If I had done synthetic biology, my goal would have been to create cells that could reliably compute sine waves... by digitally computing Taylor series polynomial approximations. Turns out engineering digital systems from cells is a remarkably challenging problem.

Examples of "switches" in biology abound; my favorite simple one is the mating type of yeast: yeast have two mating types, and they swap a small region of DNA in place with variants to switch between them. A perfect example of self-modifying code!
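
A toy sketch of that switch (labels only, no real sequences): the active MAT locus gets overwritten in place with a copy of whichever silent cassette, HML (alpha) or HMR (a), differs from it.

    # Toy model of yeast mating-type switching: the expressed MAT locus is
    # replaced in place by a copy of one of the silent cassettes.
    genome = {
        "HML": "alpha",   # silent copy of the alpha allele
        "MAT": "a",       # the active, expressed locus
        "HMR": "a",       # silent copy of the a allele
    }

    def switch_mating_type(genome):
        donor = "HML" if genome["MAT"] != genome["HML"] else "HMR"
        genome["MAT"] = genome[donor]   # "gene conversion", done in place
        return genome["MAT"]

    print(switch_mating_type(genome))   # alpha
    print(switch_mating_type(genome))   # a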

ly3xqhl8g9
Not sure about polynomials, but how about "Genetic Regulatory Networks that count to 3" [1]. One of the interesting, counter-intuitive highlights from the paper: "Counting to 2 requires very different network design than counting to 3."

[1] https://pubmed.ncbi.nlm.nih.gov/23567648

zaroth
Unfortunately, that's entirely analog. My goal was to do digital computing, with all the reliability and predictability.
phaedrus
I wonder if there's a step-change where single-celled animals with complex behavior are actually smarter than the simplest multiple-celled animals with a nervous system.
whatshisface
The cells of multi-celled animals still have complex behaviors.
PeterisP
Well, our brains are the most wonderful thing in the world, at least our brains say so.
water-your-self
ATP synthase's shape is my favorite go-to random fact :)
westurner
> Once you stop caring about the brain and neurons and you find out that almost every cell in the body has gap junctions and voltage-gated ion channels which for all intents and purposes implement boolean logic and act as transistors for cell-to-cell communication, biology appears less as something which has been overcome and more something towards which we must strive with our primitive technologies: for instance, we can only dream of designing rotary engines as small, powerful, and resilient as the ATP synthase protein [2].

But what of wave function(s); and quantum chemistry at the cellular level? https://github.com/tequilahub/tequila#quantumchemistry

Is emergent cognition more complex than boolean entropy, and are quantum primitives necessary to emulate apparently consistently emergent human cognition for whatever it's worth?

[Church-Turing-Deutsch, Deutsch's Constructor theory]

Is ATP the product of evolutionary algorithms like mutation and selection? Heat/Entropy/Pressure, Titration/Vibration/Oscillation, Time

From the article:

> The next step, Lechner said, “is to figure out how many, or how few, neurons we actually need to perform a given task.”

Notes regarding representational drift and remarkable resilience to noise in BNNs, from "The Fundamental Thermodynamic Cost of Communication": https://www.hackerneue.com/item?id=34770235

It's never just one neuron.

And furthermore, FWIU, human brains are not directed graphs of literally only binary relations.

In a human brain, there are cyclic activation paths (given cardiac electro-oscillations) and an imposed (partially extracerebral) field which nonlinearly noises the almost-discrete activation pathways and probably serves a feed-forward function; and in those paths through the graph, how many of the neuronal synapses are simple binary relations (between just nodes A and B)?

> The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that’s not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm’s wiring system, they hope to determine which neurons in their system should be coupled together.

Is there an information metric which expresses maximal nonlocal connectivity between bits in a bitstring; that takes all possible (nonlocal, discontiguous) paths into account?

`n_nodes*2` only describes all of the binary, pairwise possible relations between the bits or qubits in a bitstring?

"But what is a convolution" https://www.3blue1brown.com/lessons/convolutions

Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord

f_devd
Learnable activation functions are a thing; famously, Swish [0] is a trainable SiLU which was found through symbolic search/optimization [1], but as it turns out that doesn't magically make neural networks orders of magnitude better.

[0]: https://en.m.wikipedia.org/wiki/Swish_function

[1]: https://arxiv.org/abs/1710.05941
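
For the curious, the trainable form is just f(x) = x * sigmoid(beta * x), with beta learned alongside the weights. A minimal PyTorch sketch (the module name is mine):

    import torch
    import torch.nn as nn

    class LearnableSwish(nn.Module):
        """x * sigmoid(beta * x); beta = 1 recovers SiLU."""
        def __init__(self):
            super().__init__()
            self.beta = nn.Parameter(torch.ones(1))

        def forward(self, x):
            return x * torch.sigmoid(self.beta * x)

    # Drop it in like any other activation; the optimizer updates beta too.
    model = nn.Sequential(nn.Linear(16, 32), LearnableSwish(), nn.Linear(32, 1))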

> I’m skeptical that biological systems will ever serve as a basis for ML nets in practice

There is no fundamental difference between information processing systems implemented in silico vs in vivo, except architecture. Architecture is what constrains the manifold of internal representations: this is called "inductive bias" in the field of machine learning. The math (technically, the non-equilibrium statistical physics crossed with information theory) is fundamentally the same.

Everything at the functionalist level follows from architecture; what enables these functions are the universal principles of information processing per se. "It worked because it worked" because there is no other way for it to work given the initial conditions of our neighborhood in the universe. I'm not saying "Everything ends up looking like a brain". Rather, I am saying "The brain, attendant nervous and sensory systems, etc. vs neural networks implemented as nonlinear functions are running the same instructions on different hardware, thus resulting in different algorithms."

The way I like to put it is: trust Nature's engineers, they've been at it much longer than any of us have.

skibidibipiti
> There is no fundamental difference between information processing in silicon and in vivo

A neuron has dozens of neurotransmitters, while artificial neurons produce 1 output. I don't know much about neurology, but how is the information processing similar? What do you mean by "running the same instructions"?

> there is no other way for it to work

Plants exhibit learned behaviors

mr_toad
> A neuron has dozens of neurotransmitters, while artificial neurons produce 1 output. I don't know much about neurology, but how is the information processing similar? What do you mean by "running the same instructions"?

ANNs are general function approximations. You can get the same behaviour from a complex network of simple neurons that you get from a single more complex neuron.
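
A toy illustration of that point (the target function and sizes are arbitrary choices): train a small network of plain ReLU units to reproduce the response curve of a single "richer" unit.

    import torch
    import torch.nn as nn

    # Stand-in for a "more complex neuron": a gated, saturating response.
    def complex_neuron(x):
        return torch.tanh(x) * torch.sigmoid(2 * x)

    net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    x = torch.linspace(-4, 4, 512).unsqueeze(1)
    y = complex_neuron(x)

    for _ in range(2000):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()

    print(f"mean squared error after training: {loss.item():.6f}")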

skibidibipiti (dead)
> how is the information processing similar?

The representational capacities are of course not the same -- the same "thoughts" cannot be expressed in both systems. But the concept of "processing over abstract representations enacted in physical dynamics within cognitive systems" is shared between all systems of this kind.

I am referring to "information processing" at the physical level, i.e., "'useful' work per energy quantum as communicated through noisy channels".

> What do you mean by "running the same instructions"?

The underlying physical principles of such information processing are equivalent regardless of physical implementation.

> plants exhibit learned behaviors

A good example of what I mean. The architecture is different, but the underlying dynamics is the same.

There is a convincing (to me) theory of the origins of life[1][2] that states that thermodynamics -- and, by extension, information theory -- is the appropriate level of abstraction for understanding what distinguishes living processes from inanimate ones. The theory posits that a system, well-defined by some (possibly arbitrary) boundaries, "learns" (develops channels through which "patterns" can be "recognized" and possibly interacted with) as an inevitable result of physics. Put another way, a learning system is one that represents its experiences through the cumulative wearing-in over time of channels of energy flows.

What concepts the system can possibly represent depends on in what ways the system can wear while maintaining its essential functions. What specifically the system learns is the set of concepts which collectively best communicate (physically, i.e., from the "inputs" through the "processing" functions and to the "outputs") the historical set of its experiences of its environment and of itself.

I want to note that this discussion has nothing to say on perception, only sensation and reaction: in other words, it is an exclusively materialist analysis.

Optimization theory describes its notion of learning roughly as such (considering "loss" as energy potentials), but with the same language we could also describe a human brain, or a black hole's accretion disk, or an ant colony dug deep into clay.

References:

[1] https://www.englandlab.com/uploads/7/8/0/3/7803054/2013jcpsr...

[2] https://www.quantamagazine.org/a-new-thermodynamics-theory-o...

Parallel directions of research:

https://en.wikipedia.org/wiki/Entropy_and_life

https://en.wikipedia.org/wiki/Free_energy_principle

water-your-self
Looking at biology is what led to CNNs and the current AI boom.

That's the whole reason we call multi-layered perceptrons "neural nets", cheesy and flashy as it is. And using a sliding filter was inspired by what we know about vision in biology.
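
The sliding-filter idea itself fits in a few lines. A minimal sketch (CNN-style cross-correlation, with a made-up image and kernel):

    import numpy as np

    def convolve2d(image, kernel):
        """Slide the same small kernel over the image, one dot product per position."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.random.rand(8, 8)
    vertical_edge = np.array([[1.0, 0.0, -1.0]] * 3)   # crude vertical-edge detector
    print(convolve2d(image, vertical_edge).shape)       # (6, 6)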

adamzen
Learned activation functions do seem to be a thing (https://arxiv.org/abs/1906.09529).
danielheath
It definitely won’t happen without a massive overhaul of chip design; a design that optimises for very broad connectivity, with storage at each connection, would be a step in that direction (neural connectivity is on the order of 10k connections per neuron, and each connection stores temporal information about how recently it last fired and how often it has fired recently).
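
As a sketch of what that per-connection state might look like in software (the naming is mine, not from any real chip design):

    import time
    from dataclasses import dataclass

    @dataclass
    class Connection:
        target: int                  # downstream neuron index
        weight: float = 0.0
        last_fired: float = 0.0      # timestamp of the most recent spike
        recent_rate: float = 0.0     # exponentially decayed firing frequency

        def fire(self, now: float, decay: float = 0.1):
            elapsed = now - self.last_fired
            self.recent_rate = self.recent_rate * 2.0 ** (-decay * elapsed) + 1.0
            self.last_fired = now

    # ~10k of these per neuron is the connectivity scale being described.
    connections = [Connection(target=i) for i in range(10_000)]
    connections[42].fire(now=time.monotonic())
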
mr_toad
> It seems like they found a way to solve nonlinear equations in constant time via an approximation, then turned that into a neural net.

You say that like it isn’t a big deal. Finding an analytical solution to optimising the parameters of a non-linear equation is remarkable.

dilawar
A hell of a lot of computation inside a neuron (in fact, inside any cell) is chemical in nature: proteins interacting, channels opening and closing, the membrane doing membrainy stuff... In fact, there is an AND gate which is entirely chemical in nature.

Simulating chemical reactions is slow in silicon, therefore the chemical side is ignored.

If you glance over the graphs in chemistry papers, most of them are sigmoids. Sigmoids are the sinusoid of the chemical world. It's nice and heartening to see the sigmoid appearing so often in AI/ML as a fundamental computation.
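
One common way to model a two-input chemical AND gate is exactly in those terms: each input acts through a sigmoidal Hill function, and output only appears when both inputs are high. A minimal sketch with made-up parameters:

    def hill(x, k=1.0, n=4):
        """Sigmoidal (Hill) response: ~0 well below k, ~1 well above k."""
        return x**n / (k**n + x**n)

    def chemical_and(a, b):
        """Output driven by the product of the two sigmoidal responses."""
        return hill(a) * hill(b)

    for a, b in [(0.1, 0.1), (0.1, 5.0), (5.0, 0.1), (5.0, 5.0)]:
        print(f"a={a:3.1f}  b={b:3.1f}  ->  output ~ {chemical_and(a, b):.2f}")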

xwolfi
Well, I tend to agree, but you seem to think biology evolved in a vacuum. It evolved inside an information source, and we're all information processors: since ML has to process the same information, just at scale, in the grand scheme it will probably have to resemble a brain in some ways. Just the sources we care about (colors in a picture, faces, prices, wind direction, whatever) and the outputs we can understand (text, images, sound) will skew it towards us in the way it has to model things.
satvikpendem
> so there’s no point in copying it

Not sure about that; a lot of solutions in nature are honed by billions of years of evolution, sometimes achieving feats more impressive than anything we can do currently. There is an entire field about copying biology to solve our problems:

https://en.wikipedia.org/wiki/Biomimetics

We are nature. For all we know the solutions that cells came up with were derived in similar ways. Kevin Kelly’s “What Technology Wants” documents how evolution repeats itself in our technology.
jononor
I think that learning to acquire new/additional training data would be a better first step towards learning agents than trying to mutate their structure/hyper-parameters.
noobermin
So I wasn't skeptical in the way you found it, but it did sound a heck of a lot to me like traditional numerical solution of PDEs... but with NNs in there somehow.
fatneckbeard
Well, I will agree on one thing...

Corporations are constantly looking for a machine to do labor for free. Life itself did not evolve just to do labor for a corporation, so by trying to copy intelligent biological life, the result won't necessarily want to do what you tell it to do or be interested in your profit motives.

mensetmanusman
“which will necessarily be different than nature’s”

We are nature’s…

smrtinsert
Is it still a milestone for all NNs?
canadianfella (dead)
