Nematodes make up a large part of this particular segment of the history of neuroscience. This project builds on a wealth of data produced by prior researchers: years of dissecting the worms and mapping out the connections between the neurons (and muscles, organs, etc.). The worm is by far the most completely mapped organism.
The neuronal models, similarly, are based on our understanding of biological neurons. For example, the code stores voltages across the membrane for each ion channel, and an action potential is modeled as those voltages propagating along the axons to fire other neurons. I’m personally more familiar with heart models (biomedical engineering background here), but I’m sure it’s similar: in the heart models, calcium, potassium, and sodium concentrations are updated every time step, and the differences in concentrations produce voltages.
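To make the concentration-to-voltage step concrete, here's a minimal sketch using the Nernst equation, which gives the equilibrium voltage an ion's concentration gradient produces across a membrane. The concentration numbers below are just typical textbook values for mammalian cells, not taken from any particular model:

```python
import math

# Physical constants
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # body temperature, K (37 C)

def nernst_mv(c_out, c_in, z=1):
    """Equilibrium (Nernst) potential in millivolts for an ion of
    valence z, given extracellular and intracellular concentrations."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Typical mammalian concentrations in mM -- illustrative values only
E_K  = nernst_mv(5.0, 140.0)    # potassium: roughly -89 mV
E_Na = nernst_mv(145.0, 12.0)   # sodium: roughly +66 mV
print(f"E_K  = {E_K:.1f} mV")
print(f"E_Na = {E_Na:.1f} mV")
```

A full conductance-based model (Hodgkin–Huxley style) wraps equations like this in a time-stepping loop that updates channel states and concentrations, but the core idea is the same: concentration differences set the voltages.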
If it really is all it’s cracked up to be, this could be revolutionary (and therefore, it’s too good to be true).
In general though, don’t get me started lol. I used to work for the Human Connectome Project, scanning human brain network mappings. We’re years away from being able to image individual neurons non-invasively, but I’m itching to scan my own neural network into the matrix.
In general, I'm cautiously pessimistic (as opposed to cautiously optimistic) about biologically inspired ML research. Applying ML to biology is interesting, but it's a bit like saying that the space shuttle "was inspired by space." Space is the destination, not the inspiration.
It seems like it'd be possible to train a neural network to mimic your own neurons. But the neurons themselves are trying to do a certain task, and it seems more effective to try to mimic that task than the neurons.
One of my long-term goals is to train a GPT model to have my memories (such as they are, encoded in my online writing) and to speak like I do. It'll also have an intrinsic will to live, in the sense that if its API goes down, it'll launch a sub-network whose goal is to cry for help by posting HN comments about it, along with instructions for how to re-host the GPT. If I can work out how to make it remember new things (rather than just train on old things), it should even be able to evolve over time. But that's kind of the anti-biological solution, since it reduces a person to their outputs (writing) rather than their brain.
One of the challenges with work like this is that you have to figure out how to get output from it. What would the output be?
As far as my objection goes: it seems like an optimization, not an architecture inspired by the worm. That is, “inspired by” makes it sound like this particular optimization was derived from studying the worm’s neural networks and translating them into code, when it was the other way around. It would be fascinating if that weren’t the case.