My human brain doesn't use the same algorithm for learning to play a song on the piano as it does for learning a new board game. I'm not an AI person, but it seems reasonable to imagine we'd have different "modules" to apply as needed.
AlphaGo probably sucks at conversation. ChatGPT can't play Go. The part of my brain writing this couldn't throw a baseball. The physics engine that lets me throw a baseball couldn't write this. Is there a reason we'd want or need one specific AI approach to be universally applicable?
I actually think the transformer architecture, or something very similar doing eventually-on-policy time-series forecasting in a Markov decision process, is the right answer, and it's what I have been trying to make progress on for a long time[1].
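To make that concrete, here's a minimal sketch of the shape I mean, assuming PyTorch: a causal transformer forecasting the next action at each step of an MDP trajectory, trained off-policy on logged trajectories first and then fine-tuned on its own rollouts (the "eventually-on-policy" part). All names and dimensions here are illustrative, not any particular published model.

    import torch
    import torch.nn as nn

    class TrajectoryTransformer(nn.Module):
        """Causal transformer over the (state, action) tokens of an MDP
        trajectory; forecasts the next action at every step."""
        def __init__(self, state_dim, n_actions, d_model=128,
                     n_heads=4, n_layers=4, max_len=512):
            super().__init__()
            self.state_emb = nn.Linear(state_dim, d_model)
            self.action_emb = nn.Embedding(n_actions, d_model)
            self.pos_emb = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.action_head = nn.Linear(d_model, n_actions)

        def forward(self, states, actions):
            # states: (B, T, state_dim) floats; actions: (B, T) int ids
            B, T, _ = states.shape
            x = self.state_emb(states) + self.action_emb(actions)
            x = x + self.pos_emb(torch.arange(T, device=states.device))
            # Causal mask: step t may only attend to steps <= t, which is
            # what makes this time-series forecasting rather than encoding.
            causal = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                           device=states.device), diagonal=1)
            h = self.encoder(x, mask=causal)
            return self.action_head(h)  # per-step logits for the next action

    # Off-policy phase: cross-entropy against the actions actually taken in
    # logged trajectories. On-policy phase: sample actions from the logits,
    # roll them out in the environment, and keep training on those rollouts.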
The difficulty is the scale. Every synapse of a neuron is effectively a neuron in itself, and every synapse acts on the synapses around it. So before you've even got to the neuron as a whole, you've already got the equivalent of thousands of neurons and logic gates. Then the final result gets passed on to thousands more neurons.
I don't know how you would recreate such complexity in programming. It's not just the scale; it's the flexibility of the structure.
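For a rough sense of the numbers, a back-of-envelope calculation, taking the commonly cited figures of ~86 billion neurons and a few thousand synapses per neuron, and taking the "thousands of gates per synapse" multiplier above as an assumption rather than a measurement:

    # Back-of-envelope scale, not a measurement. Neuron and synapse counts
    # are the commonly cited rough figures; the gates-per-synapse multiplier
    # is the claim above, taken here as an assumption.
    NEURONS = 86e9                 # ~86 billion neurons (common estimate)
    SYNAPSES_PER_NEURON = 7e3      # ~1k-10k per neuron; 7k is a rough midpoint
    GATES_PER_SYNAPSE = 1e3        # "thousands of logic gates" per synapse (assumed)

    synapses = NEURONS * SYNAPSES_PER_NEURON
    gate_equivalents = synapses * GATES_PER_SYNAPSE
    print(f"synapses:         {synapses:.1e}")          # ~6.0e14
    print(f"gate equivalents: {gate_equivalents:.1e}")  # ~6.0e17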
The theory of AI is just going to be some weird network with a shit-ton of compute power, where the compute matters more to the outcome than the network.