We built machines that could do calculations far faster than humans long before we had any idea what neural configuration humans use to do them. We gave those machines short- and long-term memory without understanding how human brains do the same. Then we wrote software on those machines that could outperform the best humans at chess without the slightest inkling of how human brains do that. And then we started making software that vastly exceeded those early chess bots even though we didn't understand how that software performed its calculations (i.e., neural networks). And now we have software that can read and write, which we understand even less than we understood those earlier NNs.
Empirically, it does not seem necessary to understand one version of a thing to produce a superior version. Why should the remaining unsolved cognitive tasks break that pattern?
AI in practice doesn't have anything to do with simulation, nor with 'solving a significant part of the most complex brains'.
The immediate threat is that humans will use the leverage that LLMs give them to replace and to influence other humans; in other words, to gain power over us.
Whether this is AGI or not is beside the point.
Ah, but the replacing-humans issue is a completely different "threat", and one we've overcome many, many times since the Luddites and even before. Every time we make something more efficient we face this exact problem, and we eventually solve it.
As for the influencing part, what specific actions to gain power over us can be achieved now with LLMs, that could not be achieved before using a few tens of thousands of paid humans?
> what specific actions to gain power over us can be achieved now with LLMs, that could not be achieved before using a few tens of thousands of paid humans?
It's being able to do it without having to employ the tens of thousands of humans that makes it different. With an LLM you can react much faster and pay fewer people more money.
Just to clarify, I'm definitely not saying "neuron simulation" is required in any way. I'm just asking, how can we be very close to "solving" a significant part of the most complex brains, yet miles away from solving the simplest brains?
You should be able to answer that question (or a steelmanned version of it), not just ridicule strawmen.