> "So... what does the thinking?"
> "You're not understanding, are you? The brain does the thinking. The meat."
> "Thinking meat! You're asking me to believe in thinking meat!"
https://www.mit.edu/people/dpolicar/writing/prose/text/think...
Is this meant to be some kind of Chinese room argument? Surely a model with a 1e18-token context window running at 1e6 tokens per second could be AGI.
This argument works better for state space models. A transformer would still step through the context one token at a time, not maintain an internal 1e18-token state.
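To make the distinction concrete, here's a toy sketch of the two update rules (pure numpy; the dimensions, matrices, and the single-query "attention" readout are illustrative stand-ins, not any real model's API):

```python
import numpy as np

D = 8                                  # toy embedding / state dimension
rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(D, D))      # hypothetical SSM state transition
B = 0.1 * rng.normal(size=(D, D))      # hypothetical SSM input projection

def transformer_step(context, x):
    """A transformer's only 'state' is the token history itself: every
    step re-attends over the whole growing context, so per-step cost
    grows with len(context)."""
    context = context + [x]
    ctx = np.stack(context)                  # (t, D): all tokens so far
    scores = ctx @ x / np.sqrt(D)            # new token attends to history
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the context
    return weights @ ctx, context            # readout, grown context

def ssm_step(state, x):
    """A state space model folds each token into a fixed-size state:
    per-step cost is constant, and history survives only as whatever
    the compressed state happens to retain."""
    state = A @ state + B @ x
    return state, state

# Feed the same token stream through both update rules.
tokens = [rng.normal(size=D) for _ in range(5)]
context, state = [], np.zeros(D)
for x in tokens:
    _, context = transformer_step(context, x)
    _, state = ssm_step(state, x)
print(len(context), state.shape)  # history grows to 5; state stays (D,)
```

The point of the contrast: the SSM carries a persistent internal state between tokens, while the transformer's "memory" is just re-reading its own context window.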
DAG architectures fundamentally cannot be AGI, and if they're immutable at runtime you can't even use them as building blocks for a hypothetical AGI.
Any time I hear "AGI" named as the goal in the context of these LLMs, I feel like I'm listening to a bunch of 18th-century aristocrats trying to get to the moon by growing trees.
Try to create useful approximations using what you have, or look for new approaches, but don't waste time on the impossible. There are no iterative improvements here that will get you to AGI.
Personally I'm hoping for advancements that will eventually allow us to build vehicles capable of reaching the moon, but do keep me posted on those tree-growing endeavors.