You're actually making the argument FOR what I'm saying.
From what I can see, Buzsáki says "cognition" is bad for neuroscience BECAUSE it's philosophically loaded.
Doesn't that reinforce my point that it's good for LLM engineering PRECISELY because it's philosophically loaded?
Basically, if "cognition" is a philosophically inherited concept, maybe that's exactly why it works for LLMs, Plato's model included. Could it be because the training corpora are philosophy-heavy?
I am not disputing that. The concept is flawed; I agree. But what if those flaws permeate a system so thoroughly that they produce functional utility? When a system is trained on that same folk psychology, invoking it can yield measurable results.
This isn't about "does cognition exist", or even "is cognition correct". It's about what happens when you stop listening to theory and start looking at empirical evidence, which seems to show these structured relational patterns producing measurable differences. If it's empirically distinguishable, does it matter whether it's "folk science", "vague", or "an illusion"?
If adding 10 words to a prompt measurably improves output quality, regardless of what any expert says, isn't that worth exploring?
Anyway, let's see what independent testing shows. We might be hitting a philosophical impasse.
See, it's much simpler.
Concrete test setup:
- Flawed codebase given to agents for review
- Agent A: Standard behavioural instructions
- Agent B: Same + COGNITION::ETHOS (4 lines added; a harness sketch follows the list)
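In case it helps, here's roughly the shape of that harness. This is a minimal sketch, not the repo's actual code: `run_agent` is a stub that simulates detection, `SEEDED_FLAWS` is an invented flaw list, and the ETHOS lines are placeholders for the real 4-line block.

```python
# Sketch of the A/B harness. All names are illustrative: run_agent() stands in
# for the real model call, SEEDED_FLAWS and the ETHOS lines are invented here.
import random

SEEDED_FLAWS = frozenset({
    "off_by_one_pagination", "unclosed_file_handle",
    "sql_injection_search", "race_in_cache_refresh",
})

BASE_PROMPT = "Review this codebase and list every defect you find."
ETHOS_BLOCK = "\n".join([
    "COGNITION::ETHOS placeholder line 1",  # the real 4-line block lives in the repo
    "placeholder line 2",
    "placeholder line 3",
    "placeholder line 4",
])

def run_agent(prompt: str, detect_p: float) -> set[str]:
    """Stub standing in for a real agent run: each seeded flaw is 'found'
    independently with probability detect_p. Replace with your model call."""
    return {flaw for flaw in SEEDED_FLAWS if random.random() < detect_p}

def flaw_counts(prompt: str, detect_p: float, n_runs: int = 40) -> list[int]:
    # Count only true (seeded) flaws so hallucinated findings don't inflate scores.
    return [len(run_agent(prompt, detect_p) & SEEDED_FLAWS) for _ in range(n_runs)]

agent_a = flaw_counts(BASE_PROMPT, detect_p=0.50)                       # baseline
agent_b = flaw_counts(BASE_PROMPT + "\n" + ETHOS_BLOCK, detect_p=0.60)  # +4 lines
print(sum(agent_b) / sum(agent_a))  # relative detection rate
```

Scoring against a known seeded flaw list is the important design choice: it makes "count of flaws detected" objective instead of depending on a judge.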
Agent B found 20% more flaws than Agent A.
Only variable: those 4 lines.
Objective measurement: count of flaws detected.
N=40 runs, statistically significant improvement.
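"Statistically significant" should name a test. Since the post doesn't, here's one way to check it on per-run flaw counts, assuming N=40 means 40 runs per agent: a one-sided permutation test, pure stdlib, no normality assumption. The counts below are made up to match the shape of the claim.

```python
# Permutation test on per-run flaw counts: is mean(B) - mean(A) larger than
# random relabeling of the runs would produce?
import random

def permutation_pvalue(a: list[int], b: list[int], n_perm: int = 10_000) -> float:
    """One-sided p-value for mean(b) > mean(a) via label shuffling."""
    observed = sum(b) / len(b) - sum(a) / len(a)
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if sum(perm_b) / len(b) - sum(perm_a) / len(a) >= observed:
            extreme += 1
    return extreme / n_perm

# Made-up counts in the shape of the claim (40 runs per agent, ~20% lift):
a_counts = [2, 2, 3, 1, 2] * 8  # Agent A: mean 2.0 flaws per run
b_counts = [3, 2, 3, 2, 2] * 8  # Agent B: mean 2.4 flaws per run (+20%)
print(f"p = {permutation_pvalue(a_counts, b_counts):.4f}")
```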
The evidence is all in the repo.
https://pmc.ncbi.nlm.nih.gov/articles/PMC7415918/