The nice thing about the transformer architecture is that it can cross these domains, to an extent. I have a very spatial way of reasoning through problems, and using an LLM, especially an agentic one like Claude Code with access to my local file system as a research assistant, is a great aid.
I just have to remember how I built something and where the code is. We can take a quick dive into the code base and I don't have to yet again attempt to serialize my mental model of my system into something someone else may understand.
It can be difficult to explain why using the path on the underlying mount volume's EBS volume to carry metadata through filebeat, logstash, redis, and kinesis to that little log stream processor was in fact the cleanest solution, and how SMS was invented. It's easier when you can get the LLM to do it ;)
It's more the latter for me. I don't think there's necessarily one type of internal thought; I think there's likely a multimodal landscape of thought. Maybe spatial reasoning modes are more geometric, and linguistic modes are more sequential.
I think the human brain builds predictive models for all of its abilities for planning and control, and all of these likely involve a type of thought for planning future "moves".