
NonHyloMorph parent
I think the terminology here isn't sharp. One of the first headlines is: "Language is not necessary nor sufficient for thought". I disagree. Language is not necessary for cognitive processes in individuals/organisms. It is absolutely necessary for what we commonly refer to as thought (a bit of a pretentious "we": it puts you in the group of people who have some idea about philosophy (e.g. baseline Heidegger), the humanities, psychoanalysis, etc.). Thought can be a decentralised process happening "between" individuals ("Die Sprache spricht" - "language speaks" - Heidegger points in that direction). Thought is also, IMHO, a symbolic process (one involving sign systems, mathematics, languages, images). Not everything going on as a cognitive process therefore constitutes thought. That's why one can act thoughtlessly - but not "cognitionless".

trueismywork
I disagree. There can be thought without any way to express it in any language yet. Only with a lot of communication can we get to an approximation of what it means, and hence it can mean a slightly different thing to everyone. Koans are a good example of this.
mpascale00
I think you make a good point that much of what we call thinking is really discourse either with another ^[0], with media, or with one's own self. These are largely mediated by language, but still there are other forms of communicative _art_ which externalize thought.

The other thoughts here largely provide within-individual examples: others noted Helen Keller and that some folks do not experience internal monologue. These tell us about the sort of thinking that does happen within a person, but I think that there are many forms of communication which are not linguistic, and therefore there is also external thinking which is non-linguistic.

The observation that not all thought utilizes linguistic representations (see particularly the annotated references in the bibliography) tells us something about the representations that may be useful for reasoning, thought, etc. Though language _can_ represent the world, it is neither the only way to do so nor the only way used by biological beings.

^[0]: It Takes Two to Think https://www.nature.com/articles/s41587-023-02074-2

fsckboy
>(e.g. baseline-heidegger)/the humanities/psychoanalysis etc.)

And pre-Heidegger, pre-psychoanalysis, what then? How did somebody, e.g. Heidegger, think those thoughts without the vocabulary to do so? Ahh, apparently, they didn't need to. Turns out, language is not required for thought; thought can invent language.

Peteragain
Okay, so rephrasing the question: how should we characterise the type of thinking we do without language? And the more interesting question, IMO: what thinking can an agent do without symbolic representation?

The original Vygotsky claim was that learning a language introduces the human mind to thinking in terms of symbols. Cats don't do it; infants don't either.

Isamu
>what thinking can an agent do without symbolic representation?

The language model is exclusively built upon the symbols present in the training set, but various layers can capture higher level patterns of symbols and patterns of patterns. Depending on how you define symbolic representation, the manipulation of the more abstract patterns of patterns may be what you are getting at.

Peteragain
I think the argument is that yes, LLMs find patterns in token sequences. Assign tokens to moves in a chess game and the tokens are predictive of what happened in the past and of what chess players will do in the future. But the LLM is not doing semantics; the humans who generated the corpus are doing the thinking. The LLM has no representation of goals or plans, rooks or bishops; it's just glorified autocomplete over a corpus of tokens that we humans understand as referring to things in the world.
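The "glorified autocomplete" picture above can be sketched as a toy next-token model: count which move-token follows which in a corpus of games, then predict the most frequent successor. Nothing in the code represents rooks, goals, or a board; the moves are opaque strings (the tiny corpus below is made up for illustration, and a real LLM learns far richer conditional statistics than bigram counts):

```python
from collections import Counter, defaultdict

# Toy corpus: each game is a sequence of opaque move tokens.
# To this model "e4" is just a string; there is no board, no pieces, no plan.
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],   # Ruy Lopez
    ["e4", "e5", "Nf3", "Nc6", "Bc4"],   # Italian game
    ["e4", "c5", "Nf3", "d6"],           # Sicilian
    ["d4", "d5", "c4"],                  # Queen's Gambit
]

# Count bigram successors: which token tends to follow which.
successors = defaultdict(Counter)
for game in games:
    for prev, nxt in zip(game, game[1:]):
        successors[prev][nxt] += 1

def predict(token):
    """Return the most frequent continuation seen in the corpus, or None."""
    if token not in successors:
        return None
    return successors[token].most_common(1)[0][0]

print(predict("e4"))   # "e5" - the majority continuation in this corpus
print(predict("Nf3"))  # "Nc6"
```

The predictions look chess-like only because chess players produced the corpus; the semantics lives entirely in the humans who generated the data.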
Isamu
>The LLM is not doing semantics; the humans who generated the corpus are doing the thinking.

Agreed, this bears repeating. The point is not obvious to someone interacting with an LLM. That it can mash up custom responses doesn't make it a thinking machine; the thinking was done ahead of time, as is the case when you read a book. What passes for intelligence here is the mash-up: a smooth blending of digested text, selected by statistical relevance.

Peteragain
That's how it works, but what an LLM can be used to _do_ is the real question. I'm working on the idea that this statistical model can be used for control, and that that is enough for the evolution of agency. The claim from Vygotsky is that thinking with symbols is given to us by our learning of language. "Cultural linguistics".
graemefawcett
They're "repeat after me" machines, not "think for me" machines.

For the former task they're brilliant, but everyone seems to have fallen for the branding and forgotten the technology behind it. Given an input, they set off a chain reaction of probability that results in structured language, in the form of tokens, as the output. The structure of that language is the easy part to predict - you ask it for an app that's your next business idea and it'll give you an app that looks like your next business idea. And that's it.

Because that's all you've given it. It's not going to fill in the blanks for you. It can't. Not its job.
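That "chain reaction of probability" is just an autoregressive loop: sample a next token from a conditional distribution, append it, and condition on it for the following step. A minimal sketch, with the caveat that the tiny hand-written distribution below is invented for illustration - a real LLM computes these probabilities with a neural network over the whole context, not a lookup table:

```python
import random

# Toy conditional distributions: P(next token | previous token).
# Invented numbers; only the shape of the loop matches a real model.
probs = {
    "<start>": {"def": 0.7, "class": 0.3},
    "def":     {"main": 0.6, "helper": 0.4},
    "class":   {"Main": 1.0},
    "main":    {"(": 1.0},
    "helper":  {"(": 1.0},
    "Main":    {"(": 1.0},
    "(":       {")": 1.0},
    ")":       {":": 1.0},
    ":":       {"<end>": 1.0},
}

def generate(seed=0):
    """Autoregressive loop: each sampled token becomes the next condition."""
    rng = random.Random(seed)
    token, out = "<start>", []
    while token != "<end>":
        choices, weights = zip(*probs[token].items())
        token = rng.choices(choices, weights=weights)[0]
        if token != "<end>":
            out.append(token)
    return " ".join(out)

print(generate())  # structured, code-shaped output with no understanding behind it
```

Whatever comes out is well-formed by construction - the structure is baked into the statistics - which is exactly why structured-looking output is the easy part.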

If you were building a workflow, would you put something called "Generative" in one of those diamond-shaped boxes that normally control flow? That sounds more like a source to me - something to be filtered and gated and shaped before use.

That's what context is supposed to be for. Not "here's a series of instructions, now go do them."

They'll be lost before they get to number three; they have no sense of time, you know. Cause and effect is a simulation at best. They have TodoWrites now; those are brilliant for best approximation, which is really all we need at the moment, but procedural prompting is still why everyone thinks "AI" (/Generative/AI) is broken.

They're going to give the same structured text regardless, you asked for a program after all. Give them more context, you call it RAG, I call it a nice chat - whatever it is, you are responsible for the thinking in the partnership. They're the hyperactive neurodivergent kid that can type 180wps and remembers all of StackOverflow, you're the patient parent that needs to remind them to clean their room before they go out (or completely remove all traces of the legacy version of feature X that you just upgraded so you don't end up with 4 overlapping graph representations). You're responsible for the remembering, you're responsible for the thinking - they're just responsible for amplifying your thoughts and letting you explore solution spaces you might not have had the time for otherwise.

Or you can build something to help you do that. Structured memory (mine's spatial; the direction of the edges itself encodes meaning) with computational markdown as the storage mechanism, so we can embed code, data and prose in the same node.

I demoed a thing on here the other day that shows how I set up RSpec tests that execute when you read the spec that describes the feature you're building. A logical evolution of Obie's keynote. Now they just do it automatically (mostly; if they're new - fresh context - I have to reference the tag that explains the pattern so they pick it up first).

It's still not thinking in the traditional sense of the word, where some level of conscious rationality is responsible for breakthroughs. Consider, however, how much of human progress has come through accident (Fleming, Spencer, Goodyear, Fahlberg, Röntgen, Hofmann) or misunderstanding (Penzias and Wilson, Post-its, Viagra).

Most human breakthroughs have come through pattern recognition, conscious or unconscious. For that, language is all that is needed, and language is sufficient. If an idea can be described by language, and if we suppose that the grammar of a language allows its atoms (and therefore its ideas) to be composed and decomposed, does it not follow that a consciousness (machine or otherwise) trained in the use of that language can form new ideas through the mere act of synthesis?

naasking
I think there are other sorts of reasoning, like spatial reasoning. If you're trying to sort a set of physical items in front of you in order of size, are you thinking about the items linguistically, or is your mind working on some different internal representation?

It's more the latter for me. I don't think there's necessarily one type of internal thought, I think there's likely a multimodal landscape of thought. Maybe spatial reasoning modes are more geometric, and linguistic modes are more sequential.

I think the human brain builds predictive models for all of its abilities for planning and control, and I think all of these likely have a type of thought for planning future "moves".

graemefawcett
The nice thing about the transformer architecture is that it can cross these domains, to an extent. I have a very spatial way of reasoning through problems, and using an LLM - especially an agentic one like Claude Code with access to my local file system as a research assistant - is a great aid.

I just have to remember how I built something and where the code is. We can take a quick dive into the code base and I don't have to yet again attempt to serialize my mental model of my system into something someone else may understand.

It can be difficult to explain why using the path on the underlying mount volume's EBS volume to carry metadata through Filebeat, Logstash, Redis and Kinesis to that little log stream processor was in fact the cleanest solution, and how SMS was invented. It's easier when you can get the LLM to do it ;)

balamatom
Neither do, necessarily, language users.
Peteragain
One can certainly use language to _do_ things without thinking. Polly was a robot that gave a tour of the MIT labs, but it used pre recorded descriptions at various locations. The HUMANS gave meaning to the sounds.
habbekrats
I think you are right, but it's hard to explain, as people can interpret your words in many ways depending on their context.

I think this: you don't need language for an idea - to have it, or to be creative.

To think about it outside of that - asking critical questions, inner dialogue _about_ the ideas and the creativity - that, I think, is what 'thought' is, and it requires language as its sort of inner communication...

DrierCycle
Language may ultimately be maladaptive as it is arbitrary and disconnected from thought. Who cares about the gibberish of logic/philosophy when survival is at stake in ecological balance? The key idea is, there are events. They are real. The words we use are false/inaccurate externalizations of those events. Words and symbols are bottlenecks that place the events out of analog reach but fool us by our own simulation processes into thinking they are accurate.

Words are essentially very poor forms of interoception or metacognition. They "explain" our thoughts to us by fooling us. Yet how much of the senses/perceptions is accessible in consciousness? Not very much. The computer serves to further the maladaptation by both accelerating the symbols and automating them, which puts the initial real events even further from reach. The only game is how much we can fool the species through the low-res inputs the PFC demands. This appears to be a sizable value center for Silicon Valley, and it seems to require coders to ignore the whole of experience and rely solely on the bottleneck simulation centers of the PFC, which themselves are disconnected from direct sensory access. Computers, 'social' media, AI, code, VR essentially "play" the PFC.

How these basic thought experiments - tested in cognitive neuroscience since the '90s, in the overthrow of the cog-sci models of the '40s-'80s - were not taught as primer classes in AI and comp sci is beyond me. It now takes third-gen neurobiology crossed with linguistics to set the record straight.

These are not controversial ideas now.

drdeca
What does "PFC" stand for?
DrierCycle
Sorry, prefrontal cortex.
Lionga
Based on your definition, a child that cannot speak/understand language yet cannot think? Hint: it clearly can.

There are a lot of things I can think about that I do not have words for. I can only communicate these things in an unclear way, as language is clearly a subset of thought, not a superset.

Only if your definition of thought is that it is language-based - which is just typical circular philosophical logic.

pessimizer
I've started to believe that language is often anti-thought. When we are doing what LLMs do, we aren't really thinking, we're just imitating sounds based on a sound stimulus.

Learning a second language let me notice how much of language has no content. When you're listening to meaningless things in your second language, you think you're misunderstanding what they're saying. When you listen to meaningless things in your first language, you've been taught to let the right texture of words slip right in. That you can reproduce an original and passable variation of this emptiness on command makes it seem like it's really cells indicating that they're from the same organism, not "thought." Not being able to do it triggers an immune response.

The fact that we can use it to encode thoughts for later review confuses us about what it is. The reason why it can be used to encode thoughts is because it was used to train us from birth, paired with actual simultaneous physical stimulus. But the physical stimulus is the important part, language is just a spurious association. A spurious association that ultimately is used to carry messages from the dead and the absent, so is essential to how human evolution has proceeded, but it's still an abused, repurposed protocol.

I'm an epiphenomenalist, though.

suddenlybananas
>Learning a second language let me notice how much of language has no content.

What on earth do you mean?

MarkusQ
I see what you did there. :)
