
> The LLM is not doing semantics; the humans who generated the corpus are doing the thinking.

Agreed, this bears repeating. This point is not obvious to someone interacting with the LLM. The fact that it can mash up custom responses doesn't make it a thinking machine; the thinking was done ahead of time, as is the case when you read a book. What passes for intelligence here is the mash-up: a smooth blending of digested text, selected by statistical relevance.


Peteragain
That's how it works, but what an LLM does is the real question. I'm working on the idea that this statistical model can be used for control, and that control is enough for the evolution of agency. The claim from Vygotsky is that thinking with symbols is given to us by our learning of language: "cultural linguistics".
graemefawcett
They're "repeat after me" machines, not "think for me" machines.

For the former task, they're brilliant, but everyone seems to have fallen for the branding and forgotten the technology behind it. Given an input, they set off a chain reaction of probability that results in structured language, in the form of tokens, as the output. The structure of that language is the easier part to predict - you ask it for an app that's your next business idea and it'll give you an app that looks like your next business idea. And that's it.

Because that's all you've given it. It's not going to fill in the blanks for you. It can't. Not its job.
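That chain reaction of probability can be sketched with a toy bigram table (every token and probability below is invented for illustration; a real model learns billions of such conditional weights, but the generation loop is the same shape):

```python
import random

# Toy next-token table: P(next | previous), fixed ahead of time.
# The "thinking" lives in this table; generation just samples from it.
MODEL = {
    "build":     {"an": 0.7, "the": 0.3},
    "an":        {"app": 0.9, "api": 0.1},
    "app":       {"that": 0.6, ".": 0.4},
    "that":      {"looks": 1.0},
    "looks":     {"plausible": 1.0},
    "plausible": {".": 1.0},
}

def generate(token, max_len=8):
    out = [token]
    for _ in range(max_len):
        dist = MODEL.get(token)
        if dist is None:
            break                      # no continuation learned for this token
        # Chain reaction: each draw conditions the next one.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(token)
        if token == ".":
            break
    return " ".join(out)

print(generate("build"))
```

Each call walks the table one conditional draw at a time; nothing in the loop knows what an "app" is, it only knows which token tends to follow which.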

If you were building a workflow, would you put something called "Generative" in one of those diamond-shaped boxes that normally controls flow? That sounds more like a source to me, something to be filtered and gated and shaped before use.

That's what context is supposed to be for. Not "here's a series of instructions, now go do them."
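A minimal sketch of that filter-and-gate idea, with a hypothetical `call_llm` standing in for the model call (the canned return value here is only for illustration):

```python
import json

# The generator is a source; deterministic code filters and gates its
# output before anything downstream is allowed to use it.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real model call would go here.
    return '{"name": "demo", "steps": ["draft", "review"]}'

def gated_generate(prompt: str, retries: int = 3) -> dict:
    """Generate, then validate; the gate, not the model, controls flow."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)                      # gate 1: must parse
        except json.JSONDecodeError:
            continue                                    # retry the source
        if isinstance(data, dict) and "steps" in data:  # gate 2: shape check
            return data
    raise ValueError("generator never produced usable output")

print(gated_generate("plan the release"))
```

The diamond-shaped decision stays in ordinary code; the model only ever feeds candidates into it.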

They'll be lost before they get to number three; they have no sense of time, you know. Cause and effect is a simulation at best. They have TodoWrites now, which are brilliant for best approximation - which is really all we need at the moment - but procedural prompting is still why everyone thinks "AI" (/Generative/AI) is broken.

They're going to give the same structured text regardless, you asked for a program after all. Give them more context, you call it RAG, I call it a nice chat - whatever it is, you are responsible for the thinking in the partnership. They're the hyperactive neurodivergent kid that can type 180wps and remembers all of StackOverflow, you're the patient parent that needs to remind them to clean their room before they go out (or completely remove all traces of the legacy version of feature X that you just upgraded so you don't end up with 4 overlapping graph representations). You're responsible for the remembering, you're responsible for the thinking - they're just responsible for amplifying your thoughts and letting you explore solution spaces you might not have had the time for otherwise.

Or you can build something to help you do that. Structured memory (mine's spatial; the direction of the edges itself encodes meaning) with computational markdown as the storage mechanism, so we can embed code, data and prose in the same node.
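A hypothetical sketch of that kind of structured memory - the node names, relation labels, and the `Memory` API below are all invented for illustration, not the commenter's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    markdown: str  # prose, fenced code, and data live in one body

@dataclass
class Memory:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src, relation, dst) triples

    def add(self, node: Node):
        self.nodes[node.id] = node

    def link(self, src: str, relation: str, dst: str):
        # Edges are directed; the direction itself carries meaning.
        self.edges.append((src, relation, dst))

    def out(self, src: str, relation: str):
        """Follow edges in one direction only."""
        return [self.nodes[d] for s, r, d in self.edges
                if s == src and r == relation]

mem = Memory()
mem.add(Node("feature-x", "# Feature X\n```ruby\nputs 'spec here'\n```"))
mem.add(Node("feature-x-v2", "# Feature X v2\nReplaces the legacy version."))
mem.link("feature-x-v2", "supersedes", "feature-x")
print([n.id for n in mem.out("feature-x-v2", "supersedes")])  # → ['feature-x']
```

Because "supersedes" only points one way, a traversal can answer "what does v2 replace?" without ever confusing it with "what replaces v2?" - the asymmetry is the meaning.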

I demoed a thing on here the other day that shows how I set up RSpec tests that execute when you read the spec describing the feature you're building. A logical evolution of Obie's keynote. Now they just do it automatically (mostly - if they're new, with fresh context, I have to reference the tag that explains the pattern so they pick it up first).

It's still not thinking in the traditional sense of the word, where some level of conscious rationality is responsible for a breakthrough. Consider, however, how much of human progress has come through accident (Fleming, Spencer, Goodyear, Fahlberg, Röntgen, Hofmann) or misunderstanding (Penzias and Wilson, Post-its, Viagra).

Most human breakthroughs have come through pattern recognition, conscious or unconscious. For that, language is all that is needed, and language is sufficient. If an idea can be described in language, and if we suppose that the grammar of a language allows its atoms (and therefore its ideas) to be composed and decomposed, does it not follow that a consciousness (machine or otherwise) trained in the use of that language can form new ideas through the mere act of synthesis?
