brainless
I am not sure I got your point about English. I thought Karpathy was talking about English as the language of prompts, not of outputs. Outputs can be English, but if the goal is to compute with the output, then we need structured output (JSON, snippets of code, etc.), not English.
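To make the structured-output point concrete, here is a minimal sketch (the two "LLM replies" are hypothetical, made up for illustration): a prose answer needs brittle text parsing, while a JSON answer feeds straight into downstream computation.

```python
import json

# Two hypothetical replies to the same question from an LLM.
prose_reply = "The total comes to forty-two dollars."          # hard to compute with
structured_reply = '{"total": 42, "currency": "USD"}'          # trivial to compute with

# The structured reply can be consumed directly by code.
data = json.loads(structured_reply)
print(data["total"] * 2)  # downstream computation on the parsed value: 84
```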
Humor me with an exercise:
First, instruct a friend/colleague, in plain English, on how to multiply two 2-digit numbers.
Second (ideally with a different friend, so as not to contaminate the test), explain the same thing using only maths formulas.
Where does the prompting process start and where does it end? Is it a one-off? Is the prompt clear enough? Do all the parties involved communicate using the same domain objects?
Hopefully my example is not too contrived.
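For the formula version of the exercise, one candidate "maths only" instruction is the standard digit expansion of two 2-digit numbers (writing them as 10a + b and 10c + d):

```latex
(10a + b)(10c + d) = 100\,ac + 10\,(ad + bc) + bd
```

The English version has to convey exactly this rule, but in words, which is where the clarity of the prompt gets tested.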
Yes, the prompts are clear enough, but it depends on the capability of the people involved. People have to internalize the maths (or any other) concepts, turning language into rules, syntax, etc.
This is what an agent can do with an LLM. LLMs can take English and generate some sort of algorithm. The agent stores the algorithm, not the prompt. I do not know what current commercially available agents do, but this was always clear to me.
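A minimal sketch of that "store the algorithm, not the prompt" idea. Everything here is hypothetical: the `fake_llm` function stands in for a real model call, and the generated snippet is canned. The point is only the shape: prompt once, compile the returned code, cache the resulting function, and reuse it without re-prompting.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a model API call; returns Python source for the request.
    return (
        "def multiply(a, b):\n"
        "    return a * b\n"
    )

class Agent:
    def __init__(self):
        self._cache = {}  # task description -> compiled function

    def get_algorithm(self, task: str):
        if task not in self._cache:
            source = fake_llm(f"Write Python for: {task}")
            namespace = {}
            exec(source, namespace)            # compile the generated code once
            self._cache[task] = namespace["multiply"]
        return self._cache[task]               # later calls skip the LLM entirely

agent = Agent()
mul = agent.get_algorithm("multiply two 2-digit numbers")
print(mul(12, 34))  # 408
```

A real agent would of course validate and sandbox generated code before `exec`, but the caching structure is the part relevant to the argument.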