- Not yet. This is the transitional period where the US is blamed and laughed at and then finally abandoned for China.
- Indeed, don’t comment further: you didn’t even have the respect to respond to me directly. That is deliberately inflammatory. Just respond directly to the person you’re talking to, like 99% of HN does. Why avoid it? It’s a tactic, that’s why, and also a pointless one.
I’m not a victim of anything. But you are definitely a perpetrator and instigator.
- I should clarify a mistake in how I phrased this earlier. Emotions are shaped by both innate biology and environment. Early mirroring, co-regulation, and social interaction are essential for normal emotional development. That evidence is well established. Where I disagree is the leap from that fact to the claim that narrative is a crucial or primary mechanism of emotional formation.
Emotional regulation and differentiation emerge long before narrative competence. Infants acquire affective patterns through direct interaction and embodied feedback, not stories or symbolic self-models. Cultural differences reflect how emotions are framed and expressed, not that narrative creates them. Narrative comes later as a descriptive layer that organizes experience, but it is downstream of emotion, not its cause.
- That quote is a category error. It’s about moral judgment of people, not epistemic evaluation of claims. I’m not condemning Le Guin, her character, or anyone who enjoys fiction. I’m saying a specific explanatory claim about how stories relate to truth and human cognition is false.
If “judge not” applied here, then no scientific criticism is permissible at all. You couldn’t say a theory is wrong, a model is flawed, or a claim is unsupported, because the critic is also imperfect. That standard would immediately end every serious discussion on HN.
Quoting scripture in response to an evolutionary and cognitive argument isn’t a rebuttal. It’s a frame shift from “is this claim true” to “are you allowed to say it.” That avoids engaging the substance entirely.
If you think the argument is wrong, point to the error. If not, appealing to moral humility doesn’t rescue a claim from being false.
- What Le Guin is expressing is a beautiful idea, but it is a false beauty. It feels like truth because it aligns with how narrative engages the human mind, not because it accurately explains what stories are or why they exist. The sense that fiction reveals destiny, inner depth, or essential humanity is an illusion created by evolved cognitive machinery, not evidence of genuine insight.
From an evolutionary and cognitive standpoint, imaginative fiction is not a privileged tool for understanding who we are. It is a byproduct of more basic adaptations. The human brain evolved as a prediction engine optimized for survival in social groups. Its primary function is to anticipate outcomes, model other agents, and reduce uncertainty well enough to reproduce. Narrative arises because the brain naturally organizes experience into causal sequences involving agents, not because stories convey deeper truths about the self.
Fiction works by hijacking the same neural systems used for social reasoning, memory, and planning. When reading a story, the mind runs simulations of social situations. This feels like insight, but feeling insight is not the same as acquiring accurate models of reality. Fantasy and science fiction are not special forms of wisdom. They are simply inputs that exaggerate certain variables, making simulations emotionally vivid rather than epistemically reliable.
Le Guin’s claim that someone without stories would be ignorant of their emotional or spiritual depths is not supported by biology. Emotions are not learned through narrative. They are innate regulatory systems shaped by natural selection. Fear, attachment, anger, desire, and joy exist prior to language and independently of story exposure. Stories can name, frame, or intensify these states, but they do not create or deepen them in any fundamental sense.
The universality of storytelling also does not imply that it is an adaptive route to understanding. Evolution does not favor truth or self-knowledge. It favors fitness. Many of the most persistent stories humans tell are systematically false. Myths, religious narratives, romantic ideals, and national legends endure because they exploit cognitive biases like agency detection, pattern completion, and emotional salience. Their spread demonstrates susceptibility, not insight.
Fantasy and science fiction do not teach us to imagine better worlds. They teach us to imagine compelling ones. A narrative can feel profound while being completely disconnected from reality. Inspiration and accuracy are orthogonal. The persuasive power of stories comes from their alignment with evolved psychological vulnerabilities, not from their correspondence with truth.
So the correct technical framing is this. Stories are not tools invented to gain understanding of humanity or destiny. They are artifacts produced by brains shaped for survival under uncertainty. They can be pleasurable, motivating, or culturally stabilizing. They can sometimes illuminate patterns of behavior. But their beauty should not be confused with truth. The feeling of depth they produce is an illusion, not a discovery.
- I hope hardware becomes so cheap local models become the standard.
- If I am more than a next token predictor… doesn’t that mean I’m a next token predictor + more? Do you not predict the next word you’re going to say? Of course you do; you do that and more.
Humans ARE next token predictors, technically, and we are also more than that. That is why calling someone a next token predictor is a mischaracterization. I think we are in agreement; you just didn’t fully understand my point.
But the claim that LLMs are next token predictors is the SAME mischaracterization. LLMs are clearly more than next token predictors. Don’t get me wrong, LLMs aren’t human… but they are clearly more than just next token predictors.
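For concreteness, here’s a toy sketch of what “next token predictor” means in the narrow technical sense: a loop that predicts a distribution over the next token, samples from it, appends, and repeats. This is my own illustration with a hypothetical bigram “model”, not how a real transformer works internally; the point is only the loop structure.

```python
# Toy sketch of autoregressive next-token prediction (hypothetical bigram "model",
# not a real LLM). Illustrates the loop: predict a distribution, sample, append.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Stand-in for learned next-token probabilities: bigram counts from a tiny corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=5):
    tokens = [start]
    for _ in range(length):
        counts = bigrams[tokens[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        tokens.append(random.choices(words, weights=weights)[0])  # sample the next token
    return " ".join(tokens)

print(generate("the"))
```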
The whole point of my post is to point out how the term stochastic parrot is weaponized to dismiss LLMs and to mischaracterize and hide the current abilities of AI. The parent OP was using the technical definition as an excuse to use the word as a means to his own ends, namely being “against” AI. It’s a pathetic excuse. I think it’s clear the LLM has moved beyond a stochastic parrot, and there are just a few stragglers left who can’t see that AI is more than that.
You can be “against” AI, that’s fine, but don’t mischaracterize it… argue and make your points honestly and in good faith. Using the term stochastic parrot, and even what the other poster did in attempting to accuse me of inflammatory behavior, is just tactics and manipulation.
- Look at your response. You first dismissed me completely by saying I don’t know what technically means. Then you mischaracterized my statement as an intent to inflame. These are highly insulting and dismissive statements.
You’re not willing to have a good faith discussion. You took the worst possible interpretation of my statement and crafted a terse response to shut me down. I only did two things. First I explained myself… then I called you out for what you did while remaining civil. I don’t skirt around HN rules as a means to an end, which is what I believe you’re doing. I’m ok with what you’re doing… but I will call it out.
- I am not using inflammatory language to hurt anyone. I am illustrating a point on the contrast between technical meaning and non-technical meanings. One meaning is offensive the other meaning is technically correct. Don't start a witch hunt by deliberately misinterpreting what I'm saying.
So technical means something like this: in a technical sense you are a stochastic parrot. You are also, technically, an object. But in everyday language we don't call people stochastic parrots or objects, because language is nuanced, the technical meaning is rarely used at face value, and other meanings are used in its place.
So when people use a term in conversation and go by the technical meaning it's usually either very strange or done deliberately to deceive. Sort of like how you claim you don't know what "technically" means and sort of how you deliberately misinterpreted my words as "inflammatory" when I did nothing of the sort.
I hope you learned something basic about English today! Good day to you, sir!
- Monoliths can scale to handle tons of users. Microservices are only needed for specific kinds of scaling. For example, at Netflix you need HTTP servers but you also need separate servers to handle video streaming. Or at Google, the search engine must be different from Gmail. Most companies provide one or a few services that can be built and scaled as a monolith to handle anything thrown at them, as in the sketch below.
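A minimal sketch of what I mean (hypothetical routes, Python stdlib only, not anyone’s actual architecture): one process serves every feature, and you scale it by running more identical copies behind a load balancer. You only split a feature out when its workload, like video streaming, has genuinely different requirements.

```python
# Minimal monolith sketch: one process, several features, scaled by running copies.
# Routes and payloads are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTES = {
    "/search": b"search results",
    "/mail": b"inbox",
    "/billing": b"invoices",
}

class Monolith(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ROUTES.get(self.path, b"not found")
        self.send_response(200 if self.path in ROUTES else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Every feature lives in the same deployable unit; horizontal scaling is
    # just starting more of these processes behind a load balancer.
    HTTPServer(("0.0.0.0", 8080), Monolith).serve_forever()
```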
- You're not a skeptic but you're not fully a supporter either. You live in this grey zone of contradictions.
First, you find them useful but not intelligent. That is a bit of a contradiction. Basically anyone who has used AI seriously knows that while it can be used to regurgitate generic filler and bootstrap code, it can also be used to solve complex, domain-specific problems that are not at all part of its training data. This by definition makes it intelligent, and it means we know the LLM understands the problem it was given. It would be disingenuous for me not to mention how often and how badly an LLM hallucinates, so obviously the thing has flaws and is not a superintelligence. But you have to judge the entire spectrum of what it does. It gets things right and it gets things wrong, and getting something complex right makes it intelligent, while getting something wrong does not preclude it from intelligence.
Second, most non-skeptics aren't saying all human work is going to be obsolete. No one can predict the future. But you've got to be blind if you don't see the trendline of progress. Literally look at the progress of AI for the past 15 years. You have to be next-level delusional if you can't project another 15 years and see that a superintelligence, or at least an intelligence comparable to humans, is a reasonable prediction. Most skeptics like you ignore the trendline and cling to what Yann LeCun said about LLMs being stochastic parrots. It is very likely something with human intelligence exists in the future and in our lifetimes; whether or not it's an LLM remains to be seen, but we can't ignore where the trendlines are pointing.
- It's wrong because it’s deliberately used to mischaracterize the current abilities of AI. Technically it's not wrong, but in basically every case the context of usage is that the person saying it is deliberately using the concept to downplay AI as just a pattern-matching machine.
- Are you going to address a single point I or others have made? Or are you gonna dodge everything with some dismissive remark? I think it’s clear you’re wrong.
You know one thing an LLM does better than me and many other people? It admits it’s wrong after it’s been proven wrong. Humans, including me, have a hard time doing that, but I’m not the one that’s wrong here. You are wrong, and that’s ok. I don’t know why people need to go radio silent or say stupid shit just to dodge the irrefutable reality of being completely and utterly wrong.
- > However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
Very few people say this. But it’s realistic to say that, at the least, in the next decade our jobs are going out the window.
- Bro. You’re gonna have a hard time finding people panic posting about how they’re going to lose their jobs in a month. Literally find me one. Then show me that the majority of people posting were panicking.
That is literally not what happened. You’re hallucinating. The majority of people on HN were so confident in their coding abilities that they weren’t worried at all. Just a cursory glance at the conversations back then and that is what you will see OVERALL.
- Someone also believed the internet would take over the world. They were right.
So we could be right or we could be wrong. What we do know is that a lot of what people were saying or “believed” about LLMs two years ago is now categorically wrong.
- Colloquially, it just means there’s no thinking or logic going on. LLMs are just pattern matching an answer.
From what we do know about LLMs, we know it is not trivial pattern matching; the output it formulates is, by the very definition of machine learning, original information that is not copied from the training data.
- No. Many of the answers it produces can only be attributed to intelligence. Not all, but many can be. We can prove that these answers are not parroted.
As for “understanding” we can only infer this from input and output. We can’t actually know if it “understands” because we don’t actually know how these things work and in addition to that, we don’t have a formal definition of what “understanding” is.
- Did you not look at the evidence I posted? It’s not about you or me; it’s about humanity. I have two on-the-ground people who are central to AI saying humanity doesn’t understand AI.
If you say you understand LLMs then my claim is then that you are lying. Nobody understands these things and people core to building these things are in absolute agreement with me.
I build LLMs for a living, btw. So it’s not just other experts saying these things. I know what I’m talking about on a fundamental level.