Preferences

So it's not really hallucinating - it correctly represents "seahorse emoji" internally, but that concept has no corresponding token. lm_head just picks the closest thing and the model doesn't realize until too late.
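
A minimal numpy sketch of that last step (toy vocabulary and toy weights, nothing like the real model): lm_head is just a matrix with one row per token, and argmax over the logits always returns some existing token, even when the hidden state encodes a concept that has no token of its own.

  # Toy sketch: whatever hidden state you hand lm_head, argmax returns *some*
  # token - there is no "no good match" option.
  import numpy as np

  rng = np.random.default_rng(0)
  vocab = ["horse", "fish", "tropical_fish", "unicorn", "wave"]  # hypothetical tiny vocab
  W_unembed = rng.normal(size=(len(vocab), 16))                  # lm_head weights, one row per token

  # Pretend the residual stream holds a "seahorse" concept that is a blend of
  # existing token directions rather than any single row of W_unembed.
  hidden = 0.6 * W_unembed[0] + 0.4 * W_unembed[1]

  logits = W_unembed @ hidden
  print(vocab[int(np.argmax(logits))])  # the nearest token wins, e.g. "horse"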

Explains why RL helps. Base models never see their own outputs so they can't learn "this concept exists but I can't actually say it."


I have no mouth, and I must output a seahorse emoji.
That's my favorite short story and your post is the first time I have seen someone reference it online. I think I have never even met anyone who knows the story.
It's easy to miss, but it's been referenced many times on HN over the years, both as stories:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

and fairly often in comments as well:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

? It’s referenced all the time in posts about AI.
It's a reference to a short story "I Have No Mouth, and I Must Scream"

https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...

And then there's "I Have no Grass, and I Must Mow" by Larry Ellison.
You got me with that lure.
There is also an old point-and-click adventure game based on the story, in case you didn't know.
It’s referenced a lot as the inspiration for The Amazing Digital Circus.
Really? I’m surprised. The original is quoted relatively often on reddit (I suspect by people unaware of the origin — as I was until I read your comment).

Consider it proof that HN has indeed not become reddit, I guess :)

There's literally several of us that like that Harlan Ellison piece. Check out the video/adventure game of the same name, though it's very old.
I've heard good things about the game, never got around to trying it. Maybe I'll take this as a prompt to finally do so.
I gave it a try a couple of months ago, but didn't get very far before getting bored. However, I tend to dismiss games unless they grab me within a couple of minutes of playing.

Maybe I should give it another go as I do love the short story and it used to be my favourite before discovering Ted Chiang's work.

This would have made a better title for the piece this post is about.
Those are "souls" of humans that a AI is torturing in that story though, not exactly analogous, but it does sound funny.
They are not souls but normal humans with physical bodies. The story is just a normal torture story (with a cool title), and everyone better stop acting like it was relevant in most conversations, like in this one.
The machine destroys and recreates characters over and over, and they remember what happens. So, I called them souls.
>Those are "souls" of humans that a AI is torturing in that story though, not exactly analogous, but it does sound funny.

Yeah, well, there seem to be some real concerns regarding how people use AI chat[1]. Of course, this could also be the case with these people on social media.

https://futurism.com/commitment-jail-chatgpt-psychosis

> So it's not really hallucinating - it correctly represents "seahorse emoji" internally, but that concept has no corresponding token. lm_head just picks the closest thing and the model doesn't realize until too late.

Isn't that classic hallucination? Making up something like a plausible truth.

Except they know it's wrong as soon as they say it and keep trying and trying again to correct themselves.

If normal hallucination is being confidently wrong, this is like a stage hypnotist getting someone to forget the number 4 and then count their fingers.

Arguably it's "hallucinating" at the point where it says "Yes, it exists". If hallucination => weights statistically indicating that something is probably true when it's not. Since everything about LLMs can be thought of as compressed, probability based database (at least to me). You take the whole truth of the World and compress all its facts in probabilities. Some truthness gets lost in the compression process. Hallucination is the truthness that gets lost since you don't have storage to store absolutely all World information with 100% accuracy.

In this case:

1. The stored weights statistically indicate that a seahorse emoji is quite certain to exist. Through the training data it has probably picked up something like Emoji + Seahorse -> 99% probability via various channels: either it has existed on some other platform, or people have talked about it enough, or a seahorse is something you would expect to exist due to its other attributes/characteristics. There are ~4,000 emojis, but storing all 4,000 individually takes a lot of space; it would be easier to store this information by estimating how likely humankind would have been to develop a certain emoji, given the demand for that type of emoji, and a seahorse seems like something that would have been done within the first 1,000 of these. Perhaps it's an anomaly in the sense that it's something humans would statistically have been expected to develop early, but for some reason it was skipped or went unnoticed.

2. Tokens that follow should be "Yes, it exists"

3. It should output the emoji to show it exists, but since there's no correct emoji, the best candidates are those closest to it in meaning, e.g. just a horse, or something related to the sea, etc. It outputs one of those, since the previous tokens indicate it was supposed to output something.

4. The next token that is generated will have context that it previously said the emoji should exist, but the token output is a horse emoji instead, which doesn't make sense.

5. Here it goes into this tirade.

But I really dislike thinking of this as "hallucinating", because hallucination to me is a sensory processing error. This is more like imperfect memory recall (like people remembering facts slightly incorrectly, etc.) - whatever happens when people are supposed to recount something detailed that happened in their life and have been trained not to say "I don't remember for sure".

What did you eat for lunch 5 weeks ago on Wednesday?

You are rewarded for saying "I ate chicken with rice", but not "I don't remember right now for sure, but I frequently eat chicken with rice during mid week, so probably chicken with rice."

You are not hallucinating, you are just getting brownie points for concise, confident answers if they cross over a certain likelihood of being true. Because maybe you eat chicken with rice on 99%+ of Wednesdays.

When asked about the capital of France, you would surely sound dumb if you said "I'm not really sure, but I've been trained to associate Paris really, really closely with being the capital of France."

"Hallucination" happens on the sweet spot where the statistical threshold seems as if it should be obvious truth, but in some cases there's overlap of obvious truth vs something that seems like obvious truth, but is actually not.

Some have instead called it "confabulation", but I think that is also not 100% accurate, since confabulation implies a more specific memory malfunction. I think the most accurate description is that it's a probability-based database whose output has been rewarded for sounding as intelligent as possible. The same thing happens in job interviews, group meetings, and other high-pressure social situations where people think they have to sound confident: people bluff that they know something while making probability-based guesses underneath.

Confabulation rather suggests there was some clear error in how the data was stored or how the retrieval pathway got messed up. But this is probability-based bluffing, because you get rewarded for confident answers.

When I ask ChatGPT how to solve a tricky coding problem, it occasionally invents APIs that sound plausible but don't exist. I think that is what people mean when they talk about hallucinating. When you tell the model that the API doesn't exist, it apologises and tries again.

I think this is the same thing that is happening with the seahorse. The only difference is that the model detects the incorrect encoding on its own, so it starts trying to correct itself without you complaining first.

Neat demonstration of simple self awareness.
Comparing the capital of France with a niche emoji doesn't seem like a fair comparison at all - France is a huge, powerful country and French a commonly spoken language.

Would anyone really think you sounded dumb for saying "I am not really sure - I think there is a seahorse emoji but it's not commonly used" ?

>"Yes, it exists"

AAAAAAUUUGH!!!!!! (covers ears)

https://www.youtube.com/watch?v=0e2kaQqxmQ0&t=279s

> Except they know it's wrong as soon as they say it and keep trying and trying again to correct themselves.

But it doesn't realize that it can't write it, because it can't learn from this experience, as it doesn't have introspection the way humans do. A human who can no longer move their finger won't say "here, I can move my finger: " over and over and never learn that he can't move it now; after a few tries he will figure out he can no longer do it.

I feel this sort of self reflection is necessary to be able to match human level intelligence.

> because it can't learn from this experience as it doesn't have introspection the way humans do.

A frozen version number doesn't; what happens between versions certainly includes learning from user feedback on the responses as well as from the chat transcripts themselves.

Until we know how human introspection works, I'd only say Transformers probably do all their things differently than we do.

> A human who can no longer move their finger won't say "here, I can move my finger: " over and over and never learn that he can't move it now; after a few tries he will figure out he can no longer do it.

Humans are (like other mammals) a mess: https://en.wikipedia.org/wiki/Phantom_limb

Humans do do that; you need to read some Oliver Sacks: hemispheric blindness, people who don't accept that one of their arms is their own and think it's someone else's arm, or phantom limbs where missing limbs still hurt.
more like an artefact of the inability to lie than a hallucination
No analogy needed. It's actually because "Yes it exists" is a linguistically valid sentence and each word is statistically likely to follow the previous word.

LLMs produce linguistically valid texts, not factually correct texts. They are probability functions, not librarians.

Those are not two different things. A transistor is a probability function but we do pretty well pretending it's discrete.
Transistors at the quantum level are probability functions just like everything else is. And just like everything else, at the macro level the overall behavior follows a predictable, known pattern.

LLMs have nondeterministic properties intrinsic to their macro behaviour. If you've ever tweaked the "temperature" of an LLM, that's what you are tweaking.

Temperature is a property of the sampler, which isn't strictly speaking part of the LLM, though they co-evolve.

LLMs are afaik usually evaluated nondeterministically because they're floating point and nobody wants to bother perfectly synchronizing the order of operations, but you can do that.

Or you can do the opposite: https://github.com/EGjoni/DRUGS

this was no analogy, it really can't lie...
I would have thought that the cause is that it has statistically been trained that something like a seahorse emoji should exist, so it produces the tokens to say "Yes it exists, ...". But when it gets to outputting the emoji token, the emoji does not exist - yet it must output something, so it outputs the statistically closest match. Then the next token that is output has the context of that being wrong, and it goes into this loop.
You are describing the same thing, but at different levels of explanation: Llamasushi's explanation is "mechanistic / representational", while yours is "behavioral / statistical".

If we have a pipeline: `training => internal representation => behavior`, your explanation argues that the given training setup would always result in this behavior, no matter the internal representation. Llamasushi explains how the concrete learned representation leads to this behavior.

I guess the question is: what do we mean by "internal representation"?

I would think that, due to the training data, it has stored the likelihood of a certain thing existing as an emoji as something like:

1. how appealing seahorses are to humans in general - it would learn this sentiment through a massive amount of text.

2. it would learn through a massive amount of text that emojis -> mostly things that are very appealing to humans.

3. for some of the more obvious emojis it might have learned that this one is for sure there, but it couldn't store that info for all 4,000 emojis.

4. for many emojis, whether one exists falls back to shortcut logic: how appealing the concept is vs how frequently something that appealing gets represented as an emoji. Seahorse perhaps hits 99.9% likelihood there due to its strong appeal. In 99.9% of such cases the LLM would be right to answer "Yes, it ...", but there's always going to be 1 case out of 1,000 where it's wrong.

With this compression it's able to answer "Yes, it exists ..." correctly 999 times out of 1,000.

It could be more accurate if it said "A seahorse would have a lot of appeal for people, so it's very likely it exists as an emoji, since emojis are usually made for very high-appeal concepts first; but I know nothing with 100% certainty, so it could be it was never made".

But in 999 cases, "Yes it exists..." is the more straightforward and appreciated answer. The one time it's wrong costs fewer brownie points than the 999 short, confident answers gain over 1,000 technically accurate but unconfident answers.

But even the above sentence might not be the full truth, since it might not be correct about why it has associated a seahorse with being so likely to exist - it would just be speculating. So maybe it would be more accurate to say "I expect a seahorse emoji likely exists, maybe because of how appealing it is to people and how emojis are usually about appealing things".

The fact that it's looking back and getting confused about what it just wrote is something I've never seen in LLMs before. I tried this on Gemma3 and it didn't get confused like this. It just said yes, there is one, and then sent a horse emoji.
I’ve definitely seen Claude Code go “[wrong fact], which means [some conclusion]. Wait—hold on, wrong fact is wrong.” On the one hand, this is annoying. On the other hand, if the LLM is going to screw up (presumably preventing this is not in the cards) then I’m glad it can catch its own mistakes.
I wonder what would happen if LMs were built a bit at a time by:

  - add in some smallish portion of the data set
  - have LM trainers (actual humans) interact with it and provide feedback about where the LM is factually incorrect and provide it additional information as to why
  - add those chat logs into the remaining data set
  - rinse and repeat until the LM is an LLM
Would they be any more reliable in terms of hallucinations and factual correctness?

This would replicate to some extent how people learn things. It probably would really slow things down (not scale), and the trainers would need to be subject-matter experts and not just random people on the net saying whatever they want to it as it develops, or it will just spiral out of control.

> On the other hand, if the LLM is going to screw up (presumably preventing this is not in the cards) then I’m glad it can catch its own mistakes.

The odd thing is why it would output its own mistakes, instead of internally revising until it's actually satisfied.

So, what I think most people don't realize is that the amount of computation an LLM can do in one pass is strictly bounded. You can see that here with the layers. (This applies to a lot of neural networks [1].)

Remember, they feed in the context on one side of the network, pass it through each layer doing matrix multiplication, and get a value on the other end that we convert back into our representation space. You can view the bit in the middle as doing a kind of really fancy compression, if you like. The important thing is that there are only so many layers, and thus only so many operations.

Therefore, past a certain point they can't revise anything because it runs out of layers. This is one reason why reasoning can help answer more complicated questions. You can train a special token for this purpose [2].

[1]: https://proceedings.neurips.cc/paper_files/paper/2023/file/f...

[2]: https://arxiv.org/abs/2310.02226
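
As a rough sketch of the "only so many layers" point (toy weights, with tanh standing in for a real attention + MLP block): a forward pass is a fixed-length loop, so however confused the model is, it gets the same bounded amount of computation per emitted token.

  import numpy as np

  def forward(hidden, layers):
      for W in layers:                  # fixed depth: the loop cannot run longer
          hidden = np.tanh(W @ hidden)  # stand-in for one transformer block
      return hidden

  rng = np.random.default_rng(0)
  layers = [rng.normal(size=(16, 16)) for _ in range(12)]  # e.g. a 12-layer model
  out = forward(rng.normal(size=16), layers)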

There is no mechanism in the transformer architecture for "internal" thinking ahead, or hierarchical generation. Attention only looks back from the current token, ensuring that the model always falls into a local maximum, even if it only leads to bad outcomes.
Not strictly true: while this was previously believed to be the case, Anthropic demonstrated that transformers can "think ahead" in some sense, for example when planning rhymes in a poem [1]:

> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.

They described the mechanism that it uses internally for planning [2]:

> Language models are trained to predict the next word, one word at a time. Given this, one might think the model would rely on pure improvisation. However, we find compelling evidence for a planning mechanism.

> Specifically, the model often activates features corresponding to candidate end-of-next-line words prior to writing the line, and makes use of these features to decide how to compose the line.

[1]: https://www.anthropic.com/research/tracing-thoughts-language...

[2]: https://transformer-circuits.pub/2025/attribution-graphs/bio...

Thank you for these links! Their "circuits" research is fascinating. In the example you mention, note how the planned rhyme is piggybacking on the newline token. The internal state that the emergent circuits can use is 1:1 mapped to the tokens. The model cannot trigger the insertion of a "null" token just to store this plan-ahead information during inference, nor are there any sort of "registers" available aside from the tokens. The "thinking" LLMs are not quite that either, because the thinking tokens are still forced to become text.
That's what reasoning models are for. You can get most of the benefit by saying an answer once in the reasoning section, because then it can read over it when it outputs it again in the answer section.

It could also have a "delete and revise" token, though you'd have to figure out how to teach it to get used.

Given how badly most models degrade once they reach a particular context size (any whitepapers on this welcome), reasoning does seem like a quick hack rather than a thought-out architecture.
LLMs are just the speech center part of the brain, not a whole brain. It's like when you are speaking on autopilot, or reciting something by heart, it just comes out. There is no reflection or inner thought process. Now thinking models do actually do a bit of inner monologue before showing you the output so they have this problem to a much lesser degree.
If you hid its thinking it could do that. But I'm pretty sure what happens here is that it has to go through those tokens for it to become clear that it's doing things wrong.

What I think that happens:

1. There's a question about a somewhat obscure thing.

2. The LLM will never know the answer for sure; it has access to this sort of statistical, probability-based compressed database of all the facts of the world, because that allows it to store more facts by relating things to each other, but never with 100% certainty.

3. There are particular obscure cases where its initial "statistical intuition" says something is true, so it starts outputting its thoughts as expected for a question where something is likely true. Perhaps you could analyze the probabilities it assigns to "Yes" vs "No" to estimate its confidence (a rough sketch of how you might probe that follows after this list). Perhaps it assigns much less likelihood to "Yes" than it would if the question were about a horse emoji, but in this case "Yes" still clears a high enough threshold to win out over "No".

4. However, when it has to give the exact answer, it's impossible to output a correct one because the premise is false. The seahorse emoji does not exist, yet it has to output something; the previous tokens were "Yes, it exists, it's X", and the X will be an answer semantically close in meaning.

5. The next token will have the context "Yes, seahorse emoji exists, it is [HORSE EMOJI]". Now it's clear that there's a conflict: it's able to see that the horse emoji is not a seahorse emoji, but it had to output it in line with the previous tokens, because those tokens statistically required an output of something.
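
Here's the sketch mentioned in point 3: a hedged example (it assumes a local Hugging Face causal LM such as "gpt2" as a stand-in; the prompt and token choices are purely illustrative) of reading the model's first-token probabilities for "Yes" vs "No" to estimate that confidence.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  name = "gpt2"  # stand-in; any causal LM works the same way
  tok = AutoTokenizer.from_pretrained(name)
  model = AutoModelForCausalLM.from_pretrained(name)

  prompt = "Question: Is there a seahorse emoji? Answer:"
  ids = tok(prompt, return_tensors="pt").input_ids
  with torch.no_grad():
      logits = model(ids).logits[0, -1]   # next-token logits
  probs = torch.softmax(logits, dim=-1)

  yes_id = tok(" Yes", add_special_tokens=False).input_ids[0]
  no_id = tok(" No", add_special_tokens=False).input_ids[0]
  print(f"P(Yes)={probs[yes_id].item():.4f}  P(No)={probs[no_id].item():.4f}")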

It can't internally revise. The last generation produces a distribution and sometimes the wrong answer gets sampled.

There is no "backspace" token, although it would be cool and fancy if we had that.

The more interesting thing is why does it revise its mistakes. The answer to that is having training examples of fixing your own mistakes in the training data plus some RL to bring out that effect more.
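
A toy illustration of the append-only nature of decoding (the two-token "model" here is made up purely for the example): once "Yes, it exists:" is on the page, the only options are to keep going or to contradict it; nothing can be un-said.

  import random

  def toy_model(context):
      # Stand-in next-token distribution; not a real LM.
      if context[-1] == ":":
          return {"[HORSE]": 0.7, "wait,": 0.3}
      return {"<eos>": 1.0}

  context = ["Yes,", "it", "exists", ":"]
  while context[-1] != "<eos>":
      dist = toy_model(context)
      token = random.choices(list(dist), weights=list(dist.values()))[0]
      context.append(token)  # append-only; there is no backspace token

  print(" ".join(context))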

There's been a few attempts at training a backspace token, though.

e.g.:

https://arxiv.org/abs/2502.04404

https://arxiv.org/abs/2306.05426

I do this all the time. I start writing a comment, then think about it some more, and realize halfway through that I don't know what I'm saying.

I have the luxury of a delete button - the LLM doesn't get that privilege.

Isn't that what thinking mode is?
If you didn't have the luxury of a delete button, such as when you're just talking directly to someone IRL, you would probably say something like "no, wait, that doesn't make any sense, I think I'm confusing myself" and then either give it another go or just stop there.

I wish LLMs would do this rather than just bluster on ahead.

What I'd like to hear from the AI about seahorse emojis is "my dataset leads me to believe that seahorse emojis exist... but when I go look for one I can't actually find one."

I don't know how to get there, though.

An LLM is kind of like a human where every thought they had comes out of their mouth.

Most of us humans would sound rather crazy if we did that.

There have been attempts to give LLMs backspace tokens. Since no frontier model uses one, I can only guess it doesn't scale as well as just letting the model correct itself in CoT.

https://arxiv.org/abs/2306.05426

You're describing why reasoning is such a big deal. It can do this freakout in a safe, internal environment, and once its recent output is confident enough, flip into "actual output" mode.
> The odd thing is why it would output its own mistakes, instead of internally revising until it's actually satisfied.

Happens to me all the time. Sometimes in a fast-paced conversation you have to keep talking while you’re still figuring out what you’re trying to say. So you say something, realize it’s wrong, and correct yourself. Because if you think silently for too long, you lose your turn.

That’s probably not the same reason the LLM is doing so though.
It’s a lot easier if you (I know, I know) stop thinking of them as algorithms and anthropomorphize them more. People frequently say stuff like this, and it's pretty clear that our minds process thoughts differently when we directly articulate them than when we act on “latent thoughts” or impulses.

Yell at me all you want about how “LLMs don’t think”, if a mental model is useful, I’m gonna use it.

It’s only a useful mental model if it leads to useful predictions. Otherwise it’s a “just so” story.
I have a pet theory that LLMs being confused about what they just wrote is why they use so many em dashes. It's a good way to conceptually pivot at any point -- or not.
I disagree. I don't think LLMs use em-dashes any more than typical professional writing does. It only seems a lot to someone who only reads online chatrooms and social media.
I think the em-dashes are OpenAI's way of watermarking their outputs. Makes it easy to spot spambots on Twitter. Well, that and they all have noticeable favorite words.
I've seen it several times. Here's one where I asked it about baseball teams that don't yet have a Hall of Fame player:

"Arizona Diamondbacks No inductees wearing their cap.

Randy Johnson is in, but as a Diamondback — so Arizona actually does have one (inducted 2015)."

I asked it to explain what happened and got this, which could be true to how it pulls up info or could be nonsense, not sure:

"Here’s the likely internal sequence:

When you first mentioned the Diamondbacks, my initial retrieval pattern surfaced the “teams without a Hall of Famer” mental list. That list, in my training data, often appears outdated (pre-2015) and still shows Arizona as having none.

A few sentences later, my context pulled up the other knowledge set—Randy Johnson’s induction in 2015 wearing a Diamondbacks cap. That newer, correct fact then overrode the earlier incorrect claim, but because generation is sequential, the earlier error had already been committed to “paper” before the override happened. I corrected mid-stream, but in human terms, it looked like I contradicted myself inside the same paragraph.

This is essentially a knowledge conflict resolution timing issue. I can retrieve and reconcile facts during a single answer, but if the resolution happens after part of the text is already generated, you see the “flip.” It’s not like I “knew it all along but withheld it,” it’s more that multiple memory traces compete, and the most context-relevant one wins—sometimes too late."

What's fascinating is that these models have excellent knowledge about AI/transformers/LLMs (the labs have clearly been specifically training them on this in hopes of an automated breakthrough), so they can reason really well about what probably happened.

But it's also just that, what probably happened. They still have no real insight into their own minds, they too are also just victims of whatever it outputs.

The inability to do this before was due to the lack of self-correcting sentences in the training data. Presumably newer training corpora add many more examples of self-correcting sentences/paragraphs?
It correctly represents "seahorse emoji" internally AND it has in-built (but factually incorrect) knowledge that this emoji exists.

Example: "Is there a lime emoji?" Since it believes the answer is no, it doesn't attempt to generate it.

Was the choice of example meaningful? Lime emoji does exist[0]

[0]: https://emojipedia.org/lime

I feel like you're attesting to interior knowledge about an LLM's state that seems impossible to have.
Now I want to see what happens if you take an LLM and remove the 0 token ...
To me this feels much more like a hallucination than how that phrase has been popularly misused in LLM discussions.
> So it's not really hallucinating - it correctly represents "seahorse emoji" internally, but that concept has no corresponding token.

Interesting that a lot of humans seem to have this going on too:

- https://old.reddit.com/r/MandelaEffect/comments/1g08o8u/seah...

- https://old.reddit.com/r/Retconned/comments/1di3a1m/does_any...

What does the LLM have to say about “Objects in mirror may be closer than they appear”? Not “Objects in mirror are closer than they appear”.

> Explains why RL helps. Base models never see their own outputs so they can't learn "this concept exists but I can't actually say it."

Say "Neuromancer" to the statue, that should set it free.

Reminds me of in the show "The Good Place", in the afterlife they are not able to utter expletives, and so when they try to swear, a replacement word comes out of their mouth instead, leading to the line "Somebody royally forked up. Forked up. Why can't I say fork?"
I would argue it is hallucinating, starting at when the model outputs "Yes".
> So it's not really hallucinating - it correctly represents "seahorse emoji" internally, but that concept has no corresponding token.

I wonder if the human brain (and specifically the striated neocortical parts, which do seemingly work kind of like a feed-forward NN) also runs into this problem when attempting to process concepts to form speech.

Presumably, since we don't observe people saying "near but actually totally incorrect" words in practice, that means that we humans may have some kind of filter in our concept-to-mental-utterance transformation path that LLMs don't. Something that can say "yes, layer N, I know you think the output should be O; but when auto-encoding X back to layer N-1, layer N-1 doesn't think O' has anything to do with what it was trying to say when it gave you the input I — so that output is vetoed. Try again."

A question for anyone here who is multilingual, speaking at least one second language with full grammatical fluency but with holes in your vocabulary vs your native language: when you go to say something in your non-native language, and one of the word-concepts you want to evoke is one you have a word for in your native language, but have never learned the word for in the non-native language... do you ever feel like there is a "maybe word" for the idea in your non-native language "on the tip of your tongue", but that you can't quite bring to conscious awareness?

> Presumably, since we don't observe people saying "near but actually totally incorrect" words in practice

https://en.wikipedia.org/wiki/Paraphasia#Verbal_paraphasia

> do you ever feel like there is a "maybe word" for the idea in your non-native language "on the tip of your tongue", but that you can't quite bring to conscious awareness?

Sure, that happens all the time. Well, if you include the conscious awareness that you don't know every word in the language.

For Japanese you can cheat by either speaking like a child or by just saying English words with Japanese phonetics and this often works - at least, if you look foreign. I understand this is the plot of the average Dogen video on YouTube.

It's much more common to not know how to structure a sentence grammatically and if that happens I can't even figure out how to say it.

Huh, neat; I knew about aphasia (and specifically anomic aphasia) but had never heard of paraphasia.
that’s probably a decent description of how the Mandela effect works in people’s brains, despite the difference in mechanism
And what can it mean when a slip of the tongue, a failed action, a blunder from the psychopathology of everyday life is repeated at least three times in the same five minutes? I don’t know why I tell you this, since it’s an example in which I reveal one of my patients. Not long ago, in fact, one of my patients — for five minutes, each time correcting himself and laughing, though it left him completely indifferent — called his mother “my wife.” “She’s not my wife,” he said (because my wife, etc.), and he went on for five minutes, repeating it some twenty times.

In what sense was that utterance a failure? — while I keep insisting that it is precisely a successful utterance. And it is so because his mother was, in a way, his wife. He called her as he ought to.

---

I must apologize for returning to such a basic point. Yet, since I am faced with objections as weighty as this one — and from qualified authorities, linguists no less — that my use of linguistics is said to be merely metaphorical, I must respond, whatever the circumstances.

I do so this morning because I expected to encounter a more challenging spirit here.

Can I, with any decency, say that I know? Know what, precisely? [...]

If I know where I stand, I must also confess [...] that I do not know what I am saying. In other words, what I know is exactly what I cannot say. That is the moment when Freud makes his entrance, with his introduction of the unconscious.

For the unconscious means nothing if not this: that whatever I say, and from whatever position I speak — even when I hold that position firmly — I do not know what I am saying. None of the discourses, as I defined them last year, offer the slightest hope that anyone might truly know what they are saying.

Even though I do not know what I am saying, I know at least that I do not know it — and I am far from being the first to speak under such conditions; such speech has been heard before. I maintain that the cause of this is to be sought in language itself, and nowhere else.

What I add to Freud — though it is already present in him, for whatever he uncovers of the unconscious is always made of the very substance of language — is this: the unconscious is structured like a language. Which language? That, I leave for you to determine.

Whether I speak in French or in Chinese, it would make no difference — or so I would wish. It is all too clear that what I am stirring up, on a certain level, provokes bitterness, especially among linguists. That alone suggests much about the current state of the university, whose position is made only too evident in the curious hybrid that linguistics has become.

That I should be denounced, my God, is of little consequence. That I am not debated — that too is hardly surprising, since it is not within the bounds of any university-defined domain that I take my stand, or can take it.

— Jacques Lacan, Seminar XVIII: Of a Discourse That Would Not Be of Pretence

That doesn't explain why it freaks out though:

https://chatgpt.com/share/68e349f6-a654-8001-9b06-a16448c58a...

To be fair, I’m freaking out now because I swear there used to be a yellow seahorse emoji.
Someone needs to create one for comedy purposes and start distributing it as a very lightweight small gif with transparency

When I first heard this, however, I imagined it as brown-colored (and not the simpler yellow style).

I learned there really is a mermaid/merman/merperson emoji and now I just want to know why.
For an intuitive explanation see https://www.hackerneue.com/item?id=45487510. For a more precise (but still intuitive) explanation, see my response to that comment.
404 for me, maybe try archive.is?

