The training sets of most LLMs contain a copious amount of content from Libgen (or now: Anna's Archive), where em dashes are frequently used in literary writing.
Who the hell knows how the initial biases of LLMs arose.
My IRC name (gmaxwell) is a token in the GPT3 tokenizer.
It's used a lot in formal writing (academic papers, books, etc.), which is probably a large portion of ChatGPT's training data. If the RLHF was done by professional writers, then it was probably additionally biased toward using them.
People are more casual on the web. It's sort of like how people can often tell it's me in IM even without my name, because I use periods properly, which is unusual in that medium. ChatGPT is so correct it feels robotic.
It’s the most likely explanation, I believe. I have no idea about the content distribution of the training data, but I would have assumed Twitter and Reddit content would completely dwarf the literary content. Somewhat reassuring if that’s indeed not the case!
It isn't about wide use. It's about a character that almost no one enters explicitly. Nearly all uses come from copy-paste, or from inadvertent/unintended conversion by an application such as Microsoft Word, which converts straight quotes to smart quotes, double hyphens to em dashes, etc. In that respect, an AI is behaving just like a real human: a human author does not, and most likely would not, see a purpose in adding an em or en dash to any text, unless it was an article about em or en dashes, or they knew the person they were writing to uses them.
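The inadvertent-conversion point above can be illustrated with a toy sketch. This is a hypothetical helper, not Microsoft Word's actual logic: a word processor silently rewrites what the user typed into typographic characters, so the em dash ends up in the text without anyone explicitly entering it.

```python
# Toy sketch of word-processor "autocorrect" substitution (hypothetical,
# not any real application's code). The user types only ASCII; the
# typographic characters appear via silent replacement.
def autocorrect(text: str) -> str:
    text = text.replace("--", "\u2014")    # double hyphen -> em dash
    text = text.replace('"', "\u201c", 1)  # first straight quote -> opening curly
    text = text.replace('"', "\u201d")     # remaining quotes -> closing curly
    return text                            # (handles only one quote pair; it's a sketch)

print(autocorrect('He said "wait--no" quietly.'))
```

The user never pressed a key for U+2014, yet the saved document (and any training corpus scraped from it) contains a genuine em dash.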
* If it was not widely used before, where/how did (chat)GPT pick it up?
(edit: formatting)