- edstarch: Only $40?
- >I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.
Shower thought: what does a typical conversation with an LLM look like? You ask it a question, or you give it a command. The model spends some time writing a large wall of text or performing some large amount of work, and probably asks some follow-up questions. Most of the output is repetitive slop, so the user scans for the direct answer to the question (or checks whether the tests pass), promptly ignores the follow-ups, and proceeds to the next task.
Then the user goes to an online forum and carries on behaving the same way: every post is instrumental, and every reply is just directing, shepherding, shaping, and cajoling the other users toward his desired end (giving him recognition and a job).
I'm probably reading too much into this one dude, but perhaps daily interaction with LLMs also changes how people interact with the other text-based entities in their lives.
- Unfortunately, his argument very often amounts to claiming that AI is not useful, that there are no customers for it, and that AI coding agents do not work...
I happen to agree with the overall sentiment (that the AI buildout is overextending the tech sector and the financial markets), but he is utterly fixated on the evils of AI and unable to admit either the current usefulness or the future potential of the technology. This does not make him look like an honest broker.
The rambling nature of his posts also makes it harder to argue against them properly, since he keeps repeating the same points over and over; some of them are decent, but there is certainly a Gish-gallop feel to the whole thing.