
Uh, yes? It's far from uncommon, and sometimes it's ludicrously wrong. Grammarly has been getting quite a lot of meme content lately showing stuff like that.

It is, of course, mostly very good at it, but it's very far from "trustworthy", and it tends to mirror mistakes you make.


perching_aix
Do you have any examples? The only time I noticed an LLM make a language mistake was when using a quantized model (Gemma) with my native language (so the training data pool was much smaller).
Breza
Not GP, but I've definitely seen cutting-edge LLMs make language mistakes. The most head-scratching one I've seen in the past few weeks was when Gemini Pro decided to use <em> and </em> tags to emphasize something that was not code.
