That doesn't match my experience at all. Maybe it's something to do with what your prompts are asking for or the way you're passing translations? Or the size of chunks being translated?
I have been astounded at the sophistication of LLM translation and haven't encountered a single false-friend example, ever. Maybe it depends a lot on which models you're using? Or maybe it thinks you're trying to have a conversation that code-switches mid-sentence, which is something LLMs can do if you want?
I'm using o3 and Gemini 2.5 Pro, paying for the high-tier subscriptions. The complaints I get are from native speakers -- editors and end consumers. The LLMs tend to overfit to English: they sometimes make up idioms that don't exist, use false-friend words (especially verbs), translate English idioms literally, and so on. I've translated several book-length texts now and I've seen it all.
The only thing that actually worked was knowing the target language and sitting down with multiple LLMs, going through the translation one sentence at a time with a translation memory tool wired in.
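To make that workflow concrete, here's a minimal sketch of the loop I mean: go sentence by sentence, collect candidate translations from multiple models, have a bilingual human pick or edit one, and cache approved pairs in a translation memory so they get reused. Everything here -- the in-memory TM class, the `review_sentence` step, the stand-in "models" -- is a hypothetical placeholder, not any particular tool's or vendor's API.

```python
from dataclasses import dataclass, field


@dataclass
class TranslationMemory:
    """Tiny in-memory TM: maps approved source sentences to target sentences."""
    entries: dict[str, str] = field(default_factory=dict)

    def lookup(self, source: str) -> str | None:
        return self.entries.get(source)

    def store(self, source: str, target: str) -> None:
        self.entries[source] = target


def review_sentence(source: str, candidates: list[str]) -> str:
    """Placeholder for the human step: a bilingual reviewer picks or edits
    one of the candidates. Here we just take the first one."""
    return candidates[0]


def translate_document(sentences: list[str], models: list,
                       tm: TranslationMemory) -> list[str]:
    output = []
    for sentence in sentences:
        # Reuse an already-approved translation if the TM has one.
        cached = tm.lookup(sentence)
        if cached is not None:
            output.append(cached)
            continue
        # Ask each model independently, then put the candidates in front of a human.
        candidates = [model(sentence) for model in models]
        chosen = review_sentence(sentence, candidates)
        tm.store(sentence, chosen)
        output.append(chosen)
    return output


if __name__ == "__main__":
    # Stand-in "models" so the sketch runs without any API keys.
    fake_models = [lambda s: f"[model A] {s}", lambda s: f"[model B] {s}"]
    tm = TranslationMemory()
    print(translate_document(["First sentence.", "Second sentence."], fake_models, tm))
```

The point isn't the code, it's the shape: the human review step in the middle is what actually catches the false friends and calqued idioms; the TM just keeps you from re-litigating the same sentence twice.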
The LLMs are good, but they make a lot of strange mistakes a human never would: weird grammatical adherence to English structures, false-friend mistakes that no bilingual person would make, and so on. Bizarrely, many of these would not be caught by passing the output between LLMs -- sometimes I would get _increasingly_ unnatural outputs instead of more natural ones.
This is not just English to Asian languages; it happens even with English to German or French... I shipped something to a German editor and he rewrote 50% of the lines.
LLMs are good editors and good at suggesting alternatives, but I've found that if you can't actually read your target language to some degree, you're lost in the woods.