> Machine translation is a great example. It's also where I expect AI coding assistants to land. A useful tool, but not some magical thing that is going to completely replace actual professionals.
I can say from experience that machine translation is light years ahead of where it was 15 years ago. When I started studying Japanese 15 years ago, Google Translate (and every other free translator) was absolutely awful. It struggled to translate even basic sentences into reasonable, native-level Japanese. Fast forward to today, and it is stunning how good Google Translate is. From time to time I even learn new slang from it. If I am writing a letter, I regularly use it to "fine-tune" my Japanese. To be clear: my Japanese is far from native level, but I can put full, complex sentences into Google Translate (I recommend checking both directions), and I get a reasonable, native-sounding translation. I have tested the outputs with multiple native speakers, and they agree: "It is imperfect, but excellent; the meaning is clear."

In the last few years, using only my primitive knowledge of Japanese (and Chinese, which helps a lot with reading and writing) and the help of Google Translate, I have been able to fill out complex legal and tax documents. When I walk into a gov't office as the only non-Asian person, I still get a double take, but then they review my slightly-less-than-perfect submission and proceed without issue. (Hat tip to all of the Japanese civil servants who have diligently served me over the years.)
Hot take: Except for contracts and other legal documents, translation as a career for "actual professionals" is dead at this point.
DeepL is a step up, and modern LLMs are even better. There's some data here[0], if you're curious - DeepL is beaten by 24B models, and dramatically beaten by Sonnet / Opus / https://nuenki.app/translator .
[0] https://nuenki.app/blog/claude_4_is_good_at_translation_but_... - my own blog
We don't know yet how a modern transformer trained on radiology would perform, but it almost certainly would be dramatically better.
Why? Is there something about radiology that makes the transformer architecture appropriate?
My understanding has been that transformers are great for sequences of tokens, but from what little I know of radiology sequence-of-tokens seems unlikely to be a useful representation of the data.
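For what it's worth, vision transformers get around this by chopping an image into fixed-size patches and treating each flattened patch as one "token", so a 2D scan does become a sequence after all. Here's a minimal sketch of that patchification step (toy sizes, plain NumPy, not any real radiology pipeline):

```python
import numpy as np

def patchify(image, patch_size):
    """Split a 2D image into a sequence of flattened patches, ViT-style."""
    h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (image
            .reshape(h // patch_size, patch_size, w // patch_size, patch_size)
            .transpose(0, 2, 1, 3)               # group rows/cols of patches
            .reshape(-1, patch_size * patch_size))  # (num_patches, patch_dim)

# A toy 512x512 "scan" becomes a sequence of 1024 tokens of 256 values each;
# each token would then be linearly projected and fed to the transformer.
scan = np.zeros((512, 512))
tokens = patchify(scan, 16)
print(tokens.shape)  # (1024, 256)
```

So the architecture question isn't really "can an image be a token sequence" (it can), but whether that representation captures what matters in a radiology scan.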
I can only imagine what people picture when they think about AI and radiology, but I can certainly imagine it going beyond that. What if you could feed a transformer the literature on how the body works and on diseases, so that it can _analyse_ (not just _classify_) scans, applying some degree of, let's call it, creativity?
That second thing, if it's technically feasible (confabulations and all), has the potential to replace radiologists, maybe, if you're optimistic. Simple image classification probably doesn't, but it sounds like a great way to make sure they don't miss anything, to prioritise what to look at, and so on.
Then it would make up plausible-sounding nonsense, just like it does in all other applications, but it would be particularly dangerous in this one.
A very similar story has been playing out in radiology for the past decade or so. Tech folks think that small-scale examples of super-accurate AIs mean radiologists will no longer be needed, but in practice the demand for imaging has grown while people have been scared off from joining the field. The efficiencies from AI haven't been enough to bridge the gap, resulting in a radiologist _shortage_.