Preferences

Machine translation is a great example. It's also where I expect AI coding assistants to land. A useful tool, but not some magical thing that is going to completely replace actual professionals. We're at least one more drastic change away from that, and there's no guarantee anyone will find it any time soon. So there's not much sense in worrying about it.

A very similar story has been playing out in radiology for the past decade or so. Tech folks assume that small-scale examples of super-accurate AIs mean radiologists will no longer be needed, but in practice the demand for imaging has grown while people have been scared to join the field. The efficiencies from AI haven't been enough to bridge the gap, resulting in a radiologist _shortage_.


throwaway2037

    > Machine translation is a great example. It's also where I expect AI coding assistants to land. A useful tool, but not some magical thing that is going to completely replace actual professionals.
I can say from experience that machine translation is light-years ahead of where it was 15 years ago. When I started studying Japanese 15 years ago, Google Translate (and every other free translator) was absolutely awful. It was so bad that it struggled to translate basic sentences into reasonable native-level Japanese. Fast forward to today, and it is stunning how good Google Translate is. From time to time, I even learn slang from it. If I am writing a letter, I regularly use it to "fine-tune" my Japanese. To be clear: my Japanese is far from native-level, but I can put full, complex sentences into Google Translate (I recommend viewing "both directions"), and I get a reasonable, native-sounding translation. I have tested the outputs with multiple native speakers and they agree: "It is imperfect, but excellent; the meaning is clear."

In the last few years, using only my primitive knowledge of Japanese (and Chinese -- which helps a lot with reading/writing) and the help of Google Translate, I have been able to fill out complex legal and tax documents. When I walk into a gov't office as the only non-Asian person, I still get a double take, but then they review my slightly-less-than-perfect submission and proceed without issue. (Hat tip to all of the Japanese civil servants who have diligently served me over the years.)

Hot take: Except for contracts and other legal documents, being an "actual professional" (translator) is a dead career at this point.

Alex-Programs
Also, Google Translate is really not a particularly good translator. It has the most public knowledge, but as far as translators go it's pretty poor.

DeepL is a step up, and modern LLMs are even better. There's some data here[0], if you're curious - DeepL is beaten by 24B models, and dramatically beaten by Sonnet / Opus / https://nuenki.app/translator .

[0] https://nuenki.app/blog/claude_4_is_good_at_translation_but_... - my own blog

rstuart4133
> Hot take: Except for contracts and other legal documents, "actual professionals" (translators) is a dead career at this point.

Quote from article:

> it turns out the number of available job opportunities for translators and interpreters has actually been increasing.

Workaccount2
Just a note on the radiologist part: the current SOTA radiology AI is still tiny-parameter CNNs from the mid-to-late 2010s running locally. The NYT ran an article a few weeks ago about this, and the entire article uses the phrase "A.I.", which people assume means ChatGPT, but which can really refer to anything in the last 60 years of A.I. research. Manual digging revealed it was an old architecture.

We don't know yet how a modern transformer trained on radiology would perform, but it almost certainly would be dramatically better.

demosthanos
> We don't know yet how a modern transformer trained on radiology would perform, but it almost certainly would be dramatically better.

Why? Is there something about radiology that makes the transformer architecture appropriate?

My understanding has been that transformers are great for sequences of tokens, but from what little I know of radiology sequence-of-tokens seems unlikely to be a useful representation of the data.
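For what it's worth, Vision Transformers sidestep this by slicing an image into fixed-size patches and treating each flattened patch as a token, so "sequence of tokens" can apply to images too. A minimal sketch of the patchification step (toy sizes, and `patchify` is my own illustrative helper, not any real library's API):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an H x W x C image into a sequence of flattened patches,
    the way ViT-style models build their input token sequence."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    return (image
            .reshape(h // patch, patch, w // patch, patch, c)
            .transpose(0, 2, 1, 3, 4)          # group patches together
            .reshape(-1, patch * patch * c))   # (num_patches, patch_dim)

scan = np.zeros((224, 224, 1))  # a toy grayscale "scan"
seq = patchify(scan)
print(seq.shape)                # (196, 256): 196 tokens of 256 dims each
```

Each of those 196 patch vectors then gets a learned linear projection and a position embedding before entering the transformer, but the point is just that a 2D image reduces to a token sequence quite naturally.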

On the surface, radiology looks like an image classification problem. That's something small NNs could already do 15 years ago. But it's probably not all there is to it.
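The core operation behind those small classifier networks is just a stack of 2D convolutions over the pixels. A toy single-filter version (pure NumPy, illustrative only -- real models stack many learned filters plus nonlinearities):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D cross-correlation with 'valid' padding -- the building
    block a small CNN applies repeatedly before classifying."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])      # trivial horizontal-edge detector
img = np.zeros((4, 4))
img[:, 2:] = 1.0                     # dark-to-bright boundary at column 2
response = conv2d(img, edge)         # strong response along the boundary
```

Stacking such filters finds local patterns, which is why classification ("lesion / no lesion") was tractable early, while open-ended reasoning over a scan is a different problem entirely.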

I don't know exactly what people picture when they think about AI and radiology, but I can certainly imagine it going beyond that. What if you could feed a transformer the literature on how the body works and on diseases, so that it can _analyse_ (not just _classify_) scans, applying some degree of, let's call it, creativity?

That second thing, if technically feasible, confabulations and all, has the potential to replace radiologists -- maybe, if you're optimistic. Simple image classification probably doesn't, but it sounds like a great help for making sure they don't miss anything, prioritising what to look at, and so on.

> What if you can feed a transformer with literature on how the body works and diseases, so that it can _analyse_ (not just _classify_) scans, applying some degree of, let's call it, creativity?

Then it would make up plausible-sounding nonsense, just like it does in all other applications, but it would be particularly dangerous in this one.

TechDebtDevin
That wouldn't be much different from the current CNN/labeling methods used on medical imaging. Last time I got a CT scan, the paperwork listed the workstation specs and the models/neural-network techniques used.
