I've seen a mix of both for years. For the last 6 years the story has usually been some combination of "X program technically supports it, but you'll get 20% of the performance you'd expect given the hardware, and/or you'll need to jump through a bunch of hoops, and even then you'll be troubleshooting tons of random errors, and/or it'll crash the program or the whole system."
That's on top of the "nobody supports ROCm, or if they do it's because of a single person who made the PR - you're on your own, though, because none of the core contributors have AMD hardware" situation, which I'll admit is a chicken-and-egg problem.
But given these two factors, it's always meant "if you want to buy hardware to do data science, you have to go Nvidia, unless you're willing to write the support yourself and/or endure an insane headache that often results in switching to Nvidia hardware anyway."
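For what it's worth, the first thing I'd do on any new setup is a quick sanity check that the GPU backend is actually being used instead of silently falling back to CPU. A minimal sketch, assuming a PyTorch install (ROCm builds expose the GPU through the same torch.cuda namespace, with torch.version.hip set instead of torch.version.cuda):

    import torch

    # Which runtime this build of PyTorch was compiled against:
    # a ROCm build sets torch.version.hip, a CUDA build sets
    # torch.version.cuda, and a CPU-only build leaves both as None.
    print("CUDA runtime:", torch.version.cuda)
    print("HIP runtime: ", torch.version.hip)

    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        # A single large matmul is a rough smoke test that the GPU
        # path actually runs instead of erroring out or hanging.
        x = torch.randn(4096, 4096, device="cuda")
        y = x @ x
        torch.cuda.synchronize()
        print("matmul ok:", tuple(y.shape))
    else:
        print("No GPU backend found; everything will run on CPU.")

If this passes but real workloads still crawl, that's when you're in the "20% of the performance you'd expect given the hardware" territory I'm describing above.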
You are comparing Nvidia software with AMD hardware. It's the AMD software that has been lacking historically.
EDIT: Here is a discussion from 2 months ago on CUDA versus ROCm: https://www.hackerneue.com/item?id=38700060
Gave up, bought an Nvidia card, zero problems.
If CUDA is so good, it'd be great to know what it does that AMD cards can't. I've never gotten that far because I hit what seems to be some sort of kernel panic or driver lockup.