AMD benefiting from AI through this OpenAI relationship is a bit different, though. This is about using AMD hardware for training, and presumably inference, of LLMs. The users will be people consuming OpenAI APIs and services running on AMD hardware, not people writing their own custom ML applications against AMD libraries.
Maybe also worth noting that some of the world's largest supercomputers (e.g. Oak Ridge's "Frontier" exascale machine) are based on AMD AI processors. I've no idea what drivers/libraries are used to program them, but presumably they are reliable; I doubt they are using CUDA compatibility libraries.