Preferences

What a load of nonsense. The MI210 effectively hit the market in 2023, around the same time as the H100. We're talking about a datacenter-grade card that's two years old, and it's already "ancient history"?

No wonder nobody on this site trusts AMD.


fooblaster
My experience with the MI300 does not mirror yours. If I have a complaint, it's that its performance does not live up to expectations.
bluescrn
Unless you're, you know, using GPUs for graphics...

Xbox, Playstation, and Steam Deck seem to be doing pretty nicely with AMD.

MindSpunk
The number of people on this site who suddenly care about GPUs because of the explosion of LLMs, yet fail to understand that GPUs are _graphics_ processors designed for _graphics_ workloads, is insane. It almost feels like the popular opinion here is that graphics is just dead and AMD and NVIDIA should throw everything else they do in the bin to chase the LLM bag.

AMD make excellent graphics hardware, and the graphics tools are also fantastic. AMD's pricing and market positioning can be questionable but the hardware is great. They're not as strong with machine learning tasks, and they're in a follower position for tensor acceleration, but for graphics they are very solid.

_carbyau_
Just having fun with an out of context quote.

> graphics is just dead and AMD and NVIDIA should throw everything else they do in the bin to chase the LLM bag

No graphics means that games of the future will be like:

"You have been eaten by a ClautGemPilot."

almostgotcaught
The number of people on this site who think they understand modern GPUs because back in the day they wrote some OpenGL...

1. Both AMD and NVIDIA have "tensorcore" ISA instructions (i.e. real silicon/data paths, not emulation) which have zero use case in graphics (see the sketch after this list)

2. Ain't no one playing video games on MI300/H100 etc and the ISA/architecture reflects that
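
To make point 1 concrete, here is a minimal, hedged sketch of what those instructions look like from the programming side: one warp issuing a 16x16x16 half-precision multiply-accumulate through CUDA's WMMA API (nvcuda::wmma, available on sm_70 and newer), which lowers to dedicated HMMA tensor-core instructions rather than ordinary FP ALU ops. The kernel name and the fixed 16x16 tile are illustrative choices, not anything from the comment itself.

    // Illustrative only: one warp (32 threads) computes C = A*B + C on a
    // single 16x16x16 tile using tensor-core (HMMA) instructions via WMMA.
    // Assumes an sm_70+ device; build with e.g. `nvcc -arch=sm_80`.
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void wmma_tile(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

        wmma::fill_fragment(fc, 0.0f);      // zero the accumulator tile
        wmma::load_matrix_sync(fa, a, 16);  // load a 16x16 tile of A (leading dim 16)
        wmma::load_matrix_sync(fb, b, 16);  // load a 16x16 tile of B
        wmma::mma_sync(fc, fa, fb, fc);     // one warp-wide tensor-core multiply-accumulate
        wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
    }

AMD's CDNA parts expose the analogous capability as MFMA instructions, reachable through compiler intrinsics in HIP; the shape and API details differ, but the point is the same: dedicated matrix data paths in the ISA.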

> but for graphics they are very solid.

Hmmm I wonder if AMD's overfit-to-graphics architectural design choices are a source of friction as they now transition to serving the ML compute market... Hmmm I wonder if they're actively undoing some of these choices...

MindSpunk
AMD isn't overfit to graphics. AMD's GPUs were friendly to general-purpose compute well before Nvidia's were, hardware-wise anyway. AMD's memory access system and resource binding model were well ahead of Nvidia's for a long time. While Nvidia was stuffing resource descriptors into special palettes with addressing limits, AMD was fully bindless under the hood. Everything was just one big address space, descriptors and data.

Nvidia 15 years ago was overfit to graphics. Nvidia just made smarter choices, sold more hardware, and reinvested the winnings into software and improving their hardware. Now they're just as good at GPGPU, with a stronger software stack.

AMD has struggled to be anything other than a follower in the market and has suffered quite a lot as a result, even in graphics. Mesh shaders in DX12 were the result of NVIDIA dictating a new execution model that was very favorable to their new hardware, while AMD had already had a similar (but not perfectly compatible) system, called primitive shaders, since Vega.

averne_
Matrix instructions do of course have uses in graphics. One example of this is DLSS.
lomase
Imagine thinking you know more than others because you use a different abstraction layer.
