
That might not even be an overstatement. The last few big desktop Linux crash-and-burns I've run into all had display drivers as a common component.

I like back-foot, underdog NVIDIA. Ascendant AMD hasn't drawn my ire yet, let's hope power corrupts slowly.


AMD changed their Windows drivers to not output video if they detect they're running in a VM. NVIDIA went the other way and stopped doing so.

Both can/could be bypassed with some libvirt XML magic, but still. NVIDIA seems to be slowly ceasing to be assholes; AMD has already started.
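For context, the "libvirt XML magic" usually refers to hiding the hypervisor from the guest so the driver's VM check never fires. A sketch of the commonly cited domain XML (element names from libvirt's domain format; the `vendor_id` value is arbitrary, and exact requirements vary by driver and libvirt version):

```xml
<!-- Inside the <domain> definition of the VM -->
<features>
  <hyperv>
    <!-- Override the Hyper-V vendor ID string the guest sees -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM hypervisor signature from guest CPUID -->
    <hidden state='on'/>
  </kvm>
</features>
```

This was the standard workaround for NVIDIA's old "Code 43" passthrough block; whether it defeats AMD's newer check is not guaranteed.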

>Amd changed their windows drivers to not output video if it detects its running in a VM.

What? Why?

Presumably market segmentation. You're only allowed VMs that don't feel like shit (i.e. have GPU accel) if you pay for enterprise vGPU shit. Can't have someone buy two of your GPUs to give one to a VM, obviously.
> pay for enterprise vGPU

For AMD the driver is difficult to find and poorly documented (and only available on ESXi unlike NVIDIA vGPU support for Xen, Hyper-V, KVM, Nutanix, and ESXi, etc.). At least the guest drivers don't have licensing issues unlike with NVIDIA IIUC.

And very few AMD GPUs even support it...

(and good luck finding a remotely recent AMD GIM driver)

Because the drivers for the consumer GPUs are not licensed for datacenter use, and obviously VM == datacenter.
This is a problem for Qubes OS, which has a legitimate need for vGPU on a desktop operating system.

It's because of this arbitrary restriction that Qubes is not able to provide GPU acceleration, which is a huge barrier to its adoption.

Wow, I hadn't heard that AMD added that check. Between that and their unending reset problems that makes them a completely inferior choice for GPU passthrough. Before Nvidia stopped the passthrough blocking you could make a case that AMD was a better choice.
The cycle continues
> I like back-foot, underdog NVIDIA. Ascendent AMD hasn't drawn my ire yet, let's hope power corrupts slowly.

That "back-foot" "underdog" NVIDIA still has the edge in the video market... and 3x the market cap of AMD.

It's fair to extrapolate because their strategic decisions will be based on extrapolations.

NVIDIA had to overclock and hustle the current generation of cards and it's looking even worse for the next generation. Software was a moat when AMD was heavily resource constrained, but now they can afford the headcount to give chase. Between the chip shortage and crypto, there was plenty of noise on top of fundamentals, but one doesn't make strategic plans based on noise.

This is all speculative, of course. I'm sure if asked they would say it was a total coincidence. Just like AMD and Intel switching places on their stance towards overclocking. Complete coincidence that it matches the optimal strategy for their market position -- "milk it" vs "give chase." Somehow it always seems to match, though, and speculation is fun :)

NVIDIA is well, well ahead of AMD.

NVIDIA's cards were faster than AMD's despite the huge transistor-density gap that came with the Samsung fab.

Don't get excited for the AMD graphics division up in Canada.

>NVIDIA's cards were faster than AMD's despite the huge transistor-density gap that came with the Samsung fab.

They are roughly on par. AMD does better at lower resolutions because of their cache setup.

With the refreshed cards, AMD is slightly ahead.

Keep in mind that is at a particular price point.

NVIDIA's top of the range chip is ahead of AMD's, and the 3080's SKU is at a lower binning point on the bell curve than the 6950's.

Hence NVIDIA would be able to maintain a performance per watt crown at the 6950's price point if it sold its highest bins cheaper.

Given the gap in transistor density, that is an exorbitant architectural delta.

I wish my company were in the same desperate situation as Nvidia. One where we’d be faster than the competition with similar perf/W while using a much inferior silicon process…
APUs are eating novideo's market; see e.g. the performance of the M1 iGPU.
Are APUs different from what we used to call integrated graphics cards?
The difference is getting blurry. APUs generally have better communication/latency/shared resources with the CPU. The ultimate ideal of an APU is to have unified memory with the CPU, which is the case in e.g. the PS4.

Despite progress in heterogeneous computing (the neglected HSA), in SoCs, 3D interposers, high-bandwidth bus interconnects, and 3D memory such as HBM, the PC platform has yet to see a proper APU. In fact the M1 is probably the closest thing to an ideal APU on the market. But yes, as time passes, the term iGPU increasingly denotes an APU.

AMD bought ATI because of the Fusion vision: the idea that sharing silicon, resources, and memory between the CPU and the GPU would be the future of computing.

An unrelated but very underrated thing is the eGPU. eGPUs are external to the PC, unlike a dGPU. So you can buy a thin laptop, connect it via Thunderbolt to an RTX 3080, and enjoy faster GPU performance than any laptop on the market allows, while keeping a thin, lightweight, silent laptop the rest of the time. Caveat: Thunderbolt is still a moderate limiting factor in reaching peak performance.

> the PC platform has yet to see a proper APU

Wat. AMD literally invented the term 'APU' and has been shipping them since 2011. Fully unified CPU+GPU memory since 2014's Kaveri. That's fully cache-coherent CPU & GPU, along with the GPU using the same shared virtual pageable memory as the CPU.

The M1 didn't add anything new to the mix.

It's a spectrum. I don't think that cache coherency was usable by developers/compilers. The only two ways I know of (HMM and HSA) are niche, used by nobody. GPGPU compute would GREATLY benefit from programs that can share memory between CPU and GPU without having to do needless high-latency round-trips and copies. So they failed in practice. They never did a CPU-addressable HBM interposer (despite having invented HBM), unlike what I believe the M1 is.
> An unrelated but very underrated thing is the eGPU. eGPUs are external to the PC, unlike a dGPU. So you can buy a thin laptop, connect it via Thunderbolt to an RTX 3080, and enjoy faster GPU performance than any laptop on the market allows, while keeping a thin, lightweight, silent laptop the rest of the time. Caveat: Thunderbolt is still a moderate limiting factor in reaching peak performance.

Not just for laptops: this sounds also a bit like what the Switch dock could have been.

(And in some sense, it reminds me of Super FX chip for the SNES.)

APUs are AMD-speak for CPU and GPU on the same die (Intel has similar but doesn't call them that). Integrated graphics cards (a misnomer since there is no card -- IGP or iGPU is probably more accurate) may or may not be on the same die (instead could be on the motherboard, particularly in the chipset). That design is pretty rare/antiquated at this point though. Being on the same die means higher bandwidth, lower latency, etc.
I think Intel calls them XPUs.
Integrated video cards were integrated onto the motherboard. APUs/iGPUs are integrated into the CPU.
Just had my first graphics stack issue since 2013 upgrading to Fedora 36 and was caught flat-footed. I've got multiple GPUs, so now I've got to figure out if it's Wayland, amdgpu, nouveau (since unblacklisting), or dkms. "Just working" has made me lazy.
Was in a similar boat recently. I'm not that up to date with the whole X11 vs Wayland thing, but dammit am I mad!

I feel like JUST as the "Linux X11 discrete graphics scenario" started to become more stable and less (not none, but less) of an issue to set up and upgrade without getting black screens, the Linux world is now turning to a "new windowing server", i.e. Wayland, and we're starting all over again. Sigh.

Maybe the answer to having a decent and carefree discrete graphics Linux stack is to fork (Don't you dare link to the XKCD comic about 'Standards') SteamOS.

They are at least motivated (as it's part of their core product) to make it work most of the time, and they have done a boatload of good work for the Linux ecosystem. Well done, guys! :)
