Both can/could be bypassed with some libvirtd XML magic, but still. NVIDIA seems to be slowly stopping being assholes; AMD already started.
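For anyone curious, the "libvirtd XML magic" usually refers to hiding the hypervisor from the guest driver so it doesn't refuse to load. A sketch of the kind of workaround people mean, assuming a KVM/libvirt setup (element names are from libvirt's domain XML format; the vendor_id value is illustrative):

```xml
<features>
  <!-- Spoof the Hyper-V vendor ID so the guest driver
       doesn't recognize a known hypervisor -->
  <hyperv mode='custom'>
    <vendor_id state='on' value='randomid'/>
  </hyperv>
  <!-- Hide the KVM signature from CPUID in the guest -->
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

This goes inside the domain's `<features>` section; newer NVIDIA drivers (465+) relaxed the consumer-card passthrough check, so it's often no longer needed.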
What? Why?
For AMD the driver is difficult to find and poorly documented (and only available for ESXi, unlike NVIDIA's vGPU support for Xen, Hyper-V, KVM, Nutanix, ESXi, etc.). At least the guest drivers don't have licensing issues, unlike with NVIDIA, IIUC.
(and good luck finding a remotely recent AMD GIM driver)
It's because of this arbitrary restriction that Qubes is not able to provide GPU acceleration, which is a huge barrier to its adoption.
That "back-foot" "underdog" nVidia has the edge in the video market still... and 3x the market cap of AMD.
NVIDIA had to overclock and hustle the current generation of cards and it's looking even worse for the next generation. Software was a moat when AMD was heavily resource constrained, but now they can afford the headcount to give chase. Between the chip shortage and crypto, there was plenty of noise on top of fundamentals, but one doesn't make strategic plans based on noise.
This is all speculative, of course. I'm sure if asked they would say it was a total coincidence. Just like AMD and Intel switching places on their stance towards overclocking. Complete coincidence that it matches the optimal strategy for their market position -- "milk it" vs "give chase." Somehow it always seems to match, though, and speculation is fun :)
NVIDIA's cards were faster than AMD's even with the huge transistor-density gap that came from using the Samsung fab.
Don't get excited for the AMD graphics division up in Canada.
They are roughly at par. AMD does better at lower resolutions because of their cache setup.
With the refreshed cards, AMD is slightly ahead.
NVIDIA's top of the range chip is ahead of AMD's, and the 3080's SKU is at a lower binning point on the bell curve than the 6950's.
Hence NVIDIA would be able to maintain a performance per watt crown at the 6950's price point if it sold its highest bins cheaper.
Given the gap in transistor density, that is an exorbitant architectural delta.
An unrelated but very underrated option is the eGPU. eGPUs are external to the PC, unlike a dGPU. So you can buy a thin laptop, connect it via Thunderbolt to an RTX 3080, and enjoy faster GPU performance than any laptop on the market allows, while enjoying a thin, lightweight, silent laptop the rest of the time. Disclaimer: Thunderbolt is still a moderate limiting factor in reaching peak performance.
Wat. AMD literally invented the term 'APU' and has been shipping them since 2011. Fully unified CPU+GPU memory since 2014's Kaveri. That's a fully cache-coherent CPU & GPU, with the GPU using the same shared virtual pageable memory as the CPU.
The M1 didn't add anything new to the mix.
Not just for laptops: this sounds also a bit like what the Switch dock could have been.
(And in some sense, it reminds me of Super FX chip for the SNES.)
I feel like JUST as the "Linux X11 Discrete Graphics Scenario" started to become more stable and less (not none, but less) of an issue to set up and upgrade without getting black screens, the Linux world is now turning to a "new windowing server", i.e. Wayland, and we're starting all over again. Sigh.
Maybe the answer to having a decent and carefree discrete graphics Linux stack is to fork (Don't you dare link to the XKCD comic about 'Standards') SteamOS.
They are at least motivated (as it's part of their core product) to make it work most of the time. And they have done a boatload of good work for the Linux ecosystem. Well done, guys! :)
I like back-foot, underdog NVIDIA. Ascendant AMD hasn't drawn my ire yet; let's hope power corrupts slowly.