Clock speeds haven't really gone up anymore, but computation has still gotten considerably faster. From an i7-2700K (2011) to an i7-13700K, single-core benchmark scores went up 131%:
https://cpu.userbenchmark.com/Compare/Intel-Core-i7-2700K-vs...
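For a sense of scale, here's the back-of-the-envelope version (a quick sketch, taking the 131% figure from the comparison above and the 2011/2022 launch years as given):

    # What annual rate does a 131% single-core gain over ~11 years work out to?
    ratio = 2.31   # 13700K single-core score / 2700K single-core score
    years = 11     # 2011 -> 2022

    cagr = ratio ** (1 / years) - 1
    print(f"~{cagr:.1%} per year single-core improvement")        # ~7.9% per year

    # For contrast, "performance doubles every 2 years" would be:
    doubling = 2 ** (1 / 2) - 1
    print(f"~{doubling:.1%} per year at a 2-year doubling cadence")  # ~41.4% per year

So single-core did keep improving, just at a fraction of the old doubling cadence.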
Second, over that time period we've had a lot of changes in tooling. Some make code faster; most make code slower. (Examples include the spread of containerization and the adoption of slow languages like Python.) The result is that programs doing equivalent work might wind up actually faster or slower, no matter what a CPU benchmark shows.
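A minimal sketch of the tooling point, with NumPy just standing in for "a compiled path" (exact ratios vary a lot by machine):

    import time
    import numpy as np

    N = 10_000_000

    # Same computation twice: a pure-Python loop vs. NumPy's compiled kernel.
    start = time.perf_counter()
    total = 0.0
    for i in range(N):
        total += i * i
    python_time = time.perf_counter() - start

    x = np.arange(N, dtype=np.float64)
    start = time.perf_counter()
    total_np = float(np.dot(x, x))
    numpy_time = time.perf_counter() - start

    print(f"pure Python: {python_time:.2f}s, NumPy: {numpy_time:.3f}s, "
          f"ratio ~{python_time / numpy_time:.0f}x")

Interpreter overhead of that size dwarfs a decade of single-core hardware gains for a hot loop.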
Jensen aims to keep charging more for more GPU computing power going forward.
This is because Nvidia has close to monopoly power and is thus able to break Moore's Law single-handedly.
IMO if Intel stays in the game, it'll be sorted out within a few years.
Nvidia may have good software, but people like paying less money.
Paying less will win out.
Unlike in gaming, in the data center initial cost + performance per watt are the only things that really matter (besides software; Nvidia has a huge moat there). And relative to how much Nvidia is charging per GPU, total power costs are close to zero. So four 'worse' but much cheaper chips might be a better deal than buying an A100/H100 etc.
That's not true from a hardware perspective either. You can't just plug four worse cards into the same rack. The savings on the graphics cards become less significant if you need to double or quadruple all the other hardware to increase the number of racks. A 1U blade can easily cost $10,000 without a graphics card.
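To put rough numbers on both sides of this (a toy sketch; every figure below is an assumption, not a quote):

    HOURS_3Y = 3 * 365 * 24
    PRICE_PER_KWH = 0.10   # assumed electricity price, $/kWh

    def costs(gpu_price, gpus, watts_per_gpu, hosts, host_price=10_000):
        """Hardware plus 3-year power; host_price echoes the ~$10k 1U figure."""
        hardware = gpu_price * gpus + host_price * hosts
        power = (watts_per_gpu * gpus / 1000) * HOURS_3Y * PRICE_PER_KWH
        return hardware, power

    # Scenario A: one H100-class card (assumed ~$30k, ~700 W) in one host.
    hw_a, pw_a = costs(gpu_price=30_000, gpus=1, watts_per_gpu=700, hosts=1)
    # Scenario B: four cheaper cards (assumed ~$5k, ~350 W each) across two hosts.
    hw_b, pw_b = costs(gpu_price=5_000, gpus=4, watts_per_gpu=350, hosts=2)

    print(f"A: hardware ${hw_a:,}, 3y power ~${pw_a:,.0f}")   # $40,000 / ~$1,840
    print(f"B: hardware ${hw_b:,}, 3y power ~${pw_b:,.0f}")   # $40,000 / ~$3,679

Power really is a small slice of either bill, but the extra hosts eat a chunk of the savings from the cheaper cards.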
For anything datacenter-related, customers are very sensitive to price-to-performance. And datacenters are happy to oblige.
Of course data centers are happy to oblige customer demands, but initial cost per GPU and performance per watt are certainly not the only relevant factors.
OTOH it could be said that GPUs just look like they kept improving longer than CPUs because their workload is vastly more amenable to parallelism.
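A quick Amdahl's-law sketch of why that matters (just the textbook formula, no GPU specifics assumed):

    # Speedup from n parallel units when a fraction p of the work parallelizes.
    def amdahl_speedup(p, n):
        return 1 / ((1 - p) + p / n)

    for p in (0.50, 0.99):
        line = ", ".join(f"{n} units -> {amdahl_speedup(p, n):.1f}x"
                         for n in (4, 64, 1024))
        print(f"parallel fraction {p:.0%}: {line}")

    # 50%: 4 -> 1.6x, 64 -> 2.0x, 1024 -> 2.0x
    # 99%: 4 -> 3.9x, 64 -> 39.3x, 1024 -> 91.2x

Graphics and ML workloads sit near the 99%+ end, so throwing more parallel hardware at them keeps paying off long after single-thread latency has flattened.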
And, well, Nvidia is almost a monopoly at this point, so they have barely any incentive to keep innovating as opposed to trying to extract as much money as possible from their clients.
On the other hand, look at what happened with CPUs over the last few years: huge improvements in efficiency (including from Intel).
> hasn't really improved in the last 15 years.
I don't think that's even close to being the case for most use cases. Increasing complexity/bloat has obscured that to a large degree, though.
This is interesting to me: how did they end up like this?
On desktop, anyway. Mobile is a different story: back in the mid-2000s they had everything needed to dominate the market for the next 10+ years (e.g. the fastest ARM chips), yet chose not to, for whatever reasons.
Intel, by contrast, says that Moore's Law is still alive. But Intel is technologically behind, and it is easier to improve when there is someone to learn from, so maybe there is a wall that they haven't yet hit.
Regardless, it is a very different law than when I was young, when code just magically got faster each year. Now we can run more and more things at once, but the timing for an individual single-threaded computation hasn't really improved in the last 15 years.