
Is AI energy consumption a stable 24x7 kind of thing? Inference load obviously changes with consumer traffic, so it will have a daily rhythm. But do the large providers use the rest of the capacity for training? Or are those separate clusters?

If it's a stable 24x7 load, it would be ideal for nuclear energy, which is low carbon but slow to adapt to changes in demand.


Frontier LLM training can take months for a single run, which is about as stable as a load gets.
Might make sense to scale the load by following electricity supply/prices though?

Stating that as a genuine question, since I'm not sure how the math works out at that scale; you have to weigh it against hardware depreciation, of course.
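
As a very rough sketch of that trade-off (every number below is a made-up placeholder, not a real GPU or electricity price): pausing a run during expensive hours saves the price spread, but burns an hour of amortized capex on idle hardware, since the run just takes longer on the same machines.

    # Back-of-envelope: is pausing training during peak-price hours worth it?
    # Every number below is an illustrative assumption, not a real figure.

    GPU_CAPEX = 30_000.0       # $ per accelerator (assumed)
    DEPRECIATION_YEARS = 4     # straight-line depreciation horizon (assumed)
    POWER_KW = 1.0             # draw per accelerator incl. cooling (assumed)

    CHEAP_PRICE = 0.04         # $/kWh off-peak (assumed)
    PEAK_PRICE = 0.20          # $/kWh on-peak (assumed)

    # Owning the GPU costs this much per hour whether or not it is running.
    depreciation_per_hour = GPU_CAPEX / (DEPRECIATION_YEARS * 365 * 24)

    # Pausing during a peak hour shifts the work to a cheap hour, so the
    # saving is only the price difference.
    saved_electricity = POWER_KW * (PEAK_PRICE - CHEAP_PRICE)

    print(f"depreciation while idle: ${depreciation_per_hour:.3f}/h")
    print(f"electricity saved by shifting: ${saved_electricity:.3f}/h")
    if depreciation_per_hour > saved_electricity:
        print("-> idle hardware costs more than the power it would burn")
    else:
        print("-> price-following wins at these numbers")

At these placeholder numbers the depreciation (~$0.86/h) swamps the price spread (~$0.16/h), which lines up with the reply below.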

It does not.

Power purchase agreements are priced differently and are usually written to guarantee power at a predictable price; think of it like reserved instances versus spot on the cloud. The bulk of workloads don't care about or benefit from spot pricing.

Also, modern neoclouds have captive non-grid sources like gas or diesel plants, for which grid demand has no impact on cost. These sources are not cheap, but DC operators don't have much choice, as getting grid capacity takes years. Even gas turbines are difficult to procure these days, so we hear of funky sources like jet engines.

It’s way more nuanced than this.

It's not like the energy grid takes a dip every time you ask GPT a question. Data centers have massive power draw, but they also have battery backup systems, which, along with inverters and all sorts of other power equipment on site, are the primary drivers of stable power. The fact that we are building out more data centers means we need more power, and the energy marketplace has only so much spare capacity (in various forms) before it too is depleted. So you bring on more power plants, more reactors, more solar farms, moar powah!

No, what is sad is that we have the ability to turn every roof, every window, every side wall into a power source and yet we choose not to.

(I wrote a demand response energy grid “manipulation” platform)
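
For anyone curious what demand response means mechanically, here's a minimal sketch of the core loop, not the parent's actual platform: a controller polls some grid signal (price here; frequency in some schemes) and sheds deferrable load when it crosses a threshold. The feed, threshold, and load-control hooks are all placeholder assumptions.

    import time

    PRICE_SHED_THRESHOLD = 0.15   # $/kWh above which we curtail (assumed)
    POLL_SECONDS = 60

    def read_grid_price() -> float:
        """Placeholder: a real platform reads a utility or ISO price feed."""
        return 0.08

    def set_deferrable_load(enabled: bool) -> None:
        """Placeholder: e.g. pause batch training jobs or delay heater cycles."""
        print("deferrable load", "ON" if enabled else "SHED")

    def run_controller() -> None:
        curtailed = False
        while True:   # simple polling loop; a real platform is event-driven
            price = read_grid_price()
            if price > PRICE_SHED_THRESHOLD and not curtailed:
                set_deferrable_load(False)   # shed while power is scarce/expensive
                curtailed = True
            elif price <= PRICE_SHED_THRESHOLD and curtailed:
                set_deferrable_load(True)    # restore when the grid relaxes
                curtailed = False
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        run_controller()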

Training on-demand, using spare GPU capacity, is an interesting concept.
Maybe in the future we'll be making tankless water heaters out of GPUs so they can kick on when there's demand for heat.
Inference runs on different, more lightweight hardware, but it can live in the same data center. Training requires beefier GPUs.
