Before water sanitization technology, we had no way of sanitizing water at scale.
Before LLMs, we could still write software. Arguably we were collectively better at it.
The reality is that there's not a single critical component anywhere that is built on LLMs. There's no real reliance on the models, and ChatGPT being down has absolutely no impact on anything besides teenagers not being able to cheat on their homework and LLM wrappers not being able to wrap.
It's going to take a while for those new expectations to develop, and they won't develop evenly. Even today there's plenty of low-hanging fruit: roles and businesses that aren't exploiting what anyone here would recognize as obvious opportunities for automation, where the main benefit accruing to the one guy in the office who knows how to cheat with Excel and VBA is that he gets to slack off most of the time. But there certainly are places where the people in charge expect more, and are quick to perceive when and how much that bar can be raised. They don't care if you're cheating, but you'll need to keep up with the people who are.
Remember that there are billion-dollar use cases where being correct is not important: shopping recommendations, advertising, search results, image captioning, and so on. All of these have humans consuming the output, and LLMs can play a useful role as productivity boosters.
His point is that the world is RELIANT on GenAI. This isn't true.
I don't think his point was that LLMs are as crucial as the power grid, or even close. He's just saying that he finds the comparison interesting, for whatever reason. If you find it stupid instead, that's okay.
My company uses GenAI heavily across many projects. Would it have some impact if all the models suddenly stopped working? Sure. But the on-calls wouldn't even get paged.
This, if anything, should be a huge red flag.