The business question is: what if AI works about as well as it does now for the next decade or so? No worse, maybe a little better in spots. What does the industry look like? Nvidia and TSMC are telling us that price/performance isn't improving through at least 2030. Hardware is not going to save us in the near term. Major improvement has to come from better approaches.
Sutskever: "I think stalling out will look like…it will all look very similar among all the different companies. It could be something like this. I’m not sure because I think even with stalling out, I think these companies could make a stupendous revenue. Maybe not profits because they will need to work hard to differentiate each other from themselves, but revenue definitely."
Somebody didn't get the memo that the age of free money at zero interest rates is over.
The "age of research" thing reminds me too much of mid-1980s AI at Stanford, when everybody was stuck, but they weren't willing to admit it. They were hoping, against hope, that someone would come up with a breakthrough that would make it work before the house of cards fell apart.
Except this time everything costs many orders of magnitude more to research. It's not like Sutskever is proposing that everybody should go back to academia and quietly try to come up with a new idea to get things un-stuck. They want to spend SSI's market cap of $32 billion on some vague ideas involving "generalization". Timescale? "5 to 20 years".
This is a strange way to do corporate R&D when you're kind of stuck. Lots of small and medium-sized projects seem more promising, along the lines of Google X. The discussion here seems to lean in the direction of one big bet.
You have to admire them for thinking big. And even if the whole thing goes bust, they probably get to keep the house and the really nice microphone holder.
Somebody has to come up with an idea first. Before they share it, it is not publicly known. Ilya has previously come up with plenty of productive ideas. I don't think it's a stretch to think that he has some IP that is not publicly known.
Even seemingly simple things, like how you shuffle your training set, how you augment it, the specific architecture of the model, and so on, have dramatic effects on the outcome.
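As a toy illustration of that point (a minimal sketch of my own, not anything from the comment: plain SGD on synthetic data, where the only difference between the two runs is the shuffle seed), the final weights come out measurably different just from data ordering:

```python
import numpy as np

# Toy data: 200 points, 5 features, noisy linear labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

def train(shuffle_seed: int, epochs: int = 5, lr: float = 0.5) -> np.ndarray:
    """Plain per-sample SGD on logistic loss; only the shuffle seed varies."""
    w = np.zeros(5)
    order_rng = np.random.default_rng(shuffle_seed)
    for _ in range(epochs):
        order = order_rng.permutation(len(X))  # epoch's visitation order
        for i in order:
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w -= lr * (p - y[i]) * X[i]        # gradient step on one sample
    return w

w_a = train(shuffle_seed=1)
w_b = train(shuffle_seed=2)
print("weight gap from shuffle order alone:", np.linalg.norm(w_a - w_b))
```

Scale that sensitivity up to augmentation pipelines and architecture choices and you get the "dramatic effects" the comment is talking about.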
There are lots of ideas. Some may work.
The space in which people seem to be looking is deep learning on something other than text tokens. Yet most successes punt on feature extraction / "early vision" and just throw compute at raw pixels. That's the "bitter lesson" approach, which seems to be hitting the ceiling of how many gigawatts of data center you can afford.
Is there a useful non-linguistic abstraction of the real world that works and leads to "common sense"? Squirrels must have something; they're not verbal and have a brain the size of a peanut. But what?
Anthropic projects a lot, but it's hard to get actuals from them.[1] They're privately held, so they don't have to report actuals publicly. [1] says "Anthropic has, through July 2025, made around $1.5 billion in revenue." Even read generously, that's an annualized run rate of roughly $2.5 billion, so $26 billion for 2026 would require something like a 10x jump in a year. Seems unlikely.
This is revenue, not profit.
If you think that AGI is not possible to achieve, then you probably wouldn't be giving anyone money in this space.
"If you think that AGI is not possible to achieve, then you probably wouldn't be giving anyone money in this space." If you think other people think AGI is possible, you sell them shovels and ready yourself for a shovel market dip in the near future. Strike while the iron is hot.
If the former, no. If the latter, sure, approximately.
I think the title is interesting, because the scaling isn't about compute. At least as I understand it, what they're running out of is data, and one way they deal with this, or may deal with this, is to have LLMs running concurrently and in competition: thousands of models competing against each other to solve challenges through different approaches. Which to me suggests that the need for hardware scaling isn't about to stop.
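To make the "models in competition" idea concrete, here is a minimal best-of-N style sketch: fan a problem out to several model configurations concurrently and keep whichever answer a scoring function rates highest. `query_model` and `score_answer` are hypothetical stand-ins (stubbed here so the sketch runs), not any vendor's API.

```python
import asyncio
import random

# Hypothetical stand-in for a call to one model configuration (an assumption,
# not a real API): returns a candidate answer for the given problem.
async def query_model(config: str, problem: str) -> str:
    await asyncio.sleep(random.uniform(0.1, 0.5))  # simulate network latency
    return f"[{config}] answer to: {problem}"

# Hypothetical scorer: in practice this would be a verifier, unit tests,
# or a judge model. Here it scores at random, purely for illustration.
def score_answer(answer: str) -> float:
    return random.random()

async def best_of_n(problem: str, configs: list[str]) -> str:
    # Launch every model configuration concurrently and collect all candidates.
    candidates = await asyncio.gather(
        *(query_model(cfg, problem) for cfg in configs)
    )
    # Keep whichever candidate the scoring function likes best.
    return max(candidates, key=score_answer)

if __name__ == "__main__":
    configs = ["chain-of-thought", "tool-use", "short-answer", "debate"]
    print(asyncio.run(best_of_n("estimate 2026 data center power draw", configs)))
```

Each extra configuration multiplies the inference compute spent per problem, which is exactly why this approach keeps hardware demand growing even if the models themselves stop improving.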