I don't think 2 is true: when an OpenAI model won a gold medal at the math olympiad, it did so without tools or web search, just pure inference. Such a feat definitely would not have happened with o1.

Yeah, I confirmed this at the time. Neither OpenAI nor Gemini used tools as part of their IMO gold medal performances.

Here's OpenAI's tweet about this: https://twitter.com/SebastienBubeck/status/19465776504050567...

> Just to spell it out as clearly as possible: a next-word prediction machine (because that's really what it is here, no tools no nothing) just produced genuinely creative proofs for hard, novel math problems at a level reached only by an elite handful of pre‑college prodigies.

My notes: https://simonwillison.net/2025/Jul/19/openai-gold-medal-math...

They DID use tools for the International Collegiate Programming Contest (ICPC), though: https://twitter.com/ahelkky/status/1971652614950736194

> For OpenAI, the models had access to a code execution sandbox, so they could compile and test out their solutions. That was it though; no internet access.
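To picture what that kind of setup implies, here's a rough compile-and-test loop. This is purely my own illustration, not anything OpenAI has described; the sample tests and function names are made up.

    # Hypothetical sketch of a "code execution sandbox" harness: the model
    # proposes a C++ solution, the harness compiles it and runs it against
    # sample tests, and the pass/fail result is fed back. Illustrative only.
    import os
    import subprocess
    import tempfile

    SAMPLE_TESTS = [("3\n1 2 3\n", "6\n")]  # (stdin, expected stdout) pairs

    def compile_and_test(cpp_source: str) -> bool:
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "sol.cpp")
            binary = os.path.join(tmp, "sol")
            with open(src, "w") as f:
                f.write(cpp_source)
            # Compile; a real sandbox would also cap time, memory, and syscalls.
            build = subprocess.run(["g++", "-O2", "-o", binary, src],
                                   capture_output=True, text=True)
            if build.returncode != 0:
                return False
            for stdin_data, expected in SAMPLE_TESTS:
                run = subprocess.run([binary], input=stdin_data,
                                     capture_output=True, text=True, timeout=2)
                if run.stdout != expected:
                    return False
        return True

No internet access means the loop above is the only feedback channel: the model can check whether its code compiles and passes the given tests, nothing more.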

We still have next to no real information on how the models achieved the gold medal. It’s a little early to be confirming anything, especially when the main source is a Twitter thread initiated by a company known for “exaggerating” the truth.

Well, Google got the same results and the official body confirmed it. Would it be nice to know exactly how it was done? Sure, but this is something that happened.
If you're not going to believe researchers when they tell you how they did something, then sure, we don't know how they did it.

Given how much bad press OpenAI got just last week[1] when one of their execs clumsily (and I would argue misleadingly) described a model achievement and then had to walk it back amid widespread headlines about their dishonesty, those researchers have a VERY strong incentive to tell the truth.

[1] https://techcrunch.com/2025/10/19/openais-embarrassing-math/

Any company will apologize when they receive bad press. That’s basic corporate PR, not integrity.

It illustrates that there is a real risk to lying about research results: if you get caught, it's embarrassing.

It's also worth taking professional integrity into account. Even if OpenAI's culture didn't value the truth, individual researchers still care about being honest.

True, but aren't the math (and competitive programming) achievements a bit different? They're specific models heavily RL'd on competition math problems. Obviously still ridiculously impressive, but if you haven't done competition math or programming before, it involves much more memorization of techniques than you might expect, and that's much easier to RL on.
