Here's the tweet from OpenAI's Sébastien Bubeck about this: https://twitter.com/SebastienBubeck/status/19465776504050567...
> Just to spell it out as clearly as possible: a next-word prediction machine (because that's really what it is here, no tools no nothing) just produced genuinely creative proofs for hard, novel math problems at a level reached only by an elite handful of pre‑college prodigies.
My notes: https://simonwillison.net/2025/Jul/19/openai-gold-medal-math...
They DID use tools for the International Collegiate Programming Contest (ICPC) one though: https://twitter.com/ahelkky/status/1971652614950736194
> For OpenAI, the models had access to a code execution sandbox, so they could compile and test out their solutions. That was it though; no internet access.
Given how much bad press OpenAI got just last week[1] when one of their execs clumsily (and I would argue misleadingly) described a model achievement and then had to walk it back amid widespread headlines about their dishonesty, those researchers have a VERY strong incentive to tell the truth.
[1] https://techcrunch.com/2025/10/19/openais-embarrassing-math/
It's also worth taking professional integrity into account. Even if OpenAI's culture didn't value the truth, individual researchers would still care about being honest.