kamikazeturtles
There's a huge price difference between o3-mini and o1 ($4.40 vs $60 per million output tokens), what trade-offs in performance would justify such a large price gap?

Are there specific use cases where o1's higher cost is justified anymore?


arthurcolle
It's the same pattern as:

gpt-3.5 -> gpt-4 (gpt-4-32k premium)

"omni" announced (multimodal fusion, initial promise of gpt-4o, but cost effectively distilled down with additional multimodal aspects)

gpt-4o-mini -> gpt-4o (multimodal, realtime)

gpt-4o + "reasoning" exposed via tools in ChatGPT (you can see it in export formats) -> "o" series

o1 -> o1 premium / o1-mini (equivalent of gpt-4 "god model" becoming basis for lots of other stuff)

o1-pro-mode, o1-premium, o1-mini; somewhere in there is the "o1-2024-12-17" model, without streaming but with function calling, structured outputs, and vision

now, distilled o1-pro-mode is probably o3-mini and o3-mini-high-mode (the naming is becoming just as bad as Android's)

It's the same cycle each time: take a model, scale it up, run evals, detect inefficiencies, retrain, scale, distill, see what's not working. When you find a good little zone on the efficiency frontier, release it with a cool name.
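For the "distill" step in that cycle, a minimal sketch of the standard knowledge-distillation objective (nothing here reflects OpenAI's actual internals; the logits and temperature are illustrative): train the small student to match the large teacher's temperature-softened output distribution via KL divergence.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T yields softer targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions -- the core
    objective when compressing a big model into a smaller one."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss;
# the further its logits drift, the larger the KL term.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))              # 0.0
print(distillation_loss(teacher, [1.0, 1.0, 1.0]) > 0)  # True
```

In practice this term is mixed with the ordinary hard-label loss, but the soft targets are what let the student inherit the teacher's behavior cheaply.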

anticensor
No, o3-mini is a distillation of (not-yet-released) o3, not a distillation of o1.
arthurcolle
o1-"pro mode" could just be o3
anticensor
It's not that either, benchmarks list the two as separate models.
arthurcolle
thank you!
benatkin
> Are there specific use cases where o1's higher cost is justified anymore?

Long-tail stuff, perhaps. Most real work doesn't resemble a programming benchmark. A newer model can thrive despite being small when there is a lot of training data, and programming benchmarks, like chess, have a lot of it, in part because high-quality training data can be synthesized.

zamadatix
Not really, it'll also be replaced by a newer o3 series model in short order.
