OpenAI has released two new simulated reasoning models, o3 and o4-mini, which combine step-by-step reasoning with access to functions like web browsing and coding. These models mark the first time OpenAI's reasoning-focused models can use every ChatGPT tool simultaneously, including visual analysis and image generation.

OpenAI announced o3 in December, and until now, only less capable derivative models named "o3-mini" and "o3-mini-high" have been available. The new models are rolling out to paid ChatGPT subscribers starting today, with Enterprise and Edu customers gaining access next week. Free users can try o4-mini by selecting the "Think" option before submitting queries.
OpenAI CEO Sam Altman tweeted that "we expect to release o3-pro to the pro tier in a few weeks." For developers, both models are available starting today through the Chat Completions API and Responses API, though some organizations will need verification for access.

"These are the smartest models we've released to date, representing a step change in ChatGPT's capabilities for everyone from curious users to advanced researchers," OpenAI claimed on its website.
OpenAI says the models offer better cost efficiency than their predecessors, and each comes with a different intended use case: o3 targets complex analysis, while o4-mini, a smaller version of the not-yet-released next-generation SR model "o4," is optimized for speed and cost efficiency.
OpenAI says o3 and o4-mini are multimodal, featuring the ability to "think with images."
What sets these new models apart from OpenAI's other models (like GPT-4o and GPT-4.5) is their simulated reasoning capability, which uses a simulated step-by-step "thinking" process to solve problems. Additionally, the new models dynamically determine when and how to deploy tools to solve multistep problems.
For example, when asked about future energy usage in California, the models can autonomously search for utility data, write Python code to build a forecast from that data, and generate a chart to present the results.
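To illustrate the kind of code such a query might produce, here is a simple linear-trend forecast over hypothetical annual usage figures. The numbers below are illustrative placeholders, not real utility data, and a model's actual generated code would work from whatever data it retrieved.

```python
# Illustrative sketch of the kind of forecasting code a model might
# write for a query like "future energy usage in California."
# The figures below are placeholder values, not real utility data.

years = [2020, 2021, 2022, 2023, 2024]
usage_twh = [272, 277, 281, 287, 292]  # hypothetical annual usage in TWh

# Fit a simple linear trend via least squares, then extrapolate.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(usage_twh) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(years, usage_twh))
    / sum((x - mean_x) ** 2 for x in years)
)
intercept = mean_y - slope * mean_x

forecast_2030 = slope * 2030 + intercept
print(f"Projected 2030 usage: {forecast_2030:.1f} TWh")
```

A real agentic run would pair code like this with a web search step to fetch current data and a plotting step to chart the result; the snippet above only shows the numerical core.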