OpenAI has introduced two new, smaller versions of its GPT models, GPT-5.4 mini and GPT-5.4 nano, designed for faster response times and lower costs, especially useful for workloads where quick results matter more than complex reasoning. Both models are available now.
GPT-5.4 Mini: Faster, Cheaper Coding Model
GPT-5.4 mini is aimed at developers currently using GPT-5 mini, offering more than twice the speed while nearing GPT-5.4’s performance on several benchmarks. It’s positioned as the primary choice for those who need a significant speed upgrade.
GPT-5.4 Nano: Low-Cost Model for Simple AI Tasks
GPT-5.4 nano is the smaller of the two models, tailored for tasks like classification, data extraction, ranking, and simple coding. Its design favors low latency and minimal cost, making it ideal for supporting roles.
GPT-5.4 Mini and Nano Benchmark Performance and Speed
On SWE-Bench Pro, a software engineering evaluation, GPT-5.4 mini scores 54.4% compared to 45.7% for GPT-5 mini and 57.7% for the full GPT-5.4. On OSWorld-Verified, a computer use benchmark, GPT-5.4 mini reaches 72.1%, close to GPT-5.4's 75.0% and well above GPT-5 mini's 42.0%.
GPT-5.4 nano scores 52.4% on SWE-Bench Pro and 39.0% on OSWorld-Verified, placing it between GPT-5 mini and GPT-5.4 mini on software engineering, though it trails GPT-5 mini on the computer use benchmark.
GPT-5.4 Mini and Nano Use Cases, Pricing, and Availability
OpenAI envisions GPT-5.4 mini serving as a coding assistant, supporting applications that interpret screenshots in real time, and powering multi-model systems where smaller models carry out specific tasks while larger ones plan. For example, within Codex, GPT-5.4 can delegate work to GPT-5.4 mini for tasks like codebase searches.
GPT-5.4 nano is suited for straightforward tasks such as classification and data extraction, where speed and cost efficiency are priorities.
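For the classification use case described above, a request to GPT-5.4 nano through the API would look roughly like the sketch below. It only builds the Chat Completions payload, it does not send it, and the model identifier "gpt-5.4-nano" is an assumption; check OpenAI's model list for the exact name.

```python
# Sketch: building a classification request for GPT-5.4 nano.
# The model id "gpt-5.4-nano" is assumed, not confirmed by OpenAI's docs.
import json


def build_classification_request(text: str, labels: list[str]) -> dict:
    """Build a Chat Completions payload asking the model to pick one label."""
    return {
        "model": "gpt-5.4-nano",  # assumed identifier
        "messages": [
            {
                "role": "system",
                "content": (
                    f"Classify the user's text as one of: {', '.join(labels)}. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": text},
        ],
        "temperature": 0,  # deterministic output suits classification
    }


payload = build_classification_request(
    "My refund hasn't arrived yet.", ["billing", "bug", "other"]
)
print(json.dumps(payload, indent=2))
```

Keeping the system prompt short and the expected reply to a single label plays to the model's strengths here: low latency and low per-token cost.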
GPT-5.4 mini supports text and image inputs, tool use, function calling, web search, file search, and a 400,000-token context window. In Codex, it uses about 30% of the standard GPT-5.4 quota, making it roughly a third of the cost for simpler tasks. In ChatGPT, GPT-5.4 mini is accessible to Free and Plus users through the model selector’s Thinking feature, with Plus plans offering fallback support if GPT-5.4 Thinking is unavailable.
Meanwhile, GPT-5.4 nano is currently only available via API, priced at $0.20 per million input tokens and $1.25 per million output tokens, and is not yet accessible in ChatGPT or Codex.
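To put the announced nano pricing in perspective, here is a minimal back-of-the-envelope cost estimator using the figures above ($0.20 per million input tokens, $1.25 per million output tokens). The example workload numbers are illustrative, not from OpenAI.

```python
# Estimate GPT-5.4 nano API costs from the announced per-token prices.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.25  # USD per 1M output tokens


def nano_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in US dollars for a given token count."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + (
        output_tokens / 1_000_000
    ) * OUTPUT_PRICE_PER_M


# Illustrative batch: 10,000 extraction calls, each ~1,500 input tokens
# and ~200 output tokens.
total = nano_cost_usd(10_000 * 1_500, 10_000 * 200)
print(f"${total:.2f}")  # prints "$5.50"
```

At these rates, even a fairly large extraction batch stays in single-digit dollars, which is the kind of workload the nano model is positioned for.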
The post OpenAI Launches Smaller GPT Models: GPT-5.4 Mini and Nano appeared first on gHacks.