OpenAI rolls out its most cost-efficient small model GPT-4o mini

The new model is priced at $0.15 per million input tokens and $0.60 per million output tokens, making it much more affordable than earlier models and over 60% cheaper than GPT-3.5 Turbo.

Friday, July 19, 2024, 2 min Read

OpenAI has introduced GPT-4o mini, its most cost-efficient small model to date. The Sam Altman-led company aims to significantly broaden the range of AI applications by making intelligence more affordable.

The latest model scores 82% on MMLU (Massive Multitask Language Understanding) and currently outperforms GPT-4 on chat preferences on the LMSYS (Large Model Systems) leaderboard. It is priced at $0.15 per million input tokens and $0.60 per million output tokens, making it much more affordable than earlier models and over 60% cheaper than GPT-3.5 Turbo.
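
For a sense of what those rates mean in practice, here is a rough cost calculation using the prices quoted in the article; the token counts are hypothetical, chosen purely for illustration.

```python
# GPT-4o mini rates quoted in the article, in US dollars per million tokens.
INPUT_RATE = 0.15
OUTPUT_RATE = 0.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API call at the quoted rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Hypothetical example: a 10,000-token prompt that yields a 1,000-token reply.
print(f"${request_cost(10_000, 1_000):.4f}")  # prints $0.0021
```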

MMLU is a benchmark that tests a model's performance across a wide range of subjects. The LMSYS leaderboard ranks AI models by crowdsourced human preferences in head-to-head chat comparisons.

“Towards intelligence too cheap to meter: 15 cents per million input tokens, 60 cents per million output tokens, MMLU of 82%, and fast. Most importantly, we think people will really, really like using the new model,” Sam Altman, CEO of OpenAI, posted on X. 

Starting today, ChatGPT Free, Plus, and Team users can access GPT-4o mini in place of GPT-3.5. The company said that Enterprise users will gain access starting next week.


“GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots),” read the company’s blog post.
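
As a minimal sketch of the kind of call the company describes, the request below uses OpenAI's Python SDK; it assumes the model identifier gpt-4o-mini and an OPENAI_API_KEY set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One fast, low-cost chat completion, e.g. for a customer-support chatbot.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```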

GPT-4o mini currently supports text and vision inputs in the API. OpenAI stated that it plans to add support for text, image, video, and audio inputs and outputs in the future. It also outperforms GPT-3.5 Turbo and other small models on both textual intelligence and multimodal reasoning benchmarks.
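
A vision request goes through the same chat endpoint, with the user message carrying a list of typed content parts. The sketch below assumes the same SDK setup as above; the image URL is a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# Text-plus-image input: the message content is a list of typed parts.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)
```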

It supports the same languages as GPT-4o and excels in function calling, enabling developers to create applications that interact with external systems. 
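
Function calling here maps to the API's tools parameter, through which the model returns structured arguments for a developer-defined function instead of free text. The get_weather definition below is hypothetical, included only to show the shape of such a request.

```python
from openai import OpenAI

client = OpenAI()

# A hypothetical external function the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Bengaluru?"}],
    tools=tools,
)

# If the model chose to call the function, its arguments arrive as JSON.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```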


Edited by Kanishk Singh