OpenAI Launches GPT-4o: Faster, Multimodal, and Developer-Friendly

OpenAI has unveiled GPT-4o, a new iteration of the GPT-4 model that powers its flagship product, ChatGPT.

Announced by OpenAI Chief Technology Officer (CTO) Mira Murati during a livestream on Monday, GPT-4o promises significant improvements in speed and capabilities across text, vision, and audio.

The Verge reports, “The new model will be free for all users, with paid users enjoying up to five times the capacity limits of free users.”

According to a blog post from OpenAI, GPT-4o’s enhanced capabilities will be introduced incrementally, starting with its text and image functionalities, which are available in ChatGPT.

Chief Executive Officer (CEO) Sam Altman highlighted that GPT-4o is “natively multimodal,” enabling it to generate content and understand commands in voice, text, and images.

“Developers will be able to access the GPT-4o API, which is priced at half the cost and operates at twice the speed of GPT-4 Turbo,” Mr Altman noted on X (formerly Twitter).
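For developers curious what that access looks like in practice, here is a minimal sketch of calling the model through OpenAI’s Chat Completions API, assuming the official `openai` Python SDK (v1+), an `OPENAI_API_KEY` environment variable, and the “gpt-4o” model name; the image URL is a placeholder, not from the announcement:

```python
# Minimal sketch: calling GPT-4o via OpenAI's Chat Completions API.
# Assumes the official `openai` Python SDK (v1+) is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # GPT-4o is natively multimodal: a single message can mix
            # text and image parts.
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # Placeholder URL for illustration only.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The request shape is the same as for GPT-4 Turbo; in principle, switching the `model` parameter is all an existing integration needs to pick up the lower pricing and faster responses Mr Altman described.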

The update also brings new features to ChatGPT’s voice mode, transforming it into a dynamic voice assistant that responds in real time and can observe the user’s surroundings through the device’s camera.

This marks a significant upgrade from the current voice mode, which handles one prompt at a time and processes only auditory input.

Reflecting on OpenAI’s evolution, Mr Altman acknowledged a shift away from the company’s original mission of itself creating universal benefits through Artificial Intelligence (AI).

Instead, OpenAI is now focusing on providing advanced AI models to developers via paid APIs, empowering third parties to innovate and create diverse applications that benefit society.

“It now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from,” Mr Altman stated in his blog post following the livestream event.
