OpenAI’s livestreamed GPT announcement event happened at 10 a.m. PT Monday, but you can still catch up on the reveals.
The company described the event as “a chance to demo some ChatGPT and GPT-4 updates.” CEO Sam Altman, meanwhile, promoted the event with the message, “not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”
As it turned out, the announcement was a new model called GPT-4o (the "o" stands for "omni"), which offers greater responsiveness to voice prompts, as well as improved vision capabilities.
“GPT-4o reasons across voice, text and vision,” OpenAI CTO Mira Murati said during a keynote presentation at OpenAI’s offices in San Francisco. “And this is incredibly important, because we’re looking at the future of interaction between ourselves and machines.”
OpenAI also followed up on Monday's event by showcasing a number of additional demos of GPT-4o's capabilities on its YouTube channel, including improving visual accessibility through Be My Eyes, harmonizing with itself, and real-time translation.
You can watch a replay on the OpenAI website or here: