OpenAI announced GPT-4o today in its Spring Update. The model adds video recognition and improved voice recognition, and it is twice as fast as the older GPT-4 Turbo model. The "o" stands for "omni," and the model ties together text, audio, and video better than any model on the market today.

We didn't want to wait, so we are proud to announce that we integrated GPT-4o into our Workflow Automation Suite within hours of the announcement. It is now available to all users.
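For teams calling GPT-4o directly, switching a workflow over is typically just a matter of changing the `model` parameter. Here is a minimal sketch using the OpenAI Python SDK (v1.x); the prompt text and the `build_request` helper are illustrative, not part of our product:

```python
# Minimal sketch of calling GPT-4o through the OpenAI Python SDK (v1.x).
# `build_request` is a hypothetical helper; the prompts are illustrative.
import os


def build_request(prompt: str) -> dict:
    """Assemble the chat-completion arguments for a GPT-4o call."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "You are a workflow automation assistant."},
            {"role": "user", "content": prompt},
        ],
    }


# Only attempt the network call when an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # requires the `openai` package, v1.x

    client = OpenAI()
    response = client.chat.completions.create(**build_request("Summarize today's tasks."))
    print(response.choices[0].message.content)
```

The same request shape works for GPT-4 Turbo and GPT-3.5, which is what made the swap so quick on our end.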

It's fast!

We still have metrics to gather on how fast it is and how well it handles more complex asks compared to GPT-4, GPT-3.5, and Llama 3.

PyroPrompts doesn't currently support audio or video, so GPT-4o's audio and video capabilities are not yet leveraged in automation. If you want to see them used, let us know what you'd like to do! We are working through ideas like having it watch videos and listen to recordings and making the results available to automations. This would pair well with the RAG functionality we introduced in April.

Our understanding is that, under the hood, this model handles text much like GPT-4 does, so we don't expect much of a change there, but we will run benchmarks in the coming days and weeks and publish our findings here. Subscribe to our newsletter so you don't miss the results.
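The latency side of those benchmarks can be sketched simply: time each call several times and summarize. Below, `call_model` is a placeholder standing in for a real API call to any of the models above; the helper names are our own, not from any SDK:

```python
# Rough sketch of the latency comparison we plan to run.
# `call_model` is a stub; in practice it would hit each provider's API.
import statistics
import time


def time_call(fn, *args) -> float:
    """Return wall-clock seconds taken by a single call."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start


def benchmark(fn, prompt: str, runs: int = 5) -> dict:
    """Call `fn` several times and summarize its latency."""
    samples = [time_call(fn, prompt) for _ in range(runs)]
    return {"median_s": statistics.median(samples), "max_s": max(samples)}


def call_model(prompt: str) -> str:
    """Placeholder for a real model call."""
    time.sleep(0.001)
    return "stub response to: " + prompt


print(benchmark(call_model, "Hello"))
```

Real numbers will of course depend on prompt length, output length, and provider load, which is why we want repeated runs across days rather than a single measurement.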