* This blog post is a summary of this video.

OpenAI Announces GPT-4 Turbo, Lower Pricing, Vision Support and More

GPT-4 Turbo Increases Context Window to 128,000 Tokens

OpenAI made several major announcements at its recent developer conference. One of the biggest pieces of news was the release of a new model called GPT-4 Turbo. This model supports a context window of 128,000 tokens, allowing it to reference far more context than other models such as Claude, which is limited to 100,000 tokens.

Having a larger context window enables GPT-4 Turbo to potentially produce higher quality, more coherent outputs. During initial testing, it was able to generate good outputs using complex prompts, though length restrictions limited it to around 4,000 tokens or 3,000 words per generation.
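To make the numbers above concrete, here is a small sketch of the token-budget arithmetic. The 128,000-token window and the roughly 4,000-token output cap are the figures discussed above; the four-characters-per-token estimate is a common rule of thumb, not an exact measure.

```python
# Rough token-budget arithmetic for GPT-4 Turbo as described above.
# Assumes ~4 characters per token (a rule of thumb, not exact).

CONTEXT_WINDOW = 128_000  # total tokens the model can attend to
MAX_OUTPUT = 4_000        # approximate per-generation output limit noted above

def prompt_budget(context_window: int = CONTEXT_WINDOW,
                  max_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving room for the reply."""
    return context_window - max_output

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

budget = prompt_budget()            # 124,000 tokens available for input
estimate = approx_tokens("a" * 400) # ~100 tokens
```

For precise counts in real applications, a proper tokenizer (such as OpenAI's tiktoken library) should be used instead of the character heuristic.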

How GPT-4 Turbo Compares to Claude

Claude has been the gold standard until now with its 100,000-token context window, and GPT-4 Turbo surpasses that with 128,000 tokens. However, initial testing revealed some limitations relative to Claude. When given sample prompts that had worked well with Claude, GPT-4 Turbo generated only part of the expected output before stopping and requiring a follow-up. Its response length is also capped at around 4,000 tokens per generation, even though the prompt itself can be much longer. So while GPT-4 Turbo shows a lot of promise, Claude retains some advantages in output length and coherence for now. OpenAI will likely continue improving GPT-4 Turbo to address these limitations.

Multimodal Capabilities Added to Models

In addition to the new GPT-4 Turbo model, OpenAI also announced upgrades to ChatGPT and other models to support multimodal inputs and outputs.

This includes the ability to take in and generate images using DALL-E, as well as perform text-to-speech. The OpenAI Playground sandbox now supports integrating these capabilities into custom AI applications.
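As a sketch of what a multimodal request looks like, the snippet below assembles a chat request body that mixes text with an image input. The model name (`gpt-4-vision-preview`) and the field layout match what OpenAI announced at the time; both should be verified against the current API reference before use. The request is only constructed here, not sent.

```python
# Sketch of a vision-style request body in the shape accepted by OpenAI's
# chat completions API at announcement time (verify field names and the
# model name against the current API reference).

def build_vision_request(question: str, image_url: str) -> dict:
    """Assemble a chat request combining text and an image input."""
    return {
        "model": "gpt-4-vision-preview",  # vision model announced at DevDay
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,  # cap the length of the reply
    }

request = build_vision_request(
    "What is shown in this image?",
    "https://example.com/photo.png",  # placeholder URL
)
```

Sending this payload requires an API key and the OpenAI client library; the structure above is the part specific to multimodal input.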

Enabling AI models to handle images, audio, video, and other non-text modalities makes them more versatile and useful for real-world applications. It brings them one step closer to general-purpose artificial intelligence rather than narrow language models.

Lower Pricing Introduced for AI Models

Surprisingly, despite offering increased capabilities, OpenAI has actually lowered pricing on its API platform. Input tokens for the new GPT-4 Turbo cost three times less per thousand tokens than for the original GPT-4. GPT-3.5 Turbo pricing was also reduced.
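The savings can be illustrated with a short cost calculation. The per-1,000-token figures below are the launch prices announced at the conference ($0.03/$0.06 input/output for GPT-4, $0.01/$0.03 for GPT-4 Turbo); OpenAI's pricing page should be checked for current rates.

```python
# Rough cost comparison using the per-1K-token launch prices announced
# at the conference (check OpenAI's pricing page for current figures).

PRICES = {  # USD per 1,000 tokens: (input, output)
    "gpt-4":       (0.03, 0.06),
    "gpt-4-turbo": (0.01, 0.03),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# A 10,000-token prompt with a 1,000-token reply:
old = request_cost("gpt-4", 10_000, 1_000)       # ~$0.36
new = request_cost("gpt-4-turbo", 10_000, 1_000) # ~$0.13
```

For prompt-heavy workloads, the input-token reduction dominates, which is where the roughly threefold savings shows up.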

This shows that as models continue to advance in ability, economies of scale are allowing the costs to decrease. So over time, more advanced AI could become inexpensive enough for wide-scale use across many companies and industries.

Copyright Protection Offered to Customers

To provide reassurance to users of its platform, OpenAI also announced a new copyright protection program. This copyright shield will legally and financially back customers if they face claims of copyright infringement while using OpenAI products and services.

By offering this protection, OpenAI aims to alleviate concerns and hesitation businesses may have about potential legal issues when deploying AI models. Having a major platform stand behind its users like this facilitates wider adoption.

Easily Customizable GPT Models Introduced

OpenAI introduced a new capability called GPTs which makes it easy for anyone to customize vanilla ChatGPT models. GPTs can combine specialized instructions, knowledge, and skills to tailor ChatGPT for particular use cases.

For instance, someone could create a GPT for providing creative writing feedback. The best customizations will likely come from the community over time rather than from OpenAI itself. The company plans to open a GPT marketplace where people can share and even sell their custom models.

Other Platform Improvements Announced

Beyond the major announcements covered already, OpenAI shared some other updates aimed at making ChatGPT easier and more powerful to use.

ChatGPT Plus was updated with a more recent knowledge cutoff and no longer requires inconvenient model switching. Users can now access different functions, such as image generation, without changing contexts.

File attachments are also now supported to allow ChatGPT to search and reference PDFs and other documents. These small but meaningful improvements will help more users get even more value from the ChatGPT platform.

FAQ

Q: What is the context length of GPT-4 Turbo?
A: GPT-4 Turbo has a context length of 128,000 tokens, compared to Claude's 100,000 tokens.

Q: What multimodal capabilities were added?
A: Vision, DALL-E image generation, and text-to-speech were added.

Q: How much cheaper did pricing get?
A: GPT-4 Turbo input tokens are 3x cheaper per 1,000 tokens than the original GPT-4's. GPT-3.5 Turbo is also now cheaper.