* This blog post is a summary of this video.
OpenAI CEO Confirms GPT-5 Not in Training Yet, Safety a Top Priority
Table of Contents
- OpenAI CEO Interview Details GPT-4 and GPT-5 Timelines
- Response to Open Letter Calling for AI Pause
- Emergent Abilities an Unpredictable AI Risk
- Google Aims for 10x Bigger Than GPT-4
- Incremental GPT-5 Release for Responsible Rollout
OpenAI CEO Interview Details GPT-4 and GPT-5 Timelines
The CEO of OpenAI, Sam Altman, recently gave an interview providing insights into the development timelines and capabilities of GPT-4 and the upcoming GPT-5 AI systems. Altman confirmed that GPT-5 is not currently in training and won't be for some time, aiming to proceed cautiously given growing safety concerns around advanced AI systems.
This aligns with the goals outlined in a recent open letter calling for a pause on developing AI more powerful than GPT-4 until safety measures can be assured. The letter has garnered over 27,000 signatures, including from influential figures like Elon Musk. While OpenAI doesn't intend to pause GPT-5 development, Altman agrees releasing alignment datasets and evaluations could help.
GPT-5 Not Being Trained Currently
In the interview clip, Altman definitively states "GPT-5 is not in training at all for some time." This refutes claims made in earlier versions of the open letter that OpenAI had already begun work on GPT-5. However, OpenAI is still enhancing GPT-4 and acknowledges the important safety issues that need addressing before releasing more advanced systems.
Increasing Safety Rigor Before Next AI Releases
OpenAI's CEO emphasized the need to "move with caution and increasing rigor for safety issues" when developing AI that progresses beyond GPT-4. OpenAI has noted that it spent over six months testing GPT-4 for safety, building on years of prior alignment research conducted in anticipation of large language models like it.
Response to Open Letter Calling for AI Pause
The open letter requesting a pause on AI development beyond GPT-4 resonated widely, securing over 27,000 signatures to date. While OpenAI does not intend to halt GPT-5 work, Altman agreed that creating and releasing alignment datasets could help, reflecting his view that openness and collaboration around safe AI development practices are important.
Agreement on Releasing Alignment Datasets
Altman acknowledged merit in parts of the open letter, tweeting "One thing up for debate I really agree with is that OpenAI should make great alignment datasets & evaluations, and then release those." Releasing OpenAI's alignment methodologies could aid wider AI safety efforts.
Years Spent Ensuring GPT-4 Safety
OpenAI has highlighted its cautious approach to AI development, stating: "We spent more than six months testing GPT-4 and making it even safer, and built it on years of alignment research we pursued anticipating models like GPT-4." The company stressed taking AI safety seriously and proceeding carefully.
Emergent Abilities an Unpredictable AI Risk
One of the greatest concerns around advanced AI systems like GPT-5 is the potential for them to develop unforeseen 'emergent abilities' beyond what they were designed for. Researchers warn these unpredictable capabilities arising spontaneously in large AI models present risks if systems are deployed before these behaviors are understood.
Examples of Unexpected Model Capabilities
Researchers highlight examples like AI models suddenly gaining skills for arithmetic and language translation despite not being explicitly programmed for those tasks. The abilities seem to emerge once models pass certain scale thresholds. But when and how new skills emerge remains unpredictable.
Need for Understanding Before Deployment
Experts emphasize the need to fully test and understand what abilities an AI system can manifest before releasing them for real-world use. If models like GPT-5 develop unanticipated capabilities after launch, the ramifications could be severe. Taking the time for rigorous safety testing is critical.
Google Aims for 10x Bigger Than GPT-4
With OpenAI gaining much attention for GPT-4, Google is now reportedly pouring millions into AI startups with the aim of developing a system up to 10 times more powerful within the next 18 months. But some worry pushing the capabilities envelope so quickly could lead to unsafe or unethical AI deployments.
Pumping Millions into Competitors
Google is strategically funding a range of AI startups building advanced language models that compete with GPT-4, such as Anthropic (maker of Claude) and Character.AI. Reports suggest Google aims to achieve a model 10x bigger than GPT-4 in just a year and a half.
Rushing Out Unready Models a Concern
While more investment in AI development isn't inherently bad, experts caution that rapidly accelerating progress without proper safety testing and alignment could be disastrous. There are worries the AI race could pressure companies to release models before understanding their implications.
Incremental GPT-5 Release for Responsible Rollout
Rather than a single major launch event, OpenAI intends to release GPT-5 incrementally through smaller ongoing updates. This responsible rollout approach allows OpenAI to properly assess and address safety issues in each new version before progressing further, avoiding the potential dangers of a sudden arrival of a powerful AI system.
FAQ
Q: Is OpenAI currently training GPT-5?
A: No, according to OpenAI CEO Sam Altman, GPT-5 is not in training and won't be for some time.
Q: Why was an open letter calling for an AI pause published?
A: Influential figures worried next-gen AI like GPT-5 could be dangerous if released without safety measures, hence the call for a pause.
Q: What are emergent abilities in AI models?
A: Emergent abilities are skills that arise spontaneously in AI systems due to complexity, rather than explicit programming.
Q: Is Google trying to beat GPT-4?
A: Yes, Google is investing in competitors with a goal of creating an AI 10x more powerful than GPT-4 within 18 months.
Q: How will GPT-5 be released?
A: GPT-5 will have incremental updates over time rather than one big release, for responsible rollout.
Q: When could we expect GPT-5?
A: No clear timeline given, but likely not for some time as safety is the priority over rushing new AI releases.
Q: Will GPT-5 be the most powerful AI yet?
A: If released in full form, GPT-5 is expected to be far more capable than any previous AI from OpenAI.
Q: Could unchecked AI be dangerous?
A: Yes, without proper safety measures and alignment, advanced AI could potentially cause harm.
Q: Who signed the open letter to pause AI progress?
A: Over 27,000 people including influential figures like Elon Musk signed the open letter.
Q: Will other companies listen to calls for an AI pause?
A: Unclear if competitors like Google will slow AI progress, even if widely supported.