Runway Just Changed AI Video Forever! Seriously.
TLDR
In this video, the creator discusses Runway's latest launch, Act One, which revolutionizes AI video creation. They reflect on their previous experiences with Runway's evolving technology, highlighting improvements in video-to-video processes and the challenges of using AI-generated content. The new features promise enhanced realism and expressive character animation, particularly in terms of eye movement and scene details. The video showcases various examples, emphasizing the potential for creative storytelling using both professional and smartphone cameras. The creator expresses eagerness to experiment with Act One and explore its limitations and capabilities.
Takeaways
- 🚀 Runway has soft-launched 'Act One', a groundbreaking video creation tool.
- 🎥 This tool enhances video processing capabilities, focusing on realism and user control.
- 🌀 Runway's evolution from Gen 1 to Gen 3 shows significant advancements in AI video technology.
- 💡 The new features allow for improved text-to-video and video-to-video generation.
- 📽️ Users may face challenges with AI-generated performances, particularly when layering effects.
- ⚙️ Act One's capabilities include producing realistic animations from basic input videos.
- 🎬 The process allows for combining different takes and performances seamlessly.
- 👁️ New technology offers expressive eye movements and improved character animations.
- 🌐 The rollout of Act One is gradual, with full access expected within 24 hours.
- ☕ The creator is excited to experiment further with Act One to uncover its potential.
Q & A
What significant change did Runway recently introduce in the AI video industry?
-Runway recently soft-launched Act One, which is considered the most impressive video renderer the speaker has seen yet, indicating a significant change in AI video technology.
What was Runway's initial offering on March 27th, 2023?
-On March 27th, 2023, Runway first introduced Gen 1, which was not text-to-video but rather video stylization transfer, impressing users with its capabilities.
How has Runway's technology evolved since Gen 1?
-Since Gen 1, Runway has introduced several generations, including Gen 2 for text-to-video, motion brushes for guiding image-to-video motion, and various iterations of Gen 3, showing continuous evolution and improvement in its AI video technology.
What was the issue the speaker encountered with video-to-video workflows, specifically in their micro short film 'Tuesday'?
-The speaker found that applying AI processes to an actor's performance, especially in a video-to-video workflow, tends to mute the performance, requiring exaggerated acting to compensate.
What was the result of running an AI-generated video through a video-to-video process?
-The result was a video where the AI-generated characters' facial expressions and actions did not align well with the input video, leading to unrealistic and sometimes comical outcomes.
How does Act One differ from previous Runway video-to-video models?
-Act One seems to offer more photorealistic outputs and better integration of AI-generated characters with real-world footage, as demonstrated by the sample outputs provided in the transcript.
What was the speaker's critique of having two characters in one shot with previous video-to-video technology?
-The speaker noted that earlier technology might not handle two characters in one shot well, suggesting the need for creative masking to layer performances on top of each other.
What impressed the speaker about the driving performance in Act One's generated video?
-The speaker was impressed with how well the generated characters' eye movements and gaze tracked with the driving video, even when looking directly at the camera or performing other actions.
What are some of the questions and concerns the speaker has about Act One's capabilities?
-The speaker is curious about how Act One will handle different types of video inputs, such as handheld footage, and how much motion control can be added before facial expressions break down.
Why is the speaker excited about Act One's potential for video-to-video?
-The speaker is excited because Act One seems to take video-to-video to the next level, allowing for more control over the stylistic output and countering the argument that making an AI film is as simple as typing in a prompt.
What is the speaker's anticipation regarding the rollout of Act One?
-The speaker is eagerly awaiting access to Act One, planning to continuously refresh their browser until it becomes available, indicating high anticipation and excitement for the new technology.
Outlines
🚀 Exciting Changes in Video Tech!
The speaker shares an unexpected turn in their video creation journey, transitioning from a project on Stable Diffusion 3.5 to exploring the newly launched Runway Act One. They express excitement about Runway's advancements in video generation, highlighting the evolution from earlier versions that offered basic video stylization to the more sophisticated capabilities of video-to-video transformation. The speaker also reflects on their personal attachment to an old phone, humorously recounting how they buried it, setting a lighthearted tone as they prepare to delve into the new technology.
🎥 Exploring Video-to-Video Innovations
In this paragraph, the speaker discusses the progression of Runway's video generation tools, from the initial Gen 1 release in March 2023 to the anticipated Act One features. They emphasize the challenges faced with video-to-video processes, particularly in maintaining the integrity of an actor's performance. The speaker shares personal experiences with their micro short film, expressing both the coolness of the technology and the limitations encountered, such as unintentional visual glitches. They highlight the need for a strong performance to counteract the AI's processing effects, while teasing viewers with examples of the impressive output from the new system.
Keywords
💡Runway
💡Stable Diffusion 3.5
💡Video Renderer
💡Text to Video
💡Image to Image
💡Motion Brushes
💡Video to Video
💡CGI Model
💡Photorealistic Outputs
💡AB Testing
💡Domo
💡Sky Glass
Highlights
Runway has soft-launched Act One, a groundbreaking video renderer that could change AI video forever.
Act One is considered the most impressive video renderer the speaker has seen yet.
The speaker was in the middle of a video on Stable Diffusion 3.5 when Runway announced Act One.
Runway's Act One is seen as a significant advancement, potentially closing the loop for video stylization.
Runway first introduced Gen 1 on March 27th, 2023, which offered video stylization transfer without text-to-video capabilities.
Since Gen 1, Runway has evolved through multiple generations, including text-to-video and motion brushes.
The speaker encountered issues with video-to-video processes muting actor performances when layered with AI.
Runway's Gen 3 includes a video-to-video model, but it has limitations, especially with maintaining actor performances.
The speaker used Sky Glass and Domo for a micro short film, encountering issues with AI-generated video quality.
Runway's video-to-video process can sometimes result in unrealistic outputs, such as sunlight shining through characters.
The speaker anticipates that Runway's Act One will avoid the unreliability of previous video-to-video outputs.
Act One promises photorealistic outputs, which could revolutionize the video production process.
The speaker is excited about the potential for two characters in one shot using Act One, with creative masking.
Act One's driving performance tracking seems accurate, even when characters are looking directly at the camera.
The speaker speculates that image-to-video examples with Act One will not suffer from the reliability issues of Gen 3's video-to-video.
Runway is expected to roll out Act One within the next 24 hours, and the speaker is eager to test it.
The speaker is curious about the limitations of Act One, such as its performance with different types of camera work.
Act One's potential for music videos is highlighted by a demo featuring a singing character.
The speaker is particularly impressed with Act One's ability to generate expressive eye movements and blinking.
Runway's Act One is slowly rolling out, and the speaker plans to continuously check for access to it.