Runway Just Changed AI Video Forever! Seriously.

Theoretically Media
23 Oct 2024 · 09:39

TLDR: In this video, the creator discusses Runway's latest launch, Act One, which they see as a major step forward for AI video creation. They reflect on their previous experiences with Runway's evolving technology, highlighting improvements in video-to-video workflows and the challenges of working with AI-generated content. The new features promise enhanced realism and more expressive character animation, particularly in eye movement and scene detail. The video showcases several examples, emphasizing the potential for creative storytelling with both professional and smartphone cameras. The creator is eager to experiment with Act One and explore its limitations and capabilities.

Takeaways

  • 🚀 Runway has soft-launched 'Act One', a groundbreaking video creation tool.
  • 🎥 This tool enhances video processing capabilities, focusing on realism and user control.
  • 🌀 Runway's evolution from Gen 1 to Gen 3 shows significant advancements in AI video technology.
  • 💡 The new features allow for improved text-to-video and video-to-video generation.
  • 📽️ Users may face challenges with AI-generated performances, particularly when layering effects.
  • ⚙️ Act One's capabilities include producing realistic animations from basic input videos.
  • 🎬 The process allows for combining different takes and performances seamlessly.
  • 👁️ New technology offers expressive eye movements and improved character animations.
  • 🌐 The rollout of Act One is gradual, with full access expected within 24 hours.
  • ☕ The creator is excited to experiment further with Act One to uncover its potential.

Q & A

  • What significant change did Runway recently introduce in the AI video industry?

    -Runway recently soft-launched Act One, which the speaker calls the most impressive video renderer they have seen yet, signaling a significant change in AI video technology.

  • What was Runway's initial offering on March 27th, 2023?

    -On March 27th, 2023, Runway introduced Gen 1, which offered video stylization transfers rather than text to video generation, and it impressed users with its capabilities.

  • How has Runway's technology evolved since Gen 1?

    -Since Gen 1, Runway has introduced several generations, including Gen 2 for text to video, motion brushes for finer control over image to video generation, and various iterations of Gen 3, showing continuous evolution and improvement in their AI video technology.

  • What was the issue the speaker encountered with video to video workflows, specifically in their micro short film 'Tuesday'?

    -The speaker found that applying AI processes to an actor's performance, especially in a video to video workflow, tends to mute the performance, requiring exaggerated acting to compensate.

  • What was the result of running an AI-generated video through a video to video process?

    -The result was a video where the AI-generated characters' facial expressions and actions did not align well with the input video, leading to unrealistic and sometimes comical outcomes.

  • How does Act One differ from previous Runway video to video models?

    -Act One appears to offer more photorealistic outputs and better integration of AI-generated characters with real-world footage, as demonstrated by the sample outputs shown in the video.

  • What was the speaker's critique of having two characters in one shot with previous video to video technology?

    -The speaker noted that previous technology might not handle two characters in one shot well, suggesting the need for creative masking to layer performances on top of each other (a minimal compositing sketch follows this Q&A).

  • What impressed the speaker about the driving performance in Act One's generated video?

    -The speaker was impressed with how well the generated characters' eye movements and gaze tracked with the driving video, even when looking directly at the camera or performing other actions.

  • What are some of the questions and concerns the speaker has about Act One's capabilities?

    -The speaker is curious about how Act One will handle different types of video inputs, such as handheld footage, and how much motion control can be added before facial expressions break down.

  • Why is the speaker excited about Act One's potential for video to video?

    -The speaker is excited because Act One seems to take video to video to the next level, allowing for more control over stylistic output and negating the argument that making an AI film is as simple as typing in a prompt.

  • What is the speaker's anticipation regarding the rollout of Act One?

    -The speaker is eagerly awaiting access to Act One, planning to continuously refresh their browser until it becomes available, indicating high anticipation and excitement for the new technology.
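
To make the masking idea from the two-characters question above concrete, here is a minimal compositing sketch: two separately generated takes are blended into one shot through a grayscale matte. This is a generic approach (OpenCV and NumPy are assumed), not a Runway feature, and all filenames are hypothetical.

```python
# Composite two separately generated "takes" into one shot via a matte.
# Generic illustration of the masking idea -- not a Runway feature.
# Assumes three same-sized inputs: two clips and a grayscale matte image.
import cv2
import numpy as np

take_a = cv2.VideoCapture("take_a.mp4")   # hypothetical: character 1's take
take_b = cv2.VideoCapture("take_b.mp4")   # hypothetical: character 2's take
matte = cv2.imread("matte.png", cv2.IMREAD_GRAYSCALE)  # white = use take A

fps = take_a.get(cv2.CAP_PROP_FPS)
w = int(take_a.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(take_a.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("combined.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Normalize the matte to a per-pixel alpha in [0, 1], broadcastable to BGR.
alpha = (matte.astype(np.float32) / 255.0)[..., None]

while True:
    ok_a, frame_a = take_a.read()
    ok_b, frame_b = take_b.read()
    if not (ok_a and ok_b):
        break
    # Per-pixel blend: the matte selects take A where white, take B where black.
    blended = frame_a * alpha + frame_b * (1.0 - alpha)
    out.write(blended.astype(np.uint8))

take_a.release()
take_b.release()
out.release()
```

Feathering the matte with a slight blur before loading it softens the seam where the two performances meet.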

Outlines

00:00

🚀 Exciting Changes in Video Tech!

The speaker shares an unexpected turn in their video creation journey, transitioning from a project on Stable Diffusion 3.5 to exploring the newly launched Runway Act One. They express excitement about Runway's advancements in video generation, highlighting the evolution from earlier versions that offered basic video stylization to the more sophisticated capabilities of video-to-video transformation. The speaker also reflects on their personal attachment to an old phone, humorously recounting how they buried it, setting a lighthearted tone as they prepare to delve into the new technology.

05:01

🎥 Exploring Video-to-Video Innovations

The speaker discusses the progression of Runway's video generation tools, from the initial Gen 1 release in March 2023 to the anticipated Act One features. They emphasize the challenges of video-to-video processes, particularly in maintaining the integrity of an actor's performance. The speaker shares personal experiences from their micro short film, noting both the coolness of the technology and the limitations encountered, such as unintentional visual glitches. They highlight the need for a strong performance to counteract the AI's processing effects, while teasing viewers with examples of the impressive output from the new system.

Keywords

💡Runway

Runway is a company that specializes in AI-driven video and image generation technology. In the context of the video, Runway is highlighted for the soft launch of 'Act One,' a video renderer described as impressive and potentially game-changing for the field of AI video generation.

💡Stable Diffusion 3.5

Stable Diffusion 3.5 refers to a version of a diffusion model used in AI-generated imagery. The speaker mentions working on a video about this technology when Runway made their announcement, indicating that Stable Diffusion is a relevant technology in the AI imaging field that Runway's new tool may compete with or complement.

💡Video Renderer

'Video renderer' is the term the speaker uses for a tool that processes and regenerates video content. In the script, Runway's 'Act One' is referred to as a video renderer, suggesting it's a novel tool for video analysis and generation that has caught the speaker's attention.

💡Text to Video

Text to Video is a technology that generates videos based on textual descriptions. The script mentions Runway's evolution from video stylization transfers to text to video capabilities, indicating a significant leap in AI-generated content creation where users can create videos from written prompts.

💡Image to Image

Image to Image technology refers to the process of transforming one image into another based on certain criteria or prompts. The video discusses Runway's development in this area, suggesting improvements in how AI can manipulate and generate images, which is crucial for video generation as well.

💡Motion Brushes

Motion Brushes refers to a Runway feature that lets users paint motion onto regions of an image to control how it animates. In the context of the video, it is part of Runway's evolving suite of tools for creating dynamic and animated content.

💡Video to Video

Video to Video technology enables the transformation of one video into another, often with changes in style, content, or other attributes. The speaker discusses challenges with this technology, such as the loss of performance detail when applying AI processes to video, and how Runway's 'Act One' might address these issues.
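
For a concrete sense of the naive form of this technique, below is a minimal sketch that restyles each frame of a clip independently with an off-the-shelf img2img diffusion pipeline (Hugging Face diffusers, the imageio library, and a Stable Diffusion 1.5 checkpoint are assumed; the filenames and prompt are hypothetical, and this illustrates the general idea rather than Runway's method). Because every frame is denoised separately, details shimmer between frames, and higher restyling strength flattens the input performance, the trade-off the speaker describes.

```python
# Naive video-to-video: restyle each frame independently with img2img.
# Illustrative only -- not Runway's pipeline. Assumes a CUDA GPU and
# `pip install diffusers transformers torch imageio imageio-ffmpeg`.
import numpy as np
import torch
import imageio
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

reader = imageio.get_reader("input.mp4")    # hypothetical source clip
writer = imageio.get_writer("styled.mp4", fps=12)

for i, frame in enumerate(reader):
    if i % 2:                               # subsample frames for speed
        continue
    image = Image.fromarray(frame).resize((512, 512))
    # `strength` sets how far the output drifts from the input frame:
    # higher values restyle more aggressively but mute the performance.
    result = pipe(prompt="oil painting of the same scene", image=image,
                  strength=0.45, guidance_scale=7.5).images[0]
    writer.append_data(np.asarray(result))

writer.close()
```

Each frame here is sampled with fresh noise, so the output flickers; purpose-built models tackle temporal consistency directly, which is part of what makes Act One's stable eye lines and expressions notable.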

💡CGI Model

A CGI model refers to a computer-generated imagery model, which is used to create realistic 3D characters or objects. In the video, the speaker mentions a CGI model being generated by an app and then turned into an animated character, highlighting the role of CGI in the pipeline of creating AI-generated videos.

💡Photorealistic Outputs

Photorealistic Outputs are results that closely resemble real-life photographs or videos in terms of visual quality and detail. The video script includes an example of a photorealistic output from Runway's technology, demonstrating the company's capability to generate highly realistic AI video content.

💡A/B Testing

A/B testing, or split testing, is a method of comparing two versions of something (like video outputs) to see which one performs better. The script mentions A/B testing to compare the original shot with the AI-generated output, which is crucial for evaluating the effectiveness of Runway's 'Act One' technology.

💡Domo

Domo refers to an AI video stylization tool used in the video generation process. The speaker mentions using Domo in their workflow for a micro short film, indicating it as part of the technology stack used for video to video processing.

💡Sky Glass

Sky Glass seems to be an application used for creating rough animatics, which are preliminary animations that help in planning out the shots and sequences of a film. The video discusses using Sky Glass in the video creation process, emphasizing its role in pre-production for AI-generated content.

Highlights

Runway has soft-launched Act One, a groundbreaking video renderer that could change AI video forever.

Act One is considered the most impressive video renderer the speaker has seen yet.

The speaker was in the middle of a video on Stable Diffusion 3.5 when Runway announced Act One.

Runway's Act One is seen as a significant advancement, potentially closing the loop for video stylization.

Runway first introduced Gen 1 on March 27th, 2023, which was just video stylization transfers without text to video capabilities.

Since Gen 1, Runway has evolved through multiple generations, adding text to video and motion brushes along the way.

The speaker encountered issues with video to video processes muting actors' performances once AI processing was layered on top.

Runway's Gen 3 includes a video to video model, but it has limitations, especially with maintaining actor performances.

The speaker used Sky Glass and Domo for a micro short film, encountering issues with AI-generated video quality.

Runway's video to video process can sometimes result in unrealistic outputs, such as sunlight shining through characters.

The speaker anticipates that Runway's Act One will avoid the unreliability of previous video to video outputs.

Act One promises photorealistic outputs, which could revolutionize the video production process.

The speaker is excited about the potential for two characters in one shot using Act One, with creative masking.

Act One's driving performance tracking seems accurate, even when characters are looking directly at the camera.

The speaker speculates that image to video examples with Act One will not suffer from the reliability issues of Gen 2's video to video.

Runway is expected to roll out Act One within the next 24 hours, and the speaker is eager to test it.

The speaker is curious about the limitations of Act One, such as its performance with different types of camera work.

Act One's potential for music videos is highlighted by a demo featuring a singing character.

The speaker is particularly impressed with Act One's ability to generate expressive eye movements and blinking.

Runway's Act One is slowly rolling out, and the speaker plans to continuously check for access to it.