Luma AI NeRF Tutorial

JSFILMZ
16 Apr 2023 · 04:16

TLDR: In this video, Jay from JS Films explores Luma Labs' AI video NeRFing technology. He demonstrates how the AI can create a detailed 3D model from a single video, showcasing its potential to streamline photogrammetry. Jay tests the app with a Cyberpunk 2077 car video, highlighting the ease of use and the impressive results despite the technology being in its early stages. He also covers the export options for NeRFed models and teases plans to experiment with Luma AI in the real world.

Takeaways

  • 🎥 Luma Labs AI's video NeRFing technology lets users import a video directly into the app for processing.
  • 🚗 The demonstration features a car from Cyberpunk 2077, which is used to showcase the NeRFing process.
  • 🔄 The capture involves rotating the camera around the object (the car) to record it from every angle.
  • 📸 Traditional NeRF capture required extensive photography, but the app simplifies this to a single video.
  • 🤖 The AI fills in the details of the environment without the need for manual photogrammetry.
  • 📁 The maximum file size the service accepts is 5 GB (a quick pre-upload check is sketched after this list).
  • 🎥 The video can be recorded with a normal lens or a 360-degree camera, then uploaded to the app.
  • ⏱️ Processing time varies; the example took between 30 minutes and an hour.
  • 📐 The result is a 3D mesh that can be exported and imported into Unreal Engine.
  • 🔗 Unreal Engine 5 has a plugin for easily integrating NeRFed objects into the engine.
  • 📹 The creator plans to explore outdoor applications of Luma AI in future video shoots.
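
The 5 GB cap is easy to trip over with long 4K captures. As a minimal illustration (not part of Luma AI's tooling), a pre-upload sanity check in Python might look like this; the file name and allowed containers are assumptions for the example:

```python
import os

MAX_UPLOAD_BYTES = 5 * 1024**3         # 5 GB limit stated in the video
ALLOWED_EXTENSIONS = {".mp4", ".mov"}  # assumed common containers, not confirmed by the video

def check_upload(path: str) -> None:
    """Sanity-check a capture before uploading it."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"unexpected container {ext!r}; re-export as MP4 or MOV")
    size = os.path.getsize(path)
    if size > MAX_UPLOAD_BYTES:
        raise ValueError(f"{size / 1024**3:.2f} GB exceeds the 5 GB upload limit")
    print(f"{path}: {size / 1024**3:.2f} GB, OK to upload")

check_upload("cyberpunk_car_orbit.mp4")  # hypothetical file name
```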

Q & A

  • What is the main focus of the video?

    -The video focuses on testing Luma AI's video NeRFing technology, which produces a 3D scan of an object from a single video instead of multiple photos.

  • What is NeRF and how is it different from traditional photogrammetry?

    -NeRF (Neural Radiance Fields) is a technique related to photogrammetry, but instead of requiring a scan of the entire environment (walls, floors, and so on), it captures a specific object and uses AI to fill in the rest of the scene.

  • How does Luma AI's video NeRFing work?

    -Luma AI's video NeRFing allows users to import a video into the app, which then uses AI to process the video and create a 3D scan of the object in focus, without the need for multiple photos at every angle.

  • What video game was used for the demonstration in the video?

    -The video game Cyberpunk 2077 was used for the demonstration, specifically to capture a video orbiting a car parked in front of a bar called Lizzie's (the orbit pattern is sketched in code after this Q&A section).

  • What are the file size and format requirements for using Luma AI's video NeRFing?

    -The maximum file size for the video is 5 GB, and the footage can be shot with a normal lens, a fisheye lens, or a 360-degree camera.

  • How long did it take for the video creator to get the NeRF result?

    -The video creator mentioned that it took about 30 minutes to an hour to get the NeRF result.

  • What are the potential applications of the NeRF technology?

    -The NeRF technology can be used to scan objects in the real world and bring them into virtual environments like Unreal Engine 5, allowing for interactive 3D models and animations.

  • What is the Unreal Engine 5 plugin mentioned in the video?

    -The Unreal Engine 5 plugin mentioned is a tool that allows users to import the NeRFed 3D models directly into Unreal Engine, using a blueprint that can be dragged and dropped into the content browser.

  • What are the export options available for the NeRFed models?

    -The export options include USDZ, glTF, and OBJ file formats, at different polygon levels such as low, medium, and high poly.

  • How can viewers learn more about photo scanning in the real world?

    -The video creator mentioned a tutorial on how to photo scan objects in the real world and bring them into Unreal Engine, which can be found on their platform.

  • What is the creator's plan for future content related to Luma AI?

    -The creator plans to do a video shoot and experiment with Luma AI in the outside world, showcasing its capabilities in various scenarios.
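
The orbit capture described above can be made concrete with a little geometry. The sketch below is purely conceptual and not part of Luma AI: it prints evenly spaced camera positions on a circle, each aimed back at the subject, which is effectively what the creator does by hand while filming; all numbers are illustrative:

```python
import math

def orbit_waypoints(center=(0.0, 0.0), radius=3.0, steps=24):
    """Evenly spaced camera positions circling a subject, each aimed back at it.

    A conceptual model of the capture pattern from the video: walk a full
    circle around the object while keeping it framed.
    """
    for i in range(steps):
        theta = 2 * math.pi * i / steps
        x = center[0] + radius * math.cos(theta)
        y = center[1] + radius * math.sin(theta)
        # Camera yaw that points back at the subject
        yaw = math.degrees(math.atan2(center[1] - y, center[0] - x))
        yield round(x, 2), round(y, 2), round(yaw, 1)

for x, y, yaw in orbit_waypoints():
    print(f"camera at ({x:6.2f}, {y:6.2f}), yaw {yaw:7.1f} deg")
```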

Outlines

00:00

🎥 Testing Luma AI's Video NeRFing

Jay from JS Films discusses the new capabilities of Luma AI's video NeRFing. He explains that instead of taking multiple photos for photogrammetry, users can now import a video directly into the app, and the AI fills in the surroundings. He demonstrates this by capturing a car in Cyberpunk 2077, orbiting the camera around it, and notes the high quality of the 4K footage. The process is simplified, as the AI handles the rest of the environment. The result is a 3D mesh that can be exported and used in Unreal Engine 5 with a new plugin, showcasing the potential of this technology.

Keywords

💡LumaLabs AI

LumaLabs AI refers to the artificial intelligence technology developed by LumaLabs, which is used for video neural radiance field (NeRF) creation. In the video, it is demonstrated as a tool that can convert video footage into 3D models without the need for extensive photography. This technology is a significant advancement in the field of 3D modeling and virtual reality, as it simplifies the process of creating realistic digital environments.

💡NeRFing

NeRFing is the informal term for creating a NeRF (Neural Radiance Field), a process that uses machine learning to build detailed 3D representations of objects or scenes from a series of 2D images. In the context of the video, the term describes the AI-driven process of converting video footage into a 3D model. This technology is particularly exciting because it reduces the time and effort required to create accurate 3D models, as it automates the process of capturing and interpreting visual data.
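
For readers who want the underlying math: the original NeRF paper (Mildenhall et al., 2020) renders each pixel by integrating color and density along a camera ray r(t) = o + t·d, where σ is the volume density, c the view-dependent color, and T(t) the accumulated transmittance:

```latex
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```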

💡Cyberpunk 2077

Cyberpunk 2077 is a popular video game that serves as the backdrop for the demonstration in the video. The game's detailed environment and characters provide a rich source of visual data for the LumaLabs AI to process. The use of Cyberpunk 2077 in the video showcases the potential of the AI technology to work with complex and visually rich content, highlighting its capabilities in creating realistic 3D models from video footage.

💡4K Resolution

4K resolution refers to a digital video resolution of approximately 4,000 pixels on the horizontal axis (3840 × 2160 for consumer UHD), roughly four times the pixel count of 1080p (Full HD). In the video, the creator mentions that the footage used for NeRFing is in 4K, indicating a high level of detail and quality in the source material. This high resolution helps the AI accurately capture and recreate the nuances of the scene in 3D.
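
To verify that footage really is 4K before uploading, one option is ffmpeg's ffprobe, called here from Python. This sketch assumes ffprobe is installed and on the PATH; the file name is a placeholder:

```python
import subprocess

def video_resolution(path: str) -> tuple[int, int]:
    """Read (width, height) of the first video stream via ffmpeg's ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    width, height = (int(v) for v in out.split(","))
    return width, height

w, h = video_resolution("cyberpunk_car_orbit.mp4")  # hypothetical file
print(f"{w}x{h}" + (" (4K UHD)" if (w, h) == (3840, 2160) else ""))
```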

💡Photogrammetry

Photogrammetry is a technique used for making measurements from photographs, especially for surveying and mapping purposes. In the video, it is mentioned in comparison to NeRFing, highlighting that traditional photogrammetry requires extensive image capturing and processing. The AI-driven NeRFing process, on the other hand, streamlines this by using a single video, making it a more efficient method for 3D modeling.

💡AI

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn. In the video, AI is the driving force behind the NeRFing process, enabling the LumaLabs software to analyze video footage and generate 3D models. The AI's role is crucial as it automates the complex task of interpreting visual data and filling in the details to create a realistic 3D representation.

💡UE5

UE5 stands for Unreal Engine 5, which is a game engine developed by Epic Games used for creating video games and other real-time applications. The video mentions a plugin for UE5 that allows for the integration of the NeRFed 3D models into the engine. This integration is significant as it enables creators to use the AI-generated models in a powerful and widely-used platform for further development and visualization.

💡3D Mesh

A 3D mesh is a collection of vertices, edges, and faces that define the shape of a 3D model. In the video, the LumaLabs AI is shown to create a 3D mesh of the car from the video footage. The mesh can be exported and imported into Unreal Engine 5, allowing for further manipulation and use in virtual environments. The creation of a 3D mesh is a key outcome of the NeRFing process, enabling a high level of interaction and detail in the final 3D model.

💡Export Options

Export options refer to the various file formats that can be used to save and share the 3D models created by the AI. In the video, the creator mentions USDZ, glTF, and OBJ formats. These formats matter because they determine how the 3D model can be used across different platforms and applications, giving creators flexibility to integrate their work into various environments.
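
Once exported, these files can be inspected outside of any game engine. A minimal sketch using the trimesh Python library; the file names are placeholders for whatever the app exported:

```python
import trimesh

# Load an exported capture; force="mesh" flattens a multi-part glTF
# scene into a single mesh for simple inspection.
mesh = trimesh.load("luma_capture.glb", force="mesh")  # placeholder file name

print(f"vertices:   {len(mesh.vertices)}")
print(f"faces:      {len(mesh.faces)}")
print(f"watertight: {mesh.is_watertight}")

# Re-save in another of the supported formats, e.g. OBJ.
mesh.export("luma_capture.obj")
```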

💡Gimbal

A gimbal is a device used to stabilize a camera, allowing for smooth and controlled movements during filming. In the video, the creator suggests using a gimbal to capture the video footage for NeRFing. The use of a gimbal ensures that the video has a consistent and stable perspective, which is beneficial for the AI to accurately process and create the 3D model.

💡High Poly

High Poly refers to a 3D model with a high polygon count, which results in a more detailed and complex surface. In the video, the creator mentions different levels of polygon detail, including high poly, which is an option for exporting the NeRFed models. A high poly model provides a greater level of detail, which can be important for applications that require a high degree of realism, such as in video games or virtual reality experiences.
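
If only a high-poly export is on hand but the target platform needs fewer triangles, the mesh can be decimated offline. A hedged sketch using Open3D's quadric decimation; the file names and target triangle count are illustrative assumptions:

```python
import open3d as o3d

# Load the high-poly export (placeholder file name).
mesh = o3d.io.read_triangle_mesh("luma_capture_highpoly.obj")
print("before:", len(mesh.triangles), "triangles")

# Quadric edge-collapse decimation reduces the triangle count while
# trying to preserve the overall shape; the target is an arbitrary choice.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
print("after: ", len(low.triangles), "triangles")

o3d.io.write_triangle_mesh("luma_capture_lowpoly.obj", low)
```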

Highlights

Testing out Luma AI's video NeRFing technology, which produces a NeRF from a single imported video.

NeRFing traditionally required many pictures taken around an object, but now a video is sufficient.

Luma AI uses AI to fill in the environment around the object in the video, similar to photogrammetry.

The demonstration involves a video of a car in Cyberpunk 2077, capturing it in 4K resolution.

The process is simplified, eliminating the need to photograph each step around the object.

The video is uploaded to Luma AI, with a maximum file size of 5 GB, shot with a normal lens, a fisheye lens, or a 360 camera.

The AI fills in the environment, capturing the NeRF without additional photography.

The technology is in its early stages, but the potential for improvement is immense.

Luma AI can be used with Unreal Engine 5, with a newly released plugin for easy integration.

The resulting NeRF can be exported as a 3D mesh at various polygon levels.

A tutorial on photo scanning in the real world and bringing it into Unreal Engine is available.

The plugin allows for easy import of the captured mesh into Unreal Engine.

The video shoot showcases the potential of Luma AI for practical applications.

The video render is shared on Instagram and Twitter, showing the practical results of the NeRFing process.

Export options include USDZ, glTF, and OBJ formats.

The presenter plans to explore Luma AI further, particularly in outdoor applications.