Multi-Character Scene with Midjourney’s Huge Character Consistency Update (--cref)
TLDR
Midjourney has introduced a highly anticipated character consistency feature that lets users generate characters with consistent details from a character reference image. The video demonstrates the new `--cref` function, which focuses on character traits and works best with characters created by Midjourney. The creator shares a workaround for placing multiple characters in one scene and recommends noting down character features to maintain consistency. The workflow uses `--cref` to control character details, adjusts the character weight (`--cw`) from 100 (all details) down to 0 (just the face), and uses the 'vary region' feature to fine-tune images. The video also shows how to generate two characters in a scene by writing a more descriptive text prompt, and concludes with the generated images and an invitation to experiment with the tool for better consistency in character creation.
Takeaways
- 🎉 Midjourney has released a new feature for character consistency that allows the generation of characters with consistent details using a reference image.
- 🔍 The new `--cref` function focuses on character traits and is most precise with characters originally created by Midjourney, although it can also be used with real people or photos.
- 🚫 Midjourney warns that using real people or photos as references may result in distortion, as the feature is not designed for them.
- 👧 The creator found the `--cref` feature particularly useful for stabilizing facial features, but less so for hair and outfit details.
- 📈 The results are suitable for creating AI influencers or fashion models, but the focus of the video is on animation-style illustrations.
- 📝 It's recommended to note down the important features of your characters to maintain consistency as you generate more images.
- 🌟 The character image can be generated from Midjourney or other sources, but the feature works best with Midjourney-generated characters.
- 🔗 The image URL can be obtained by dragging the image into the prompt box, right-clicking to copy the image address, or opening the image in a browser and copying the link.
- ✅ The `--cw` (character weight) parameter modifies how strongly the character reference is applied, from 100 (all details) down to 0 (just the face); see the prompt sketch after this list.
- 🔍 Lowering the character weight makes the image adhere more to the text prompt and less to the reference character's hair and outfit.
- 🖌 The `vary region` feature allows for fine-tuning of specific details in the generated images to better match the original character.
- 👥 To include multiple characters in a scene, the prompt needs to be more descriptive, specifying the appearance and actions of each character.
- 💻 For further detail editing, one can use Photoshop's generative tool or similar software to refine the generated images.
- ⌛ Despite current limitations, the consistency of character details is expected to improve as the technology advances.
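As a rough sketch of the prompt anatomy described above (the image URL is a placeholder, and the subject and style are illustrative, borrowed from the Lily example later in this summary; `--cref` and `--cw` are the actual Midjourney parameter names):

```
/imagine prompt: a little girl reading a book under a tree, Pixar animation style --cref https://cdn.example.com/lily.png --cw 100
```

On Discord this is entered through the `/imagine` command; on the Midjourney Alpha website the same prompt text goes directly into the prompt box.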
Q & A
What is the new feature introduced by Midjourney for character consistency?
- The new feature introduced by Midjourney for character consistency is the `--cref` (character reference) function, which allows users to generate characters with consistent details using a character reference image.
What are the limitations of the `--cref` function according to Midjourney?
- The `--cref` function won't copy exact details like dimples, freckles, or t-shirt logos. It works best with characters made with Midjourney and is not designed for real people or photos, which may come out distorted.
Why might the `--cref` feature be more useful for stabilizing facial features than for hair and outfit details?
- At its current stage, the feature maintains facial traits more reliably than hair and outfit details, whose consistency may not yet meet the standard preferred for animation or storybooks.
For what type of content creation is the `--cref` feature particularly suitable?
- The `--cref` feature is particularly suitable for creating AI influencers or AI models for fashion brands, where the focus is on facial features and overall style rather than specific hair and outfit details.
How does one use the `--cref` function in Midjourney?
- To use the `--cref` function, type a prompt for the desired image, then append `--cref` followed by the image URL of the character reference. The result can be further tuned with the `--cw` parameter, which adjusts the character weight from 100 (all details) to 0 (just the face).
What is the purpose of using the `--cw` parameter with the `--cref` function?
- The `--cw` parameter is used to modify the character reference strength, allowing users to control the level of detail taken from the reference image, ranging from full character details to focusing solely on the face.
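To illustrate the two extremes (the URL is a placeholder; intermediate `--cw` values fall between these behaviors):

```
/imagine prompt: Lily baking cookies in a cozy kitchen, Pixar animation style --cref https://cdn.example.com/lily.png --cw 100
/imagine prompt: Lily as an astronaut on the moon, Pixar animation style --cref https://cdn.example.com/lily.png --cw 0
```

The first prompt tries to carry over face, hair, and outfit from the reference; the second keeps only the face, so the spacesuit described in the text prompt replaces the reference outfit.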
How can one obtain the image URL needed for the `--cref` function?
- To obtain the image URL, drag the image directly into the prompt box if it's easily accessible, right-click the image and copy the image address, or open the image in a browser and copy the link from the address bar.
What is the process to edit the generated images to perfection?
- Upscale the generated image, then use the 'vary region' feature to select the area that needs editing. This allows adjustments to clothing details, eye gaze, or any other areas that need refinement.
How can multiple characters be included in the same scene using the `--cref` function?
- To include multiple characters in the same scene, write a more descriptive text prompt that specifies each character's appearance and actions, and switch the character reference for each character to be included in the scene.
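As a sketch of the kind of descriptive two-character prompt this implies (the boy's features, the meadow setting, and the URL are illustrative placeholders; Lily's features come from the video):

```
/imagine prompt: a girl with big round brown eyes and long wavy hair in a dress with suspenders, holding hands with a boy with short black hair in blue overalls, walking through a sunny meadow, Pixar animation style --cref https://cdn.example.com/boy.png --cw 100
```

The reference here points at the boy; one reading of the workflow is to generate the scene first with the first character's `--cref`, then rerun or 'vary region' the second character with `--cref` switched to that character's URL, so each figure is matched to its own reference.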
What is the recommended approach when generating images for animation style illustration?
- When generating images for animation-style illustration, it is recommended to focus on the `--cref` function, be descriptive in the text prompt, and use the 'vary region' feature to refine details. The default character weight is usually sufficient unless specific adjustments are desired.
How can one ensure consistency in character details when generating multiple images?
- To ensure consistency in character details, note down the important features of your characters and use the `--cref` link with simple prompt descriptions. The 'vary region' feature can then help fine-tune details to match the original character ideas.
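For example, a minimal character note of the kind suggested (Lily's listed features come from the video; the exact format and URL are illustrative):

```
Lily: big round brown eyes, long wavy hair, dress with suspenders, Pixar animation style
cref: https://cdn.example.com/lily.png
```

Keeping the reference URL next to the feature list makes it easy to drop the same `--cref` into later prompts.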
What is the current state of character consistency in Midjourney's generated images?
- While character consistency in Midjourney's generated images is not perfect, the `--cref` function improves consistency significantly compared to using a reference image alone, and the tool is expected to improve over time.
Outlines
🎨 Introducing Character Consistency with Midjourney's `--cref` Feature
The video introduces a new feature by Midjourney that allows for the generation of characters with consistent details using a character reference image. The narrator shares their experience with the feature, noting that while it doesn't perfectly replicate every detail like dimples or logos, it is effective for stabilizing facial features. They find it particularly useful for creating AI influencers and fashion models, and the video focuses on how to use the feature for animation-style illustrations. The process involves using the `--cref` function in conjunction with a text prompt for style, and the narrator demonstrates how to generate images using Discord and the Midjourney Alpha website. They also explain how to modify character references using the `--cw` parameter to adjust the level of detail taken from the reference image.
🖌️ Refining Character Details and Generating Multiple Characters in a Scene
The second paragraph delves into refining the generated images to better match the desired character details. The narrator discusses using the 'vary region' feature to edit specific parts of the image, such as changing the color of the suspenders on a dress. They then demonstrate how to add a second character to the scene by adjusting the text prompt and switching the character reference to match the new character. The process requires a more descriptive prompt to ensure both characters are included in the generated image. The narrator also suggests using Photoshop's generative tool for further detail editing if needed. The video concludes with the narrator sharing their generated images and inviting feedback on the consistency of the characters, as well as promoting another video with additional character consistency hacks.
Keywords
💡Midjourney
💡Character Consistency
💡Character Reference Image
💡Cref Function
💡Text Prompt
💡Image URL
💡Character Weight
💡Vary Region
💡Upscale Image
💡AI Influencers
💡Pixar Animation Style
Highlights
Midjourney has released a new feature focusing on character consistency.
Users can now generate characters with consistent details using a character reference image.
The precision of this technique is limited and does not replicate exact details like dimples or t-shirt logos.
The feature works best with characters created by Midjourney and is not designed for real people or photos.
The creator found the feature more useful for stabilizing facial features rather than hair and outfit details.
The results are suitable for creating AI influencers or fashion brand models.
The video focuses on animation-style illustration, which is the preferred style of most viewers.
The `--cref` function in Midjourney allows character reference images to be used for generating consistent characters.
Noting down important character features helps maintain consistency in generated images.
The character 'Lily' is introduced as an example, with specific features like big round brown eyes and long wavy hair.
The process of generating images with the `--cref` function on Discord is demonstrated.
The `--cref` parameter is followed by the image URL of the character reference.
The `--cw` parameter can modify the character reference strength, ranging from 100 down to 0.
Lower character weight focuses more on the text prompt and less on the reference character's details.
Generated images include animals and butterflies from the original image, showing some level of detail consistency.
The 'vary region' feature allows for fine-tuning specific areas of the generated image.
To add a second character to the scene, the prompt must be more descriptive about the characters' appearances.
Switching the character reference to the second character allows for generating a scene with both characters.
Details can be further refined using the 'vary region' feature or Photoshop's generative tool.
The video concludes with a discussion on the current state of character consistency in AI and anticipation for future improvements.