Stable Diffusion 3 via API in comfyUI with Stability AI official nodes - SD Experimental
TLDR: In this video, Andrea Baioni shows how to use Stable Diffusion 3 (SD3) through an API key in ComfyUI, a node-based interface for AI image generation. SD3 is not free and requires purchasing credits, but Stability AI provides 25 free starting credits for users to test the service. The tutorial covers installing the official Stability AI nodes from GitHub, obtaining and entering an API key, and exploring features such as image generation, creative upscaling, outpainting, and inpainting. The video demonstrates generating images from prompts, showcasing SD3's ability to produce detailed, atmospheric visuals, and includes a humorous mishap in which a person in an image is replaced with a giant cat. Baioni closes by encouraging viewers to experiment with the platform and share prompts for further testing.
Takeaways
- 📈 Stable Diffusion 3 (SD3) is available for use via API key, but not yet as a free checkpoint.
- 💵 Using SD3 requires purchasing credits, with each image generation costing around 6 US cents.
- 🔑 To utilize SD3 with ComfyUI, you need to install Stability AI's official nodes from their GitHub repository.
- 🛠️ After installing the nodes, you may need to restart ComfyUI to see the new nodes and set up your API key for each node.
- 📷 The nodes include various functionalities like image generation, background removal, creative upscaling, outpainting, and inpainting.
- 💡 API key override is a common field in every node where you input your Stability AI API key to generate images.
- 💳 You can purchase additional credits on the Stability AI website, starting at $10 for a thousand credits.
- 🎨 The core model and SD3 model offer different levels of detail and refinement in the generated images.
- 🔍 The SD3 model produced more accurate results in terms of fashion and environment details compared to the core model.
- 🚫 An error occurred when trying to use the outpainting node with an upscaled image due to payload size limitations.
- 🧩 The inpainting node allowed for changes in the image, such as altering clothing and models, with some minor issues.
- 😹 The search and replace node humorously replaced a person with a giant cat in an image, demonstrating the node's functionality.
Q & A
What is Stable Diffusion 3 (SD3) and how can it be used?
-Stable Diffusion 3 (SD3) is an AI model released by Stability AI for image generation. It can be used via API calls and requires the purchase of credits for image generation. It is not available as a free checkpoint yet.
How much does it cost to generate an image with SD3?
-Generating an image with SD3 costs around 6 US cents per image.
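For a rough sanity check of that figure, here is the arithmetic under the pricing mentioned in the video ($10 for 1,000 credits); the credits-per-image value below is an illustrative assumption, not an official rate:

```python
# Back-of-the-envelope cost estimate. The credits-per-image figure below is
# an assumption for illustration, not an official Stability AI price.
credits_per_dollar = 1000 / 10            # $10 buys 1,000 credits -> 100 credits per dollar
assumed_credits_per_sd3_image = 6.5       # hypothetical value

cost_per_image_usd = assumed_credits_per_sd3_image / credits_per_dollar
print(f"~${cost_per_image_usd:.3f} per image")  # prints ~$0.065, i.e. about 6 cents
```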
What is the process of setting up SD3 with ComfyUI?
-To set up SD3 with ComfyUI, you need to install the required custom nodes from the Stability AI GitHub page, restart ComfyUI, and then input your Stability AI API key into the API key override field for each node you want to use.
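For context on what those nodes do behind the scenes, here is a minimal sketch of an equivalent direct call to Stability AI's hosted API, assuming the v2beta SD3 endpoint and the multipart form fields (`prompt`, `negative_prompt`, `aspect_ratio`, `output_format`) available at the time of the video; verify the exact parameters against the current documentation.

```python
import os
import requests

# The same key you paste into each node's API key override field.
API_KEY = os.environ["STABILITY_API_KEY"]

response = requests.post(
    "https://api.stability.ai/v2beta/stable-image/generate/sd3",
    headers={"Authorization": f"Bearer {API_KEY}", "Accept": "image/*"},
    files={"none": ""},  # forces multipart/form-data encoding
    data={
        "prompt": "a young woman in a foggy street at dawn, film photograph",  # illustrative prompt
        "negative_prompt": "",
        "aspect_ratio": "16:9",
        "output_format": "png",
    },
)
response.raise_for_status()

with open("sd3_output.png", "wb") as f:
    f.write(response.content)
```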
How can I get Stability AI API keys and credits?
-You can get Stability AI API keys and credits by signing up or logging into your Stability AI account, navigating to the account page, and purchasing additional credits if needed.
What are the different nodes available for SD3 in ComfyUI?
-The different nodes available for SD3 in ComfyUI include Stability Image Core, Stability SD3, Stability Remove Background, Stability Creative Upscale, Stability Outpainting, and Stability Inpainting.
How do I input an API key for each node in ComfyUI?
-To input an API key for each node in ComfyUI, you need to click on the API key override field within each node, paste your API key, and confirm it.
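In ComfyUI this is a manual paste into every node; when scripting the API directly, the equivalent is reading the key once, for example from an environment variable (an assumption about where you keep it), and reusing the same auth header for every call:

```python
import os

def stability_headers() -> dict:
    """Build the Authorization header once and reuse it across all Stability calls."""
    key = os.environ.get("STABILITY_API_KEY")
    if not key:
        raise RuntimeError("Set the STABILITY_API_KEY environment variable first")
    return {"Authorization": f"Bearer {key}", "Accept": "image/*"}
```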
What is the purpose of the positive and negative prompt fields in the nodes?
-The positive prompt field is used to guide the AI towards generating an image that matches the desired characteristics, while the negative prompt field helps to avoid undesired elements in the generated image.
How can I adjust the output format and aspect ratio in the SD3 node?
-You can adjust the output format by selecting either PNG or JPEG in the output format field. The aspect ratio can be set by entering the desired ratio, such as 16:9, in the aspect ratio field.
What happens if I encounter an error while using the outpainting node?
-If you encounter an error, such as a payload size being too large, you may need to adjust the settings, like reducing the outpainting size or linking the output from another node, like the upscaler, to resolve the issue.
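In the video this error appeared when a large creative-upscale result was fed straight into the outpainting node. One workaround, sketched below under the assumption that the limit applies to the uploaded image size (the exact limit is not stated), is to downscale the image before sending it:

```python
from PIL import Image

def shrink_for_upload(path: str, max_side: int = 1536) -> str:
    """Downscale an image so its longest side is at most `max_side` pixels.

    `max_side` is an illustrative value, not an official API limit.
    """
    img = Image.open(path)
    scale = max_side / max(img.size)
    if scale < 1.0:
        img = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)
    out_path = "outpaint_input.png"
    img.save(out_path)
    return out_path
```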
What is the inpainting node used for in ComfyUI?
-The inpainting node is used to fill in or modify parts of an image. It requires an image and a mask to define the area to be inpainted, allowing for changes such as a quick change of clothes or a different model in the image.
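For reference, a minimal sketch of the kind of request the inpainting node issues, assuming the v2beta inpaint edit endpoint with `image`, `mask`, and `prompt` multipart fields (treat the field names and mask convention as assumptions to check against the current docs):

```python
import os
import requests

API_KEY = os.environ["STABILITY_API_KEY"]

# photo.png is the source image; mask.png is assumed to be white where the
# image should be regenerated and black where it should be kept.
with open("photo.png", "rb") as img, open("mask.png", "rb") as mask:
    response = requests.post(
        "https://api.stability.ai/v2beta/stable-image/edit/inpaint",
        headers={"Authorization": f"Bearer {API_KEY}", "Accept": "image/*"},
        files={"image": img, "mask": mask},
        data={"prompt": "a red wool coat", "output_format": "png"},  # illustrative prompt
    )

response.raise_for_status()
with open("inpainted.png", "wb") as f:
    f.write(response.content)
```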
How can I test SD3 without spending money?
-Stability AI provides an initial 25 free credits when you sign up for an account. You can use these credits to test out SD3 and its various features in ComfyUI.
Outlines
📝 Introduction to Using Stable Diffusion 3 with ComfyUI
The video begins with an introduction to Stable Diffusion 3 (SD3), noting that it is available via API keys but not yet as a free checkpoint. The presenter explains that using SD3 requires purchasing credits, at roughly 6 US cents per image generation. The video outlines the process of setting up a workflow in ComfyUI to use SD3, including installing the missing nodes from Stability AI's GitHub and entering an API key for each node. The presenter also gives a brief overview of the different nodes available for image manipulation.
💳 Purchasing Credits and Setting API Keys in ComfyUI
The presenter guides viewers through purchasing credits for Stability AI's SD3 on their website, noting the initial 25 free credits and the option to buy more at $10 per 1,000 credits. The process of revealing and copying the API key from the Stability AI account page is shown, with instructions on pasting the key into the API key override field of each relevant node in ComfyUI. The video also covers selecting different models within the SD3 node, such as SD3 and SD3 Turbo, and adjusting fields like positive prompt, negative prompt, seed, and output format.
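The model dropdown in the SD3 node maps to a `model` form field in the underlying API call. Below is a small sketch of how that toggle might look when scripting the call directly; the identifiers `sd3` and `sd3-turbo` are assumptions based on the API naming at the time of the video, and Stability has since added other SD3 variants.

```python
def sd3_form_fields(prompt: str, turbo: bool = False) -> dict:
    """Build the form fields for an SD3 generation request.

    The model identifiers are assumptions, not a guaranteed list.
    """
    return {
        "prompt": prompt,
        "model": "sd3-turbo" if turbo else "sd3",  # turbo uses fewer credits per call
        "aspect_ratio": "16:9",
        "output_format": "png",
    }
```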
🖼️ Generating Images with Core and SD3 Models
The video demonstrates generating images with the Core and SD3 models in ComfyUI. The presenter enters a positive prompt describing a young woman in a specific setting and leaves the negative prompt empty. The images from both models are compared, with the SD3 model producing a more accurate rendering of the clothing and environment described in the prompt. The presenter also reviews the credits consumed by the generations and the remaining balance afterwards.
🎨 Exploring Additional Features: Upscaling, Outpainting, and Inpainting
The presenter explores additional features of the SD3 nodes in ComfyUI, including creative upscaling, outpainting, and inpainting. Each feature is tested with a specific prompt, and the presenter shares observations about the results, such as the level of detail and the influence of the environment on the generated clothing. The video also covers troubleshooting, such as dealing with payload size errors and correcting input fields. The presenter concludes with a brief look at the search and replace feature and an error encountered during its demonstration.
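Of these features, the creative upscaler is the one that behaves asynchronously when called outside ComfyUI: the request returns a generation ID that you poll for the finished image. The sketch below assumes that pattern and the v2beta creative-upscale paths; the exact routes, response fields, and status semantics should be checked against the current documentation.

```python
import os
import time
import requests

API_KEY = os.environ["STABILITY_API_KEY"]
BASE = "https://api.stability.ai/v2beta/stable-image/upscale/creative"
AUTH = {"Authorization": f"Bearer {API_KEY}"}

# Kick off the upscale; the JSON response is assumed to contain a generation id.
with open("low_res.png", "rb") as img:
    start = requests.post(
        BASE,
        headers={**AUTH, "Accept": "application/json"},
        files={"image": img},
        data={"prompt": "highly detailed fashion editorial photograph"},  # illustrative prompt
    )
start.raise_for_status()
generation_id = start.json()["id"]

# Poll until done; a 202 status is assumed to mean "still processing".
while True:
    result = requests.get(f"{BASE}/result/{generation_id}",
                          headers={**AUTH, "Accept": "image/*"})
    if result.status_code != 202:
        break
    time.sleep(10)

result.raise_for_status()
with open("upscaled.png", "wb") as f:
    f.write(result.content)
```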
🚀 Conclusion and Future Testing with SD3
The video concludes with the presenter's overall positive impression of the SD3 models, especially considering they are base models without community fine-tuning. The presenter mentions the inability to get the remove background node working due to an API key input issue but suggests a potential workaround. The video ends with an invitation for viewers to leave prompts in the comments for further testing and provides links to view the generated images. The presenter introduces themselves, provides social media and web contact information, and leaves viewers with a slideshow of the SD3 images generated during the video.
Keywords
💡Stable Diffusion 3
💡API Key
💡ComfyUI
💡Image Generation
💡Credits
💡GitHub
💡Workflow
💡Nodes
💡Positive Prompt
💡Negative Prompt
💡Upscaling
Highlights
Stable Diffusion 3 (SD3) is now available to use with API keys, but not as a free checkpoint.
Using SD3 requires purchasing credits, costing around 6 cents USD per image generation.
Stability AI's official nodes and workflow for ComfyUI are available on their GitHub page.
Missing nodes in ComfyUI can be installed via the manager, requiring a restart after installation.
Each Stability API node in ComfyUI has an API key override field for entering your unique API key.
Stability AI offers different pricing for various models like SD3, SD3 Turbo, and Core.
Credits for image generation can be purchased in increments starting at $10 for 1000 credits.
The API key must be entered manually into each node within the ComfyUI workflow.
Core and SD3 nodes have fields for positive and negative prompts, seed, and output format.
Initial tests with SD3 show promising results, with the model accurately generating images based on prompts.
SD3 Turbo is a cost-effective alternative to SD3, using fewer credits per API call.
The aspect ratio value (16:9) was mistakenly entered in the SD3 node's output format field; it belongs in the aspect ratio field.
Stability Creative Upscale node demonstrated excellent results, enhancing the detail and correcting anatomy in images.
Outpainting node was used to expand the image, maintaining the original perspective and ambience.
Inpainting node allowed for changes in the image, such as altering clothing and models, with some minor issues.
Search and Replace node initially errored because of an incorrectly filled prompt field, but worked once corrected (see the sketch after this list).
The Remove Background node was not functional in the session, possibly due to a missing API key input field.
Users are provided with 25 free credits to test SD3 within ComfyUI without any upfront cost.
Andrea Baioni, the presenter, offers to test user-submitted prompts and shares the results on imgur.
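Finally, a minimal sketch of the request behind the Search and Replace node (the call responsible for the person-to-giant-cat swap in the video), assuming the v2beta search-and-replace edit endpoint with `image`, `search_prompt`, and `prompt` multipart fields; verify the field names against the current Stability AI documentation.

```python
import os
import requests

API_KEY = os.environ["STABILITY_API_KEY"]

with open("street_scene.png", "rb") as img:  # illustrative input image
    response = requests.post(
        "https://api.stability.ai/v2beta/stable-image/edit/search-and-replace",
        headers={"Authorization": f"Bearer {API_KEY}", "Accept": "image/*"},
        files={"image": img},
        data={
            "search_prompt": "a person",  # what to look for in the image
            "prompt": "a giant cat",      # what to replace it with
            "output_format": "png",
        },
    )

response.raise_for_status()
with open("replaced.png", "wb") as f:
    f.write(response.content)
```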