Unlock Stunning Visuals: A Step-by-Step Guide to a Stable Diffusion Workflow
📝 Workflow Overview

This workflow is designed to generate high-quality images with particle and light effects using Stable Diffusion.
It incorporates FluxGuidance for style enhancement and high-resolution upscaling to ensure final image clarity.
🧠 Core Models
1️⃣ Stable Diffusion (UNet)
Function: The main image generation model responsible for producing images based on text prompts and control inputs.
Model Used: F.1_Depth-fp16_1.0
Installation:
Install via ComfyUI Manager, or manually download the .safetensors file and place it in models/checkpoints.
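
As a quick sanity check after a manual install, a few lines of Python can confirm the checkpoint landed in the expected folder. The ComfyUI root path and exact file name below are assumptions; adjust them to your setup.

```python
from pathlib import Path

# Adjust the root to wherever ComfyUI is installed; the file name is an assumption.
checkpoint = Path("ComfyUI") / "models" / "checkpoints" / "F.1_Depth-fp16_1.0.safetensors"

if checkpoint.exists():
    print(f"Checkpoint found: {checkpoint}")
else:
    print(f"Checkpoint missing, place the .safetensors file at: {checkpoint}")
```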
2️⃣ FluxGuidance
Function: Enhances specific styles, making particle and light effects more pronounced.
Parameter Used: guidance_strength = 2.5
Installation:
Requires the Flux plugin, available through ComfyUI Manager.
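
In ComfyUI's API (JSON) workflow format, the FluxGuidance node can be expressed roughly as the fragment below, shown here as a Python dict. The node IDs and link are placeholders, and the input may be exposed as `guidance` rather than `guidance_strength` depending on the node version, so verify the field name against your install.

```python
# Hypothetical fragment of a ComfyUI API-format workflow (Python dict form).
# ["4", 0] would be a link to the positive CLIPTextEncode output; IDs are placeholders.
flux_guidance_node = {
    "6": {
        "class_type": "FluxGuidance",
        "inputs": {
            "conditioning": ["4", 0],
            "guidance": 2.5,  # the guidance_strength value used in this guide
        },
    },
}
```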
3️⃣ CLIP (Text Encoder)
Function: Converts text prompts into the conditioning vectors used for image generation.
Models Used: clip_l, t5xxl_fp16
Installation:
Install via ComfyUI Manager, or manually place the files in models/clip.
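
If you assemble the graph in API (JSON) form, the two text encoders are typically loaded with a single DualCLIPLoader node along these lines. The file names and the `type` value are assumptions to check against your install.

```python
# Hypothetical DualCLIPLoader entry; exact file names under models/clip may differ.
dual_clip_node = {
    "2": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "clip_l.safetensors",
            "clip_name2": "t5xxl_fp16.safetensors",
            "type": "flux",  # assumed; pick the type that matches the diffusion model
        },
    },
}
```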
4️⃣ VAE (Variational Autoencoder)
Function: Decodes latent images into the final pixel images, preserving color and detail.
Model Used: ae.sft
Installation:
Obtain via ComfyUI Manager, or manually place the file in models/vae.
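
The corresponding API-format entry is a single VAELoader node, roughly as below; the node ID is a placeholder.

```python
# Hypothetical VAELoader entry pointing at the ae.sft file in models/vae.
vae_node = {
    "3": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.sft"},
    },
}
```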
5️⃣ Super-Resolution Upscaling
Function: Increases the resolution of the final output for enhanced clarity.
Model Used: 4x_NMKD-Siax_200k
Installation:
Download 4x_NMKD-Siax_200k.pth and place it in models/upscale_models.
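
In API form, upscaling is usually split into a loader node and the ImageUpscaleWithModel node that consumes it. The sketch below assumes that pairing, with placeholder node IDs and links.

```python
# Hypothetical upscaling pair: load 4x_NMKD-Siax_200k, then apply it to a decoded image.
upscale_nodes = {
    "10": {
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"},
    },
    "11": {
        "class_type": "ImageUpscaleWithModel",
        "inputs": {
            "upscale_model": ["10", 0],  # link to the loader above
            "image": ["9", 0],           # link to a VAEDecode output (placeholder ID)
        },
    },
}
```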
📦 Key Components (Nodes)
| Node | Function |
|---|---|
| UNETLoader | Loads the UNet model (Stable Diffusion). |
| VAELoader | Loads the VAE model. |
| FluxGuidance | Applies Flux-style enhancement to control particle and light effects. |
| CLIPTextEncode | Converts text prompts into conditioning inputs. |
| ConditioningZeroOut | Processes negative prompts to reduce unwanted effects. |
| KSampler | Handles image generation by controlling sampling steps and methods. |
| VAEDecode | Converts latent images into final output images. |
| ImageUpscaleWithModel | Uses an upscaling model to enhance resolution. |
| SaveImage | Saves the final high-resolution image. |
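
Putting the table together, an API-format (JSON) version of the graph might be wired roughly as sketched below, expressed as a Python dict that consolidates the fragments shown earlier. Node IDs, class names, field names, the 1024×1024 resolution, and the seed are all assumptions; EmptyLatentImage and UpscaleModelLoader are added because KSampler and ImageUpscaleWithModel need those inputs even though they are not listed in the table. Treat the workflow JSON exported from ComfyUI as the authoritative wiring.

```python
# Hypothetical ComfyUI API-format workflow (Python dict). Values of the form
# ["id", n] are links meaning "output n of node id". Names are assumptions.
workflow = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "F.1_Depth-fp16_1.0.safetensors", "weight_dtype": "default"}},
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "clip_l.safetensors", "clip_name2": "t5xxl_fp16.safetensors",
                     "type": "flux"}},
    "3": {"class_type": "VAELoader", "inputs": {"vae_name": "ae.sft"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 0],
                     "text": ("Realistic portrait photography, realistic skin texture, "
                              "white background. Best quality, high quality, HD 8K")}},
    "5": {"class_type": "ConditioningZeroOut",          # negative branch
          "inputs": {"conditioning": ["4", 0]}},
    "6": {"class_type": "FluxGuidance",
          "inputs": {"conditioning": ["4", 0], "guidance": 2.5}},
    "7": {"class_type": "EmptyLatentImage",             # assumed, not in the table
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "8": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["6", 0], "negative": ["5", 0],
                     "latent_image": ["7", 0], "seed": 123456, "steps": 6, "cfg": 6.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "9": {"class_type": "VAEDecode", "inputs": {"samples": ["8", 0], "vae": ["3", 0]}},
    "10": {"class_type": "UpscaleModelLoader",          # assumed, not in the table
           "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},
    "11": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["10", 0], "image": ["9", 0]}},
    "12": {"class_type": "SaveImage",
           "inputs": {"images": ["11", 0], "filename_prefix": "flux_particles"}},
}
```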
📂 Major Workflow Groups
1️⃣ Text Encoding
Function: Converts text descriptions into conditioning vectors for Stable Diffusion.
Key Components:
DualCLIPLoader
CLIPTextEncode
ConditioningZeroOut
2️⃣ Flux Style Enhancement
Function: Adjusts the Flux style to emphasize particle and light effects.
Key Components:
FluxGuidance
guidance_strength = 2.5
3️⃣ Sampling & Generation
Function: Generates images based on input conditions.
Key Components:
KSampler
Sampling Parameters:
Euler
6 sampling steps
CFG Scale = 6
Fixed Seed
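
These settings correspond to the KSampler entry (node "8") in the hypothetical workflow dict sketched above; adjusting only the sampling fields on that dict might look like this.

```python
# Tweak only the sampling settings on the hypothetical workflow dict from the earlier sketch.
workflow["8"]["inputs"].update({
    "seed": 123456,          # fixed seed for reproducible output; randomize if preferred
    "sampler_name": "euler",
    "steps": 6,
    "cfg": 6.0,
})
```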
4️⃣ High-Resolution Upscaling
Function: Enhances the final image resolution for better quality.
Key Components:
ImageUpscaleWithModel
4x_NMKD-Siax_200k
🔢 Inputs & Outputs
📥 Main Inputs
Text Prompt (describing key aspects of the particle and light effects)
"Realistic portrait photography, realistic skin texture, white background. Best quality, high quality, HD 8K"
Sampling Parameters:
Seed Value (Fixed or Random)
Sampling Method (Euler)
Sampling Steps (6)
CFG Scale (6)
📤 Main Outputs
Final high-quality image with light and particle effects
Upscaled high-resolution version
Output suitable for special effects or artistic compositions
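
To turn these inputs into outputs, the assembled workflow is queued against a running ComfyUI instance. The sketch below reuses the hypothetical `workflow` dict from earlier and assumes the default local address and ComfyUI's /prompt endpoint; adjust both for your setup.

```python
import json
import urllib.request

# Submit the workflow sketched earlier to a locally running ComfyUI instance.
# Address, port, and endpoint are assumptions based on a default local install.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # prints the queued prompt info on success
```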
⚠️ Important Considerations
Hardware Requirements
Requires a GPU with at least 8 GB of VRAM (12 GB+ recommended).
FluxGuidance and super-resolution upscaling are memory-intensive.
Flux Style Adjustments
Adjust guidance_strength to control the intensity of the particle and light effects (a small sweep sketch follows this section).
Sampling Parameter Optimization
Euler is faster, while DPM++ provides better quality.
A CFG Scale above 7 may cause oversaturation; a value of 6–7 is recommended.
Super-Resolution Upscaling
Using 4x_NMKD-Siax_200k produces high-quality upscaling but increases processing time.
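
Rather than tuning guidance_strength and CFG by hand, a small sweep can render the same prompt at a few settings for side-by-side comparison. This builds on the hypothetical workflow dict and node IDs from the earlier sketches.

```python
import copy

# Render a small sweep for comparison; node "6" is FluxGuidance, node "8" is KSampler
# in the hypothetical workflow dict sketched earlier.
for guidance in (1.5, 2.5, 3.5):
    for cfg in (6.0, 7.0):
        variant = copy.deepcopy(workflow)
        variant["6"]["inputs"]["guidance"] = guidance
        variant["8"]["inputs"]["cfg"] = cfg
        variant["12"]["inputs"]["filename_prefix"] = f"sweep_g{guidance}_cfg{cfg}"
        # queue `variant` with the same /prompt call shown earlier
```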
Conclusion
This ComfyUI workflow enhances particle and light effects using FluxGuidance, generates high-resolution special effects images with Stable Diffusion, and applies super-resolution upscaling for improved clarity.
It is ideal for sci-fi effects, artistic creations, and cinematic visuals.