Re-Style Your World: Expert-Level Anime Art with Flux Architecture
1. Workflow Overview

This is an advanced Flux (DiT-architecture) workflow specialized for anime-style ("Re-style") generation, featuring:
- Flux-DiT model replacing the traditional UNet
- ControlNet Upscaler for detail control
- T5-XXL + CLIP-L dual text-encoder system
Key Models:
- `NunchakuFluxDiTLoader`: loads the 4-bit quantized Flux-DiT model
- `Flux-ControlNet-Upscaler`: super-resolution control
- `T5-XXL` + `CLIP-L`: enhanced text understanding
2. Critical Nodes
| Node | Function | Installation |
|---|---|---|
| `NunchakuFluxDiTLoader` | Loads the 4-bit quantized Flux-DiT model (`svdq-int4-flux.1-dev`) | Manual install of the Nunchaku plugin |
| ControlNet apply node | Applies ControlNet constraints | Requires `Flux.1-dev-Controlnet-Upscaler.safetensors` |
| Flux sampling node | Adjusts Flux sampling params (CFG=1.15, noise_offset=0.5) | Built-in node |
Dependencies:
Model Files:
- `svdq-int4-flux.1-dev` → save to `models/fluxdit/`
- `Flux.1-dev-Controlnet-Upscaler.safetensors` → save to `models/controlnet/`
- `t5xxl_fp16.safetensors` → save to `models/clip/`
3. Workflow Structure
Group 1: Model Loading
Nodes: `NunchakuFluxDiTLoader`, `DualCLIPLoader`, `VAELoader`
Group 2: ControlNet Processing
Input: reference image (`ComfyUI_temp_pecyg_00001_...png`)
Params: ControlNet strength=0.6, start=0%, end=54.56%
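The start/end values are fractions of the sampling schedule, so their effect depends on the step count. As a rough illustration (this is arithmetic, not a ComfyUI API call), the 54.56% end point maps to a concrete step index given the 28 steps used in the Generation Core group:

```python
# Illustration: map a ControlNet schedule fraction (0.0-1.0) to a
# 0-based sampler step index for a given step count.
def percent_to_step(percent: float, steps: int) -> int:
    """Clamp the converted index to the valid step range."""
    return min(steps - 1, round(percent * steps))

STEPS = 28                                   # from the Generation Core group
start_step = percent_to_step(0.0, STEPS)     # 0
end_step = percent_to_step(0.5456, STEPS)    # round(15.2768) -> 15
```

In other words, the ControlNet guidance is active for roughly the first 15 of the 28 steps, after which the sampler refines freely.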
Group 3: Generation Core
Key Params: Resolution=768x1024, steps=28, seed=526841747880726
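A quick sanity check on the resolution: the Flux VAE downsamples spatially by a factor of 8, so both dimensions must be divisible by 8, and 768x1024 corresponds to a 96x128 latent (the 8x factor is the standard SD/Flux VAE stride; the helper below is illustrative):

```python
# Quick check: the VAE downsamples by 8x, so the empty latent for a
# 768x1024 image is 96x128. Both dimensions must divide evenly by 8.
VAE_FACTOR = 8

def latent_size(width: int, height: int) -> tuple[int, int]:
    assert width % VAE_FACTOR == 0 and height % VAE_FACTOR == 0
    return width // VAE_FACTOR, height // VAE_FACTOR

print(latent_size(768, 1024))  # (96, 128)
```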
4. Inputs & Outputs
Required Inputs:
- Positive/negative prompts (via two `CLIPTextEncode` nodes)
- Reference image (auto-loaded)
Output:
Final image saved to the `ComfyUI` folder
5. Important Notes
⚠️ Hardware:
- Minimum 12GB VRAM (T5-XXL + Flux model)
- Launch with the `--medvram` flag
🔧 Troubleshooting:
- If the Nunchaku plugin is missing: `cd custom_nodes && git clone https://github.com/T8star/ComfyUI-nunchaku`
- ControlNet strength >0.7 may cause artifacts
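Since strengths above 0.7 risk artifacts, a workflow wrapper or script can clamp user-supplied values before they reach the ControlNet node. This guard-rail helper is illustrative, not part of ComfyUI; the 0.7 ceiling comes from the note above.

```python
# Guard rail (illustrative, not a ComfyUI API): keep the ControlNet
# strength inside the artifact-free range noted above.
MAX_SAFE_STRENGTH = 0.7  # values above this may cause artifacts

def safe_strength(requested: float) -> float:
    """Clamp a requested ControlNet strength to [0.0, MAX_SAFE_STRENGTH]."""
    return max(0.0, min(requested, MAX_SAFE_STRENGTH))

print(safe_strength(0.6))   # 0.6 (this workflow's default, unchanged)
print(safe_strength(0.85))  # 0.7
```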