Unlock Pro-Level Game Character Design with an AI-Powered Workflow
1. Workflow Overview

This workflow, "Game Design Focus: Draft Control + Style Transfer + WebUI Restoration", is optimized for game character design, featuring:
- Sketch Control: Uses ControlNet (`xinsir_controlnet_tile_sdxl` and `diffusers_xl_depth_mid`) for precise structure.
- Style Transfer: Applies `IPAdapter` to transfer reference styles.
- SDXL Enhancement: Leverages the Jianzhen SDXL model with LoRAs (e.g., `Wheelchair Fashion`).
- Upscaling: Combines `UltimateSDUpscale` and `SwinIR_4x` for HD output.
Core Models:
- Jianzhen SDXL Wheelchair: Base model (realistic style).
- IPAdapter-MoE: Multi-style fusion adapter.
- ControlNet: `xinsir_controlnet_tile_sdxl` (sketch), `diffusers_xl_depth_mid` (depth).
2. Key Nodes
- IPAdapterUnifiedLoader (Node 564)
  - Function: Loads `IPAdapter` for multi-style fusion (e.g., the `PLUS (high strength)` preset).
  - Install: Via ComfyUI Manager (`ComfyUI_IPAdapter_plus`).
 
- ControlNetApplyAdvanced (Nodes 427/430)
  - Function: Applies sketch/depth control (weights: sketch `0.25`, depth `0.8`).
  - Dependency: `comfyui_controlnet_aux` package.
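The two weights act as per-branch strengths on the residuals each ControlNet injects into the base model. A minimal pure-Python sketch of that weighting (an illustration only, not ComfyUI's actual implementation; `apply_controls` and the toy numbers are invented for the example):

```python
# Toy sketch of per-ControlNet strength weighting: each control's residual
# is scaled by its weight (sketch 0.25, depth 0.8) before being added to
# the base features. Illustrative only, not ComfyUI's real code path.

def apply_controls(base_features, controls):
    """controls: list of (residual, strength) pairs; all flat float lists."""
    out = list(base_features)
    for residual, strength in controls:
        for i, r in enumerate(residual):
            out[i] += strength * r
    return out

features = [1.0, 1.0, 1.0]
sketch_residual = [0.4, 0.4, 0.4]  # stand-in for xinsir_controlnet_tile_sdxl output
depth_residual = [0.2, 0.2, 0.2]   # stand-in for diffusers_xl_depth_mid output

# Low sketch weight keeps line adherence loose; high depth weight enforces form.
result = apply_controls(features, [(sketch_residual, 0.25), (depth_residual, 0.8)])
```

Raising the sketch weight toward `1.0` would make the output hug the draft's lines more tightly, at the cost of stylistic flexibility.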
 
- UltimateSDUpscale (Node 633)
  - Function: Tile-based upscaling with `SwinIR_4x` (tile size `512x512`).
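To see why tile size matters, here is a rough sketch of the tile grid such an upscaler walks for a `2048x2048` output at `512x512` tiles. The `64`-pixel overlap is an assumed value, and the real node also blends seams and clamps edge tiles:

```python
def tile_grid(width, height, tile, overlap=64):
    """Return (x, y) origins of overlapping tiles covering the image.
    Edge tiles may extend past the border; real implementations clamp/pad."""
    step = tile - overlap
    xs = range(0, max(width - overlap, 1), step)
    ys = range(0, max(height - overlap, 1), step)
    return [(x, y) for y in ys for x in xs]

tiles = tile_grid(2048, 2048, 512)
print(len(tiles))  # 25 tiles -> 25 separate diffusion passes for one upscale
```

Each tile is a full diffusion pass, which is why the upscale stage dominates total runtime.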
 
- BNK_CLIPTextEncodeAdvanced (Nodes 645/646)
  - Function: Advanced text encoding with negative prompts (e.g., "bad anatomy").
 
- Jianzhen SDXL LoRA (Node 657)
  - Function: Enhances details (e.g., armor textures) via the `Wheelchair Fashion` LoRA.
 
Special Dependencies:
- Model Files:
  - `Jianzhen SDXL Wheelchair.safetensors`: Download to `models/checkpoints`.
  - `xinsir_controlnet_tile_sdxl`: Get from CivitAI, place in `models/controlnet`.
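A small sanity check on file placement can save a failed first run. The ComfyUI root path and the `.safetensors` extension on the ControlNet file are assumptions here:

```python
from pathlib import Path

REQUIRED = {
    "models/checkpoints": ["Jianzhen SDXL Wheelchair.safetensors"],
    "models/controlnet": ["xinsir_controlnet_tile_sdxl.safetensors"],  # assumed extension
}

def missing_models(root="ComfyUI"):
    """Return the paths from REQUIRED that do not exist under root."""
    base = Path(root)
    return [str(base / d / f)
            for d, files in REQUIRED.items()
            for f in files
            if not (base / d / f).exists()]

print(missing_models())  # empty list when both files are installed
```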
 
3. Workflow Structure (Groups)
- Group 1: Model Loading (Nodes 542/546/657) - Loads base model, VAE, and LoRAs. 
 
- Group 2: Draft Input & Preprocess (Nodes 655/560)
  - Input: Sketch or design draft (e.g., `02.png`).
  - Resize: `ImageResizeKJ` adjusts resolution to `2048x2048`.
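The resize arithmetic can be sketched as a simple aspect-preserving fit into the `2048x2048` working area (a hand-rolled illustration; `ImageResizeKJ`'s exact options and rounding may differ):

```python
def fit_within(w, h, target=2048):
    """Scale (w, h) so the longer side equals target, keeping aspect ratio."""
    scale = target / max(w, h)
    return round(w * scale), round(h * scale)

# A 768x1280 draft becomes 1229x2048 rather than being stretched square.
print(fit_within(768, 1280))
```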
 
- Group 3: ControlNet (Nodes 427/430/434/431)
  - Sketch: `LineartStandardPreprocessor` extracts clean lines.
  - Depth: `MiDaS-DepthMapPreprocessor` adds 3D depth perception.
 
- Group 4: IPAdapter Style Transfer (Nodes 564/621/565)
  - Input: Reference image (e.g., `433383010-xxx.png`) for style fusion.
 
- Group 5: Generate & Upscale (Nodes 618/633)
  - Generation: `KSampler //Inspire` with `dpmpp_2m` (38 steps).
  - Upscale: `UltimateSDUpscale` repairs details tile by tile.
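For batch runs, the whole workflow can be queued over ComfyUI's local HTTP API (`POST /prompt` with the exported API-format workflow JSON). The workflow fragment, server address, and `client_id` below are assumptions for illustration:

```python
import json
import urllib.request

def build_payload(workflow, client_id="game-design-demo"):
    # ComfyUI expects the API-format workflow under the "prompt" key.
    return {"prompt": workflow, "client_id": client_id}

def queue(workflow, host="http://127.0.0.1:8188"):
    # Requires a running local ComfyUI instance.
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(f"{host}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

# Hypothetical fragment of the exported workflow: Node 618's sampler settings.
workflow = {"618": {"class_type": "KSampler //Inspire",
                    "inputs": {"sampler_name": "dpmpp_2m", "steps": 38}}}
workflow["618"]["inputs"]["steps"] = 30  # fewer steps for quick previews
```

Editing the exported JSON before queueing is a convenient way to sweep step counts or prompts without touching the graph in the UI.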
 
- Group 6: Output & Compare (Nodes 656/318) - Saves final image and provides slider comparison (original vs. generated). 
 
4. Inputs & Outputs
Inputs:
- Draft Image: Recommended `768x1280` (text-to-image) or `2048x2048` (image-to-image).
- Prompts: Detailed descriptions (e.g., "futuristic soldier, grey armor"). 
- Reference Image: For style transfer (clean background preferred). 
Outputs:
- HD character design (saved as PNG). 
- Slider comparison tool. 
5. Notes
- Optimization:
  - Reduce `UltimateSDUpscale` tile size (e.g., `384x384`) for low VRAM.
  - Enable `--xformers` for faster inference.
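The VRAM trade-off behind the smaller tile is simple area arithmetic: peak memory per tile scales roughly with pixel count, at the cost of more passes (the overlap-free tile count below is a simplification):

```python
def tile_stats(image=2048, tile=512):
    """(pixels per tile, number of tiles), ignoring overlap for simplicity."""
    return tile * tile, (image // tile) ** 2

px_512, n_512 = tile_stats(tile=512)
px_384, n_384 = tile_stats(tile=384)

# 384x384 tiles need ~44% less per-tile memory, but more tiles are processed.
print(px_384 / px_512, n_384, n_512)
```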
 
- Troubleshooting:
  - Sketch control fails → Check ControlNet model paths.
  - Style deviation → Adjust the `IPAdapter` weight (typical range `0.1-0.8`).
 
- Compatibility:
  - Requires ComfyUI ≥ 0.3.30 and PyTorch 2.1+.
  - `BNK_CLIPTextEncodeAdvanced` needs `ComfyUI_ADV_CLIP_emb`.