Unlock Pro-Level Game Character Design with AI-Powered Workflow

ComfyUI.org
2025-05-14 14:42:00

1. Workflow Overview


This workflow, "Game Design Focus: Draft Control + Style Transfer + WebUI Restoration", is optimized for game character design, featuring:

  • Sketch Control: Uses ControlNet (xinsir_controlnet_tile_sdxl and diffusers_xl_depth_mid) for precise structural control.

  • Style Transfer: Applies IPAdapter to transfer reference styles.

  • SDXL Enhancement: Leverages Jianzhen SDXL model with LoRAs (e.g., Wheelchair Fashion).

  • Upscaling: Combines UltimateSDUpscale and SwinIR_4x for HD output.

Core Models:

  • Jianzhen SDXL Wheelchair: Base model (realistic style).

  • IPAdapter-MoE: Multi-style fusion adapter.

  • ControlNet: xinsir_controlnet_tile_sdxl (sketch), diffusers_xl_depth_mid (depth).
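
If you drive ComfyUI through its HTTP API instead of the browser UI, the core models above map onto loader nodes in the API-format prompt graph. A minimal sketch (node IDs are arbitrary; the ControlNet filenames are assumed to carry a .safetensors extension and must match your local copies):

```python
# Minimal ComfyUI API-format fragment loading the core models listed above.
# Node IDs ("1", "2", "3") are arbitrary labels, not fixed values.
prompt_graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "Jianzhen SDXL Wheelchair.safetensors"}},
    "2": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "xinsir_controlnet_tile_sdxl.safetensors"}},
    "3": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "diffusers_xl_depth_mid.safetensors"}},
}
```

The same filenames appear again in the dependency list below, so keeping them consistent here avoids "model not found" errors at queue time.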


2. Key Nodes

  1. IPAdapterUnifiedLoader (Node 564)

    • Function: Loads the IPAdapter model for multi-style fusion (e.g., the "PLUS (high strength)" preset).

    • Install: Via ComfyUI Manager (ComfyUI_IPAdapter_plus).

  2. ControlNetApplyAdvanced (Nodes 427/430)

    • Function: Applies sketch/depth control (weights: sketch 0.25, depth 0.8).

    • Dependency: the comfyui_controlnet_aux package (provides the lineart and depth preprocessors).

  3. UltimateSDUpscale (Node 633)

    • Function: Tile-based upscaling with SwinIR_4x (tile size 512x512).

  4. BNK_CLIPTextEncodeAdvanced (Nodes 645/646)

    • Function: Advanced text encoder with weighted-token support and negative prompts (e.g., "bad anatomy").

  5. Jianzhen SDXL LoRA (Node 657)

    • Function: Enhances details (e.g., armor textures) via Wheelchair Fashion LoRA.
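
To make the two ControlNet passes concrete, here is a toy sketch (not the actual ComfyUI implementation, which operates on tensors) of how chained ControlNetApplyAdvanced calls attach the sketch and depth hints to the same conditioning with their respective strengths:

```python
# Toy model of chained ControlNet application; only the weighting/chaining
# shape mirrors the real node, the data types here are stand-ins.
def apply_controlnet(conditioning, hint, strength, start_pct=0.0, end_pct=1.0):
    """Return new conditioning with (hint, strength, schedule) appended."""
    return [{**c, "controls": c.get("controls", []) + [(hint, strength, (start_pct, end_pct))]}
            for c in conditioning]

cond = [{"prompt": "futuristic soldier, grey armor", "controls": []}]
cond = apply_controlnet(cond, "lineart_hint", 0.25)  # sketch control, weight 0.25
cond = apply_controlnet(cond, "depth_hint", 0.80)    # depth control, weight 0.8
```

Because the calls chain, both hints end up on the same conditioning: the low sketch weight keeps lines loose while the higher depth weight locks in volume.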

Special Dependencies:

  • Model Files:

    • Jianzhen SDXL Wheelchair.safetensors: Download to models/checkpoints.

    • xinsir_controlnet_tile_sdxl: Get from CivitAI, place in models/controlnet.
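
Assuming a standard ComfyUI directory layout (the COMFY path below is an assumption; point it at your actual install), the files go here:

```shell
# Hedged sketch: expected model locations under a standard ComfyUI install.
# COMFY is a placeholder path; change it to your real install directory.
COMFY=./ComfyUI
mkdir -p "$COMFY/models/checkpoints" "$COMFY/models/controlnet"
# After downloading, the tree should contain:
#   $COMFY/models/checkpoints/Jianzhen SDXL Wheelchair.safetensors
#   $COMFY/models/controlnet/xinsir_controlnet_tile_sdxl.safetensors
ls -R "$COMFY/models"
```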


3. Workflow Structure (Groups)

  1. Group 1: Model Loading (Nodes 542/546/657)

    • Loads base model, VAE, and LoRAs.

  2. Group 2: Draft Input & Preprocess (Nodes 655/560)

    • Input: Sketch or design draft (e.g., 02.png).

    • Resize: ImageResizeKJ adjusts resolution to 2048x2048.

  3. Group 3: ControlNet (Nodes 427/430/434/431)

    • Sketch: LineartStandardPreprocessor extracts clean lines.

    • Depth: MiDaS-DepthMapPreprocessor adds 3D perception.

  4. Group 4: IPAdapter Style Transfer (Nodes 564/621/565)

    • Input reference image (e.g., 433383010-xxx.png) for style fusion.

  5. Group 5: Generate & Upscale (Nodes 618/633)

    • Generation: KSampler //Inspire with dpmpp_2m (38 steps).

    • Upscale: UltimateSDUpscale repairs details tile-by-tile.

  6. Group 6: Output & Compare (Nodes 656/318)

    • Saves final image and provides slider comparison (original vs. generated).
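
As a rough illustration of the tile-by-tile repair in Group 5, the sketch below enumerates 512x512 tiles over a 2048x2048 canvas. The overlap value is an assumption for illustration; UltimateSDUpscale uses an overlap margin like this to blend seams between tiles:

```python
def tile_grid(width, height, tile=512, overlap=64):
    """Yield (x, y, w, h) boxes covering the image; stepping by tile - overlap
    makes adjacent tiles share a blending margin."""
    step = tile - overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield (x, y, min(tile, width - x), min(tile, height - y))

tiles = list(tile_grid(2048, 2048))  # 5x5 = 25 overlapping tiles
```

Each box is diffused separately at full resolution, which is why tile size (not image size) drives VRAM use; the low-VRAM tip in the Notes section follows directly from this.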


4. Inputs & Outputs

Inputs:

  • Draft Image: Recommended 768x1280 (text-to-image) or 2048x2048 (image-to-image).

  • Prompts: Detailed descriptions (e.g., "futuristic soldier, grey armor").

  • Reference Image: For style transfer (clean background preferred).
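
Prompt emphasis syntax such as "(blurry:1.3)" is resolved by the advanced text encoder into per-phrase weights. A simplified parser, illustrative only and not the encoder's actual code, looks like this:

```python
import re

def parse_weights(prompt):
    """Split a prompt into (text, weight) pairs: '(text:w)' spans get weight w,
    everything else defaults to 1.0."""
    pairs, pos = [], 0
    for m in re.finditer(r"\(([^:()]+):([\d.]+)\)", prompt):
        if m.start() > pos:
            pairs.append((prompt[pos:m.start()], 1.0))
        pairs.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        pairs.append((prompt[pos:], 1.0))
    return pairs
```

For example, parse_weights("bad anatomy, (blurry:1.3)") yields the plain phrase at weight 1.0 and "blurry" boosted to 1.3.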

Outputs:

  • HD character design (saved as PNG).

  • Slider comparison tool.


5. Notes

  1. Optimization:

    • Reduce UltimateSDUpscale tile size (e.g., 384x384) for low VRAM.

    • Install xformers for faster attention; ComfyUI enables it automatically when present.

  2. Troubleshooting:

    • Sketch control fails → Check ControlNet model paths.

    • Style deviation → Adjust the IPAdapter weight (typical range 0.1-0.8).

  3. Compatibility:

    • Requires ComfyUI ≥ 0.3.30 and PyTorch 2.1+.

    • BNK_CLIPTextEncodeAdvanced needs ComfyUI_ADV_CLIP_emb.
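
When checking the version floors above, compare versions numerically rather than as strings (lexically, "0.3.4" would wrongly sort after "0.3.30"). A small helper:

```python
def version_at_least(version: str, floor: str) -> bool:
    """Numeric dotted-version comparison, e.g. version_at_least('0.3.30', '0.3.30')."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(floor)
```

So version_at_least("0.3.4", "0.3.30") is correctly False, while a plain string comparison would accept it.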
