Transform Your Videos into Anime-Style Masterpieces with Advanced AI Models
1. Workflow Overview

This workflow specializes in AI-powered video stylization and generation, featuring:
- Transforming input videos (e.g., `dance.mp4`) into anime-style sequences
- Frame-to-frame consistency via the WAN2.1 model series
- Customizable prompts for character/background control
2. Core Models
| Model Name | Function |
| --- | --- |
| WAN2.1 diffusion model | Base video generation model for motion prediction |
| WAN2.1 text encoder | Multilingual CLIP encoder (optimized for Chinese prompts) |
| WAN2.1 VAE | VAE for latent/image space conversion |
3. Key Nodes
Video Input Group
- `VHS_LoadVideo`: Frame loading (supports frame skipping)
- `ImageScale`: Resizes frames to 832x480 (model-compatible resolution; see the sketch below)
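For intuition, here is a minimal Python sketch of what this group does, using OpenCV rather than the actual nodes (the function and parameter names are illustrative, not part of the workflow):

```python
import cv2

def load_and_scale(path, width=832, height=480, frame_skip=0):
    """Rough equivalent of VHS_LoadVideo + ImageScale: read frames,
    optionally skipping some, and resize to the model's working resolution."""
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # frame_skip=1 keeps every 2nd frame, frame_skip=2 every 3rd, etc.
        if frame_skip and i % (frame_skip + 1) != 0:
            i += 1
            continue
        frames.append(cv2.resize(frame, (width, height),
                                 interpolation=cv2.INTER_LANCZOS4))
        i += 1
    cap.release()
    return frames
```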
Prompt Control
- Triple `CLIPTextEncode`: Processes positive/negative/target-style prompts
- `FluxGuidance`: Dynamic conditioning strength (value=11 for high stylization; see the guidance sketch below)
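The guidance value scales how strongly sampling is steered toward the text conditioning. `FluxGuidance` embeds the value into the conditioning itself, but the underlying idea is the classic classifier-free guidance update, sketched below (illustrative, not the node's internals):

```python
import torch

def apply_guidance(uncond: torch.Tensor, cond: torch.Tensor,
                   scale: float = 11.0) -> torch.Tensor:
    # Classifier-free guidance: extrapolate from the unconditional noise
    # prediction toward the conditional one. Higher scales (e.g., 11) push
    # the result harder toward the prompt -- stronger stylization, at the
    # cost of fidelity to the source frames.
    return uncond + scale * (cond - uncond)
```

By comparison, the CFG Scale default of 7.5 listed under Input Parameters is a more conservative setting.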
Sampling & Generation
- `HYFlowEditSampler`: Custom sampler (4th-order DPM++ Solver, seed=123478)
- `VAEDecodeTiled`: Tile-based decoding (256x64 tiles; see the sketch below)
Video Export
- `VHS_VideoCombine`: Outputs MP4 (16 fps, CRF 19 compression; equivalent ffmpeg settings sketched below)
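The export settings map onto standard H.264 encoder options. Roughly equivalent in plain Python plus ffmpeg (assumes ffmpeg on PATH and a folder of numbered PNG frames; this is not how the node itself invokes the encoder):

```python
import subprocess

def encode_frames(frame_pattern: str = "frames/%05d.png",
                  out_path: str = "out.mp4") -> None:
    # 16 fps input rate, H.264 with CRF 19 (visually near-lossless),
    # yuv420p pixel format for broad player compatibility.
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", "16",
        "-i", frame_pattern,
        "-c:v", "libx264",
        "-crf", "19",
        "-pix_fmt", "yuv420p",
        out_path,
    ], check=True)
```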
4. Dependencies
Required Plugins (install sketch below):
- ComfyUI-VideoHelperSuite
- ComfyUI-HunyuanLoom (contains custom nodes like `HYFlowEditGuiderCFG`)

Model Sources:
- Download WAN2.1 models from HuggingFace
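Both plugins install like any ComfyUI custom node: clone into `custom_nodes` and restart. A sketch, with the repository URLs assumed to be the usual upstream locations (verify before use):

```python
import subprocess
from pathlib import Path

# Assumed upstream locations -- check before cloning.
PLUGINS = [
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
    "https://github.com/logtd/ComfyUI-HunyuanLoom",
]

def install_plugins(comfy_root: str = "ComfyUI") -> None:
    dest = Path(comfy_root) / "custom_nodes"
    for url in PLUGINS:
        subprocess.run(["git", "clone", url], cwd=dest, check=True)
```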
5. Input Parameters
Mandatory:
- Video file path (e.g., `dance.mp4`)
- Positive prompt (see example for detailed anime-style description)

Tunable:
- CFG Scale (default=7.5)
- Frame rate (default=16 fps)

Both groups can also be set programmatically; see the API sketch below.
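To script these parameters instead of editing them in the graph, export the workflow in API format, patch the relevant node inputs, and submit it to ComfyUI's `/prompt` endpoint. The node IDs and input names below are hypothetical placeholders; read the real ones from your own export:

```python
import json
import urllib.request

def queue_workflow(workflow_path: str, video: str, cfg: float = 7.5) -> dict:
    with open(workflow_path) as f:
        wf = json.load(f)
    # Hypothetical node IDs -- look them up in your API-format export.
    wf["10"]["inputs"]["video"] = video   # VHS_LoadVideo node
    wf["42"]["inputs"]["cfg"] = cfg       # sampler node
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": wf}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```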
6. Notes
⚠️ Requires 12GB+ VRAM (video batch processing)
⚠️ Initial run triggers model compilation (~2-5 mins)
⚠️ Outputs saved as `hyloom_*.mp4` in `ComfyUI/output/`