
How to Install Wan 2.2 with ComfyUI part 1


1. Overview of Wan 2.2

Wan 2.2 is Alibaba Cloud’s newest multimodal video generation model, built on a Mixture of Experts (MoE) architecture with separate high-noise and low-noise expert models. This design improves video quality by routing the denoising work between the two experts according to the timestep. Wan 2.2 delivers:

  • Cinematic-level aesthetic control (lighting, color, composition),

  • Smooth, large-scale motion restoration,

  • Precise semantic adherence in complex scenes.

It supports both text‑to‑video and image‑to‑video generation—ideal for creators, educators, and artists.

2. Open-Source Models & Licensing

Wan 2.2 models are open-source under the Apache 2.0 license, which permits free use, modification, and commercial distribution—provided the original copyright and license are retained.

Available variants:

  • Hybrid model: Wan2.2-TI2V‑5B (5B parameters; supports both text‑to‑video and image‑to‑video)

  • Image-to-video: Wan2.2-I2V-A14B (14B parameters)

  • Text-to-video: Wan2.2-T2V-A14B (14B parameters)

3. Installation Steps

  • Update or install the latest ComfyUI, preferably the Nightly build, to ensure access to the newest templates and nodes.
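For a git-based ComfyUI install, the update step can be sketched as below. `COMFYUI_DIR` is an assumption here; point it at your own checkout:

```shell
# Sketch: update a git-based ComfyUI checkout to the latest (nightly) code.
# COMFYUI_DIR is a hypothetical default -- adjust to your install location.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"

if [ -d "$COMFYUI_DIR/.git" ]; then
    git -C "$COMFYUI_DIR" pull                      # pull the newest commits
    pip install -r "$COMFYUI_DIR/requirements.txt"  # refresh Python dependencies
else
    echo "ComfyUI checkout not found at $COMFYUI_DIR"
fi
```

If you installed ComfyUI through the desktop app or a portable build instead, use that distribution's own updater rather than git.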

  • Open ComfyUI → Workflow → Browse Templates → Video and choose:

    • “Wan2.2 5B video generation” for the hybrid model, or

    • Wan2.2 14B T2V/I2V workflows, if available.
      Templates may only appear if your ComfyUI is fully up to date.

  • Manually download the required model files and place them into your ComfyUI folder structure:

    ComfyUI/
    ├── models/
    │   ├── diffusion_models/
    │   │   └── wan2.2_ti2v_5B_fp16.safetensors   (for the 5B model)
    │   ├── text_encoders/
    │   │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
    │   └── vae/
    │       └── wan2.2_vae.safetensors
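The folder layout above can be created up front so the downloaded files have somewhere to land. A minimal sketch (directory and file names come from this guide; `COMFYUI_DIR` is an assumption):

```shell
# Create the model folders ComfyUI expects (layout from this guide).
# COMFYUI_DIR is a hypothetical default -- adjust to your install.
COMFYUI_DIR="${COMFYUI_DIR:-./ComfyUI}"

mkdir -p "$COMFYUI_DIR/models/diffusion_models" \
         "$COMFYUI_DIR/models/text_encoders" \
         "$COMFYUI_DIR/models/vae"

# The downloaded .safetensors files then go into these folders, e.g.:
#   models/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors
#   models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
#   models/vae/wan2.2_vae.safetensors
ls "$COMFYUI_DIR/models"
```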


    For 14B models, you’ll need the high‑noise and low‑noise diffusion models plus the appropriate VAE model (e.g., wan_2.1_vae).

    Load the models in the workflow nodes:

    1. Load Diffusion Model → your .safetensors file

    2. Load CLIP → the encoder .safetensors

    3. Load VAE → your VAE file
      Adjust the prompt, frame settings, and optionally enable image input (Ctrl+B) before executing with Run or Ctrl+Enter.
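Before hitting Run, it can save a failed generation to check that each of the three loader nodes will actually find its file. A pre-flight sketch, using the 5B filenames from this guide (`COMFYUI_DIR` is an assumption):

```shell
# Pre-flight check: verify the files the three loader nodes need are present.
# Filenames are the 5B setup from this guide; COMFYUI_DIR is an assumption.
COMFYUI_DIR="${COMFYUI_DIR:-./ComfyUI}"

missing=0
for f in \
    "models/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors" \
    "models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors" \
    "models/vae/wan2.2_vae.safetensors"
do
    if [ -f "$COMFYUI_DIR/$f" ]; then
        echo "OK      $f"
    else
        echo "MISSING $f"          # download this file before running
        missing=$((missing + 1))
    fi
done
echo "$missing file(s) missing"
```

For a 14B setup, swap in the high-noise and low-noise diffusion model files and the matching VAE instead.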

4. Pro Tips & Optimization