
How to Use Wan 2.2 and 2.6 Locally and Online: Full Review & Tutorial

Last Updated on Jan. 9, 2026 - by Cyberlink AI Research Team

If you value open source freedom and customization, Wan 2.2 is one of the most compelling AI video models available, and it is quickly gaining attention among creators who want realistic character motion. Power users can unlock its full potential locally, while creators without technical backgrounds can still benefit through third-party platforms.

In this guide, you will learn what Wan 2.2 is, how to use Wan 2.2 online through an easier platform like MyEdit, how to run Wan 2.2 locally using ComfyUI, and how it compares with other popular AI video models.




What Is Wan 2.2? Features, Pricing & Advantages

Wan 2.2 is Alibaba Tongyi Lab’s flagship AI video generation model, designed to make cinematic-quality video creation accessible to everyone. By combining cutting-edge AI with an open source philosophy, Wan 2.2 allows creators and businesses to generate professional videos from text or image prompts without expensive equipment or technical barriers.

AI Videos Generated by Wan 2.2

Built for both flexibility and high performance, Wan 2.2 delivers sharp 1080p visuals, realistic motion, and seamless effects. Its open, collaborative design encourages developers and creative communities worldwide to experiment, innovate, and push the boundaries of AI-driven storytelling.


The Evolution: From Wan 2.1 to the Power of Wan 2.6


Wan 2.2 Key Features


Wan 2.2 Advantages & Benefits


How to Use Wan 2.2 Online for Character Motion Swap & Animation

If you prefer cloud-based tools, lack GPU hardware, or are unfamiliar with ComfyUI workflows and dependencies, MyEdit offers a streamlined way to experience Wan 2.2-powered motion animation.

  1. Open MyEdit in your web browser and head to Character Motion Swap.
  2. Upload your photo and add a reference video.
    💡For better results, use footage of a single character with an aspect ratio similar to the uploaded image.
  3. Choose between the original photo background or the background from the video.
  4. Click "Generate." Generation time depends on your footage length, but it usually takes 5 to 15 minutes. You can continue working on other projects while you wait for your first Wan 2.2 AI video.
    Sample photo + reference video = generated Wan 2.2 result

If you do not have motion ideas or reference videos, MyEdit also offers various built-in templates, perfect for social media posts, short videos and creative experimentation.

Get a sneak peek at MyEdit's Image to Video templates.


How to Run Wan 2.2 Locally Step by Step

Running Wan 2.2 locally gives you maximum control but requires technical setup and sufficient hardware. For users who want results without the setup, a third-party platform that integrates Wan 2.2 can save significant time and frustration.

  1. Prepare Your Computer

    · GPU: NVIDIA RTX 3060 or higher recommended, 8GB VRAM minimum, 12GB+ ideal
    · CPU: Modern multi-core processor
    · RAM: 16GB minimum, 32GB recommended
    · Storage: SSD with at least 50GB free space
    · OS: Windows or Linux
    · CUDA and compatible GPU drivers installed

  2. Install ComfyUI

    Download ComfyUI from its official repository. Then, install Python and the required dependencies. Launch ComfyUI and confirm the interface loads correctly (a quick environment check sketch follows this list).

  3. Add Wan 2.2 Model Files

    Download Wan 2.2 model checkpoints. Place files into the correct ComfyUI model directories. Restart ComfyUI to load the model.

  4. Build a Wan 2.2 Workflow

    Load the image input and reference motion video nodes. Connect the Wan 2.2 motion transfer nodes. Adjust parameters such as frame length and resolution. Then generate and export the video (see the queueing sketch below for a scripted alternative).
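
If you are comfortable running a short script, the sketch below is a quick way to sanity-check steps 1 and 3 before generating anything. It assumes PyTorch is already installed; the ComfyUI folder names and the .safetensors extension are assumptions based on common setups, so adjust them to match the checkpoints you actually downloaded.

```python
# Pre-flight check for a local Wan 2.2 setup: confirms the GPU is visible to
# PyTorch, reports VRAM, and looks for model files in typical ComfyUI folders.
# Folder names and file extensions are assumptions and may differ in your install.
from pathlib import Path

import torch

# 1. GPU and CUDA driver check (roughly step 1 of the guide).
if not torch.cuda.is_available():
    raise SystemExit("No CUDA GPU detected - check your NVIDIA drivers and CUDA install.")

gpu = torch.cuda.get_device_properties(0)
vram_gb = gpu.total_memory / 1024**3
print(f"GPU: {gpu.name}, VRAM: {vram_gb:.1f} GB")
if vram_gb < 8:
    print("Warning: under 8 GB VRAM - Wan 2.2 generation may fail or be very slow.")

# 2. Model file check (roughly step 3 of the guide). Point comfy_root at your
#    ComfyUI install and adjust the subfolders to wherever you placed the checkpoints.
comfy_root = Path("ComfyUI")
for sub in ("models/diffusion_models", "models/text_encoders", "models/vae"):
    folder = comfy_root / sub
    files = list(folder.glob("*.safetensors")) if folder.exists() else []
    status = f"{len(files)} file(s) found" if files else "nothing found yet"
    print(f"{sub}: {status}")
```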

💡If you need a more detailed guide for running Wan 2.2 in ComfyUI, check out the Official Wan 2.2 and Wan 2.2 Animate workflow tutorial by ComfyUI.
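
Once your workflow runs in the ComfyUI interface, it can also be queued from a script through ComfyUI's local HTTP API, which is handy for batch jobs. This is only a minimal sketch: it assumes the server is running at its default address (127.0.0.1:8188) and that the workflow was exported with "Save (API Format)"; the node id and input names in the comments are placeholders rather than part of any fixed schema.

```python
# Minimal sketch of queueing a saved Wan 2.2 workflow through ComfyUI's local
# HTTP API, assuming the default address (127.0.0.1:8188) and a workflow
# exported via "Save (API Format)" as workflow_api.json.
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Optional: tweak parameters directly in the exported graph before queueing.
# The node id "3" and the input names below are placeholders - open your own
# JSON export and use the ids and field names it actually contains.
# workflow["3"]["inputs"]["width"] = 1280
# workflow["3"]["inputs"]["height"] = 720

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    # ComfyUI responds with JSON that includes the queued prompt id.
    print(response.read().decode("utf-8"))
```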


Who Benefits Most From Wan 2.2 Character Motion Swap

Wan 2.2 is particularly valuable for creators and professionals who need realistic motion without filming.


What Sets Wan 2.2 Apart From Other AI Video Models

We compared Wan 2.2 with other popular AI video models on the market, focusing mainly on the Motion Swap feature.

Model    | Open Source   | Local Run | Motion Swap Quality                          | Ease of Use
Wan 2.2  | Yes           | Yes       | High                                         | Medium
Veo 3    | No            | No        | Very High (in conjunction with other tools)  | Easy
Kling AI | No            | No        | High                                         | Easy
Flux     | Yes (partial) | Yes       | Medium (in conjunction with other tools)     | Medium

Common Limitations of Wan 2.2

While Wan 2.2 is a revolutionary video model, it still has limitations. Recognizing these helps in setting realistic expectations and choosing the right video genres for the tool.


FAQs About Wan 2.2

What hardware do I need to run Wan 2.2 locally?

  • GPU: NVIDIA RTX 3060 or higher recommended, 8GB VRAM minimum, 12GB+ ideal
  • CPU: Modern multi-core processor
  • RAM: 16GB minimum, 32GB recommended
  • Storage: SSD with at least 50GB free space
  • OS: Windows or Linux
  • CUDA and compatible GPU drivers installed

Is Wan 2.2 free to use?
Yes. Wan 2.2 is open source and free to download and use.

How much does it cost to run Wan 2.2?
The model itself costs nothing. Costs only come from hardware, electricity, or cloud GPU usage.

How does Wan 2.2 compare with closed-source AI video models?
Wan 2.2 excels in flexibility and open source control, while closed models often prioritize ease of use and a polished user experience.

Is Wan 2.2 suitable for beginners?
Beginners may find the local setup challenging, but they can easily use Wan 2.2 through integrated platforms such as MyEdit.

How much does it cost to use Wan 2.2 through MyEdit?
MyEdit typically uses a credit-based or subscription pricing model, with costs varying depending on the footage length.

Do I need technical skills to use Wan 2.2 online?
No. Online platforms handle the technical complexity, allowing users to generate videos with simple uploads and clicks.

Cyberlink AI Research Team - AI Trends & Tool Review Specialists

Cyberlink AI Research Team specializes in tracking the fast-moving world of generative AI. We deliver clear insights that help creators navigate emerging tools and technologies. Our work focuses on competitive AI tool analysis and trends across video and photo creation, helping creators stay ahead in an ever-evolving AI landscape.
