Wan V2.7 Video

Key Features of Wan V2.7 Video

Unified Multi-Workflow Video Creation

Wan 2.7 Video exposes four practical workflows in one model family: text/image-to-video, reference-to-video, first-last-frame generation, and edit-video. This lets teams move from ideation to revision without switching to unrelated models.

Example 1

Subject Reference

Prompt

Extreme close-up of rich dark chocolate being poured in slow motion over a layered cake. The glossy chocolate cascades down the sides, coating every surface with a perfect mirror-like sheen. Cocoa powder dusts through the air like smoke. Shallow depth of field, warm studio lighting, food photography aesthetic. The scene is luxurious and indulgent.

Result

Example 2

Subject Reference

Subject Reference

Prompt

The referenced vintage car cruises along a winding coastal highway at sunset. Waves crash on cliffs below. The car's chrome bumpers and red paint gleam in the golden light. Camera follows from a helicopter angle. Classic Americana road trip feel.

Result

Example 3

Subject Reference

Subject Reference

Prompt

The massive humpback whale glides slowly through the deep blue water. It turns gracefully, its huge pectoral fin sweeping through the water like a wing. Sunbeams penetrate from above, illuminating the whale's textured skin. Small fish scatter. Awe-inspiring scale and grace.

Result

Example 4

Subject Reference

Prompt

Transform the entire scene into a beautiful watercolor painting style. Soft brushstrokes, flowing paint washes, visible paper texture. Colors should bleed and blend naturally like wet watercolor on paper.

Result

Reference And Identity Control

Reference mode supports image and video references so creators can anchor character identity, style, and shot intent across generated clips. In the integrated schema, reference_image_urls and video_refs each support up to 8 inputs.
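As a rough illustration of how those limits might be enforced client-side, the sketch below builds a reference-to-video request payload. The field names `reference_image_urls` and `video_refs` and the 8-item cap come from the schema described above; everything else (the helper name, the example prompt and URL) is illustrative.

```python
# Sketch of a reference-to-video payload builder. Field names
# reference_image_urls / video_refs and the cap of 8 follow the
# Wan 2.7 schema described above; the helper itself is hypothetical.
MAX_REFS = 8  # each reference list supports up to 8 inputs

def build_reference_payload(prompt, image_refs=(), video_refs=()):
    if len(image_refs) > MAX_REFS or len(video_refs) > MAX_REFS:
        raise ValueError(f"each reference list is capped at {MAX_REFS} items")
    payload = {"prompt": prompt}
    if image_refs:
        payload["reference_image_urls"] = list(image_refs)
    if video_refs:
        payload["video_refs"] = list(video_refs)
    return payload

payload = build_reference_payload(
    "Keep character identity stable while changing camera movement.",
    image_refs=["https://example.com/hero.png"],
)
```

Validating reference counts before submission avoids a round trip that would fail server-side once either list exceeds 8 items.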

Example 1

Subject Reference

Subject Reference

Prompt

Use image references to keep character identity and visual language stable while changing camera movement and scene rhythm.

Result

Example 2

Subject Reference

Prompt

Use a source video reference to preserve motion rhythm and subject continuity while restyling the scene.

Result

Frame-Aware Motion Planning

The frames workflow accepts a start image and an optional end image, which is useful for transitions, reveal shots, and controlled motion arcs where the camera or subject trajectory matters.
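A minimal sketch of a first-last-frame request, assuming only what the section above states: a start image is required and an end image is optional. The field names `start_image_url` and `end_image_url` are assumptions for illustration, not confirmed schema names.

```python
# Hypothetical first-last-frame payload: start image required,
# end image optional, per the frames workflow described above.
# Field names are assumed for this sketch.
def build_frames_payload(prompt, start_image_url, end_image_url=None):
    payload = {"prompt": prompt, "start_image_url": start_image_url}
    if end_image_url is not None:
        payload["end_image_url"] = end_image_url  # anchors the motion arc's endpoint
    return payload

one_frame = build_frames_payload("slow dolly forward", "https://example.com/start.png")
two_frame = build_frames_payload(
    "crossfade reveal",
    "https://example.com/start.png",
    "https://example.com/end.png",
)
```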

Example 1

Motion Source

Motion Source

Prompt

From a start frame, generate coherent cinematic movement with smooth camera travel and stable subject trajectory.

Result

Instruction-Based Video Editing

Edit-video mode accepts an input video_url, then applies style or content instructions while preserving core motion structure. You can also choose audio handling (auto or origin) and optionally keep source duration.
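The options above (input `video_url`, `auto`/`origin` audio handling, optional source-duration preservation) might be assembled like this. Only `video_url` and the two audio values are taken from the text; the other parameter names are assumptions.

```python
# Sketch of an edit-video request. video_url and the audio values
# "auto" / "origin" come from the description above; the remaining
# parameter names are illustrative assumptions.
def build_edit_payload(video_url, prompt, audio="auto", keep_source_duration=False):
    if audio not in ("auto", "origin"):
        raise ValueError("audio must be 'auto' or 'origin'")
    return {
        "video_url": video_url,
        "prompt": prompt,
        "audio": audio,  # "origin" keeps the source track, "auto" lets the model decide
        "keep_source_duration": keep_source_duration,
    }

edit = build_edit_payload(
    "https://example.com/clip.mp4",
    "Restyle as watercolor while preserving motion.",
    audio="origin",
    keep_source_duration=True,
)
```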

Example 1

Subject Reference

Prompt

Edit the source clip into watercolor style while preserving original timing, movement continuity, and pacing.

Result

Practical Resolution And Duration Ranges

The Wan 2.7 Video schema supports 720p and 1080p output tiers. Duration options cover short-form production windows: up to 15 seconds in the generation workflows and up to 10 seconds in the more constrained reference and edit paths.
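Those ranges can be summarized as a small validation table. The resolution tiers and the 15 s / 10 s caps come from the section above; the workflow keys are assumed names for this sketch.

```python
# Duration caps per workflow, per the ranges stated above.
# Workflow key names are assumptions for illustration.
DURATION_CAPS_S = {
    "text_to_video": 15,
    "image_to_video": 15,
    "reference_to_video": 10,
    "edit_video": 10,
}

def validate_output_settings(workflow, resolution, duration_s):
    if resolution not in ("720p", "1080p"):
        raise ValueError("resolution must be '720p' or '1080p'")
    cap = DURATION_CAPS_S[workflow]
    if not 0 < duration_s <= cap:
        raise ValueError(f"{workflow} supports at most {cap} seconds")
    return True
```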

Example 1

Prompt

Generate a short-form cinematic clip at 1080p with stable motion and visual coherence suitable for social or promo outputs.

Result

How To Use Wan V2.7 Video AI Video Model on skills.video

01

Select the Wan V2.7 Video model

Head to the create page and choose this model from the dropdown list.

02

Input your detailed prompt

Describe the scene, style, and motion you want. Adjust settings as needed.

03

Download your video

Click create, then download or share once the generation finishes.

FAQs

What is Wan 2.7 Video?

Wan 2.7 Video is Alibaba's open video model family available on fal, focused on high-quality generation and editing across text, image, reference, frame, and video inputs.

Where can I access Wan 2.7 workflows?

On fal, Wan 2.7 is exposed as dedicated endpoints for text-to-video, image-to-video, reference-to-video, and edit-video, so teams can choose task-specific routes while staying in one model family.

Which workflows are available in Wan 2.7 Video?

You can run text/image-to-video, reference-to-video, first-last-frame generation, and edit-video workflows from the same model family.

What output quality and duration does it support?

Wan 2.7 Video supports 720p and 1080p output. Depending on the workflow, duration settings cover short-form ranges up to 15 seconds, while some constrained modes cap at 10 seconds.

How many references can I provide?

In reference workflows, you can provide reference_image_urls and video_refs, each supporting up to 8 items in the current schema.

Does Wan 2.7 Video support editing existing videos?

Yes. Edit-video mode accepts an input video URL and prompt instructions, with optional reference image guidance and audio behavior controls.

Does it support Chinese prompts?

Yes. Wan 2.7 Video supports both Chinese and English prompt inputs.

Is Wan 2.7 available for commercial usage?

On fal model pages, Wan 2.7 workflows are marked for commercial use. Final usage terms still depend on your platform account plan and policy compliance.