We integrated Wan 2.7 R2V (Reference-to-Video) as a new video generation model for reference-driven workflows.
Why This Matters
Wan 2.7 R2V preserves subject identity using references, making it ideal for consistent brand characters, product-focused UGC, and creative storytelling where a subject needs to remain recognizable across generations.
- Identity preservation: Uses 1-7 reference images to keep subjects, style, and composition on-brand.
- Cinematic quality: Native 1080p (and 720p) output with Alibaba's latest Wan 2.7 model.
- Flexible durations: Generate clips from 2 to 10 seconds.
- Common social formats: Supports 16:9, 9:16, 4:3, 3:4, and 1:1.
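The constraints above (1-7 references, 2-10 second clips, a fixed set of aspect ratios and resolutions) can be enforced client-side before a prediction is submitted to Replicate. A minimal sketch follows; the input field names (`prompt`, `reference_images`, `duration`, `resolution`, `aspect_ratio`) are illustrative assumptions, not the model's confirmed schema, so check the model docs for the real names:

```python
# Hypothetical input builder for wan-video/wan-2.7-r2v on Replicate.
# Field names are assumptions; verify them against the model's schema.
ALLOWED_ASPECT_RATIOS = {"16:9", "9:16", "4:3", "3:4", "1:1"}
ALLOWED_RESOLUTIONS = {"720p", "1080p"}

def build_r2v_input(prompt, reference_images, duration=5,
                    resolution="1080p", aspect_ratio="16:9"):
    """Validate and assemble an input dict for a prediction request."""
    if not 1 <= len(reference_images) <= 7:
        raise ValueError("expected 1-7 reference images")
    if not 2 <= duration <= 10:
        raise ValueError("duration must be 2-10 seconds")
    if resolution not in ALLOWED_RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution}")
    if aspect_ratio not in ALLOWED_ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    return {
        "prompt": prompt,
        "reference_images": reference_images,
        "duration": duration,
        "resolution": resolution,
        "aspect_ratio": aspect_ratio,
    }
```

Failing fast here keeps invalid jobs out of the async queue, where an error would otherwise surface only on webhook delivery.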
What's New
- New video model: wan-2.7-r2v, wired end-to-end through GenAI video generation, the model catalog, and provider normalization
- New Mastra tool mapping for agent-driven reference-to-video generation
- Webhook (async completion) support via Replicate
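For the async completion path, Replicate signs webhook deliveries with an HMAC-SHA256 scheme: the `webhook-id`, `webhook-timestamp`, and raw request body are concatenated and signed with the account's webhook secret. A minimal verifier under that assumption (the exact header handling should be checked against Replicate's webhook docs):

```python
import base64
import hashlib
import hmac

def verify_replicate_webhook(secret: str, webhook_id: str,
                             timestamp: str, body: str,
                             signature_header: str) -> bool:
    """Check a webhook delivery against the account's signing secret.

    `secret` is the "whsec_..." value from webhook settings; the part
    after the prefix is the base64-encoded key. `signature_header` may
    hold several space-separated "v1,<base64>" entries; the delivery is
    accepted if any of them matches.
    """
    key = base64.b64decode(secret.split("_", 1)[1])
    signed_content = f"{webhook_id}.{timestamp}.{body}".encode()
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    for candidate in signature_header.split():
        _, _, sig = candidate.partition(",")
        if hmac.compare_digest(sig, expected):
            return True
    return False
```

Verification must run on the raw request body before JSON parsing, since any re-serialization would change the signed bytes.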
References
- Replicate model page: wan-video/wan-2.7-r2v
- Model docs: llms.txt for wan-video/wan-2.7-r2v