* Equal contribution
†Project lead
Guangzhou Quwan Network Technology
2025/11/11: 🎉🎉🎉 Our Playmate2 paper has been accepted and will be presented at AAAI 2026. We plan to release the inference code and model weights for both Playmate and Playmate2 in the coming weeks. Stay tuned, and thank you for your patience!

2025/10/15: 🔥🔥🔥 We released Playmate2, a novel DiT framework for generating long-duration, high-quality audio-driven videos. Playmate2 also supports multi-character animation; to the best of our knowledge, it is the first training-free approach capable of audio-driven animation for three or more characters. Code and models will be released soon, please stay tuned!

2025/05/07: 🎉🎉🎉 Super stoked to share that our paper has been accepted to ICML 2025!

2025/04/28: ✨✨✨ Created a GitHub repository for the project.

2025/02/11: 🚀🚀🚀 Our paper is now publicly available on arXiv.
- Release the inference code for the first stage.
- Release the pretrained models for the first stage.
- Release the inference code for the second stage.
- Release the pretrained models for the second stage.
- Release the training code for the first stage.
- Release the training code for the second stage.
Demo videos: 001.mp4, 002.mp4, 003.mp4, 004.mp4, 005.mp4, 006.mp4.
Emotion control demos (Angry, Disgusted, Contempt, Fear, Happy, Sad, Surprised): 007.mp4, 008.mp4, 009.mp4, 010.mp4, 011.mp4.
Explore more examples.
If you find our work useful for your research, please consider citing the paper:
@inproceedings{maplaymate,
  title={Playmate: Flexible Control of Portrait Animation via 3D-Implicit Space Guided Diffusion},
  author={Ma, Xingpei and Cai, Jiaran and Guan, Yuansheng and Huang, Shenneng and Zhang, Qiang and Zhang, Shunsi},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025}
}