¹S-Lab, Nanyang Technological University
²SenseTime Research, Singapore
MatAnyone is a practical human video matting framework that supports target assignment, delivering stable performance in both core-region semantics and fine-grained boundary details.
🔥 For more visual results, check out our project page
- [2025.02] 📢 Code and demo will be released next week 🤗
- [2025.02] This repo is created.
If you find our repo useful for your research, please consider citing our paper:

```bibtex
@InProceedings{yang2025matanyone,
    title     = {{MatAnyone}: Stable Video Matting with Consistent Memory Propagation},
    author    = {Yang, Peiqing and Zhou, Shangchen and Zhao, Jixin and Tao, Qingyi and Loy, Chen Change},
    booktitle = {arXiv preprint arXiv:2501.14677},
    year      = {2025}
}
```

This project is licensed under the NTU S-Lab License 1.0. Redistribution and use should follow this license.
If you have any questions, please feel free to reach us at peiqingyang99@outlook.com.



