Kavli Affiliate: Zheng Zhu
| First 5 Authors: Tianbao Zhang, Jian Zhao, Yuer Li, Zheng Zhu, Ping Hu
| Summary:
Whole-body audio-driven avatar pose and expression generation is a critical
task for creating lifelike digital humans and enhancing the capabilities of
interactive virtual agents, with wide-ranging applications in virtual reality,
digital entertainment, and remote communication. Existing approaches often
generate audio-driven facial expressions and gestures independently, which
introduces a significant limitation: the lack of seamless coordination between
facial and gestural elements, resulting in less natural and cohesive
animations. To address this limitation, we propose AsynFusion, a novel
framework that leverages diffusion transformers to achieve harmonious
expression and gesture synthesis. The proposed method is built upon a
dual-branch DiT architecture, which enables the parallel generation of facial
expressions and gestures. Within the model, we introduce a Cooperative
Synchronization Module to facilitate bidirectional feature interaction between
the two modalities, and an Asynchronous Latent Consistency Model (LCM) Sampling strategy to reduce
computational overhead while maintaining high-quality outputs. Extensive
experiments demonstrate that AsynFusion achieves state-of-the-art performance
in generating real-time, synchronized whole-body animations, consistently
outperforming existing methods in both quantitative and qualitative
evaluations.
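
As a rough illustration of the dual-branch idea described above, the sketch below pairs two parallel transformer branches (expressions and gestures) and couples them with bidirectional cross-attention. The abstract does not specify how the Cooperative Synchronization Module, audio conditioning, diffusion timesteps, or LCM sampling are implemented, so every class, layer, and dimension here is an assumption for illustration only, not the authors' architecture.

```python
# Hypothetical sketch of a dual-branch DiT-style block. Bidirectional
# cross-attention stands in for the Cooperative Synchronization Module;
# audio/timestep conditioning and the diffusion/LCM sampler are omitted.
import torch
import torch.nn as nn


class BranchBlock(nn.Module):
    """One transformer block for a single modality (expression or gesture)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # Self-attention within this branch's own token sequence.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-branch coupling: attend to the other modality's features.
        q, kv = self.norm2(x), self.norm2(other)
        x = x + self.cross_attn(q, kv, kv, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


class DualBranchSketch(nn.Module):
    """Parallel expression and gesture branches exchanging features per layer."""

    def __init__(self, dim: int = 256, depth: int = 4):
        super().__init__()
        self.expr_blocks = nn.ModuleList([BranchBlock(dim) for _ in range(depth)])
        self.gest_blocks = nn.ModuleList([BranchBlock(dim) for _ in range(depth)])

    def forward(self, expr: torch.Tensor, gest: torch.Tensor):
        for eb, gb in zip(self.expr_blocks, self.gest_blocks):
            # Both updates read the pre-update features of the other branch,
            # giving a symmetric (bidirectional) interaction at each layer.
            expr, gest = eb(expr, gest), gb(gest, expr)
        return expr, gest


if __name__ == "__main__":
    model = DualBranchSketch()
    expr = torch.randn(2, 50, 256)  # (batch, frames, feature dim), illustrative sizes
    gest = torch.randn(2, 50, 256)
    e, g = model(expr, gest)
    print(e.shape, g.shape)  # torch.Size([2, 50, 256]) for each branch
```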
| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3