Audio-Visual Face Reenactment

Oct 6, 2022

Madhav Agarwal, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C. V. Jawahar

This work proposes a novel method to generate realistic talking-head videos using audio and visual streams. We animate a source image by transferring head motion from a driving video via a dense motion field generated from learnable keypoints. We improve lip-sync quality by using audio as an additional input, helping the network attend to the mouth region. We incorporate additional priors from face segmentation and a face mesh to improve the structure of the reconstructed faces. Finally, we improve the visual quality of the generations with a carefully designed identity-aware generator module, which takes the source image and the warped motion features as input and produces a high-quality output with fine-grained details. Our method achieves state-of-the-art results and generalizes well to unseen faces, languages, and voices. We comprehensively evaluate our approach using multiple metrics, outperforming current techniques both qualitatively and quantitatively. Our work opens up several applications, including low-bandwidth video calls. We release a demo video and additional information at http://cvit.iiit.ac.in/research/projects/cvit-projects/avfr
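The central operation the abstract describes, animating a source image with a dense motion field, can be sketched minimally. The snippet below is an illustrative assumption, not the paper's implementation: it performs backward warping of raw pixels with nearest-neighbour sampling, whereas the paper predicts the field from learnable keypoints and warps intermediate features that feed the identity-aware generator.

```python
import numpy as np

def warp_with_motion_field(source, flow):
    """Backward-warp a source image with a dense motion field.

    source: (H, W, C) image array.
    flow:   (H, W, 2) field; flow[y, x] = (dy, dx) offset telling each
            output pixel where to sample from in the source image.

    Nearest-neighbour sampling with border clamping; a real system would
    use differentiable bilinear sampling so the field can be learned.
    """
    H, W, _ = source.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return source[src_y, src_x]
```

A zero field reproduces the source image, and a constant field translates it; in the full pipeline the warped result would be passed, together with the source image, to the generator module.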

Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5178-5187


Link to Paper
