From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
Project page: ~evonne_ng/projects/audio2photoreal/
Code and data:
arXiv: coming soon!
Abstract:
We present a framework for generating full-bodied photorealistic avatars that gesture according to the conversational dynamics of a dyadic interaction. Given speech audio, we output multiple possibilities of gestural motion for an individual, including face, body, and hands. The key to our method is combining the sample diversity of vector quantization with the high-frequency details obtained through diffusion to generate more dynamic, expressive motion. We visualize the generated motion using highly photorealistic avatars that can express crucial nuances in gestures (e.g., sneers and smirks). To facilitate this line of research, we introduce a first-of-its-kind multi-view conversational dataset that allows for photorealistic reconstruction. Experiments show our model generates appropriate and diverse gestures, outperforming both diffusion-only and VQ-only methods. Furthermore, our perceptual evaluation highlights the importance of photorealism (vs. meshes) in accurately assessing subtle motion details in conversational gestures. Code and dataset will be publicly released.
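To make the abstract's VQ-plus-diffusion idea concrete, below is a minimal PyTorch sketch, not the released code: the module names (GuidePoseSampler, PoseDenoiser, sample_motion), the feature dimensions, and the simplified single-frame-rate DDPM loop are illustrative assumptions rather than the paper's exact architecture. It shows how sampling pose tokens from a codebook can provide diverse coarse proposals, which a diffusion model then refines into detailed motion conditioned on the audio.

```python
import torch
import torch.nn as nn


class GuidePoseSampler(nn.Module):
    """Proposes coarse guide poses by sampling tokens from a learned codebook;
    the stochastic sampling is where gesture diversity comes from."""

    def __init__(self, n_codes=256, pose_dim=104, audio_dim=80):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, pose_dim)   # VQ codebook of pose tokens
        self.to_logits = nn.Linear(audio_dim, n_codes)    # audio features -> code distribution

    def forward(self, audio_feat):                        # audio_feat: (B, T, audio_dim)
        probs = torch.softmax(self.to_logits(audio_feat), dim=-1)
        # Sampling (rather than argmax) yields multiple plausible proposals per audio clip.
        codes = torch.multinomial(probs.flatten(0, 1), 1).view(audio_feat.shape[:2])
        return self.codebook(codes)                       # (B, T, pose_dim) coarse guide poses


class PoseDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to the pose sequence,
    conditioned on audio features and the guide poses."""

    def __init__(self, pose_dim=104, audio_dim=80, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * pose_dim + audio_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_pose, t, audio_feat, guide_pose):
        t_emb = t.float().view(-1, 1, 1).expand(*noisy_pose.shape[:2], 1)
        x = torch.cat([noisy_pose, guide_pose, audio_feat, t_emb], dim=-1)
        return self.net(x)                                # predicted noise, (B, T, pose_dim)


@torch.no_grad()
def sample_motion(sampler, denoiser, audio_feat, n_steps=50):
    """Tiny DDPM-style ancestral sampling loop, kept at one frame rate for brevity."""
    bsz, t_len, _ = audio_feat.shape
    guide = sampler(audio_feat)                           # diverse coarse proposal
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas, alpha_bars = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(bsz, t_len, guide.shape[-1])          # start from Gaussian noise
    for step in reversed(range(n_steps)):
        t = torch.full((bsz,), step)
        eps = denoiser(x, t, audio_feat, guide)
        x = (x - (1.0 - alphas[step]) / torch.sqrt(1.0 - alpha_bars[step]) * eps) / torch.sqrt(alphas[step])
        if step > 0:                                      # add noise on all but the final step
            x = x + torch.sqrt(betas[step]) * torch.randn_like(x)
    return x                                              # (B, T, pose_dim) motion sample


audio = torch.randn(1, 120, 80)                           # e.g. 4 s of audio features at 30 fps (illustrative)
motion = sample_motion(GuidePoseSampler(), PoseDenoiser(), audio)
print(motion.shape)                                       # torch.Size([1, 120, 104])
```

In the full system (see the guide pose predictor and pose motion model chapters below), the guide poses are predicted separately from the diffusion-based motion model; the sketch above collapses both stages to a single frame rate purely for illustration.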
Key parts:
00:15 project overview
00:40 dataset
00:47 method overview
00:55 face motion model
01:10 guide pose predictor
01:26 pose motion model
01:45 avatar renderer
02:31 results: guide poses, diffusion outputs, avatar
03:16 results: multi-sample results
04:15 results: ours vs. LDA vs. Random
04:53 results: ours vs. SHOW vs. KNN
05:43 results: generalization to “Friends” audio
06:10 results: motion editing