

Poster

GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image

Chong Bao · Yinda Zhang · Yuan Li · Xiyu Zhang · Bangbang Yang · Hujun Bao · Marc Pollefeys · Guofeng Zhang · Zhaopeng Cui

Arch 4A-E Poster #403
[ Project Page ] [ Paper PDF ]
Poster Session: Wed 19 Jun 5 p.m. – 6:30 p.m. PDT

Abstract:

Recently, we have witnessed the explosive growth of various volumetric representations for modeling animatable head avatars. However, due to the diversity of frameworks, there is no practical method to support high-level applications like 3D head avatar editing across different representations. In this paper, we propose a generic avatar editing approach that can be universally applied to various 3DMM-driven volumetric head avatars. To achieve this goal, we design a novel expression-aware modification generative model that lifts a 2D edit from a single image to a consistent 3D modification field. To ensure the effectiveness of the generative modification process, we develop several techniques: an expression-dependent modification distillation scheme that draws knowledge from a large-scale head avatar model and 2D facial texture editing tools, implicit latent space guidance to improve training convergence, and a segmentation-based loss reweighting strategy for fine-grained texture inversion. Extensive experiments demonstrate that our method delivers high-quality and consistent editing results across multiple expressions and viewpoints.
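The abstract names a segmentation-based loss reweighting strategy for texture inversion but gives no implementation details. The sketch below is only a guess at the general idea, not the authors' code: a per-pixel photometric loss whose weights are boosted in detail-critical facial regions identified by an off-the-shelf face parser. All labels, weights, and function names here are hypothetical.

    import torch

    # Hypothetical facial-region labels from an off-the-shelf face parser.
    SKIN, EYES, LIPS = 1, 4, 12

    def reweighted_l1_loss(pred, target, seg, region_weights):
        """L1 photometric loss with per-region reweighting (illustrative sketch).

        pred, target:   (B, 3, H, W) rendered avatar vs. edited reference image.
        seg:            (B, H, W) integer segmentation labels.
        region_weights: {label: weight}; unlisted regions default to 1.0.
        """
        per_pixel = (pred - target).abs().mean(dim=1)  # (B, H, W)
        weight = torch.ones_like(per_pixel)
        for label, w in region_weights.items():
            weight = torch.where(seg == label, torch.full_like(weight, w), weight)
        return (weight * per_pixel).mean()

    # Usage with dummy data: emphasize small, detail-rich regions so the
    # inversion recovers fine texture there; hypothetical weights.
    pred = torch.rand(1, 3, 64, 64)
    target = torch.rand(1, 3, 64, 64)
    seg = torch.randint(0, 19, (1, 64, 64))
    loss = reweighted_l1_loss(pred, target, seg, {LIPS: 5.0, EYES: 5.0, SKIN: 1.0})

Upweighting lips and eyes relative to skin is one plausible choice for fine-grained texture inversion; the paper's actual region set and weighting scheme may differ.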
