

Poster

UV-IDM: Identity-Conditioned Latent Diffusion Model for Face UV-Texture Generation

Hong Li · Yutang Feng · Song Xue · Xuhui Liu · Boyu Liu · Bohan Zeng · Shanglin Li · Jianzhuang Liu · Shumin Han · Baochang Zhang

Arch 4A-E Poster #90
[ Paper PDF ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

3D face reconstruction aims to generate high-fidelity 3D face shapes and textures from single-view or multi-view images. However, prevailing facial texture generation methods generally suffer from low-quality textures, identity information loss, and inadequate handling of occlusions. To address these problems, we introduce an Identity-Conditioned Latent Diffusion Model for face UV-texture generation (UV-IDM), which generates photo-realistic textures based on the Basel Face Model (BFM). UV-IDM leverages the powerful generative capacity of a latent diffusion model (LDM) to obtain detailed facial textures. To preserve identity during reconstruction, we design an identity-conditioned module that can use any in-the-wild image as a robust condition for the LDM to guide texture generation. UV-IDM can be easily adapted to different BFM-based methods as a high-fidelity texture generator. Furthermore, in light of the limited accessibility of most existing UV-texture datasets, we build a large-scale, publicly available UV-texture dataset based on BFM, termed BFM-UV. Extensive experiments show that UV-IDM generates high-fidelity textures for 3D face reconstruction within seconds while maintaining image consistency, achieving new state-of-the-art performance in facial texture generation.
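To make the conditioning idea concrete, the following is a minimal toy sketch (not the authors' implementation) of identity-conditioned latent sampling. All names here are hypothetical: `identity_condition` mocks a face-recognition embedding, and `denoise_step` stands in for the conditional denoiser, which in a real LDM would inject the identity embedding via cross-attention rather than the simple bias used below.

```python
import numpy as np

def identity_condition(image_feats):
    # Hypothetical stand-in for an identity encoder: UV-IDM would derive
    # a robust identity embedding from an in-the-wild image; here we mock
    # it as a mean-pooled feature vector.
    return image_feats.mean(axis=0)

def denoise_step(z, t, cond, rng):
    # Toy stand-in for the conditional denoiser eps_theta(z_t, t, cond).
    # A real LDM conditions via cross-attention; this sketch simply biases
    # the latent toward the identity embedding, plus noise except at t=1.
    eps_pred = 0.1 * (z - cond)
    return z - eps_pred + 0.01 * rng.standard_normal(z.shape) * (t > 1)

def sample_uv_texture(cond, steps=50, dim=64, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(dim)       # start from a Gaussian latent
    for t in range(steps, 0, -1):      # iterative denoising, conditioned on identity
        z = denoise_step(z, t, cond, rng)
    return z                           # latent that a decoder would map to a UV texture

feats = np.random.default_rng(1).standard_normal((10, 64))
cond = identity_condition(feats)
tex_latent = sample_uv_texture(cond)
print(tex_latent.shape)  # (64,)
```

The point of the sketch is only the control flow: the same identity embedding conditions every denoising step, which is how the method can keep identity consistent while the diffusion prior supplies texture detail.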
