

Poster

Generalizable Face Landmarking Guided by Conditional Face Warping

Jiayi Liang · Haotian Liu · Hongteng Xu · Dixin Luo

Arch 4A-E Poster #217
Wed 19 Jun 10:30 a.m. PDT — noon PDT

Abstract: As a significant step for human face modeling, editing, and generation, face landmarking aims at extracting facial keypoints from images. Currently, a generalizable face landmarker is required in practical applications because real-world facial images, e.g., the avatars in animations and games, are often stylized in various ways. However, achieving generalizable face landmarking is often challenging due to the diversity of facial styles and the scarcity of labeled stylized faces. In this study, we propose a simple but effective paradigm for learning a generalizable face landmarker based on labeled real human faces and unlabeled stylized faces. In particular, we learn the face landmarker as the key module of a conditional face warper. Given a pair of real and stylized facial images, the conditional face warper predicts a warping field from the real face to the stylized one, in which the face landmarker predicts the ending points of the warping field and thus provides us with high-quality pseudo landmarks for the corresponding stylized facial images. Applying an alternating optimization strategy, we learn the face landmarker to minimize i) the discrepancy between the stylized faces and the warped real ones and ii) the prediction errors of both real and pseudo landmarks. Extensive experiments on various datasets show that our method outperforms existing state-of-the-art domain adaptation methods in face landmarking tasks, leading to a face landmarker with better generalizability.
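The sketch below illustrates the alternating optimization the abstract describes, using PyTorch: a landmarker regresses keypoints, a conditional warper predicts a dense flow from the real face to the stylized one, and the two modules are updated in turn against i) the warping discrepancy and ii) the landmark losses. It is a minimal toy reconstruction, not the authors' implementation; the network architectures, the dense-flow warper, the landmark count K, the image size, the loss weights, and the optimizer settings are all assumptions for illustration.

```python
# Hypothetical sketch of the alternating optimization described in the abstract.
# All architectural and hyperparameter choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, SIZE = 68, 64  # landmark count and image resolution (assumed)

class Landmarker(nn.Module):
    """Regresses K (x, y) landmarks in [-1, 1] from a face image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2 * K), nn.Tanh())
    def forward(self, x):
        return self.net(x).view(-1, K, 2)

class ConditionalWarper(nn.Module):
    """Predicts a dense flow that warps the real face toward the stylized one,
    conditioned on the landmarks predicted for both faces."""
    def __init__(self):
        super().__init__()
        self.cond = nn.Linear(2 * 2 * K, 16)        # embed both landmark sets
        self.net = nn.Sequential(
            nn.Conv2d(6 + 16, 32, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, 1, 1))              # 2-channel displacement field
    def forward(self, real, stylized, lm_real, lm_sty):
        b = real.size(0)
        c = self.cond(torch.cat([lm_real, lm_sty], 1).flatten(1))
        c = c.view(b, -1, 1, 1).expand(-1, -1, SIZE, SIZE)
        flow = self.net(torch.cat([real, stylized, c], 1))      # (B, 2, H, W)
        # sampling grid = identity grid plus predicted displacement
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, SIZE),
                                torch.linspace(-1, 1, SIZE), indexing="ij")
        grid = torch.stack([xs, ys], -1).to(real).expand(b, -1, -1, -1)
        grid = grid + flow.permute(0, 2, 3, 1)
        return F.grid_sample(real, grid, align_corners=False)   # warped real face

landmarker, warper = Landmarker(), ConditionalWarper()
opt_w = torch.optim.Adam(warper.parameters(), lr=1e-4)
opt_l = torch.optim.Adam(landmarker.parameters(), lr=1e-4)

# Toy batch: labeled real faces and unlabeled stylized faces (random stand-ins).
real = torch.rand(4, 3, SIZE, SIZE)
stylized = torch.rand(4, 3, SIZE, SIZE)
lm_gt = torch.rand(4, K, 2) * 2 - 1   # ground-truth landmarks for the real faces

for step in range(2):                 # alternate the two updates
    # i) update the warper: the warped real face should match the stylized one
    lm_real, lm_sty = landmarker(real).detach(), landmarker(stylized).detach()
    warped = warper(real, stylized, lm_real, lm_sty)
    loss_w = F.l1_loss(warped, stylized)
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()

    # ii) update the landmarker: supervised loss on labeled real faces plus the
    #     warping discrepancy, with the stylized-face predictions acting as the
    #     warp endpoints / pseudo landmarks
    lm_real, lm_sty = landmarker(real), landmarker(stylized)
    warped = warper(real, stylized, lm_real, lm_sty)
    loss_l = F.l1_loss(lm_real, lm_gt) + F.l1_loss(warped, stylized)
    opt_l.zero_grad(); loss_l.backward(); opt_l.step()
```

In this reading, the warping discrepancy supervises the landmarker indirectly through the conditioning, while the landmark term keeps it anchored to the labeled real faces; the paper's actual warper, losses, and pseudo-label selection may differ.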
