

Poster

StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On

Jeongho Kim · Gyojung Gu · Minho Park · Sunghyun Park · Jaegul Choo

Arch 4A-E Poster #327
[ Project Page ] [ Paper PDF ]
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Given a clothing image and a person image, image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing. In this work, we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task. The main challenge is to preserve the clothing details while effectively exploiting the robust generative capability of the pre-trained model. To tackle this issue, we propose StableVITON, which learns the semantic correspondence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by leveraging the inherent knowledge of the pre-trained model in the warping process. Through our proposed novel attention total variation loss and by applying augmentation, we achieve a sharp attention map, resulting in a more precise representation of clothing details. StableVITON outperforms the baselines in qualitative and quantitative evaluations, showing promising quality on arbitrary person images.
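The zero cross-attention block described above can be illustrated with a minimal PyTorch sketch. This is an assumption-based illustration, not the authors' released code: it assumes person-side UNet features act as queries and clothing-encoder features as keys/values, with a zero-initialized output projection (in the spirit of ControlNet-style zero modules) so that at the start of training the added block contributes nothing and the pre-trained model's behavior is preserved. The class name ZeroCrossAttentionBlock and its signature are hypothetical.

import torch
import torch.nn as nn


class ZeroCrossAttentionBlock(nn.Module):
    """Hypothetical sketch: person-side latent features attend to
    clothing features; the output projection is zero-initialized so
    the block is a no-op at initialization."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Zero-initialized projection: the residual branch starts at zero,
        # leaving the pre-trained UNet's features untouched at step 0.
        self.proj_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj_out.weight)
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, person_feat: torch.Tensor, cloth_feat: torch.Tensor) -> torch.Tensor:
        # person_feat: (B, N, C) flattened queries from the denoising UNet
        # cloth_feat:  (B, M, C) flattened keys/values from the clothing encoder
        q = self.norm_q(person_feat)
        kv = self.norm_kv(cloth_feat)
        attn_out, _ = self.attn(q, kv, kv, need_weights=False)
        return person_feat + self.proj_out(attn_out)


# Usage: at initialization the output equals the input, so fine-tuning
# starts exactly from the pre-trained model's generations.
block = ZeroCrossAttentionBlock(dim=320)
person = torch.randn(1, 64 * 48, 320)
cloth = torch.randn(1, 64 * 48, 320)
out = block(person, cloth)  # equals person before any training

The zero initialization is the key design choice: because the residual branch outputs zero until its weights move away from zero, the semantic correspondence between clothing and body can be learned gradually without disturbing the generative prior of the pre-trained diffusion model.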
