Shape-Aware Text-Driven Layered Video Editing

Yao-Chih Lee · Ji-Ze Genevieve Jang · Yi-Ting Chen · Elizabeth Qiu · Jia-Bin Huang

West Building Exhibit Halls ABC 188
Wed 21 Jun 4:30 p.m. PDT — 6 p.m. PDT


Temporal consistency is essential for video editing applications. Existing work on layered representations of videos allows edits to be propagated consistently to each frame. These methods, however, can only edit object appearance, not object shape, because they rely on a fixed UV mapping field for the texture atlas. We present a shape-aware, text-driven video editing method to tackle this challenge. To handle shape changes in video editing, we first propagate the deformation field between the input and edited keyframe to all frames. We then leverage a pre-trained text-conditioned diffusion model as guidance for refining shape distortion and completing unseen regions. The experimental results demonstrate that our method achieves shape-aware, consistent video editing and compares favorably with the state of the art.
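As a rough illustration of the deformation-propagation step described above, the sketch below warps an edited keyframe onto another frame using a dense per-pixel deformation field. This is a minimal, assumed implementation (the function name, the `(dy, dx)` offset convention, and the use of SciPy's `map_coordinates` are illustrative choices, not details from the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_deformation(frame, flow):
    """Warp a (H, W) frame by a dense deformation field.

    flow[..., 0] and flow[..., 1] hold per-pixel (dy, dx) offsets,
    so the output at (y, x) samples frame at (y + dy, x + dx),
    with bilinear interpolation and edge clamping.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([ys + flow[..., 0], xs + flow[..., 1]])
    return map_coordinates(frame, coords, order=1, mode="nearest")

# An all-zero (identity) field leaves the frame unchanged.
frame = np.arange(12, dtype=np.float64).reshape(3, 4)
identity = np.zeros((3, 4, 2))
warped = warp_with_deformation(frame, identity)
```

In the actual method, per-frame deformation fields would be estimated from the layered video representation rather than given directly; this sketch only shows the warping primitive such propagation relies on.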
