

Poster

LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network

Hao Yang · Liyuan Pan · Yan Yang · Richard Hartley · Miaomiao Liu

Arch 4A-E Poster #67
Poster
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT
 
Oral presentation: Orals 6A Low-level vision and remote sensing
Fri 21 Jun 1 p.m. PDT — 2:30 p.m. PDT

Abstract:

Recovering sharp images from dual-pixel (DP) pairs with disparity-dependent blur is a challenging task. Existing blur-map-based deblurring methods have demonstrated promising results. In this paper, we propose, to the best of our knowledge, the first framework to introduce the contrastive language-image pre-training model (CLIP) for accurate, unsupervised blur map estimation from DP pairs. To this end, we first carefully design text prompts that enable CLIP to capture blur-related geometric prior knowledge from the DP pair. We then propose a format for feeding the stereo DP pair to CLIP, which is pre-trained on monocular images, without any fine-tuning. Given the estimated blur map, we introduce a blur-prior attention block, a blur-weighting loss, and a blur-aware loss to recover the all-in-focus image. Our method achieves state-of-the-art performance in extensive experiments (see Fig. 1).
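To make the idea of a blur-weighting loss concrete, here is a minimal sketch of one plausible formulation: an L1 reconstruction loss whose per-pixel weight grows with the estimated blur, so that heavily defocused regions drive training harder. The function name, the `1 + blur` weighting, and the normalization are illustrative assumptions, not the authors' exact definition.

```python
import numpy as np

def blur_weighted_l1(pred, target, blur_map, eps=1e-6):
    """Sketch of a blur-weighting loss (hypothetical formulation).

    pred, target : arrays of the same shape (restored / ground-truth image).
    blur_map     : non-negative per-pixel blur estimate, same spatial shape.
    Pixels with larger estimated blur receive larger weight; the '1 +'
    keeps sharp regions supervised as well. Weighted mean over all pixels.
    """
    w = 1.0 + blur_map
    return float(np.sum(w * np.abs(pred - target)) / (np.sum(w) + eps))

# Toy usage: uniform error of 1, bottom row twice as blurry as the top.
pred = np.zeros((2, 2))
target = np.ones((2, 2))
blur = np.array([[0.0, 0.0], [1.0, 1.0]])
loss = blur_weighted_l1(pred, target, blur)  # 1.0 here (error is uniform)
```

Because the error is uniform in this toy case, the weighting does not change the value; with spatially varying error, blurry regions would dominate the loss.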
