

Poster

Unsupervised Blind Image Deblurring Based on Self-Enhancement

Lufei Chen · Xiangpeng Tian · Shuhua Xiong · Yinjie Lei · Chao Ren

Arch 4A-E Poster #150
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Significant progress in image deblurring has been achieved by deep learning methods, especially the remarkable performance of supervised models on paired synthetic data. However, real-world quality degradation is more complex than that in synthetic datasets, and acquiring paired data in real-world scenarios poses significant challenges. To address these challenges, we propose a novel unsupervised image deblurring framework based on self-enhancement. The framework progressively generates improved pseudo-sharp and blurry image pairs without the need for real paired datasets, and the higher-quality generated pairs can in turn be used to enhance the performance of the reconstructor. To ensure the generated blurry images are closer to real blurry images, we propose a novel re-degradation principal component consistency loss, which enforces the principal components of the generated low-quality images to be similar to those of images re-degraded from the original sharp ones. Furthermore, we introduce a self-enhancement strategy that significantly improves deblurring performance without increasing the computational complexity of the network during inference. Through extensive experiments on multiple real-world blurry datasets, we demonstrate the superiority of our approach over other state-of-the-art unsupervised methods.
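
The re-degradation principal component consistency loss is described only at a high level in the abstract, so the sketch below is illustrative rather than the paper's implementation: it assumes PCA over flattened image patches and an L1 penalty between the leading components of the generated blurry batch and a re-degraded version of the sharp batch. Function names such as `pc_consistency_loss` and `leading_components`, the patch size, and the number of retained components are all assumptions.

```python
# Illustrative sketch only (not the authors' code): compares the leading
# principal components of two image batches, as one possible reading of a
# "principal component consistency" constraint.
import torch
import torch.nn.functional as F


def leading_components(images: torch.Tensor, k: int = 8, patch: int = 8) -> torch.Tensor:
    """Top-k principal directions of non-overlapping patch vectors.

    images: (B, C, H, W) tensor; the patch size and k are arbitrary choices here.
    """
    b, c, h, w = images.shape
    patches = F.unfold(images, kernel_size=patch, stride=patch)      # (B, C*p*p, L)
    patches = patches.permute(0, 2, 1).reshape(-1, c * patch * patch)
    patches = patches - patches.mean(dim=0, keepdim=True)
    _, _, v = torch.pca_lowrank(patches, q=k)                        # columns of v are principal directions
    return v[:, :k]                                                  # (C*p*p, k)


def pc_consistency_loss(fake_blurry: torch.Tensor,
                        redegraded_sharp: torch.Tensor,
                        k: int = 8) -> torch.Tensor:
    """L1 distance between the leading principal components of two batches.

    `fake_blurry`: generator output; `redegraded_sharp`: the sharp image passed
    through the degradation again. Both names are hypothetical placeholders.
    """
    v_fake = leading_components(fake_blurry, k)
    v_rede = leading_components(redegraded_sharp, k)
    # Principal directions are defined only up to sign; align orientations first.
    sign = torch.sign((v_fake * v_rede).sum(dim=0, keepdim=True))
    return F.l1_loss(v_fake * sign, v_rede)
```

In a training loop such a term would be weighted and added to the generator's other objectives; the exact patch handling, component count, and distance metric would need to follow the paper.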
