

Poster

Mind The Edge: Refining Depth Edges in Sparsely-Supervised Monocular Depth Estimation

Lior Talker · Aviad Cohen · Erez Yosef · Alexandra Dana · Michael Dinerstein

Arch 4A-E Poster #92
[ Project Page ] [ Paper PDF ] [ Poster ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Monocular Depth Estimation (MDE) is a fundamental problem in computer vision with numerous applications. Recently, LIDAR-supervised methods have achieved remarkable per-pixel depth accuracy in outdoor scenes. However, significant errors are typically found in the proximity of depth discontinuities, i.e., depth edges, which often hinder the performance of depth-dependent applications that are sensitive to such inaccuracies, e.g., novel view synthesis and augmented reality. Since direct supervision for the location of depth edges is typically unavailable in sparse LIDAR-based scenes, encouraging the MDE model to produce correct depth edges is not straightforward. To the best of our knowledge, this paper is the first attempt to address the depth edges issue for LIDAR-supervised scenes. In this work we propose to learn to detect the location of depth edges from densely-supervised synthetic data, and to use it to generate supervision for the depth edges in MDE training. To quantitatively evaluate our approach, and due to the lack of depth edges ground truth in LIDAR-based scenes, we manually annotated subsets of the KITTI and DDAD datasets with depth edges ground truth. We demonstrate significant gains in the accuracy of the depth edges with comparable per-pixel depth accuracy on several challenging datasets. Code and datasets are available at https://github.com/liortalker/MindTheEdge.
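The abstract describes combining standard sparse per-pixel supervision with an additional supervision signal on depth edges. A minimal sketch of this idea is shown below; the function names, the gradient-thresholding edge detector, and the loss weighting are illustrative assumptions, not the paper's actual formulation (which learns the edge detector from densely-supervised synthetic data).

```python
import numpy as np

def depth_edge_map(depth, threshold=0.5):
    # Hypothetical edge detector: binarize the depth-gradient magnitude.
    # The paper instead learns edge locations from synthetic data.
    gy, gx = np.gradient(depth)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return (mag > threshold).astype(np.float32)

def edge_supervised_loss(pred_depth, sparse_gt, valid_mask, edge_gt, lam=0.1):
    """Sparse per-pixel L1 term (valid LIDAR points only) plus an
    edge-consistency term against a supervising edge map. `lam` is an
    assumed balancing weight."""
    l1 = np.abs(pred_depth - sparse_gt)[valid_mask > 0].mean()
    pred_edges = depth_edge_map(pred_depth)
    edge_term = np.abs(pred_edges - edge_gt).mean()
    return l1 + lam * edge_term
```

For example, a prediction that matches the sparse ground truth exactly and reproduces the supervising edge map yields zero loss; blurring across a depth discontinuity leaves the sparse L1 term nearly unchanged but inflates the edge term.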
