
Poster

WorDepth: Variational Language Prior for Monocular Depth Estimation

Ziyao Zeng · Hyoungseob Park · Fengyu Yang · Daniel Wang · Stefano Soatto · Dong Lao · Alex Wong

Arch 4A-E Poster #7
[ Paper PDF ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Three-dimensional (3D) reconstruction from a single image is an ill-posed problem with inherent ambiguities, i.e., scale. Predicting a 3D scene from text description(s) is similarly ill-posed, i.e., the spatial arrangement of the objects described. We investigate the question of whether two inherently ambiguous modalities can be used in conjunction to produce metric-scaled reconstructions. To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene. To this end, we begin by encoding the text caption as a mean and standard deviation; using a variational framework, we learn the distribution of plausible metric reconstructions of 3D scenes corresponding to the text caption as a prior. To "select" a specific reconstruction or depth map, we encode the given image through a conditional sampler that samples from the latent space of the variational text encoder, which is then decoded to the output depth map. Our approach is trained by alternating between the text and image branches: in one optimization step, we predict the mean and standard deviation from the text description and sample from a standard Gaussian, and in the other, we sample using the (image) conditional sampler. Once trained, we directly predict depth from the encoded text using the conditional sampler. We demonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where we show that language can consistently improve performance on both. Code: https://github.com/Adonis-galaxy/WorDepth.
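The alternating training scheme in the abstract is compact enough to sketch in code. Below is a minimal PyTorch sketch of the idea, assuming simple linear layers, illustrative feature dimensions, and an L1 depth loss; all module names and shapes are our assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the alternating text/image optimization described in
# the abstract. Module names, shapes, and the loss are illustrative
# assumptions, not the WorDepth implementation.
import torch
import torch.nn as nn

LATENT, TEXT, IMG, H, W = 128, 512, 512, 60, 80  # illustrative sizes

class VariationalTextEncoder(nn.Module):
    """Maps a caption feature to the mean/std of a latent Gaussian prior."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(TEXT, LATENT)
        self.log_sigma = nn.Linear(TEXT, LATENT)
    def forward(self, t):
        return self.mu(t), self.log_sigma(t).exp()

class ConditionalSampler(nn.Module):
    """Predicts a latent code from image features, i.e. 'selects' one
    plausible reconstruction from the text-conditioned distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(IMG, LATENT)
    def forward(self, v):
        return self.net(v)

class DepthDecoder(nn.Module):
    """Decodes a latent code plus image features into a dense depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT + IMG, H * W)
    def forward(self, z, v):
        return self.net(torch.cat([z, v], dim=-1)).view(-1, 1, H, W)

def training_step(step, t, v, depth_gt, enc, sampler, dec):
    mu, sigma = enc(t)
    if step % 2 == 0:
        # Text branch: reparameterized sample, eps ~ N(0, I).
        z = mu + sigma * torch.randn_like(sigma)
    else:
        # Image branch: the conditional sampler picks the latent code.
        z = sampler(v)
    return nn.functional.l1_loss(dec(z, v), depth_gt)

# Example: one text-branch and one image-branch step on random tensors.
enc, sampler, dec = VariationalTextEncoder(), ConditionalSampler(), DepthDecoder()
t, v = torch.randn(2, TEXT), torch.randn(2, IMG)
gt = torch.rand(2, 1, H, W)
for step in range(2):
    loss = training_step(step, t, v, gt, enc, sampler, dec)
```

At inference, only the image branch would be exercised: the conditional sampler produces the latent code, which the decoder turns into the output depth map.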
