

Poster

Naturally Supervised 3D Visual Grounding with Language-Regularized Concept Learners

Chun Feng · Joy Hsu · Weiyu Liu · Jiajun Wu

Arch 4A-E Poster #350
[ Project Page ] [ Paper PDF ] [ Poster ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

3D visual grounding is a challenging task that often requires direct, dense supervision, notably a semantic label for each object in the scene. In this paper, we instead study the naturally supervised setting, which learns from only 3D scenes and QA pairs, and where prior works underperform. We propose the Language-Regularized Concept Learner (LARC), which uses constraints from language as regularization to significantly improve the accuracy of neuro-symbolic concept learners in the naturally supervised setting. Our approach is based on two core insights: first, language constraints (e.g., a word's relation to another) can serve as effective regularization for structured representations in neuro-symbolic models; second, we can query large language models to distill such constraints from language properties. We show that LARC improves the performance of prior works on naturally supervised 3D visual grounding and demonstrates a wide range of 3D visual reasoning capabilities, from zero-shot composition to data efficiency and transferability. Our method represents a promising step toward regularizing structured visual reasoning frameworks with language-based priors for learning in settings without dense supervision.
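To make the two insights concrete, below is a minimal Python sketch of how a language constraint could be distilled from an LLM and applied as a regularizer. This is an illustration, not the authors' implementation: the `llm` callable, the function names, and the choice of relation symmetry (one example of a word's relation to another) are all assumptions.

```python
# Hypothetical sketch, not the paper's released code: distill a language
# constraint (symmetry of a relation word) with an LLM, then use it as a
# regularization term on a neuro-symbolic model's relation predictions.
import torch
import torch.nn.functional as F


def distill_symmetry_constraints(llm, relation_words):
    """Ask an LLM which relation words are symmetric (e.g., 'next to').

    `llm` is assumed to be any callable mapping a prompt string to a text
    completion; the prompt wording is illustrative only.
    """
    constraints = {}
    for word in relation_words:
        prompt = (
            f"Is the spatial relation '{word}' symmetric? That is, "
            f"if 'A is {word} B' holds, does 'B is {word} A' hold? "
            "Answer yes or no."
        )
        constraints[word] = llm(prompt).strip().lower().startswith("yes")
    return constraints


def symmetry_regularizer(relation_logits, constraints, word_index):
    """Penalize asymmetric scores for relations the LLM marked symmetric.

    relation_logits: tensor of shape (num_relations, N, N), holding a
    score for each relation word over all ordered object pairs (i, j).
    """
    loss = torch.zeros((), device=relation_logits.device)
    for word, is_symmetric in constraints.items():
        if is_symmetric:
            scores = relation_logits[word_index[word]]  # (N, N)
            # A symmetric relation should score (i, j) and (j, i) alike.
            loss = loss + F.mse_loss(scores, scores.transpose(0, 1))
    return loss
```

In training, such a term would presumably be added to the QA objective, e.g. `loss = qa_loss + lam * symmetry_regularizer(...)`, with the weight `lam` a tunable hyperparameter; the actual constraints and loss form used by LARC are described in the paper.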
