Poster

LEMON: Learning 3D Human-Object Interaction Relation from 2D Images

Yuhang Yang · Wei Zhai · Hongchen Luo · Yang Cao · Zheng-Jun Zha

Arch 4A-E Poster #160
[ Project Page ] [ Paper PDF ]
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: Learning 3D human-object interaction relation is pivotal to embodied AI and interaction modeling. Most existing methods approach the goal by learning to predict isolated interaction elements, e.g., human contact, object affordance, and human-object spatial relation, primarily from the perspective of either the human or the object. Such methods underexploit certain correlations between the interaction counterparts (human and object) and struggle to address the uncertainty in interactions. In fact, objects' functionalities potentially affect humans' interaction intentions, which reveals what the interaction is. Meanwhile, the interacting humans and objects exhibit matching geometric structures, which indicate how to interact. In light of this, we propose harnessing these inherent correlations between interaction counterparts to mitigate the uncertainty and jointly anticipate the above interaction elements in 3D space. To achieve this, we present $\mathbf{LEMON}$ ($\mathbf{LE}$arning 3D hu$\mathbf{M}$an-$\mathbf{O}$bject i$\mathbf{N}$teraction relation), a unified model that mines the interaction intentions of both counterparts and employs curvatures to guide the extraction of geometric correlations, combining them to anticipate the interaction elements. In addition, the $\mathbf{3D}$ $\mathbf{I}$nteraction $\mathbf{R}$elation dataset ($\mathbf{3DIR}$) is collected to serve as the test bed for training and evaluation. Extensive experiments demonstrate the superiority of LEMON over methods that estimate each element in isolation. The code and dataset will be publicly released.
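
The abstract describes LEMON only at a high level, so the sketch below is not its actual architecture. It is a minimal PyTorch illustration of the joint-prediction idea: an image-derived intention feature and curvature-augmented point features for the human and the object are fused into one shared context, from which human contact, object affordance, and a coarse human-object spatial offset are predicted together. All module names, feature dimensions, and the concatenation-based fusion (JointInteractionModel, point_mlp) are assumptions made for illustration, not details taken from the paper or its released code.

# Minimal sketch (PyTorch) of joint interaction-element prediction; all design
# choices here are illustrative assumptions, not LEMON's actual architecture.
import torch
import torch.nn as nn


def point_mlp(in_dim, out_dim):
    # Shared per-point MLP (PointNet-style), applied to (B, N, in_dim) tensors.
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, out_dim), nn.ReLU(),
    )


class JointInteractionModel(nn.Module):
    # Jointly predicts per-vertex human contact, per-point object affordance,
    # and a coarse human-object spatial offset from an image feature plus
    # human/object geometry. Curvature is appended as an extra per-point
    # channel, a simple stand-in for curvature-guided geometric features.
    def __init__(self, img_dim=512, feat_dim=128):
        super().__init__()
        self.human_enc = point_mlp(4, feat_dim)          # xyz (3) + curvature (1)
        self.object_enc = point_mlp(4, feat_dim)
        self.intent_proj = nn.Linear(img_dim, feat_dim)  # image feature -> intention cue
        # Fuse intention with pooled human/object geometry into one shared context.
        self.fuse = nn.Sequential(nn.Linear(3 * feat_dim, feat_dim), nn.ReLU())
        self.contact_head = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.affordance_head = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.spatial_head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, img_feat, human_pts, human_curv, obj_pts, obj_curv):
        # img_feat: (B, img_dim); human_pts: (B, Nh, 3); human_curv: (B, Nh, 1)
        # obj_pts: (B, No, 3); obj_curv: (B, No, 1)
        h = self.human_enc(torch.cat([human_pts, human_curv], dim=-1))    # (B, Nh, F)
        o = self.object_enc(torch.cat([obj_pts, obj_curv], dim=-1))       # (B, No, F)
        intent = self.intent_proj(img_feat)                               # (B, F)
        g = self.fuse(torch.cat([intent, h.mean(1), o.mean(1)], dim=-1))  # shared context (B, F)

        # Broadcast the shared context to every point so the three predictions are coupled.
        g_h = g.unsqueeze(1).expand(-1, h.size(1), -1)
        g_o = g.unsqueeze(1).expand(-1, o.size(1), -1)
        contact = torch.sigmoid(self.contact_head(torch.cat([h, g_h], dim=-1)))        # (B, Nh, 1)
        affordance = torch.sigmoid(self.affordance_head(torch.cat([o, g_o], dim=-1)))  # (B, No, 1)
        spatial = self.spatial_head(g)                                                 # (B, 3)
        return contact, affordance, spatial


if __name__ == "__main__":
    B, Nh, No = 2, 6890, 2048  # e.g., SMPL vertex count and a sampled object point cloud
    model = JointInteractionModel()
    outs = model(torch.randn(B, 512), torch.randn(B, Nh, 3), torch.randn(B, Nh, 1),
                 torch.randn(B, No, 3), torch.randn(B, No, 1))
    print([t.shape for t in outs])

The point of this toy setup is only that all three heads read the same fused intention-plus-geometry context, so contact, affordance, and spatial relation are anticipated jointly rather than estimated in isolation; the specific point counts and layer sizes are arbitrary.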
