

Poster

Grounded Question-Answering in Long Egocentric Videos

Shangzhe Di · Weidi Xie

Arch 4A-E Poster #319
[ Project Page ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Existing approaches to video understanding, mainly designed for short videos from a third-person perspective, are limited in their applicability to certain fields, such as robotics. In this paper, we delve into open-ended question-answering (QA) in long, egocentric videos, which allows individuals or robots to inquire about their own past visual experiences. This task presents unique challenges, including the complexity of temporally grounding queries within extensive video content, the high resource demands of precise data annotation, and the inherent difficulty of evaluating open-ended answers due to their ambiguous nature. Our proposed approach tackles these challenges by (i) integrating query grounding and answering within a unified model to reduce error propagation; (ii) employing large language models for efficient and scalable data synthesis; and (iii) introducing a close-ended QA task for evaluation to manage answer ambiguity. Extensive experiments demonstrate the effectiveness of our method, which also achieves state-of-the-art performance on the QAEgo4D and Ego4D-NLQ benchmarks. Code, data, and models are open-sourced at https://github.com/Becomebright/GroundVQA.
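To make design choice (i) concrete, here is a minimal PyTorch sketch of what a unified grounding-and-answering model can look like: a shared encoder fuses video features with the tokenized question, and two heads jointly predict the temporal window and the answer, so no hard grounding decision is committed before answering. All class, head, and parameter names below are illustrative assumptions for exposition, not the actual GroundVQA interface; see the repository above for the authors' implementation.

```python
# Hypothetical sketch of a unified grounding + QA model (not the GroundVQA API).
import torch
import torch.nn as nn

class UnifiedGroundedQA(nn.Module):
    def __init__(self, d_model=256, vocab_size=32000):
        super().__init__()
        # Shared encoder fuses video features with the tokenized question.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.question_embed = nn.Embedding(vocab_size, d_model)
        # Head 1: per-frame start/end scores for the grounded temporal window.
        self.span_head = nn.Linear(d_model, 2)
        # Head 2: answer token logits (a stand-in for a full text decoder).
        self.answer_head = nn.Linear(d_model, vocab_size)

    def forward(self, video_feats, question_ids):
        # video_feats: (B, T, d_model); question_ids: (B, L)
        q = self.question_embed(question_ids)
        x = self.encoder(torch.cat([video_feats, q], dim=1))
        T = video_feats.size(1)
        span_logits = self.span_head(x[:, :T])      # (B, T, 2): start/end scores
        answer_logits = self.answer_head(x[:, T:])  # (B, L, vocab_size)
        return span_logits, answer_logits

model = UnifiedGroundedQA()
video = torch.randn(1, 128, 256)            # 128 frames of pre-extracted features
question = torch.randint(0, 32000, (1, 12)) # a 12-token question
span, answer = model(video, question)
print(span.shape, answer.shape)  # torch.Size([1, 128, 2]) torch.Size([1, 12, 32000])
```

Because both heads read the same encoder states, the grounding and answering losses can be summed and optimized jointly, which is the point of the unified formulation: errors in localizing the window are not baked in before the answer is produced.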
