

Paper in Workshop: DriveX - Foundation Models for V2X-Based Cooperative Autonomous Driving

V3LMA: Visual 3D-enhanced Language Model for Autonomous Driving

Jannik Lübberstedt · Esteban Guerrero Guerrero · Nico Uhlemann · Markus Lienkamp


Abstract:

Large Vision-Language Models (LVLMs) have shown strong capabilities in understanding and analyzing visual scenes across various domains. In the context of autonomous driving, however, their limited comprehension of 3D environments restricts their effectiveness in achieving a complete and safe understanding of dynamic surroundings. To address this, we introduce V3LMA, a novel approach that enhances 3D scene understanding by integrating Large Language Models (LLMs) with LVLMs. V3LMA leverages textual descriptions generated from object detections and video inputs, significantly boosting performance without requiring fine-tuning. Through a dedicated preprocessing pipeline that extracts 3D object data, our method improves situational awareness and decision-making in complex traffic scenarios, achieving a score of 0.56 on the LingoQA benchmark. We further explore different fusion strategies and token combinations with the goal of advancing the interpretation of traffic scenes, ultimately enabling safer autonomous driving systems.
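To make the core idea concrete, the sketch below illustrates one plausible way such a preprocessing step could turn 3D object detections into textual scene descriptions for an LLM prompt. It is a minimal illustration only: the Detection structure, the coordinate and category conventions, and the describe_scene helper are assumptions for exposition, not the authors' pipeline or API.

from dataclasses import dataclass

@dataclass
class Detection:
    category: str   # e.g. "car", "pedestrian" (hypothetical label set)
    x: float        # lateral offset in meters (negative = left of ego vehicle)
    z: float        # longitudinal distance ahead of the ego vehicle, in meters
    speed: float    # speed in m/s; 0.0 for static objects

def describe_detection(det: Detection) -> str:
    """Render one 3D detection as a short natural-language clause."""
    side = "left" if det.x < 0 else "right"
    motion = "stationary" if det.speed < 0.1 else f"moving at {det.speed:.1f} m/s"
    return (f"a {det.category} about {det.z:.0f} m ahead, "
            f"{abs(det.x):.1f} m to the {side}, {motion}")

def describe_scene(detections: list[Detection]) -> str:
    """Combine per-object clauses into a scene summary to prepend to a prompt."""
    if not detections:
        return "No relevant objects were detected around the ego vehicle."
    clauses = "; ".join(describe_detection(d) for d in detections)
    return f"Detected objects: {clauses}."

if __name__ == "__main__":
    scene = [
        Detection("car", x=1.8, z=25.0, speed=8.3),
        Detection("pedestrian", x=-3.2, z=12.0, speed=1.1),
    ]
    # In a fusion setup like the one described, this text would be combined
    # with video/frame tokens before querying the vision-language model.
    print(describe_scene(scene))

The design point this illustrates is that 3D geometry, which LVLMs handle poorly from pixels alone, can be injected as plain text that any LLM consumes without fine-tuning.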
