Poster

Accurate Training Data for Occupancy Map Prediction in Automated Driving Using Evidence Theory

Jonas Kälble · Sascha Wirges · Maxim Tatarchenko · Eddy Ilg

Arch 4A-E Poster #43
Poster: Wed 19 Jun, 5 p.m. – 6:30 p.m. PDT

Abstract:

Automated driving fundamentally requires knowledge about the surrounding geometry of the scene. Modern approaches use only captured images to predict occupancy maps that represent this geometry. Training these approaches requires accurate data that may be acquired with the help of LiDAR scanners. We show that the techniques used by current benchmarks and training datasets to convert LiDAR scans into occupancy grid maps yield very low quality, and subsequently present a novel approach using evidence theory that yields more accurate reconstructions. We demonstrate that these are superior by a large margin, both qualitatively and quantitatively, and that we additionally obtain meaningful uncertainty estimates. When converting the occupancy maps back to depth estimates and comparing them with the original LiDAR measurements from the nuScenes dataset, our method yields an MAE improvement of over 55 cm (30%) over the baseline Occ3D and of 98 cm (52%) over the baseline OpenOccupancy. Finally, we use the improved occupancy maps to train a state-of-the-art occupancy prediction method and demonstrate that it improves by 47 cm (25%).
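The evidence-theory idea referenced in the abstract can be illustrated with a minimal sketch of Dempster–Shafer fusion of LiDAR evidence on a small 2D grid: cells a ray passes through receive "free" mass, the cell at the ray endpoint receives "occupied" mass, and per-cell masses from multiple rays are combined with Dempster's rule. The mass values, the grid size, and the helper names (dempster_combine, ray_evidence) are illustrative assumptions for this sketch; they are not the paper's implementation, which operates on the full 3D occupancy grids of the benchmarks mentioned above.

```python
# Illustrative sketch only: Dempster-Shafer (evidence-theory) fusion of per-cell
# occupancy evidence from LiDAR rays on a 2D grid. Mass values, grid layout and
# function names are assumptions for demonstration, not the paper's code.
import numpy as np

def dempster_combine(m1, m2):
    """Combine two per-cell mass assignments (occupied, free, unknown) with Dempster's rule."""
    occ1, free1, unk1 = m1
    occ2, free2, unk2 = m2
    conflict = occ1 * free2 + free1 * occ2          # mass assigned to contradictory evidence
    norm = np.maximum(1.0 - conflict, 1e-6)         # avoid division by zero on total conflict
    occ = (occ1 * occ2 + occ1 * unk2 + unk1 * occ2) / norm
    free = (free1 * free2 + free1 * unk2 + unk1 * free2) / norm
    unk = 1.0 - occ - free                          # remaining mass stays on "unknown"
    return occ, free, unk

def ray_evidence(shape, traversed_cells, hit_cell, m_free=0.6, m_occ=0.7):
    """Per-ray evidence: traversed cells support 'free', the endpoint cell supports 'occupied'."""
    occ = np.zeros(shape)
    free = np.zeros(shape)
    rows, cols = zip(*traversed_cells)
    free[rows, cols] = m_free
    occ[hit_cell] = m_occ
    unk = 1.0 - occ - free
    return occ, free, unk

# Start from total ignorance (all mass on "unknown") and fuse two hypothetical rays.
# The second ray passes through cell (0, 2), where the first ray ended, so the fused
# map reflects both the agreement on free space and the conflict at that cell.
grid = (np.zeros((4, 4)), np.zeros((4, 4)), np.ones((4, 4)))
grid = dempster_combine(grid, ray_evidence((4, 4), [(0, 0), (0, 1)], (0, 2)))
grid = dempster_combine(grid, ray_evidence((4, 4), [(0, 0), (0, 1), (0, 2)], (0, 3)))

occ, free, unk = grid
print("occupied mass:\n", occ.round(2))
print("unknown mass (uncertainty):\n", unk.round(2))
```

In this formulation the leftover mass on "unknown" is a natural per-cell uncertainty measure, which is the kind of meaningful uncertainty estimate the abstract refers to; unobserved cells keep all their mass on "unknown" instead of being forced into a binary free/occupied decision.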
