

Poster

Continual Learning for Motion Prediction Model via Meta-Representation Learning and Optimal Memory Buffer Retention Strategy

Dae Jun Kang · Dongsuk Kum · Sanmin Kim

Arch 4A-E Poster #80
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Embodied AI, such as autonomous vehicles, suffers from insufficient and long-tailed data because the data must be collected from the physical world. In practice, data arrive continuously in a series of small batches, and the model must be continuously trained to achieve generalizability and scalability by correcting the biased data distribution. This paper addresses the training cost and catastrophic forgetting problems that arise when continuously updating models to adapt to incoming small batches from various environments for real-world motion prediction in autonomous driving. To this end, we propose a novel continual motion prediction (CMP) learning framework based on sparse meta-representation learning and an optimal memory buffer retention strategy. In meta-representation learning, the model explicitly learns a sparse representation of each driving environment, from road geometry to vehicle states, and is trained to reduce catastrophic forgetting through an augmented modulation network with sparsity regularization. In the adaptation phase, we develop an Optimal Memory Buffer Retention strategy that preserves diverse samples based on representation similarity. This approach handles the nuanced task distribution shifts characteristic of motion prediction datasets, keeping the model responsive to evolving input variations without requiring extensive resources. Experimental results demonstrate that the proposed method achieves superior adaptation performance compared with conventional continual learning approaches on a synthetic dataset developed for the continual learning problem.
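The abstract describes an augmented modulation network that gates the backbone's features per driving environment, with a sparsity penalty on the gate. The sketch below illustrates one plausible reading of that idea; the module and parameter names (ModulationNetwork, sparsity_weight) and the sigmoid-gate plus L1-penalty formulation are assumptions, not the authors' implementation.

```python
# Minimal sketch of a sparsity-regularized modulation network (assumed form).
import torch
import torch.nn as nn

class ModulationNetwork(nn.Module):
    """Produces a per-environment gating mask over backbone features."""
    def __init__(self, env_embed_dim: int, feature_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(env_embed_dim, feature_dim),
            nn.Sigmoid(),  # gate values in (0, 1); the L1 term pushes them toward 0
        )

    def forward(self, env_embedding: torch.Tensor) -> torch.Tensor:
        return self.gate(env_embedding)

def modulated_loss(pred_loss: torch.Tensor,
                   mask: torch.Tensor,
                   sparsity_weight: float = 1e-3) -> torch.Tensor:
    """Prediction loss plus an L1 term encouraging a sparse environment representation."""
    return pred_loss + sparsity_weight * mask.abs().mean()

# Usage (hypothetical): features from the motion-prediction backbone are gated
# element-wise before the decoder.
# mask = ModulationNetwork(env_embed_dim=32, feature_dim=256)(env_embedding)
# masked_features = backbone_features * mask
```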

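The abstract states only that the memory buffer preserves diverse samples based on representation similarity. The following sketch shows one way such a retention rule could work, using a greedy farthest-point selection in representation space; the criterion and the function name retain_diverse are assumptions for illustration.

```python
# Hypothetical similarity-based buffer retention rule (greedy diversity selection).
import torch

def retain_diverse(representations: torch.Tensor, buffer_size: int) -> list[int]:
    """Greedily pick `buffer_size` sample indices whose representations are
    maximally dissimilar (cosine distance) from those already retained."""
    reps = torch.nn.functional.normalize(representations, dim=1)
    n = reps.size(0)
    kept = [0]                                    # seed with the first sample
    dist = 1.0 - reps @ reps[0]                   # distance to the nearest kept sample
    while len(kept) < min(buffer_size, n):
        idx = int(torch.argmax(dist))             # farthest sample from the buffer
        kept.append(idx)
        dist = torch.minimum(dist, 1.0 - reps @ reps[idx])
    return kept

# Usage (hypothetical): after adapting to a new small batch, keep only the
# retained indices in the replay buffer.
# indices = retain_diverse(encoder(samples), buffer_size=256)
```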