

Paper in Workshop: The 5th Workshop of Adversarial Machine Learning on Computer Vision: Foundation Models + X

FullCycle: Full Stage Adversarial Attack For Reinforcement Learning Robustness Evaluation

Zhenshu Ma · Xuan Cai · Changhang Tian · Yuqi Fan · KeMou Jiang · Gangfu Liu · Xuesong Bai · Aoyong Li · Yilong Ren · Haiyang Yu


Abstract:

Recent advances in deep reinforcement learning (DRL) have demonstrated significant potential in applications such as autonomous driving and embodied intelligence. However, these large-scale, highly parameterized DRL models remain vulnerable to adversarial examples, and their prolonged training incurs substantial temporal and economic costs. Current methods focus primarily on adversarial attacks during isolated training phases, whereas practical deployments may face interference across all training stages. To address this gap, we propose FullCycle, a full-stage adversarial attack method that systematically assesses DRL robustness by injecting perturbations throughout the complete training pipeline. Experimental results show that FullCycle degrades algorithm convergence speed and agent performance to varying degrees. This work establishes a novel paradigm for robustness evaluation in reinforcement learning systems.
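The abstract describes the core idea, perturbing the agent's inputs at every stage of training rather than in a single isolated phase, without giving code. The sketch below is a minimal illustration of that idea as a Gymnasium observation wrapper; it is not the authors' implementation. The class name FullCyclePerturbation, the epsilon budget, the stage names, and the uniform-noise perturbation are all assumptions made for illustration; a real attack would typically use optimized rather than random perturbations.

```python
# Minimal sketch (assumed, not the paper's code): a Gymnasium wrapper that
# injects a bounded perturbation into every observation the agent sees,
# gated by which training stage is currently active.
import numpy as np
import gymnasium as gym


class FullCyclePerturbation(gym.ObservationWrapper):
    """Adds an L-infinity-bounded perturbation to observations in selected stages."""

    def __init__(self, env, epsilon=0.05, active_stages=("rollout", "update", "eval")):
        super().__init__(env)
        self.epsilon = epsilon                  # perturbation budget (illustrative)
        self.active_stages = set(active_stages)
        self.stage = "rollout"                  # the training loop updates this field

    def observation(self, obs):
        if self.stage not in self.active_stages:
            return obs
        # Uniform noise stands in for a learned or optimized perturbation.
        noise = np.random.uniform(-self.epsilon, self.epsilon, size=obs.shape)
        perturbed = np.clip(
            obs + noise,
            self.observation_space.low,
            self.observation_space.high,
        )
        return perturbed.astype(obs.dtype)


if __name__ == "__main__":
    env = FullCyclePerturbation(gym.make("CartPole-v1"), epsilon=0.05)
    obs, info = env.reset(seed=0)
    for _ in range(10):
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        if terminated or truncated:
            obs, info = env.reset()
```

In a full evaluation, the training loop would set the wrapper's stage attribute as it moves between rollout collection, policy updates, and evaluation, so perturbations can be enabled or disabled per stage and their effect on convergence speed and final performance measured separately.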
