Our paper on safe imitation learning has been accepted to Robotics and Autonomous Systems
[2026.02.02]
The following paper has been accepted to Robotics and Autonomous Systems:
SafeIL: Safety Constrained Imitation Learning for Autonomous Systems by Gunmin Lee, Jaeseok Heo, Dohyeong Kim, Geunje Cheon, Jeongwoo Oh, Minyoung Hwang, Chanwoo Park, and Songhwai Oh
- Abstract: Safety is a critical concern in controller design, yet developing a cost function that accurately reflects safety remains a significant challenge, akin to the complexities of designing a reward function. To address this, we introduce Safety Constrained Imitation Learning (SafeIL), an innovative safety-constrained imitation learning framework that simultaneously estimates reward and safety cost functions using two distinct sets of expert demonstrations: one aimed at maximizing rewards without considering safety, and the other focused on avoiding safety violations during execution. Through the use of dual independent discriminator networks, SafeIL effectively learns these functions, enabling the development of a controller that ensures safety while maintaining high performance. Our empirical evaluations across diverse simulation environments, including safety-gym, Metadrive, CARLA, F1tenth, and real-world platforms such as Jackal and RC car, demonstrate that SafeIL significantly outperforms existing methods like GAIL. Specifically, SafeIL achieves substantial reductions in constraint violations, including zero violations on the Jackal platform, and reduces violations by 79.6% compared to GAIL when utilizing safety-focused demonstrations, underscoring its potential to enhance safety in real-world robotic applications.
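The dual-discriminator idea from the abstract can be sketched in a few lines. This is an illustrative toy (not the authors' implementation): two GAIL-style logistic discriminators, one trained against reward-seeking expert demonstrations and one against safety-focused demonstrations, yield separate surrogate reward and cost signals. All names, data, and hyperparameters here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Discriminator:
    """Logistic discriminator D(s) ~ P(state came from the expert)."""
    def __init__(self, dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0
        self.lr = lr

    def step(self, expert, policy):
        # One gradient-ascent step on log D(expert) + log(1 - D(policy)).
        for x, y in [(expert, 1.0), (policy, 0.0)]:
            p = sigmoid(x @ self.w + self.b)
            g = y - p                          # gradient of log-likelihood w.r.t. logit
            self.w += self.lr * (x.T @ g) / len(x)
            self.b += self.lr * g.mean()

    def __call__(self, x):
        return sigmoid(x @ self.w + self.b)

# Toy 2-D state distributions standing in for the two demonstration sets.
rng = np.random.default_rng(1)
dim = 2
reward_expert = rng.normal(loc=[2, 0], size=(256, dim))  # performance-focused demos
safe_expert   = rng.normal(loc=[0, 2], size=(256, dim))  # safety-focused demos
policy        = rng.normal(loc=[0, 0], size=(256, dim))  # current policy rollouts

# Two independent discriminators, one per demonstration set.
D_r, D_c = Discriminator(dim, seed=2), Discriminator(dim, seed=3)
for _ in range(200):
    D_r.step(reward_expert, policy)
    D_c.step(safe_expert, policy)

# GAIL-style surrogate signals for a query state: the learned reward is high
# where states resemble the reward demos; the learned cost is high where
# states do NOT resemble the safe demos.
s = np.zeros((1, dim))
reward = np.log(D_r(s) + 1e-8)
cost = -np.log(D_c(s) + 1e-8)
```

In a full constrained-RL pipeline, the policy would then be optimized to maximize the learned reward subject to a bound on the learned cost; this sketch only shows how two independent discriminators produce the two signals.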
