Abstract
Autonomous driving systems integrate complex interactions between hardware and software. To ensure safe and reliable operation, formal methods are used in the design stage to provide rigorous guarantees that the system satisfies its logical specifications and safety-critical requirements. At run time, deep reinforcement learning (DRL), a widely employed machine learning architecture that learns an optimal policy maximizing the cumulative discounted reward by interacting with the environment, is widely applied in autonomous driving decision-making modules. However, black-box DRL-based autonomous driving systems can guarantee neither safe operation nor the interpretability of the reward functions defined for complex tasks, especially when they face unfamiliar situations and must reason about a large number of options. To address these problems, we adopt formal spatio-clock synchronous constraints to improve the safety and interpretability of DRL. Firstly, we propose a dedicated formal property specification language for the autonomous driving domain, the spatio-clock synchronous constraint specification language, whose domain-specific requirement specifications are close to natural language and make the generation of reward functions more interpretable. Secondly, we present domain-specific spatio-clock synchronous automata that describe spatio-clock autonomous behaviors, i.e., controllers for spatio- and clock-critical actions, together with safe state-action space transition systems that guarantee the safety of the policies learned by DRL. Thirdly, building on the formal specification and policy learning, we propose a formal spatio-clock synchronous constraint guided safe reinforcement learning method whose safe reward function is easy to understand. Finally, we demonstrate the effectiveness of the proposed approach through a case study of autonomous lane changing and overtaking in a highway scenario.
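As a concrete illustration of the idea (not the authors' implementation), the following Python sketch shows how a formal spatio-clock constraint could guide a temporal-difference learner: a hand-written monitor, safe_actions, masks lane changes that violate an assumed spatial-gap/clock specification and supplies a shaping penalty, while a tabular Q-learning agent learns on a toy, randomly generated highway abstraction. The state encoding, thresholds, reward values, and dynamics are all illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["keep", "left", "right", "accel", "brake"]

def safe_actions(state):
    """Toy spatio-clock constraint: a lane change is allowed only if the spatial
    gap to the leading vehicle is large enough (gap_bucket >= 2) and enough time
    has elapsed since the last lane change (clock >= 3). Thresholds are assumed."""
    gap_bucket, clock, lane = state
    allowed = set(ACTIONS)
    if gap_bucket < 2 or clock < 3:
        allowed -= {"left", "right"}
    if lane == 0:
        allowed.discard("left")
    if lane == 2:
        allowed.discard("right")
    return allowed

def step(state, action):
    """Toy stochastic highway dynamics; rewards and transitions are invented."""
    gap_bucket, clock, lane = state
    reward = 1.0 if action in ("left", "right") else 0.1  # overtaking is rewarded
    if action not in safe_actions(state):
        reward -= 10.0  # shaping penalty from the constraint (never hit under masking)
    lane += -1 if action == "left" else 1 if action == "right" else 0
    lane = max(0, min(2, lane))
    clock = 0 if action in ("left", "right") else min(5, clock + 1)
    return (random.randint(0, 3), clock, lane), reward

# Tabular temporal-difference (Q-learning) agent restricted to constraint-safe actions.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = (random.randint(0, 3), 0, 1)
for _ in range(10_000):
    candidates = list(safe_actions(state))  # action masking keeps exploration safe
    if random.random() < epsilon:
        action = random.choice(candidates)
    else:
        action = max(candidates, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in safe_actions(next_state))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

In the paper itself this role is played by the spatio-clock synchronous automata and safe state-action space transition systems derived from the specification language, rather than a hand-coded monitor as above.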
Authors
Wang Jinyong
Huang Zhiqiu
Yang Deyan
Xiaowei Huang
Zhu Yi
Hua Gaoyang
Wang Jinyong; Huang Zhiqiu; Yang Deyan; Xiaowei Huang; Zhu Yi; Hua Gaoyang (College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106; Key Laboratory of Safety-Critical Software (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing 211106; School of Computer Science and Technology, Jiangsu Normal University, Xuzhou, Jiangsu 221116; Department of Computer Science, University of Liverpool, Liverpool, UK L69 3BX)
Source
《计算机研究与发展》
EI
CSCD
Peking University Core Journals (北大核心)
2021, No. 12, pp. 2585-2603 (19 pages)
Journal of Computer Research and Development
Funding
National Key Research and Development Program of China (2018YFB1003900)
National Natural Science Foundation of China (61772270, 62077029).
Keywords
spatio-clock synchronous constraint
formal specification
safe reinforcement learning
temporal difference
intelligent traffic simulation
autonomous driving safety