Journal Articles — 2 results found
Robust Platoon Control of Mixed Autonomous and Human-Driven Vehicles for Obstacle Collision Avoidance: A Cooperative Sensing-Based Adaptive Model Predictive Control Approach
Authors: Daxin Tian, Jianshan Zhou, Xu Han, Ping Lang. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 11, pp. 244-266 (23 pages)
Obstacle detection and platoon control for mixed traffic flows, comprising human-driven vehicles (HDVs) and connected and autonomous vehicles (CAVs), face challenges from uncertain disturbances such as sensor faults, inaccurate driver operations, and mismatched model errors. Furthermore, misleading sensing information or malicious attacks in vehicular wireless networks can jeopardize CAVs' perception and platoon safety. In this paper, we develop a two-dimensional robust control method for a mixed platoon, including a single leading CAV and multiple following HDVs, that incorporates robust information sensing and platoon control. To effectively detect and locate unknown obstacles ahead of the leading CAV, we propose a cooperative vehicle-infrastructure sensing scheme and integrate it with an adaptive model predictive control scheme for the leading CAV. This sensing scheme fuses information from multiple nodes while suppressing malicious data from attackers to enhance robustness and attack resilience in a distributed and adaptive manner. Additionally, we propose a distributed car-following control scheme with robustness guarantees for the following HDVs under uncertain disturbances. We also provide theoretical proof of string stability under this control framework. Finally, extensive simulations are conducted to validate our approach. The simulation results demonstrate that our method can effectively filter out misleading sensing information from malicious attackers, significantly reduce the mean-square deviation in obstacle sensing, and approach the theoretical error lower bound. Moreover, the proposed control method successfully achieves obstacle avoidance for the mixed platoon while ensuring stability and robustness in the face of external attacks and uncertain disturbances.
Keywords: Connected autonomous vehicle; Mixed vehicle platoon; Obstacle collision avoidance; Cooperative sensing; Adaptive model predictive control
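The abstract describes a sensing scheme that fuses reports from multiple vehicle-infrastructure nodes while suppressing malicious data. As a minimal illustration of that general idea (not the paper's actual algorithm; the function name and parameters here are hypothetical), a per-axis trimmed mean discards the most extreme readings so a minority of faulty or adversarial nodes cannot skew the fused obstacle estimate:

```python
import numpy as np

def robust_fuse(estimates, trim_ratio=0.2):
    """Fuse obstacle-position estimates from multiple sensing nodes.

    estimates  : (n_nodes, dim) list/array of per-node position estimates
    trim_ratio : fraction of extreme values dropped at each end, per axis

    Dropping the tails bounds the influence of a small number of
    malicious or faulty reports before averaging the remainder.
    """
    est = np.asarray(estimates, dtype=float)
    n = est.shape[0]
    k = int(n * trim_ratio)  # number of readings trimmed from each end
    fused = []
    for axis_vals in est.T:          # fuse each coordinate independently
        ordered = np.sort(axis_vals)
        kept = ordered[k:n - k] if k > 0 else ordered
        fused.append(kept.mean())
    return np.array(fused)

# Four honest nodes report an obstacle near (50, 3); one attacker injects (500, 30).
reports = [[50.2, 3.1], [49.8, 2.9], [50.1, 3.0], [50.0, 3.2], [500.0, 30.0]]
print(robust_fuse(reports, trim_ratio=0.2))
```

The attacker's outlier falls in the trimmed tail, so the fused estimate stays near the true obstacle position; a plain mean would be pulled far off.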
Human as AI Mentor: Enhanced Human-in-the-Loop Reinforcement Learning for Safe and Efficient Autonomous Driving
Authors: Zilin Huang, Zihao Sheng, Chengyuan Ma, Sikai Chen. Communications in Transportation Research, 2024, Issue 1, pp. 124-147 (24 pages)
Despite significant progress in autonomous vehicles (AVs), the development of driving policies that ensure both the safety of AVs and traffic flow efficiency has not yet been fully explored. In this paper, we propose an enhanced human-in-the-loop reinforcement learning method, termed the Human as AI mentor-based deep reinforcement learning (HAIM-DRL) framework, which facilitates safe and efficient autonomous driving in a mixed traffic platoon. Drawing inspiration from the human learning process, we first introduce an innovative learning paradigm that effectively injects human intelligence into AI, termed Human as AI mentor (HAIM). In this paradigm, the human expert serves as a mentor to the AI agent. While allowing the agent to sufficiently explore uncertain environments, the human expert can take control in dangerous situations and demonstrate correct actions to avoid potential accidents. On the other hand, the agent can be guided to minimize traffic flow disturbance, thereby optimizing traffic flow efficiency. In detail, HAIM-DRL leverages data collected from free exploration and partial human demonstrations as its two training sources. Remarkably, we circumvent the intricate process of manually designing reward functions; instead, we directly derive proxy state-action values from partial human demonstrations to guide the agents' policy learning. Additionally, we employ a minimal intervention technique to reduce the human mentor's cognitive load. Comparative results show that HAIM-DRL outperforms traditional methods in driving safety, sampling efficiency, mitigation of traffic flow disturbance, and generalizability to unseen traffic scenarios.
Keywords: Human as AI mentor paradigm; Autonomous driving; Deep reinforcement learning; Human-in-the-loop learning; Driving policy; Mixed traffic platoon
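The HAIM paradigm lets the agent explore freely while the human mentor overrides it in dangerous states, with the override logged as a partial demonstration for training. A minimal sketch of that intervention gating (function names and the toy mentor policy are assumptions for illustration, not the paper's API):

```python
def rollout_step(agent_action_fn, human_monitor_fn, state):
    """One environment step under a human-as-mentor scheme.

    The agent acts freely unless the human mentor judges the state
    dangerous, in which case the mentor's demonstrated action overrides
    the agent's. The boolean flag marks whether the sample should be
    stored as a partial human demonstration (True) or as free
    exploration data (False) -- the framework's two training sources.
    """
    agent_action = agent_action_fn(state)
    human_action = human_monitor_fn(state)  # None => no intervention
    if human_action is not None:
        return human_action, True   # human takeover: demonstration sample
    return agent_action, False      # free exploration sample

# Toy usage: the mentor intervenes only when |state| exceeds a safety
# threshold, keeping interventions (and cognitive load) minimal.
agent = lambda s: 0.5 * s
mentor = lambda s: -s if abs(s) > 1.0 else None
print(rollout_step(agent, mentor, 2.0))   # (-2.0, True): mentor takes over
print(rollout_step(agent, mentor, 0.4))   # (0.2, False): agent explores
```

Gating interventions on a danger condition, rather than supervising every step, is one way to realize the "minimal intervention" idea the abstract mentions.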