Journal Articles
9 articles found
1. Stability of General Linear Dynamic Multi-Agent Systems under Switching Topologies with Positive Real Eigenvalues
Authors: Shengbo Eben Li, Zhitao Wang, Yang Zheng, Diange Yang, Keyou You. Engineering (SCIE, EI), 2020, No. 6, pp. 688-694.
The time-varying network topology can significantly affect the stability of multi-agent systems. This paper examines the stability of leader-follower multi-agent systems with general linear dynamics and switching network topologies, which have applications in the platooning of connected vehicles. The switching interaction topology is modeled as a class of directed graphs in order to describe the information exchange between multi-agent systems, where the eigenvalues of every associated matrix are required to be positive real. The Hurwitz criterion and the Riccati inequality are used to design a distributed control law and estimate the convergence speed of the closed-loop system. A sufficient condition is provided for the stability of multi-agent systems under switching topologies. A common Lyapunov function is formulated to prove closed-loop stability for the directed network with switching topologies. The result is applied to a typical cyber-physical system, a connected vehicle platoon, which illustrates the effectiveness of the proposed method.
Keywords: Stability; Multi-agent system; Switching topologies; Common Lyapunov function
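As a rough illustration of the design route described in this abstract (not the paper's actual control law), the sketch below computes a Riccati-based feedback gain and checks the Hurwitz criterion on the per-eigenvalue closed-loop matrices; the agent model, weights, and topology eigenvalues are assumed placeholders.

```python
# Rough sketch only: Riccati-based gain plus a per-eigenvalue Hurwitz check for a
# leader-follower consensus setup. Agent model, weights, and topology eigenvalues
# are assumed placeholders, not taken from the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])               # assumed agent dynamics (double integrator)
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)

P = solve_continuous_are(A, B, Q, R)     # Riccati solution
K = np.linalg.inv(R) @ B.T @ P           # feedback gain

topology_eigs = [0.6, 1.0, 2.3]          # assumed positive real topology eigenvalues

for lam in topology_eigs:
    Acl = A - lam * B @ K                # closed-loop matrix seen by this eigenvalue
    hurwitz = bool(np.all(np.linalg.eigvals(Acl).real < 0))
    print(f"lambda = {lam:.2f}: Hurwitz = {hurwitz}")
```

If every such matrix is Hurwitz and a common Lyapunov matrix exists for all of them, stability under arbitrary switching among these topologies follows, which mirrors the structure of the sufficient condition summarized in the abstract.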
2. GOPS: A general optimal control problem solver for autonomous driving and industrial control applications (cited 1 time)
Authors: Wenxuan Wang, Yuhang Zhang, Jiaxin Gao, Yuxuan Jiang, Yujie Yang, Zhilong Zheng, Wenjun Zou, Jie Li, Congsheng Zhang, Wenhan Cao, Genjin Xie, Jingliang Duan, Shengbo Eben Li. Communications in Transportation Research, 2023, No. 1, pp. 92-106.
Solving optimal control problems serves as the basic demand of industrial control tasks. Existing methods like model predictive control often suffer from heavy online computational burdens. Reinforcement learning has shown promise in computer and board games but has yet to be widely adopted in industrial applications due to a lack of accessible, high-accuracy solvers. Current reinforcement learning (RL) solvers are often developed for academic research and require a significant amount of theoretical knowledge and programming skill. Besides, many of them only support Python-based environments and are limited to model-free algorithms. To address this gap, this paper develops the General Optimal control Problems Solver (GOPS), an easy-to-use RL solver package that aims to build real-time and high-performance controllers for industrial fields. GOPS is built with a highly modular structure that retains a flexible framework for secondary development. Considering the diversity of industrial control tasks, GOPS also includes a conversion tool that allows the use of Matlab/Simulink for environment construction, controller design, and performance validation. To handle large-scale problems, GOPS can automatically create various serial and parallel trainers by flexibly combining embedded buffers and samplers. It offers a variety of common approximate functions for policy and value functions, including polynomial, multilayer perceptron, and convolutional neural network. Additionally, constrained and robust algorithms for special industrial control systems with state constraints and model uncertainties are also integrated into GOPS. Several examples, including linear quadratic control, inverted double pendulum, vehicle tracking, humanoid robot, obstacle avoidance, and active suspension control, are tested to verify the performance of GOPS.
Keywords: Industrial control; Reinforcement learning; Approximate dynamic programming; Optimal control; Neural network; Benchmark
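As a loose companion to the linear quadratic control example mentioned among the test cases, here is a minimal sketch (not GOPS code and not its API) that solves a toy LQ problem by iterating the discrete-time Riccati recursion; the plant matrices and weights are assumed.

```python
# Minimal sketch (not GOPS code, not its API): the flavor of linear quadratic
# problem listed among the solver's examples, solved by iterating the discrete
# Riccati recursion until the value function converges. All numbers are assumed.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])               # assumed discrete double-integrator plant
B = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

P = np.zeros((2, 2))
for _ in range(500):                     # value iteration on the Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-9:
        P = P_next
        break
    P = P_next

print("converged feedback gain K =", K)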
3. End-to-End Autonomous Driving Through Dueling Double Deep Q-Network (cited 6 times)
Authors: Baiyu Peng, Qi Sun, Shengbo Eben Li, Dongsuk Kum, Yuming Yin, Junqing Wei, Tianyu Gu. Automotive Innovation (EI, CSCD), 2021, No. 3, pp. 328-337.
Recent years have seen the rapid development of autonomous driving systems, which are typically designed in a hierarchical architecture or an end-to-end architecture. The hierarchical architecture is always complicated and hard to design, while the end-to-end architecture is more promising due to its simple structure. This paper puts forward an end-to-end autonomous driving method through a deep reinforcement learning algorithm, Dueling Double Deep Q-Network, making it possible for the vehicle to learn end-to-end driving by itself. The paper first proposes an architecture for the end-to-end lane-keeping task. Unlike the traditional image-only state space, the presented state space is composed of both camera images and vehicle motion information. Then the corresponding dueling neural network structure is introduced, which reduces the variance and improves sampling efficiency. Thirdly, the proposed method is applied to The Open Racing Car Simulator (TORCS) to demonstrate its performance, where it surpasses human drivers. Finally, the saliency map of the neural network is visualized, which indicates that the trained network drives by observing the lane lines. A video of the presented work is available online at https://youtu.be/76ciJmIHMD8 or https://v.youku.com/v_show/id_XNDM4ODc0MTM4NA==.html.
Keywords: End-to-end autonomous driving; Reinforcement learning; Deep Q-network; Neural network
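The dueling structure referenced in the abstract can be sketched as follows: a generic dueling Q-network head in PyTorch with assumed feature and action dimensions, not the paper's exact architecture.

```python
# Hedged sketch of a dueling Q-network head (generic, not the paper's exact model):
# separate value and advantage streams recombined as
# Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, feature_dim, n_actions):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        self.advantage = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, features):
        v = self.value(features)                    # state value V(s)
        a = self.advantage(features)                # advantages A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)  # dueling aggregation

q = DuelingHead(feature_dim=256, n_actions=9)(torch.randn(4, 256))
print(q.shape)  # torch.Size([4, 9])
```

In practice this head would sit on top of a feature extractor that fuses the camera images and vehicle motion information described above.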
4. Self-learning drift control of automated vehicles beyond handling limit after rear-end collision
Authors: Yuming Yin, Shengbo Eben Li, Keqiang Li, Jue Yang, Fei Ma. Transportation Safety and Environment (EI), 2020, No. 2, pp. 97-105.
Vehicles involved in traffic accidents generally experience divergent vehicle motion, which causes severe damage. This paper presents a self-learning drift-control method for stabilizing a vehicle's yaw motions after a high-speed rear-end collision. The struck vehicle generally experiences substantial drifting and/or spinning after the collision, which is beyond the handling limit and difficult to control. Drift control of the struck vehicle along the original lane was investigated. The rear-end collision was treated as a set of impact forces, and the three-dimensional non-linear dynamic responses of the vehicle were considered in the drift control. A multilayer perceptron neural network was trained as a deterministic control policy using the actor-critic reinforcement learning framework. The control policy was iteratively updated, initiating from a random parameterized policy. The results show that the self-learning controller gained the ability to eliminate unstable vehicle motion after data-driven training of about 60,000 iterations. The controlled struck vehicle was also able to drift back to its original lane in a variety of rear-end collision scenarios, which could significantly reduce the risk of a second collision in traffic.
Keywords: Automated vehicle; Drift control; Reinforcement learning; Rear-end collision
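A minimal sketch of one deterministic actor-critic update in the DDPG style is shown below; the paper's exact algorithm, vehicle state definition, network sizes, and reward are not reproduced, and all dimensions are assumed.

```python
# Hedged sketch: one deterministic actor-critic (DDPG-style) policy update.
# The state/action dimensions, network sizes, and data are placeholders.
import torch
import torch.nn as nn

state_dim, action_dim = 6, 2                       # assumed vehicle state and control dims
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

states = torch.randn(32, state_dim)                # placeholder batch of vehicle states
actions = actor(states)                            # deterministic policy output
actor_loss = -critic(torch.cat([states, actions], dim=1)).mean()

actor_opt.zero_grad()
actor_loss.backward()                              # policy gradient through the critic
actor_opt.step()
```

The critic itself would be fitted to the accumulated reward in a separate step; only the actor update is illustrated here.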
5. Markov probabilistic decision making of self-driving cars in highway with random traffic flow: a simulation study
Authors: Yang Guan, Shengbo Eben Li, Jingliang Duan, Wenjun Wang, Bo Cheng. Journal of Intelligent and Connected Vehicles, 2018, No. 2, pp. 77-84.
Purpose – Decision-making is one of the key technologies for self-driving cars. The high dependency of previously existing methods on human driving data or rules makes it difficult to model policies for different driving situations. Design/methodology/approach – In this research, a probabilistic decision-making method based on the Markov decision process (MDP) is proposed to deduce the optimal maneuver automatically in a two-lane highway scenario without using any human data. The decision-making issues in a traffic environment are formulated as an MDP by defining basic elements including states, actions and basic models. Transition and reward models are defined by using a complete prediction model of the surrounding cars. An optimal policy is deduced using a dynamic programming method and evaluated in a two-dimensional simulation environment. Findings – Results show that, in the given scenario, the self-driving car maintained safety and efficiency with the proposed policy. Originality/value – This paper presents a framework used to derive a driving policy for self-driving cars without relying on any human driving data or rules modeled by hand.
Keywords: Markov decision process; Decision-making; Dynamic programming; Self-driving cars
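The dynamic-programming step described in the abstract can be illustrated with value iteration on a small random MDP; the highway-specific states, actions, transition model, and rewards are not reproduced here, so all numbers below are placeholders.

```python
# Hedged sketch: value iteration for a generic finite MDP. The paper's highway
# states, maneuvers, transition model, and reward are not shown; random placeholders
# stand in for them.
import numpy as np

n_states, n_actions, gamma = 5, 3, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V               # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)               # greedy policy from the converged values
print("optimal policy per state:", policy)
```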
6. Robust cooperation of connected vehicle systems with eigenvalue-bounded interaction topologies in the presence of uncertain dynamics (cited 3 times)
Authors: Keqiang Li, Feng Gao, Shengbo Eben Li, Yang Zheng, Hongbo Gao. Frontiers of Mechanical Engineering (SCIE, CSCD), 2018, No. 3, pp. 354-367.
This study presents a distributed H-infinity control method for uncertain platoons with dimensionally and structurally unknown interaction topologies, provided that the associated topological eigenvalues are bounded by a predesigned range. With an inverse model to compensate for nonlinear powertrain dynamics, vehicles in a platoon are modeled by third-order uncertain systems with bounded disturbances. On the basis of the eigenvalue decomposition of topological matrices, we convert the platoon system to a norm-bounded uncertain part and a diagonally structured certain part by applying a linear transformation. We then use a common Lyapunov method to design a distributed H-infinity controller. Numerically, two linear matrix inequalities corresponding to the minimum and maximum eigenvalues should be solved. The resulting controller can tolerate interaction topologies with eigenvalues located in a certain range. The proposed method can also ensure robustness performance and disturbance attenuation ability for the closed-loop platoon system. Hardware-in-the-loop tests are performed to validate the effectiveness of our method.
Keywords: Automated vehicles; Platoon; Distributed control; Robustness
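A much-simplified sketch of the common-Lyapunov idea at the two extreme topology eigenvalues is given below; it only verifies a candidate Lyapunov matrix for an assumed fixed gain on a second-order model, and is not the paper's H-infinity synthesis or its LMIs.

```python
# Much-simplified sketch: verifying a candidate common Lyapunov matrix P at the two
# extreme topology eigenvalues for a fixed gain K on a second-order model. All
# numbers are assumed for illustration; the paper's H-infinity design is not shown.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # simplified node model (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])                 # assumed distributed feedback gain
P = np.array([[6.0, 1.0], [1.0, 3.0]])     # candidate common Lyapunov matrix (assumed)
lam_min, lam_max = 0.8, 3.0                # assumed bounds on topology eigenvalues

def is_negative_definite(M):
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) < 0))

print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
for lam in (lam_min, lam_max):
    Acl = A - lam * B @ K
    lyap = Acl.T @ P + P @ Acl             # Lyapunov inequality at this eigenvalue
    print(f"lambda = {lam}: Acl'P + P Acl negative definite -> {is_negative_definite(lyap)}")
```

In the paper this kind of check becomes a design problem: the two LMIs at the extreme eigenvalues are solved for the controller, so that all topologies with eigenvalues inside the range are tolerated.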
7. Real-time energy optimization of HEVs under connected environment: a benchmark problem and receding horizon-based solution (cited 2 times)
Authors: Fuguo Xu, Hiroki Tsunogawa, Junichi Kako, Xiaosong Hu, Shengbo Eben Li, Tielong Shen, Lars Eriksson, Carlos Guardiola. Control Theory and Technology (EI, CSCD), 2022, No. 2, pp. 145-160.
In this paper, we propose a benchmark problem for challengers aiming at energy-efficiency control of hybrid electric vehicles (HEVs) on a road with slope. Moreover, it is assumed that the targeted HEVs are in a connected environment with access to real-time vehicle-to-everything (V2X) information, including geographic information, vehicle-to-infrastructure (V2I) information and vehicle-to-vehicle (V2V) information. The provided simulator consists of an industrial-level HEV model and a traffic scenario database obtained through a commercial traffic simulator, where the running route is generated based on real-world data with slope and intersection positions. The benchmark problem to be solved is HEV powertrain control that uses traffic information to improve fuel economy while satisfying the constraints of driving safety and travel time. To show the HEV powertrain characteristics, a case study is given with speed planning and an energy management strategy.
Keywords: Powertrain control; Connected and automated vehicles; Hybrid electric vehicles; Vehicle-to-everything
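The receding-horizon pattern behind the benchmark solution can be sketched as a rolling optimization over a toy longitudinal model; the industrial HEV model, V2X inputs, and cost terms of the benchmark are not reproduced, and every parameter below is assumed.

```python
# Hedged sketch of a receding-horizon (MPC-style) loop for speed planning on a
# sloped road. Toy longitudinal dynamics and costs only; the benchmark's HEV model,
# V2X information, and energy management layer are not reproduced.
import numpy as np
from scipy.optimize import minimize

dt, horizon = 1.0, 10
v_ref, slope = 20.0, np.deg2rad(2.0)      # assumed reference speed and road grade

def rollout_cost(u, v0):
    """Quadratic tracking + control-effort cost over the horizon (toy model)."""
    v, cost = v0, 0.0
    for a in u:
        v = v + (a - 9.81 * np.sin(slope) - 0.01 * v) * dt   # toy longitudinal dynamics
        cost += (v - v_ref) ** 2 + 0.1 * a ** 2
    return cost

v = 15.0
for step in range(5):                      # receding-horizon loop
    res = minimize(rollout_cost, np.zeros(horizon), args=(v,), method="SLSQP")
    a0 = res.x[0]                          # apply only the first planned action
    v = v + (a0 - 9.81 * np.sin(slope) - 0.01 * v) * dt
    print(f"step {step}: applied accel {a0:.2f} m/s^2, speed {v:.2f} m/s")
```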
8. Approximate Optimal Filter Design for Vehicle System through Actor-Critic Reinforcement Learning
Authors: Yuming Yin, Shengbo Eben Li, Kaiming Tang, Wenhan Cao, Wei Wu, Hongbo Li. Automotive Innovation (EI, CSCD), 2022, No. 4, pp. 415-426.
Precise state and parameter estimation is essential for the identification, analysis and control of vehicle engineering problems, especially under significant model and measurement uncertainties. The widely used filtering/estimation algorithms, such as the Kalman series (Kalman filter, extended Kalman filter, unscented Kalman filter) and the particle filter, generally aim to approach the true state/parameter distribution by iteratively updating the filter gain at each time step. However, the optimality of these filters is deteriorated by an unrealistic initial condition or significant model error. Alternatively, this paper proposes to approximate the optimal filter gain by considering the effect factors within an infinite time horizon, on the basis of estimation-control duality. The proposed approximate optimal filter (AOF) problem is designed and subsequently solved by an actor-critic reinforcement learning (RL) method. The AOF design transforms the traditional optimal filtering problem with the minimum expected mean square error into an optimal control problem with the minimum accumulated estimation error, in which the estimation error is used as the surrogate system state and the infinite-horizon filter gain is the control input. The estimation-control duality is proved to hold when certain conditions about the initial vehicle state distribution and the policy structure are maintained. To evaluate the effectiveness of AOF, a vehicle state estimation problem is then demonstrated and compared with the steady-state Kalman filter. The results show that the filter policy obtained via RL with different discount factors converges to the theoretical optimal gain with an error within 5%, and the average estimation errors of the vehicle slip angle and yaw rate are less than 1.5×10^-4.
Keywords: Vehicle state estimation; Kalman filter; Estimation-control duality; Reinforcement learning
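For reference, the steady-state Kalman gain that serves as the comparison baseline in the abstract can be computed from the discrete algebraic Riccati equation as sketched below; the linear vehicle model and noise covariances are assumed, and the RL-based AOF policy itself is not reproduced.

```python
# Hedged sketch: the steady-state (infinite-horizon) Kalman gain used as the
# comparison baseline, computed via the discrete algebraic Riccati equation.
# The linear vehicle model and noise covariances are assumed placeholders.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])    # assumed linear vehicle model
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                       # assumed process noise covariance
R = np.array([[0.1]])                      # assumed measurement noise covariance

# The filtering DARE is the dual problem, obtained with (A^T, C^T) in place of (A, B).
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state Kalman gain
print("steady-state Kalman gain:\n", K)
```

The duality used here (filtering as the transpose of a control Riccati problem) is the same estimation-control duality the abstract builds on, with the constant gain playing the role of an infinite-horizon policy.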
9. FPGA accelerated model predictive control for autonomous driving
Authors: Yunfei Li, Shengbo Eben Li, Xingheng Jia, Shulin Zeng, Yu Wang. Journal of Intelligent and Connected Vehicles, 2022, No. 2, pp. 63-71.
Purpose – The purpose of this paper is to reduce the difficulty of model predictive control (MPC) deployment on FPGAs so that researchers can make better use of FPGA technology for academic research. Design/methodology/approach – In this paper, the MPC algorithm is implemented on an FPGA through hardware/software co-design. Experiments have verified this method. Findings – This paper implements a ZYNQ-based design method, which significantly reduces the difficulty of development. The comparison with CPU solution results proves that, with this method, the FPGA has a significant acceleration effect on the solution of MPC. Research limitations/implications – Due to the limitation of practical conditions, this paper could not carry out a hardware-in-the-loop experiment for the time being and conducts an open-loop experiment instead. Originality/value – This paper proposes a new design method to deploy the MPC algorithm to an FPGA, reducing the development difficulty of algorithm implementation on FPGAs. It greatly facilitates researchers in the field of autonomous driving in carrying out research on FPGA hardware acceleration of algorithms.
Keywords: FPGA; Model predictive control; Autonomous driving; ZYNQ
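To make the workload concrete, the sketch below sets up the kind of quadratic program that a linear MPC step reduces to, which is the per-step computation a design like this would accelerate in the FPGA fabric; the vehicle model, horizon, weights, and bounds are illustrative, and the paper's ZYNQ hardware/software partitioning is not shown.

```python
# Hedged sketch: the per-step QP of a linear MPC problem, formulated with CVXPY.
# Model, horizon, weights, and bounds are assumed; the FPGA/ZYNQ implementation
# details from the paper are not reproduced here.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # assumed discrete vehicle model
B = np.array([[0.005], [0.1]])
N, x0 = 20, np.array([1.0, 0.0])           # horizon and initial state

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 2.0]

cp.Problem(cp.Minimize(cost), constraints).solve(solver=cp.OSQP)
print("first control move:", u.value[:, 0])
```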