Abstract
Motivated by the promising benefits of connected and autonomous vehicles (CAVs) in improving fuel efficiency, mitigating congestion, and enhancing safety, numerous theoretical models have been proposed to plan CAV multiple-step trajectories (time-specific speed/location trajectories) to accomplish various operations. However, limited efforts have been made to develop proper trajectory control techniques that regulate vehicle movements to follow multiple-step trajectories, or to test the performance of theoretical trajectory planning models with field experiments. Without an effective control method, the benefits of theoretical models for CAV trajectory planning are difficult to harvest. This study proposes an online learning-based model predictive vehicle trajectory control structure to follow time-specific speed and location profiles. Unlike the single-step controllers that dominate the literature, a multiple-step model predictive controller is adopted to control the vehicle's longitudinal movements with higher accuracy. Because the model predictive controller's output (speed) cannot be executed directly by the vehicle, a reinforcement learning agent converts the speed command into the vehicle's direct control variable (i.e., throttle/brake). The reinforcement learning agent captures real-time changes in the operating environment, which saves parameter calibration resources and improves trajectory control accuracy. A line tracking controller keeps vehicles on track. The proposed control structure is tested using reduced-scale robot cars, and its adaptivity is demonstrated by changing the vehicle load. Experiments on two fundamental CAV platoon operations (i.e., platooning and split) then show the effectiveness of the proposed trajectory control structure in regulating robot movements to follow time-specific reference trajectories.
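To make the layered design concrete, the sketch below illustrates the general idea of pairing a multiple-step MPC speed tracker with a learning-based throttle/brake mapping. It is not the authors' implementation; the vehicle dynamics, class names, and all parameters are illustrative assumptions, and the tabular Q-learning agent stands in for whatever reinforcement learning method the paper uses.

```python
# Illustrative sketch (not the paper's code): a toy multiple-step MPC picks the
# next speed command from a time-specific reference profile, and a tabular
# Q-learning agent maps the resulting speed error to a throttle/brake level,
# adapting online (e.g., to a changed vehicle load). All values are assumptions.
import numpy as np

DT = 0.1          # control interval [s]
HORIZON = 10      # MPC prediction horizon (multiple steps)

def mpc_speed_command(current_speed, reference_speeds, accel_limits=(-3.0, 2.0)):
    """Choose the next-step speed by minimizing tracking cost over the horizon.

    A toy quadratic-cost MPC solved by enumerating candidate accelerations;
    a real controller would use an optimization solver and a vehicle model.
    """
    candidates = np.linspace(accel_limits[0], accel_limits[1], 41)
    best_a, best_cost = 0.0, np.inf
    for a in candidates:
        v, cost = current_speed, 0.0
        for k in range(min(HORIZON, len(reference_speeds))):
            v = v + a * DT                       # constant-acceleration rollout
            cost += (v - reference_speeds[k]) ** 2 + 0.1 * a ** 2
        if cost < best_cost:
            best_a, best_cost = a, cost
    return current_speed + best_a * DT           # speed command for the next step


class ThrottleBrakeAgent:
    """Tabular Q-learning agent mapping speed error -> throttle/brake level.

    Stands in for the reinforcement-learning layer that converts the MPC speed
    command into the vehicle's direct control variable.
    """

    def __init__(self, n_error_bins=21, n_actions=11, lr=0.2, gamma=0.9, eps=0.1):
        self.error_edges = np.linspace(-2.0, 2.0, n_error_bins - 1)   # m/s
        self.actions = np.linspace(-1.0, 1.0, n_actions)              # -1 = brake, 1 = throttle
        self.q = np.zeros((n_error_bins, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def _state(self, speed_error):
        return int(np.digitize(speed_error, self.error_edges))

    def act(self, speed_error):
        s = self._state(speed_error)
        if np.random.rand() < self.eps:                  # epsilon-greedy exploration
            return np.random.randint(len(self.actions))
        return int(np.argmax(self.q[s]))

    def update(self, prev_error, action, new_error):
        s, s2 = self._state(prev_error), self._state(new_error)
        reward = -abs(new_error)                         # penalize speed-tracking error
        td = reward + self.gamma * self.q[s2].max() - self.q[s, action]
        self.q[s, action] += self.lr * td


if __name__ == "__main__":
    # Closed loop on a crude longitudinal model (mass + quadratic drag).
    mass, drag, max_force = 2.0, 0.05, 6.0       # assumed robot-car parameters
    agent = ThrottleBrakeAgent()
    speed, reference = 0.0, [1.5] * 200          # follow a constant 1.5 m/s profile
    for t in range(len(reference) - HORIZON):
        v_cmd = mpc_speed_command(speed, reference[t:t + HORIZON])
        err = v_cmd - speed
        a_idx = agent.act(err)
        force = agent.actions[a_idx] * max_force
        speed += (force - drag * speed * speed * np.sign(speed)) / mass * DT
        agent.update(err, a_idx, v_cmd - speed)
    print(f"final speed ~= {speed:.2f} m/s (target 1.5 m/s)")
```

Because the agent learns the speed-to-throttle mapping online, re-running the loop with a different assumed mass mimics the load-change adaptivity experiment described in the abstract without recalibrating controller parameters by hand.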
Funding
This work was sponsored by the National Science Foundation (CMMI#1558887 and CMMI#1932452).