Journal Articles
4 articles found
Research on a Rollover Control System for Eight-Wheeled Vehicles Based on LQR and PSO Algorithms (Cited by 5)
Author: Chen Zhiyuan. Control Engineering (《控制工程》), CSCD, Peking University Core, 2022, Issue 7, pp. 1173-1180 (8 pages)
To help drivers effectively suppress vehicle rollover, a rollover control system design combining the linear quadratic regulator (LQR) and particle swarm optimization (PSO) algorithms is proposed. First, the control system takes the vehicle's active steering and differential braking as control inputs, adopts a closed-loop tracking-model architecture, and uses the steady-state response of an ideal virtual vehicle model as the tracking target for the actual vehicle. Second, building on the LQR, PSO is applied in two ways: searching for the LQR weighting matrix Q to indirectly obtain the gain matrix K, and searching for the system parameters directly to obtain K; the two are compared to further minimize the control system's objective function. Finally, the control performance is verified using MATLAB/Simulink together with the TruckSim truck-simulation software. Simulation results show that the optimal gains obtained by the direct search method perform better and can effectively improve handling stability during high-speed driving.
Keywords: rollover control; linear quadratic regulator; particle swarm optimization; grey theory; simulation
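The weight-search idea in the abstract above can be sketched as follows: PSO tunes the diagonal entries of the LQR weighting matrix Q, and each candidate is scored by simulating the resulting closed loop. The double-integrator plant, fitness function, and PSO coefficients below are illustrative stand-ins, not the paper's vehicle model or objective.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy plant: double integrator
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

def lqr_gain(q_diag):
    """LQR gain for a diagonal Q: K = R^-1 B^T P from the ARE."""
    Q = np.diag(q_diag)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def fitness(q_diag):
    """Score a candidate Q by simulating the regulated response."""
    K = lqr_gain(q_diag)
    x = np.array([1.0, 0.0])             # initial disturbance
    cost, dt = 0.0, 0.01
    for _ in range(500):
        u = -K @ x
        cost += (x @ x + float(u @ u)) * dt
        x = x + (A @ x + (B @ u).ravel()) * dt  # forward-Euler step
    return cost

# Bare-bones PSO over the two diagonal weights of Q.
rng = np.random.default_rng(0)
pos = rng.uniform(0.1, 100.0, size=(20, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(30):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 100.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print(gbest, pbest_f.min())
```

The direct-search variant the abstract compares against would instead let PSO search the gain entries of K themselves, with `fitness` evaluating the candidate K directly.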
An Improved Solution of the Riccati Equation in Active Control of Engineering Structures
Authors: Zhang Min, Hu Shulan. Journal of East China Jiaotong University (《华东交通大学学报》), 2006, Issue 2, pp. 34-36 (3 pages)
Because building structures typically have large stiffness and mass, traditional methods for solving the Riccati equation often fail due to numerical overflow. To avoid this problem, this paper improves the solution procedure for the Riccati equation and incorporates error control; the results show that the proposed method is a practical improvement.
Keywords: active control; linear optimal control algorithm; Riccati equation; solution of the Riccati equation
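To make the problem concrete: a robust Schur-based ARE solver (as provided by SciPy) avoids the overflow issues the paper targets even when stiffness and mass are large. A minimal sketch, assuming an illustrative single-DOF structural model whose values are not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# State-space form x_dot = A x + B u of a damped oscillator with
# building-like magnitudes (large mass m and stiffness k).
m, c, k = 1.0e4, 2.0e3, 1.0e6
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
Q = np.eye(2)                            # state weighting
R = np.array([[1.0e-6]])                 # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback gain, u = -K x

# Residual check: A'P + PA - P B R^-1 B' P + Q should be ~0
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.abs(res).max())
```

Verifying the residual of the ARE, as above, is one simple form of the error control the abstract mentions.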
Optimization of formation for multi-agent systems based on LQR (Cited by 4)
Authors: Chang-bin YU, Yin-qiu WANG, Jin-liang SHAO. Frontiers of Information Technology &amp; Electronic Engineering, SCIE, EI, CSCD, 2016, Issue 2, pp. 96-109 (14 pages)
In this paper, three optimal linear formation control algorithms are proposed for first-order linear multi-agent systems from a linear quadratic regulator (LQR) perspective, with cost functions consisting of both interaction energy cost and individual energy cost, because both the collective objective (such as formation or consensus) and the individual goal of each agent are important for the overall system. First, we propose the optimal formation algorithm for first-order multi-agent systems without initial physical couplings. The optimal control parameter matrix of the algorithm is the solution to an algebraic Riccati equation (ARE); it is shown that this matrix is the sum of a Laplacian matrix and a positive definite diagonal matrix. Next, for physically interconnected multi-agent systems, the optimal formation algorithm is presented, and the corresponding parameter matrix is obtained from the solution to a group of quadratic equations in one unknown. Finally, if the communication topology between agents is fixed, the local feedback gain is obtained from the solution to a quadratic equation in one unknown, derived by differentiating the cost function with respect to the local feedback gain. Numerical examples are provided to validate the effectiveness of the proposed approaches and to illustrate the geometrical performance of the multi-agent systems.
Keywords: linear quadratic regulator (LQR); formation control; algebraic Riccati equation (ARE); optimal control; multi-agent systems
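The "Laplacian plus positive definite diagonal" structure mentioned in the abstract can be illustrated with a minimal sketch: for first-order agents x_dot = u (A = 0, B = I) with R = I, the ARE reduces to P·P = Q, so choosing Q as the square of a Laplacian-plus-diagonal matrix makes the optimal gain recover exactly that form. The 3-agent path graph below is an illustrative choice, not the paper's example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = 3
L = np.array([[1.0, -1.0, 0.0],          # Laplacian of a 3-node path graph
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
D = np.eye(n)                            # positive definite diagonal part
M = L + D                                # expected optimal gain structure

A = np.zeros((n, n))                     # first-order integrator agents
B = np.eye(n)
Q = M @ M                                # chosen so that sqrtm(Q) = L + D
R = np.eye(n)

P = solve_continuous_are(A, B, Q, R)     # with A = 0, B = I: P @ P = Q
K = np.linalg.solve(R, B.T @ P)          # optimal feedback, u = -K x
print(np.round(K, 6))                    # equals L + D
```

Because M is symmetric positive definite, P is its unique positive definite square root, so K = L + D: the interaction part of the gain is a graph Laplacian and the individual part is diagonal, matching the structure the abstract describes.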
A novel policy iteration based deterministic Q-learning for discrete-time nonlinear systems (Cited by 8)
Authors: WEI QingLai, LIU DeRong. Science China Chemistry, SCIE, EI, CAS, CSCD, 2015, Issue 12, pp. 143-157 (15 pages)
In this paper, a novel iterative Q-learning algorithm, called the "policy iteration based deterministic Q-learning algorithm", is developed to solve optimal control problems for discrete-time deterministic nonlinear systems. The idea is to use an iterative adaptive dynamic programming (ADP) technique to construct the iterative control law that optimizes the iterative Q function. Once the optimal Q function is obtained, the optimal control law can be achieved by directly minimizing the optimal Q function, so a mathematical model of the system is not necessary. A convergence analysis shows that the iterative Q function is monotonically non-increasing and converges to the solution of the optimality equation. It is also proven that each of the iterative control laws is a stable control law. Neural networks are employed to implement the algorithm, approximating the iterative Q function and the iterative control law, respectively. Finally, two simulation examples are presented to illustrate the performance of the developed algorithm.
Keywords: adaptive critic designs; adaptive dynamic programming; approximate dynamic programming; Q-learning; policy iteration; neural networks; nonlinear systems; optimal control
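The policy-evaluation / policy-improvement loop described in the abstract can be sketched in tabular form (the paper uses neural-network approximators; the toy 1-D system, stage cost, and initial admissible policy below are illustrative assumptions, not taken from the paper):

```python
# Deterministic system x' = f(x, u) on states {0..4} with goal state 0.
states = range(5)
actions = (-1, 0, 1)

def f(x, u):                 # deterministic dynamics (clipped shift)
    return min(max(x + u, 0), 4)

def U(x, u):                 # stage cost, zero at the goal with u = 0
    return x * x + u * u

def evaluate(pi):
    """Cost-to-go of policy pi by rollout (goal state is absorbing)."""
    V = {}
    for x0 in states:
        x, total = x0, 0.0
        for _ in range(100):
            u = pi[x]
            total += U(x, u)
            x = f(x, u)
            if x == 0 and pi[0] == 0:   # reached the zero-cost goal
                break
        V[x0] = total
    return V

# Initial admissible (stabilizing) policy: always step toward the goal.
pi = {x: (-1 if x > 0 else 0) for x in states}
for _ in range(10):                     # policy iteration loop
    V = evaluate(pi)                    # policy evaluation
    Q = {(x, u): U(x, u) + V[f(x, u)]   # iterative Q function
         for x in states for u in actions}
    new_pi = {x: min(actions, key=lambda u: Q[(x, u)])
              for x in states}          # policy improvement: argmin_u Q
    if new_pi == pi:                    # policy stable: converged
        break
    pi = new_pi
print(pi)
```

As in the paper's scheme, the improvement step minimizes the Q function directly, so no model is needed at that stage; here the model `f` appears only because the toy evaluation step is done by rollout.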