Journal Articles
244 articles found
1. A dynamic fusion path planning algorithm for mobile robots incorporating improved IB-RRT∗ and deep reinforcement learning
Authors: LIU Andong, ZHANG Baixin, CUI Qi, ZHANG Dan, NI Hongjie. High Technology Letters (EI, CAS), 2023, No. 4, pp. 365-376.
Abstract: Dynamic path planning is crucial for mobile robots to navigate successfully in unstructured environments. To achieve a globally optimal path and real-time dynamic obstacle avoidance during movement, a dynamic path planning algorithm incorporating improved IB-RRT∗ and deep reinforcement learning (DRL) is proposed. First, an improved IB-RRT∗ algorithm is proposed for global path planning by combining double elliptic subset sampling and probabilistic central-circle target bias. Then, to tackle the slow response to dynamic obstacles and the inadequate obstacle avoidance of traditional local path planning algorithms, deep reinforcement learning is utilized to predict the movement trend of dynamic obstacles, leading to a dynamic fusion path planning. Finally, simulation and experiment results demonstrate that the proposed improved IB-RRT∗ algorithm has higher convergence speed and search efficiency than the traditional Bi-RRT∗, Informed-RRT∗, and IB-RRT∗ algorithms. Furthermore, the proposed fusion algorithm can effectively perform real-time obstacle avoidance and navigation tasks for mobile robots in unstructured environments.
Keywords: mobile robot; improved IB-RRT∗ algorithm; deep reinforcement learning (DRL); real-time dynamic obstacle avoidance
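The elliptic subset sampling that IB-RRT∗-style planners rely on restricts random samples to an ellipse whose foci are the start and goal and whose major-axis length equals the cost of the best path found so far. Below is a minimal sketch of that informed sampling step in plain NumPy for the 2D case; the double-ellipse and central-circle target bias of the paper's improved variant are not reproduced here.

```python
import numpy as np

def sample_informed_ellipse(start, goal, c_best):
    """Uniformly sample a point inside the ellipse with foci start/goal
    and major-axis length c_best (cost of the best path found so far)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)                 # focal distance
    center = (start + goal) / 2.0
    a = c_best / 2.0                                     # semi-major axis
    b = np.sqrt(max(c_best**2 - c_min**2, 0.0)) / 2.0    # semi-minor axis
    # Rotation aligning the x-axis with the start->goal direction
    theta = np.arctan2(goal[1] - start[1], goal[0] - start[0])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Uniform sample in the unit disk, then stretch and rotate
    r, phi = np.sqrt(np.random.rand()), 2 * np.pi * np.random.rand()
    unit = np.array([r * np.cos(phi), r * np.sin(phi)])
    return center + rot @ (unit * np.array([a, b]))

# Samples concentrate near the start-goal line as c_best shrinks
print(sample_informed_ellipse((0, 0), (10, 0), c_best=12.0))
```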
2. Artificial Potential Field Incorporated Deep-Q-Network Algorithm for Mobile Robot Path Prediction (cited: 3)
Authors: A. Sivaranjani, B. Vinod. Intelligent Automation & Soft Computing (SCIE), 2023, No. 1, pp. 1135-1150.
Abstract: Autonomous navigation of mobile robots is a challenging task that requires them to travel from their initial position to their destination without collision. Reinforcement learning methods give a mobile robot a state-action function suited to its environment: during trial-and-error interaction with its surroundings, the robot finds an ideal behavior on its own. The Deep Q Network (DQN) algorithm is used on a TurtleBot 3 (TB3) to reach the goal while successfully avoiding obstacles, but it requires a large number of training iterations. This research focuses on predicting a mobile robot's best path using the DQN and Artificial Potential Field (APF) algorithms. First, a TB3 Waffle Pi DQN is built and trained to reach the goal; then the APF shortest-path algorithm is incorporated into the DQN algorithm. The proposed planning approach is compared with the standard DQN method in a virtual environment based on the Robot Operating System (ROS). Simulation results show that the combination of DQN and APF yields a better optimal path and takes less time than the conventional DQN algorithm. Compared with DQN, the proposed DQN+APF improves the number of successfully reached targets by 88% and the average time by 0.331 s; in terms of average rewards, the positive goal is attained at 85% and the negative goal at -90%.
Keywords: artificial potential field; deep reinforcement learning; mobile robot; TurtleBot; deep Q network; path prediction
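The APF component steers the robot along the negative gradient of a potential that attracts toward the goal and repels from obstacles. A minimal sketch of the classic attractive/repulsive formulation follows; the gain values and cutoff radius are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Classic artificial-potential-field force at position `pos`.
    k_att/k_rep are illustrative gains; d0 is the repulsion cutoff radius."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)              # attractive term pulls to goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                     # repel only inside the cutoff
            force += k_rep * (1.0/d - 1.0/d0) / d**3 * diff
    return force

# One step of gradient descent on the potential
pos = np.array([0.0, 0.0])
step = apf_force(pos, goal=[5, 5], obstacles=[[2, 2]])
pos += 0.05 * step / np.linalg.norm(step)
print(pos)
```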
3. Deep reinforcement learning for UAV swarm rendezvous behavior
Authors: ZHANG Yaozhong, LI Yike, WU Zhuoran, XU Jialin. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, No. 2, pp. 360-373.
Abstract: Unmanned aerial vehicle (UAV) swarm technology has been a research hotspot in recent years, and with the continuous improvement of UAV autonomous intelligence it will become one of the main trends of future UAV development. This paper studies the behavior decision-making process of the UAV swarm rendezvous task based on the double deep Q network (DDQN) algorithm. A guided reward function is designed to effectively solve the convergence problem caused by sparse returns in deep reinforcement learning (DRL) for long-duration tasks. The concept of a temporary storage area is also proposed, optimizing the memory replay unit of the traditional DDQN algorithm, improving its convergence speed, and speeding up training. Unlike traditional task environments, this paper establishes a continuous state-space task environment model to improve the verification of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm are trained in different task scenarios. The experimental results validate that the DDQN algorithm efficiently trains the UAV swarm to complete the given collaborative tasks while meeting the swarm's requirements for centralization and autonomy, improving the intelligence of collaborative task execution. The simulation results show that, after training, the proposed UAV swarm carries out the rendezvous task well, with a mission success rate of 90%.
Keywords: double deep Q network (DDQN) algorithm; unmanned aerial vehicle (UAV) swarm; task decision; deep reinforcement learning (DRL); sparse returns
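The DDQN update this paper builds on selects the next action with the online network but evaluates it with the target network, which damps the over-estimation bias of vanilla DQN. A minimal PyTorch sketch of that target computation; the network sizes and discount factor are illustrative assumptions.

```python
import torch
import torch.nn as nn

def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double-DQN targets: action chosen by the online net, evaluated by the target net."""
    with torch.no_grad():
        best = online_net(next_states).argmax(dim=1, keepdim=True)    # selection
        next_q = target_net(next_states).gather(1, best).squeeze(1)   # evaluation
    return rewards + gamma * (1.0 - dones) * next_q

# Tiny runnable example with illustrative sizes (4-dim state, 3 actions)
online, target = nn.Linear(4, 3), nn.Linear(4, 3)
s2, r, d = torch.randn(8, 4), torch.zeros(8), torch.zeros(8)
print(ddqn_targets(online, target, r, s2, d).shape)  # torch.Size([8])
```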
4. Reliable Scheduling Method for Sensitive Power Business Based on Deep Reinforcement Learning
Authors: Shen Guo, Jiaying Lin, Shuaitao Bai, Jichuan Zhang, Peng Wang. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 1053-1066.
Abstract: The main function of the power communication business is to monitor, control, and manage the power communication network to ensure its normal and stable operation. Communication services related to dispatching data networks and to the transmission of fault information or feeder automation have strict delay requirements; if processing time is prolonged, a cascading power business reaction may be triggered. To solve these problems, this paper establishes an edge IoT-agent business deployment model for the power communication network to unify the management of data collection, resource allocation, and task scheduling within the system. It virtualizes IoT-agent computing resources through Docker container technology, designs target models for network latency and energy consumption, introduces the A3C algorithm from deep reinforcement learning, improves it according to the scene characteristics, and sets corresponding optimization strategies to minimize network delay and energy consumption. At the same time, to ensure that sensitive power business is handled in time, a business dispatch model and a task migration model are designed, solving the problem of server failure. Finally, a simulation program is designed to verify the feasibility and validity of the method and to compare it with other existing mechanisms.
Keywords: power communication network; dispatching data networks; resource allocation; A3C algorithm; deep reinforcement learning
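A3C trains an actor and a critic from n-step rollouts: the critic regresses on bootstrapped returns and the actor ascends the policy gradient weighted by the advantage, plus an entropy bonus. A minimal single-worker sketch of those losses; the shapes, coefficients, and batch are illustrative assumptions, and the asynchronous workers and the paper's scene-specific improvements are omitted.

```python
import torch
import torch.nn.functional as F

def a3c_losses(policy_logits, values, actions, returns, beta=0.01):
    """Actor-critic losses for one rollout batch.
    returns: bootstrapped n-step returns; values: critic estimates V(s)."""
    advantage = returns - values                          # A(s,a) = R - V(s)
    log_probs = F.log_softmax(policy_logits, dim=1)
    probs = log_probs.exp()
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantage.detach()).mean()   # actor term
    value_loss = advantage.pow(2).mean()                  # critic term
    entropy = -(probs * log_probs).sum(dim=1).mean()      # exploration bonus
    return policy_loss + 0.5 * value_loss - beta * entropy

# Illustrative batch: 8 states, 4 discrete actions
loss = a3c_losses(torch.randn(8, 4), torch.randn(8),
                  torch.randint(0, 4, (8,)), torch.randn(8))
print(loss.item())
```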
5. Improved Double Deep Q Network-Based Task Scheduling Algorithm in Edge Computing for Makespan Optimization
Authors: Lei Zeng, Qi Liu, Shigen Shen, Xiaodong Liu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 806-817.
Abstract: Edge computing nodes undertake an increasing number of tasks as business density rises, so efficiently allocating large-scale, dynamic workloads to edge computing resources has become a critical challenge. This study proposes an edge task scheduling approach based on an improved Double Deep Q Network (DQN), in which the calculation of target Q values and the selection of actions are separated into two networks. A new reward function is designed, a control unit is added to the agent's experience replay unit, and the management of experience data is modified to fully utilize its value and improve learning efficiency. Reinforcement learning agents usually learn from an ignorant initial state, which is inefficient; this study therefore also proposes a novel particle swarm optimization algorithm with an improved fitness function that generates optimal solutions for task scheduling. These optimized solutions are used to pre-train the agent's network parameters so that it starts from a better cognition level. The proposed algorithm is compared with six other methods in simulation experiments, and the results show that it outperforms the benchmark methods in terms of makespan.
Keywords: edge computing; task scheduling; reinforcement learning; makespan; Double Deep Q Network (DQN)
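The pre-training idea here is to let particle swarm optimization find good schedules first and use them to warm-start the agent. A minimal PSO loop over a generic cost function; the inertia and acceleration coefficients are illustrative, and the toy fitness merely stands in for a real makespan evaluation.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Plain PSO; `fitness` maps a position vector to a scalar cost (e.g. makespan)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))    # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(fitness, 1, x)
    g = pbest[pbest_f.argmin()].copy()                   # global best
    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)      # velocity update
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy fitness standing in for makespan: distance to an arbitrary target schedule
best, cost = pso_minimize(lambda p: np.sum((p - 0.3)**2), dim=10)
print(cost)
```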
6. Flexible Job Shop Composite Dispatching Rule Mining Approach Based on an Improved Genetic Programming Algorithm
Authors: Xixing Li, Qingqing Zhao, Hongtao Tang, Xing Guo, Mengzhen Zhuang, Yibing Li, Xi Vincent Wang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 1390-1408.
Abstract: To obtain a suitable scheduling scheme within an effective time range, the minimum completion time is taken as the objective of Flexible Job Shop Scheduling Problems (FJSP) of different scales, and Composite Dispatching Rules (CDRs) are applied to generate feasible solutions. First, the binary-tree coding method is adopted and the constructed function set is normalized. Second, a CDR mining approach based on an Improved Genetic Programming Algorithm (IGPA) is designed: two population initialization methods enrich the initial population, and a superior/inferior population separation strategy improves the global search ability of the algorithm. At the same time, two individual mutation methods improve the algorithm's local search ability, balancing global and local search. In addition, the effectiveness of the IGPA and the superiority of the CDRs are verified through comparative analysis. Finally, Deep Reinforcement Learning (DRL) is employed to solve the FJSP with the CDRs as the action set, and the selection counts are recorded to further verify the superiority of the CDRs.
Keywords: flexible job shop scheduling; composite dispatching rule; improved genetic programming algorithm; deep reinforcement learning
7. Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and Some New Implementations (cited: 14)
Author: Dimitri P. Bertsekas. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 1, pp. 1-31.
Abstract: In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem and formulate a smaller "aggregate" Markov decision problem whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
Keywords: reinforcement learning; dynamic programming; Markovian decision problems; aggregation; feature-based architectures; policy iteration; deep neural networks; rollout algorithms
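In the aggregation framework surveyed here, the aggregate problem has its own Bellman equation over aggregate states. A standard statement of that equation, given as a sketch rather than a quotation from the paper, with notation assumed as follows: x, y aggregate states; i, j original states; d_{xi} disaggregation and φ_{jy} aggregation probabilities; α the discount factor.

```latex
\[
r^{*}(x) \;=\; \sum_{i} d_{xi}\,\min_{u \in U(i)} \sum_{j} p_{ij}(u)
\Bigl( g(i,u,j) \;+\; \alpha \sum_{y} \phi_{jy}\, r^{*}(y) \Bigr)
\]
```

The fixed point r* assigns a value to each aggregate state; the cost of an original state is then approximated by the φ-weighted combination of these values, which is the nonlinear feature-based approximation the abstract refers to.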
8. A D3QN-based method for optimal fire scheme selection
Authors: SHE Wei, YUE Han, TIAN Zhao, KONG Defeng. 火力与指挥控制 (Fire Control & Command Control; CSCD, PKU Core), 2024, No. 8, pp. 166-174.
Abstract: To address the low efficiency of fire scheme selection in missions where multiple types of munitions jointly attack ground fortification targets, a fire scheme selection method based on a dueling double deep Q network (D3QN) is proposed. The method models the strike process as a Markov decision process (MDP), designs its state and action spaces, and designs a composite reward function to drive the optimization of the fire-scheme generation policy, allowing the agent to train the policy autonomously within a reinforcement learning framework. Simulation results show that, for fire-scheme decisions against ground fortification targets, the method obtains better fire schemes than traditional heuristic intelligent algorithms, and its computational efficiency and result stability are clearly superior to those of traditional deep reinforcement learning algorithms.
Keywords: deep reinforcement learning; deep Q network; D3QN; combinatorial optimization; fire scheme selection
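D3QN combines the double-DQN target with a dueling head that splits the Q function into a state value V(s) and per-action advantages A(s, a). A minimal sketch of the dueling head; the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim=16, n_actions=8, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable
        return v + a - a.mean(dim=1, keepdim=True)

print(DuelingQNet()(torch.randn(4, 16)).shape)  # torch.Size([4, 8])
```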
9. Cognitive jamming decision-making for multifunctional radar based on a threat-mechanism double deep Q network
Authors: HUANG Xiangsong, ZHA Ligen, PAN Dapeng. 应用科技 (Applied Science and Technology; CAS), 2024, No. 4, pp. 145-153.
Abstract: Traditional deep Q networks (DQN) are prone to experience forgetting in radar cognitive jamming decision-making and therefore repeat erroneous decisions. To address this, a cognitive jamming decision method based on a threat warning mechanism double DQN (TW-DDQN) is proposed; the mechanism comprises a threat network and an experience replay mechanism. To verify the effectiveness of the algorithm, a simulation environment for cognitive electronic warfare is built that accounts for the correlation between the working states of a multifunctional radar (MFR) and jamming patterns, the adversarial game between the radar and the jammer is analyzed, and the influence of different threat-radius and threat-step parameters on TW-DDQN training is discussed. Simulation results show that the jammer successfully sustains a long game against the radar through autonomous learning, penetrating defenses with 80% probability; its training performance is clearly better than that of traditional DQN and prioritized experience replay DDQN (PER-DDQN).
Keywords: jamming decision; cognitive electronic warfare; deep Q network; reinforcement learning; jammer; multifunctional radar; experience replay; constant false alarm rate detection
10. A DQN network parameter optimization method based on evolutionary algorithms
Authors: CAO Zijian, GUO Ruiqi, JIA Haowen, LI Xiao, XU Kai. 西安工业大学学报 (Journal of Xi'an Technological University; CAS), 2024, No. 2, pp. 219-231.
Abstract: DQN (Deep Q Network) tends toward blind search and unbalanced exploration-exploitation in early training, which slows the convergence of the whole algorithm. Starting from acquiring and exploiting information useful to training during early exploration, and taking the differential evolution (DE) algorithm as an example, this paper proposes a method that optimizes DQN network parameters with an evolutionary algorithm to accelerate convergence (DE-DQN). First, the DQN network parameters are encoded as evolutionary individuals. Second, two fitness evaluation schemes, "run steps" and "average return," are adopted, and their effectiveness is verified through comparative simulation on the CartPole control problem. Experimental results show that at 5000 generations of agent training, with "run steps" as the fitness function, the proposed algorithm improves run steps, average return, and cumulative return by 82.7%, 18.1%, and 25.1%, respectively, outperforming the improved DQN algorithm; with "average return" as the fitness function, the gains are 74.9%, 18.5%, and 13.3%, again outperforming the improved DQN algorithm. This shows that DE-DQN obtains more useful information early on and converges faster than traditional DQN and its improved variants.
Keywords: deep reinforcement learning; deep Q network; convergence acceleration; evolutionary algorithm; automatic control
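Differential evolution perturbs each candidate parameter vector with the scaled difference of two other population members, recombines, and keeps the fitter of parent and trial. A minimal DE/rand/1/bin sketch over a generic fitness; the scale factor, crossover rate, and toy fitness are illustrative assumptions standing in for an encoded DQN parameter vector and its measured return.

```python
import numpy as np

def de_optimize(fitness, dim, pop_size=20, gens=100, F=0.5, CR=0.9):
    """DE/rand/1/bin maximizing `fitness` (e.g. average return of a DQN
    whose flat parameter vector is the individual)."""
    pop = np.random.randn(pop_size, dim)
    fit = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[np.random.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                     # differential mutation
            cross = np.random.rand(dim) < CR
            cross[np.random.randint(dim)] = True         # keep at least one gene
            trial = np.where(cross, mutant, pop[i])      # binomial crossover
            f = fitness(trial)
            if f > fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmax()], fit.max()

# Toy stand-in for "average return": a smooth function of the parameters
best, score = de_optimize(lambda p: -np.sum(p**2), dim=8)
print(score)
```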
11. Multi-microgrid energy management strategy based on an improved federated dueling deep Q network
Authors: LI Haitao, LIU Yiran, YANG Yanhong, XIAO Hao, XIE Dongxue, PEI Wei. 电力系统自动化 (Automation of Electric Power Systems; EI, CSCD, PKU Core), 2024, No. 8, pp. 174-184.
Abstract: Existing research on federated deep reinforcement learning for microgrid (MG) energy management does not consider multi-type energy conversion or inter-MG electricity trading, and frequent exchange of model parameters incurs large communication latency. Taking an MG with multiple energy types (wind, solar, electricity, and gas) as the research object, this paper builds an energy management model that supports electricity trading between MGs and energy conversion within an MG, proposes a federated dueling deep Q network learning algorithm based on the sine-cosine algorithm, and on that basis designs a multi-MG energy management and optimization strategy that accounts for energy trading and conversion. Simulation results show that the proposed strategy obtains higher rewards and maximizes MG economic benefit while preserving data privacy, and also reduces communication latency.
Keywords: microgrid (MG); federated learning; dueling deep Q network; sine-cosine algorithm; energy management
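The federated part of such a scheme periodically averages locally trained network weights instead of sharing raw data. A minimal FedAvg-style aggregation over PyTorch state dicts, assuming uniform client weighting; the paper's sine-cosine improvement is not reproduced here.

```python
import copy
import torch
import torch.nn as nn

def fedavg(state_dicts):
    """Average a list of model state_dicts parameter-by-parameter."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Three "microgrid agents" with identical architectures (sizes illustrative)
agents = [nn.Linear(6, 4) for _ in range(3)]
global_weights = fedavg([a.state_dict() for a in agents])
for a in agents:                         # broadcast the aggregated model back
    a.load_state_dict(global_weights)
print(global_weights["weight"].shape)    # torch.Size([4, 6])
```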
12. Anti-sweep-jamming for FDA-MIMO radar based on DQN and power allocation
Authors: ZHOU Changlin, WANG Chunyang, GONG Jian, TAN Ming, BAO Lei, LIU Mingjie. 雷达科学与技术 (Radar Science and Technology; PKU Core), 2024, No. 2, pp. 155-160, 169.
Abstract: The frequency increments across its array elements give frequency diversity array (FDA) radar many new properties, including flexible control of the transmit waveform spectrum through transmit power allocation. Assuming an electromagnetic interference environment of sweep jamming, this paper first introduces a reinforcement learning framework to build an interaction model between frequency diversity array multiple-input multiple-output (FDA-MIMO) radar and the electromagnetic interference environment, enabling the radar to perceive and suppress jamming while interacting with the environment. It then proposes a sweep-jamming suppression method based on a deep Q network (DQN) and FDA-MIMO transmit power allocation, allowing the radar system to maximize SINR while fully utilizing spectrum resources. Finally, simulation results confirm that, within the reinforcement learning framework, FDA-MIMO radar can suppress jamming and improve radar performance by optimizing its transmit power allocation.
Keywords: frequency diversity array; sweep jamming; reinforcement learning; deep Q network; power allocation
13. An intelligent task offloading algorithm (iTOA) for UAV edge computing network (cited: 8)
Authors: Jienan Chen, Siyu Chen, Siyu Luo, Qi Wang, Bin Cao, Xiaoqian Li. Digital Communications and Networks (SCIE), 2020, No. 4, pp. 433-443.
Abstract: The Unmanned Aerial Vehicle (UAV) has emerged as a promising platform for supporting human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require heavy image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, we propose an intelligent Task Offloading Algorithm (iTOA) for the UAV edge computing network. Compared with existing methods, iTOA perceives the network environment intelligently and decides the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories and acquires the best decision by maximizing a reward such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply prior probabilities to MCTS. The sDNN is trained by a self-supervised learning manager, with the training data set obtained from iTOA itself as its own teacher. Compared with game-theory and greedy-search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
Keywords: unmanned aerial vehicles (UAVs); mobile edge computing (MEC); intelligent task offloading algorithm (iTOA); Monte Carlo tree search (MCTS); deep reinforcement learning; splitting deep neural network (sDNN)
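In AlphaGo-style MCTS, child selection trades off an action's estimated value against a prior-weighted exploration term, with a network (the sDNN's role here) supplying the prior. A minimal PUCT selection rule; the exploration constant, data layout, and example priors are illustrative assumptions.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, where U = c * P * sqrt(N_parent)/(1+n).
    Each child is a dict with visit count n, total value w, and prior p."""
    n_parent = sum(ch["n"] for ch in children) + 1
    def score(ch):
        q = ch["w"] / ch["n"] if ch["n"] > 0 else 0.0          # mean value
        u = c_puct * ch["p"] * math.sqrt(n_parent) / (1 + ch["n"])
        return q + u
    return max(range(len(children)), key=lambda i: score(children[i]))

# Two offloading decisions: local execution vs. edge server (priors assumed)
children = [{"n": 10, "w": 6.0, "p": 0.4},   # run task locally
            {"n": 3,  "w": 2.4, "p": 0.6}]   # offload to edge server
print(puct_select(children))  # index of the action to simulate next
```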
14. Deep Q network-based path planning for unmanned reconnaissance vehicles
Authors: XIA Yuqi, HUANG Yanyan, CHEN Qia. 系统工程与电子技术 (Systems Engineering and Electronics; EI, CSCD, PKU Core), 2024, No. 9, pp. 3070-3081.
Abstract: In urban battlefield environments, unmanned reconnaissance vehicles help command posts better understand the target area, improve decision accuracy, and reduce the risks of military operations. Most current unmanned reconnaissance vehicles use an Ackermann steering structure, and paths planned by traditional algorithms do not conform to their kinematic model. This paper therefore combines the bicycle motion model with a deep Q network to generate vehicle trajectories end to end. To address the slow learning and poor generalization of deep Q networks, an experience-classification-based deep Q network is proposed based on the training characteristics of neural networks, together with a state space that has a degree of generalization ability. Simulation results show that, compared with traditional path planning algorithms, paths planned by the proposed algorithm better match the motion of the unmanned reconnaissance vehicle, and the vehicle's learning efficiency and generalization ability are improved.
Keywords: deep reinforcement learning; unmanned reconnaissance vehicle; path planning; deep Q network
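The kinematic bicycle model referenced here advances a pose (x, y, heading) from a speed and a steering angle, which is what keeps generated trajectories consistent with Ackermann steering. A minimal discrete-time update; the wheelbase, time step, and action values are illustrative assumptions.

```python
import math

def bicycle_step(x, y, theta, v, steer, L=2.5, dt=0.1):
    """One step of the kinematic bicycle model.
    v: speed [m/s]; steer: front-wheel angle [rad]; L: wheelbase [m]."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / L * math.tan(steer) * dt   # yaw rate induced by steering
    return x, y, theta

# A DQN action would pick (v, steer) from a discrete set, e.g.:
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = bicycle_step(*pose, v=2.0, steer=0.2)
print(pose)
```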
15. An improved DDQN-based path planning algorithm for unmanned vehicles
Authors: CAO Jingwei, HE Qiusheng. 组合机床与自动化加工技术 (Modular Machine Tool & Automatic Manufacturing Technique; PKU Core), 2024, No. 8, pp. 48-53.
Abstract: To address the slow convergence and low path quality of the DDQN algorithm in path planning, an unmanned-vehicle path planning algorithm is developed on top of DDQN. First, rewards from multiple time steps are accumulated and averaged so that reward information is fully exploited. Second, the artificial potential field method is improved by optimizing the direction of the generated repulsive force, and the improved method replaces random exploration to speed up convergence. Finally, redundant nodes are removed by judging the relationship between the path and obstacles, and the path is smoothed with a Bezier curve to raise its quality. Simulation results in two 20x20 environments show that, compared with the original DDQN, the improved DDQN converges 69.01% and 55.88% faster and shortens path length by 21.39% and 14.33%, respectively, with smoother paths. Deployed on an unmanned vehicle, the improved algorithm completes path planning tasks well.
Keywords: reinforcement learning; deep Q network; artificial potential field; path planning
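Bezier smoothing replaces a polyline of waypoints with a curve that stays near them but varies smoothly. A minimal Bernstein-polynomial evaluation of an n-th-order Bezier curve; the control points below are illustrative stand-ins for a planner's waypoints.

```python
import math
import numpy as np

def bezier_curve(control_points, n_samples=50):
    """Sample an n-th order Bezier curve defined by its control points."""
    pts = np.asarray(control_points, float)
    n = len(pts) - 1
    ts = np.linspace(0.0, 1.0, n_samples)
    curve = np.zeros((n_samples, pts.shape[1]))
    for i in range(n + 1):
        # Bernstein basis: C(n,i) * t^i * (1-t)^(n-i)
        basis = math.comb(n, i) * ts**i * (1 - ts)**(n - i)
        curve += np.outer(basis, pts[i])
    return curve

# Waypoints from a grid planner (illustrative), smoothed into one curve
waypoints = [(0, 0), (2, 3), (5, 3), (7, 6)]
print(bezier_curve(waypoints, n_samples=5))
```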
16. Heuristically accelerated deep Q network based on a cognitive behavior model
Authors: LI Jiaxiang, CHEN Hao, HUANG Jian, ZHANG Zhongjie. 计算机应用与软件 (Computer Applications and Software; PKU Core), 2024, No. 9, pp. 148-155.
Abstract: As the state-action space grows or rewards become sparse, it becomes harder for a reinforcement learning agent to learn an optimal policy from scratch in complex environments. A heuristically accelerated deep Q network based on an agent cognitive behavior model is therefore proposed: symbolic rule representations are fused into the learning network to dynamically guide the agent's policy learning, effectively accelerating it. The algorithm models heuristic knowledge as a BDI (Belief-Desire-Intention) cognitive behavior model that produces cognitive behavioral knowledge to guide policy learning, and a heuristic policy network is designed to steer the agent's action selection online. Experiments in typical GYM environments and in StarCraft II show that the algorithm dynamically extracts effective cognitive behavioral knowledge as the environment changes and accelerates policy convergence via the heuristic policy network.
Keywords: reinforcement learning; cognitive behavior model; heuristically accelerated deep Q network
17. Research on a multi-user peer-to-peer energy sharing mechanism based on a double deep Q network algorithm
Authors: WU Donghao, WANG Guofeng, MAO Cui, CHEN Yuping, ZHANG Youbing. 高技术通讯 (High Technology Letters, Chinese edition; CAS, PKU Core), 2024, No. 7, pp. 755-764.
Abstract: Peer-to-peer (P2P) electricity trading, a new energy balancing and interaction mode in user-side energy markets, can effectively promote energy sharing within user groups and improve the economic benefit of market participants. Traditional methods for solving inter-user P2P trading rely on forecasts of photovoltaic and load data, however, and struggle to respond in real time to source-load fluctuations between users. This paper therefore builds a multi-user P2P energy community trading model based on multiple user types and solves it with a reinforcement learning (RL) algorithm based on a double deep Q network (DDQN). The prediction and target networks of the DDQN read environment information from the multi-user P2P energy community, and the trained neural network solves the community's multi-user P2P trading problem from real-time photovoltaic, load, and electricity price data. Case simulations show that the method promotes P2P energy sharing among community users while preserving the economic performance of the multi-user P2P energy community.
Keywords: peer-to-peer (P2P) energy sharing; reinforcement learning (RL); energy trading market; double deep Q network (DDQN) algorithm
18. Research on DQN-based dynamic scheduling of airport refueling trucks
Authors: CHEN Weixing, LI Yebo. 西北工业大学学报 (Journal of Northwestern Polytechnical University; EI, CAS, CSCD, PKU Core), 2024, No. 4, pp. 764-773.
Abstract: To address the low utilization and poor scheduling responsiveness of airport refueling trucks caused by uncertain actual flight times, a deep Q network dynamic scheduling method combined with a multi-objective deep reinforcement learning framework is proposed. An optimization model is built that maximizes the on-time rate of refueling tasks and the average proportion of idle trucks. Five state features measuring the current vehicle state are designed as network inputs, and two scheduling policies corresponding to the two objectives are proposed as the action space, so that the algorithm generates dynamic scheduling schemes in real time from dynamic flight data. The dynamic scheduling model for airport refueling trucks is solved, and instances of different scales verify the effectiveness and real-time performance of the algorithm. Applied to actual scheduling, the method completes on average 9.43 more refueling tasks on time per day and reduces average vehicle working time by 57.6 min compared with manual scheduling; the DQN results are superior and improve the operating efficiency of the refueling trucks.
Keywords: airport refueling truck; dynamic scheduling; deep reinforcement learning; deep Q network; multi-objective optimization
19. Push-grasp skill learning with affordance-improved DQN
Authors: WEN Kai, LI Dongnian, CHEN Chengjun, ZHAO Zhengxu. 组合机床与自动化加工技术 (Modular Machine Tool & Automatic Manufacturing Technique; PKU Core), 2024, No. 11, pp. 34-37, 43.
Abstract: In autonomous robotic grasping, the random sizes, shapes, and distributions of target objects make it very difficult to clear a workspace with grasping actions alone. Combining pushing with grasping reduces the complexity of the grasping environment, since pushing can rearrange objects into more graspable configurations; the added push actions, however, also produce ineffective pushes that lower the model's learning efficiency. Building on the deep Q network (DQN)-based visual pushing for grasping (VPG) model, an affordance scheme is proposed to reduce the search complexity of the robot's action planning space and accelerate the learning of grasping. Faster planning is achieved by reducing the number of actions available in any given situation, which helps learn the model more efficiently and precisely from data. The effectiveness of the method is verified in simulation scenes on the V-REP platform.
Keywords: robotic grasping; affordance; deep Q network; deep reinforcement learning
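Affordance filtering of this kind can be realized as action masking: Q-values of actions judged infeasible (e.g. pushes that touch nothing) are suppressed before the argmax. A minimal sketch; the mask source is an assumption here, whereas the paper derives it from affordances.

```python
import torch

def masked_greedy_action(q_values, feasible_mask):
    """Pick the best action among those the affordance model marks feasible.
    q_values: (batch, n_actions); feasible_mask: same shape, bool."""
    masked_q = q_values.masked_fill(~feasible_mask, float("-inf"))
    return masked_q.argmax(dim=1)

# 6 actions (e.g. push/grasp primitives); only actions 1, 2, 5 are afforded
q = torch.randn(1, 6)
mask = torch.tensor([[False, True, True, False, False, True]])
print(masked_greedy_action(q, mask))
```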
20. Growth-based design of plate-shell stiffeners via multi-agent deep Q network interaction
Authors: ZHONG Yi, YANG Yong, JIANG Xuetao, PAN Shunyang, ZHU Qixin, WANG Lei. 中国机械工程 (China Mechanical Engineering; EI, CAS, CSCD, PKU Core), 2024, No. 8, pp. 1397-1404.
Abstract: Exploiting the Markov property of the stiffener growth-step sequence on plates and shells, a reinforcement-learning-driven strategy for growth-based stiffener design is proposed. With minimization of the overall strain energy of the structure as the objective, the stiffener growth process is modeled as a Markov decision process. A multi-agent system is introduced to share the state rewards of the growth process and memorize specific actions, reducing learning complexity, making the fluctuating reward values of the growth process converge, and yielding a growth-based stiffener design strategy. Finally, a numerical example compares the smoothed stiffener layout with designs produced by classic algorithms, verifying the effectiveness of growth-based stiffener design via multi-agent deep Q network interaction.
Keywords: plate-shell stiffener; growth-based design; multi-agent deep Q network; layout design; reinforcement learning