Journal Articles
1,809 articles found
A digital twins enabled underwater intelligent internet vehicle path planning system via reinforcement learning and edge computing
1
Authors: Jiachen Yang, Meng Xi, Jiabao Wen, Yang Li, Houbing Herbert Song 《Digital Communications and Networks》 SCIE CSCD 2024, Issue 2, pp. 282-291 (10 pages)
The Autonomous Underwater Glider (AUG) is a kind of prevailing underwater intelligent internet vehicle and occupies a dominant position in industrial applications, in which path planning is an essential problem. Due to the complexity and variability of the ocean, accurate environment modeling and flexible path planning algorithms are pivotal challenges. The traditional models mainly utilize mathematical functions, which are not complete and reliable. Most existing path planning algorithms depend on the environment and lack flexibility. To overcome these challenges, we propose a path planning system for underwater intelligent internet vehicles. It applies digital twins and sensor data to map the real ocean environment to a virtual digital space, which provides a comprehensive and reliable environment for path simulation. We design a value-based reinforcement learning path planning algorithm and explore the optimal network structure parameters. The path simulation is controlled by a closed-loop model integrated into the terminal vehicle through edge computing. The integration of state input enriches the learning of neural networks and helps to improve generalization and flexibility. The task-related reward function promotes the rapid convergence of the training. The experimental results prove that our reinforcement learning based path planning algorithm has great flexibility and can effectively adapt to a variety of different ocean conditions.
Keywords: digital twins; reinforcement learning; edge computing; underwater intelligent internet vehicle path planning
Efficient Penetration Testing Path Planning Based on Reinforcement Learning with Episodic Memory
2
Authors: Ziqiao Zhou, Tianyang Zhou, Jinghao Xu, Junhu Zhu 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, Issue 9, pp. 2613-2634 (22 pages)
Intelligent penetration testing is of great significance for the improvement of the security of information systems, and the critical issue is the planning of penetration test paths. In view of the difficulty for attackers to obtain complete network information in realistic network scenarios, Reinforcement Learning (RL) is a promising solution to discover the optimal penetration path under incomplete information about the target network. Existing RL-based methods are challenged by the sizeable discrete action space, which leads to difficulties in convergence. Moreover, most methods still rely on experts' knowledge. To address these issues, this paper proposes a penetration path planning method based on reinforcement learning with episodic memory. First, the penetration testing problem is formally described in terms of reinforcement learning. To speed up the training process without specific prior knowledge, the proposed algorithm introduces, for the first time, episodic memory to store experienced advantageous strategies. Furthermore, the method offers an exploration strategy based on episodic memory to guide the agents in learning. The design makes full use of historical experience to reduce blind exploration and improve planning efficiency. Ultimately, comparison experiments are carried out with the existing RL-based methods. The results reveal that the proposed method has better convergence performance. The running time is reduced by more than 20%.
Keywords: intelligent penetration testing; penetration testing path planning; reinforcement learning; episodic memory; exploration strategy
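The episodic-memory idea in the abstract above can be sketched in a few lines: remember, for each state, the action taken in the best-returning episode seen so far, and consult that memory before exploring blindly. This is a minimal illustrative sketch; the class, the toy penetration actions, and the state labels are hypothetical, not the paper's actual design.

```python
class EpisodicMemory:
    """Per-state memory of the action from the best-returning episode."""

    def __init__(self):
        self.best = {}  # state -> (episode return, action)

    def store(self, episode, ep_return):
        """Record each (state, action) pair, keeping the highest-return entry."""
        for state, action in episode:
            if state not in self.best or ep_return > self.best[state][0]:
                self.best[state] = (ep_return, action)

    def suggest(self, state):
        """Return the remembered advantageous action, or None if unseen."""
        entry = self.best.get(state)
        return entry[1] if entry else None


mem = EpisodicMemory()
mem.store([("s0", "exploit-web"), ("s1", "escalate")], ep_return=10.0)
mem.store([("s0", "scan")], ep_return=3.0)  # worse return: s0 keeps exploit-web
```

An agent would fall back to its usual exploration policy whenever `suggest` returns `None`, so the memory only biases, never replaces, exploration.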
Research on the Lifting Path of Data Literacy Ability of Applied University Teachers under the Perspective of Organizational Learning
3
Authors: Shuli Gao, Yanli Guo 《Open Journal of Applied Sciences》 2024, Issue 2, pp. 320-329 (10 pages)
With the arrival of the big data era, the modern higher education model has undergone radical changes, and higher requirements have been put forward for the data literacy of college teachers. The paper first analyzes the connotation of teacher data literacy, and then combs through the status quo and dilemmas of teachers' data literacy in applied universities. The paper proposes to enhance teachers' data literacy from the perspective of organizational learning: building a digital culture, creating a data-driven teaching environment, and constructing an interdisciplinary learning community to further promote the application of the theory and practice of datafication inside and outside the organization, ultimately improving the quality of teaching.
Keywords: organizational learning; teachers' data literacy; lifting paths
Research on an Improved Q-Learning Path Planning Algorithm
4
Authors: 宋丽君, 周紫瑜, 李云龙, 侯佳杰, 何星 《小型微型计算机系统》 CSCD PKU Core 2024, Issue 4, pp. 823-829 (7 pages)
To address the Q-Learning algorithm's low learning efficiency, slow convergence, and poor path planning performance in environments with dynamic obstacles, this paper proposes a mobile robot path planning algorithm based on improved Q-Learning. The algorithm introduces an exploration factor, based on abrupt changes in probability, to balance exploration and exploitation and speed up learning; a deep learning factor is designed in the update function to maintain the exploration probability; a genetic algorithm is fused in to avoid local path optima while exploring the optimal number of iteration steps in stages, reducing repeated exploration of the dynamic map; finally, the key nodes of the output optimal path are smoothed with Bezier curves to further ensure path smoothness and feasibility. Maps are built with the grid method, and comparative experiments show that the improved algorithm substantially outperforms the traditional algorithm in both iteration count and path quality, and handles path planning on dynamic maps well, further verifying the effectiveness and practicality of the proposed method.
Keywords: mobile robot; path planning; Q-learning algorithm; smoothing; dynamic obstacle avoidance
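Many of the entries in this listing improve on the same tabular Q-learning baseline for grid-map path planning. As a point of reference, here is a minimal sketch of that baseline on a toy grid; the map layout, rewards, and hyperparameters are illustrative assumptions, not taken from any listed paper.

```python
import random

GRID = 5
OBSTACLES = {(2, 1), (2, 2), (2, 3)}
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; moves into walls or obstacles leave the state unchanged."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state, -5.0                            # collision penalty
    return nxt, (100.0 if nxt == GOAL else -1.0)      # step cost favors short paths

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    Q = {}
    for _ in range(episodes):
        s = START
        for _ in range(200):                          # cap episode length
            if random.random() < epsilon:             # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            s2, r = step(s, ACTIONS[a])
            best_next = max(Q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
            s = s2
            if s == GOAL:
                break
    return Q

def greedy_path(Q):
    """Roll out the greedy policy from START, capped at 50 states."""
    path, s = [START], START
    while s != GOAL and len(path) < 50:
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        s, _ = step(s, ACTIONS[a])
        path.append(s)
    return path

path = greedy_path(train())
```

The improvements surveyed above (exploration factors, Q-table initialization, smoothing) all plug into this skeleton at the action-selection, initialization, or post-processing stages.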
A Mobile Robot Path Planning Algorithm Based on Improved Q-Learning
5
Authors: 王立勇, 王弘轩, 苏清华, 王绅同, 张鹏博 《电子测量技术》 PKU Core 2024, Issue 9, pp. 85-92 (8 pages)
As mobile robots see ever deeper use in production and daily life, their path planning must develop both speed and environmental adaptability. Existing reinforcement-learning-based path planning for mobile robots easily falls into local optima and repeatedly searches the same area early in exploration, and suffers from a low convergence rate and slow convergence later on. To solve these problems, this study proposes an improved Q-Learning algorithm. The algorithm improves the Q-matrix initialization so that early exploration is directed and collisions are reduced; improves the Q-matrix iteration so that updates are forward-looking, avoiding repeated exploration of a small area; and improves the random exploration strategy, making full use of environmental information in early iterations and approaching the goal point later. Simulations on different grid maps show that, building on the Q-Learning algorithm, these improvements shorten the explored path length, reduce jitter, and accelerate convergence, yielding higher computational efficiency.
Keywords: path planning; reinforcement learning; mobile robot; Q-learning algorithm; ε-decreasing strategy
Research on Aircraft Taxiing Path Planning Based on Q-Learning
6
Authors: 王兴隆, 王睿峰 《中国民航大学学报》 CAS 2024, Issue 3, pp. 28-33 (6 pages)
To address the low accuracy of traditional aircraft taxiing path planning algorithms and their inability to plan paths according to overall surface operations, a Q-Learning-based path planning method is proposed. Based on an analysis of the airport flight area network structure model and the reinforcement learning simulation environment, state and action spaces are set up, and a reward function is defined according to path compliance and reasonableness, with the path reasonableness score defined as the reciprocal of the product of taxiing path length and the flight area's average taxiing time. Finally, the influence of action selection strategy parameters on the path planning model is analyzed. The results show that, compared with the A* and Floyd algorithms, Q-Learning-based path planning achieves the shortest taxiing distance while avoiding relatively busy areas, earning a high path reasonableness score.
Keywords: taxiing path planning; airport flight area; reinforcement learning; Q-learning
A Q-learning Path Planning Algorithm Aided by Multi-Step Information
7
Authors: 王越龙, 王松艳, 晁涛 《系统仿真学报》 CAS CSCD PKU Core 2024, Issue 9, pp. 2137-2148 (12 pages)
To improve mobile robot path planning in static environments and address the slow convergence of the traditional Q-learning algorithm, an improved Q-learning algorithm based on a multi-step information assistance mechanism is proposed. Eligibility traces are updated using the multi-step information of greedy actions in the ε-greedy strategy together with the historical optimal path length, so that effective traces keep contributing across iterations, and the stored multi-step information resolves loop traps the agent may fall into; the Q-value table is initialized with a locally multi-flower flower pollination algorithm to improve the robot's early search efficiency; and an action selection strategy is designed around the goals of the robot's different exploration stages, combining the standard deviation of iteration path lengths with the number of times the robot successfully reaches the goal, to strengthen the balance between exploration and exploitation of environmental information. Experimental results show that the algorithm converges quickly, verifying its feasibility and effectiveness.
Keywords: path planning; Q-learning; convergence speed; action selection strategy; grid map
Research on Local Path Planning for Mobile Robots Based on an Improved Q-learning Algorithm
8
Authors: 方文凯, 廖志高 《计算机与数字工程》 2024, Issue 5, pp. 1265-1269, 1274 (6 pages)
In local path planning, a mobile robot may fail to find a suitable path because environmental information cannot be obtained in advance, and traditional reinforcement learning algorithms applied to local path planning under a Markov decision process suffer from low learning efficiency and slow convergence. To address these problems, an improved Q-learning (QL) algorithm is proposed. First, a dynamic adaptive greedy strategy is designed to balance the robot's exploration and exploitation of the environment; second, a heuristic learning evaluation model is designed following the idea of the A* algorithm, dynamically adjusting the learning factor and guiding the path search; finally, cubic Bezier curve planning is introduced to smooth the path. Simulations on the PyCharm platform show that the improved QL algorithm outperforms the traditional Sarsa and QL algorithms in path length, search efficiency, and path smoothness: compared with Sarsa it improves the iteration count by 32.3% and shortens search time by 27.08%, and compared with traditional QL it improves the iteration count by 27.32% and shortens search time by 17.28%, with far fewer turning points in the planned path and a clearly visible local path optimization effect.
Keywords: mobile robot; Q-learning algorithm; local path; A* algorithm; Bezier curve
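Several entries above smooth the raw grid path with Bezier curves. A minimal sketch of the underlying operation: sample a cubic Bezier segment that replaces a sharp corner of a piecewise-linear path. The control-point placement here is an illustrative choice, not the scheme from any specific paper.

```python
def cubic_bezier(p0, p1, p2, p3, n=20):
    """Sample n+1 points of the cubic Bezier curve with control points p0..p3."""
    pts = []
    for k in range(n + 1):
        t = k / n
        pts.append((
            (1 - t) ** 3 * p0[0] + 3 * (1 - t) ** 2 * t * p1[0]
            + 3 * (1 - t) * t ** 2 * p2[0] + t ** 3 * p3[0],
            (1 - t) ** 3 * p0[1] + 3 * (1 - t) ** 2 * t * p1[1]
            + 3 * (1 - t) * t ** 2 * p2[1] + t ** 3 * p3[1],
        ))
    return pts

# Replace a right-angle corner at (1, 0): enter along x, leave along y.
smooth = cubic_bezier((0.0, 0.0), (0.7, 0.0), (1.0, 0.3), (1.0, 1.0))
```

Because the curve interpolates its first and last control points and is tangent to the first and last control polygon edges, the smoothed segment joins the remaining straight legs of the path without a kink.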
LSTM-DPPO based deep reinforcement learning controller for path following optimization of unmanned surface vehicle (Cited: 1)
9
Authors: XIA Jiawei, ZHU Xufang, LIU Zhong, XIA Qingtao 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2023, Issue 5, pp. 1343-1358 (16 pages)
To solve the path following control problem for unmanned surface vehicles (USVs), a control method based on deep reinforcement learning (DRL) with long short-term memory (LSTM) networks is proposed. A distributed proximal policy optimization (DPPO) algorithm, a modified actor-critic-based reinforcement learning algorithm, is adapted to improve controller performance over repeated trials. The LSTM network structure is introduced to handle the strong temporal correlation in the USV control problem. In addition, a specially designed path dataset, including straight and curved paths, is established to simulate various sailing scenarios so that the reinforcement learning controller can obtain as much handling experience as possible. Extensive numerical simulation results demonstrate that the proposed method achieves better control performance on missions involving complex maneuvers than controllers trained on limited scenarios, and can potentially be applied in practice.
Keywords: unmanned surface vehicle (USV); deep reinforcement learning (DRL); path following; path dataset; proximal policy optimization; long short-term memory (LSTM)
Mobile Robot Path Planning Based on an Improved Q-learning Algorithm
10
Authors: 井征淼, 刘宏杰, 周永录 《火力与指挥控制》 CSCD PKU Core 2024, Issue 3, pp. 135-141 (7 pages)
To address the slow convergence, long running time, and poor learning efficiency of the traditional Q-learning algorithm in path planning, an improved Q-learning algorithm combining the artificial potential field method with traditional Q-learning is proposed. The algorithm introduces the attractive and repulsive functions of the artificial potential field method: reward values are selected dynamically by comparing the attractive function, repulsion values are computed by comparing the repulsive function, and Q values are updated dynamically, so that the mobile robot explores purposefully and prefers positions farther from obstacles. Simulation experiments show that, compared with the traditional Q-learning algorithm and an algorithm that introduces only the attractive field, the improved Q-learning algorithm speeds up convergence, shortens running time, improves learning efficiency, and reduces the probability of colliding with obstacles, enabling the mobile robot to quickly find a collision-free path.
Keywords: mobile robot; path planning; improved Q-learning; artificial potential field method; reinforcement learning
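The combination above amounts to shaping the Q-learning reward with a potential field: a step is rewarded when it lowers the combined attractive/repulsive potential. A minimal sketch, using the common quadratic attractive and inverse-distance repulsive forms; the gains `k_att`, `k_rep`, and influence radius `d0` are illustrative assumptions, not the paper's exact functions.

```python
import math

def attractive(p, goal, k_att=1.0):
    """Quadratic attractive potential toward the goal."""
    return 0.5 * k_att * ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)

def repulsive(p, obstacles, k_rep=5.0, d0=2.0):
    """Repulsive potential that only acts within influence radius d0."""
    u = 0.0
    for o in obstacles:
        d = math.hypot(p[0] - o[0], p[1] - o[1])
        if 0 < d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def shaped_reward(p, p_next, goal, obstacles):
    """Positive when a step lowers total potential, negative when it raises it."""
    u = attractive(p, goal) + repulsive(p, obstacles)
    u_next = attractive(p_next, goal) + repulsive(p_next, obstacles)
    return u - u_next

goal, obstacles = (4, 4), [(2, 2)]
toward = shaped_reward((0, 0), (1, 1), goal, obstacles)          # toward goal
into_obstacle = shaped_reward((1, 2), (1.5, 2), goal, obstacles)  # toward obstacle
```

Feeding `shaped_reward` into a standard Q-update gives the agent the "purposeful exploration" described in the abstract without changing the learning rule itself.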
Autonomous Path Planning for Search and Rescue Robots Based on Q-learning
11
Authors: 褚晶, 邓旭辉, 岳颀 《南京航空航天大学学报》 CAS CSCD PKU Core 2024, Issue 2, pp. 364-374 (11 pages)
When man-made or natural disasters strike suddenly, rapidly deploying search and rescue robots under extreme conditions is key to saving lives. To complete rescue tasks, a search and rescue robot must autonomously plan paths in a continuous, dynamic, unknown environment to reach the rescue target. This paper proposes a sensor configuration scheme for search and rescue robots and applies Q-learning algorithms based on a Q-table and on a neural network to achieve autonomous control, solving the path planning problem of avoiding static and dynamic obstacles in unknown environments. Balancing exploration and exploitation during training is one of the challenges of reinforcement learning; building on greedy search and Boltzmann search, this paper proposes a hybrid optimization method that dynamically selects between the search strategies. MATLAB simulations show that the proposed method is feasible and effective: a search and rescue robot with this sensor configuration responds effectively to environmental changes and reaches the target position while successfully avoiding static and dynamic obstacles.
Keywords: search and rescue robot; path planning; sensor configuration; Q-learning; neural network
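The Boltzmann search that the entry above combines with greedy search samples actions with probability proportional to exp(Q/T), so the temperature T controls how exploratory the policy is. A minimal sketch; the Q-values and temperatures are illustrative, and a hybrid scheme like the paper's would switch between this and greedy selection by some rule of its own.

```python
import math
import random

def boltzmann(q_values, temperature):
    """Sample an action index with probability proportional to exp(Q / T)."""
    m = max(q_values)  # subtract the max before exp() for numerical stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(q_values) - 1

random.seed(0)
q = [1.0, 2.0, 0.5, 1.5]
# Low temperature concentrates on the greedy action; high temperature explores.
low_t = [boltzmann(q, 0.05) for _ in range(1000)]
high_t = [boltzmann(q, 50.0) for _ in range(1000)]
```

Annealing the temperature from high to low over training recovers greedy behavior in the limit, which is why Boltzmann selection pairs naturally with a greedy strategy in a hybrid scheme.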
3D Path Optimisation of Unmanned Aerial Vehicles Using Q Learning-Controlled GWO-AOA
12
Authors: K. Sreelakshmy, Himanshu Gupta, Om Prakash Verma, Kapil Kumar, Abdelhamied A. Ateya, Naglaa F. Soliman 《Computer Systems Science & Engineering》 SCIE EI 2023, Issue 6, pp. 2483-2503 (21 pages)
Unmanned Aerial Vehicles (UAVs), or drones, introduced for military applications are gaining popularity in several other fields as well, such as security and surveillance, due to their ability to perform repetitive and tedious tasks in hazardous environments. Their increased demand has created the requirement to enable UAVs to traverse independently through a Three Dimensional (3D) flight environment consisting of various obstacles, which has been efficiently addressed by metaheuristics in past literature. However, no single optimization algorithm can solve every kind of optimization problem effectively, so there is a dire need to integrate metaheuristics for general acceptability. To address this issue, this paper introduces a novel reinforcement-learning-controlled Grey Wolf Optimisation-Archimedes Optimisation Algorithm (QGA), exhaustively validated first on 22 benchmark functions and then used to obtain an optimal collision-free flyable path for UAVs in a three-dimensional environment. The performance of the developed QGA has been compared against various metaheuristics. The simulation results reveal that the QGA algorithm acquires a feasible and effective flyable path more efficiently in complicated environments.
Keywords: Archimedes optimisation algorithm; grey wolf optimisation; path planning; reinforcement learning; unmanned aerial vehicles
Personalized Learning Path Recommendations for Software Testing Courses Based on Knowledge Graphs
13
Authors: Wei Zheng, Ruonan Gu, Xiaoxue Wu, Lipeng Gao, Han Li 《计算机教育》 2023, Issue 12, pp. 63-70 (8 pages)
Software testing courses are characterized by strong practicality, comprehensiveness, and diversity. Due to the differences among students and the need to design personalized solutions for their specific requirements, the design of existing software testing courses fails to meet the demand for personalized learning. Knowledge graphs, with their rich semantics and good visualization effects, have a wide range of applications in the field of education. In response to this problem, this paper offers a learning path recommendation based on knowledge graphs to provide personalized learning paths for students.
Keywords: knowledge graphs; software testing; learning path; personalized education
Research on a Mobile Robot Path Planning Algorithm Based on BAS and Q-Learning (Cited: 1)
14
Authors: 程晶晶, 周明龙, 邓雄峰 《长春工程学院学报(自然科学版)》 2023, Issue 3, pp. 107-111 (5 pages)
To address traditional Q-Learning's blind search in the initial stage and its unscientific action selection strategy, a mobile robot path planning algorithm combining Beetle Antennae Search (BAS) and Q-Learning is proposed. The algorithm locates coordinates using Euclidean distance, introduces a search factor to balance search time against learning speed, and uses BAS and Bezier curves to optimize Q values and smooth the path, respectively. Applied in both simulation and physical experiments, the algorithm efficiently and accurately obtains optimal paths in environments with static and dynamic obstacles, showing strong engineering applicability.
Keywords: mobile robot; path planning; Q-learning algorithm; beetle antennae search; path smoothing
Q-learning-Based 3D Printing Path Planning for Lightweight Infill Structures
15
Authors: 徐文鹏, 王东晓, 付林朋, 张鹏, 侯守明, 曾艳阳 《传感器与微系统》 CSCD PKU Core 2023, Issue 12, pp. 44-47 (4 pages)
For lightweight infill structure models, a 3D printing path planning method based on the Q-learning algorithm is proposed to reduce the frequent turns and start-stop actions that path planning for such structures typically produces. Model slices are first preprocessed after infill and layering; then, with the goal of reducing print head turns and start-stop actions, a corresponding Markov decision process model is built, the action-value function is iterated until convergence, and an action policy achieving maximum return is solved for and translated into a printing path according to the model; finally, the method is verified through comparative experiments. The results show that the method effectively reduces print head turns and start-stop actions, increases the continuity of the printing path, saves printing time, and improves print quality to some extent.
Keywords: 3D printing; path planning; Q-learning algorithm; lightweight infill structure
Application of Learning Paths in the College English Flipped Classroom (Cited: 2)
16
Authors: 夏莉, 夏光峰 《黑河学院学报》 2020, Issue 6, pp. 113-117 (5 pages)
College English teaching reform calls for making full use of information technology and using flipped classrooms to guide students toward efficient active learning, a hot topic in current college English teaching research. Chinese English teachers tend to neglect learning-process orientation in flipped classroom research and practice. Using Learning Path planning and design to control the flipped classroom learning process and to teach at differentiated levels, and drawing on course examples on the Moodle platform, this paper constructs planning ideas and implementation methods for Learning Paths in college English teaching, and improves and develops the Learning Path approach.
Keywords: learning path; college English; flipped classroom; personalized learning
Path Planning Based on an Improved Q-learning Algorithm and DWA (Cited: 2)
17
Authors: 王志伟, 邹艳丽, 刘唐慧美, 侯凤萍, 余自淳 《传感器与微系统》 CSCD PKU Core 2023, Issue 9, pp. 148-152 (5 pages)
To address the traditional Q-learning algorithm's many turning points in planned routes, low exploration efficiency, and inability to plan paths in dynamic environments, a fusion algorithm based on an improved Q-learning algorithm and the Dynamic Window Approach (DWA) is proposed. First, the traditional Q-learning search is expanded from 8 directions to 16; simulated annealing is used to iteratively optimize Q-learning; and a path node optimization algorithm simplifies nodes to improve path smoothness. Then, nodes of the path planned by the improved Q-learning algorithm are extracted as temporary targets for the DWA algorithm, allowing the robot to avoid static and dynamic obstacles in real time while moving. Experimental results show that the fusion algorithm has good path planning capability, achieving global optimality and effective obstacle avoidance.
Keywords: Q-learning algorithm; path planning; dynamic window approach
Local Path Planning for Mobile Robots Based on an Improved Q-learning Algorithm (Cited: 3)
18
Authors: 张耀玉, 李彩虹, 张国胜, 李永迪, 梁振英 《山东理工大学学报(自然科学版)》 CAS 2023, Issue 2, pp. 1-6 (6 pages)
To address the Q-learning algorithm's slow learning speed and low efficiency in mobile robot local path planning, an improved IQ-learning algorithm is proposed. A grid map is first designed to build an 8-connected operating environment for the robot. States, actions, the Q-value table, the reward function, and the action selection strategy are then designed on the grid map; building on Q-learning, IQ-learning adds a diagonal motion reward to the reward function, encouraging the robot to explore in eight directions and combining translational and diagonal motion to shorten the planned path, reduce blind search in the initial stage, and speed up convergence. Finally, the IQ-learning strategy is used to learn local path planning tasks in scattered, line-shaped, U-shaped, and mixed obstacle environments; comparison with Q-learning's planning results shows that IQ-learning finds the shortest path in fewer learning episodes and with fewer steps, improving planning efficiency.
Keywords: mobile robot; Q-learning algorithm; IQ-learning algorithm; local path planning; grid map
Deep Learning Aided SCL Decoding of Polar Codes with Shifted-Pruning (Cited: 1)
19
Authors: Yang Lu, Mingmin Zhao, Ming Lei, Chan Wang, Minjian Zhao 《China Communications》 SCIE CSCD 2023, Issue 1, pp. 153-170 (18 pages)
Recently, a generalized successive cancellation list (SCL) decoder implemented with a shifted-pruning (SP) scheme, namely the SCL-SP-ω decoder, was presented for polar codes; it is able to shift the pruning window at most ω times during each SCL re-decoding attempt to prevent the correct path from being eliminated. The candidate positions for applying the SP scheme are selected by a shifting metric based on the probability that the elimination occurs. However, the number of exponential/logarithm operations involved in the SCL-SP-ω decoder grows linearly with the number of information bits and the list size, which leads to high computational complexity. In this paper, we present a detailed analysis of the SCL-SP-ω decoder in terms of decoding performance and complexity, which unveils that the choice of the shifting metric is essential for improving the decoding performance and reducing the re-decoding attempts simultaneously. Then, we introduce a simplified metric derived from the path metric (PM) domain, and a custom-tailored deep learning (DL) network is further designed to enhance the efficiency of the proposed simplified metric. The proposed metrics are both free of transcendental functions and hence are more hardware-friendly than the existing metrics. Simulation results show that the proposed DL-aided metric provides the best error correction performance in comparison with the state of the art.
Keywords: polar codes; successive cancellation list decoding; deep learning; shifted-pruning; path metric
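The path-metric domain that the paper's simplified metric works in comes from the standard hardware-friendly PM update used in SCL decoding: a path is penalized by |L| whenever its bit decision contradicts the sign of the log-likelihood ratio L. A minimal sketch of that update (the general update rule from the SCL literature, not the paper's proposed shifting metric):

```python
def pm_update(pm, llr, bit):
    """Add |llr| to the path metric when bit contradicts the LLR's hard decision.

    Convention: llr >= 0 means the hard decision is bit 0.
    """
    return pm + abs(llr) if (llr >= 0) != (bit == 0) else pm


# A path that follows the hard decision keeps its metric; one that deviates pays.
agree = pm_update(0.0, 2.0, 0)    # decision matches sign of llr: no penalty
deviate = pm_update(0.0, 2.0, 1)  # decision contradicts it: penalty of |2.0|
```

Because this update needs only comparisons and additions, metrics derived from the PM domain avoid the exponential/logarithm operations the paper identifies as the complexity bottleneck.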
Fuzzy Q learning algorithm for dual-aircraft path planning to cooperatively detect targets by passive radars (Cited: 6)
20
Authors: Xiang Gao, Yangwang Fang, Youli Wu 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2013, Issue 5, pp. 800-810 (11 pages)
The problem of passive detection discussed in this paper involves searching for and locating an aerial emitter by dual aircraft using passive radars. In order to improve the detection probability and accuracy, a fuzzy Q learning algorithm for dual-aircraft flight path planning is proposed. The passive detection task model of the dual aircraft is set up based on the partition of the target active radar's radiation area. The problem is formulated as a Markov decision process (MDP) by using fuzzy theory to generalize the state space and by properly defining the transition functions, action space, and reward function. Details of the path planning algorithm are presented. Simulation results indicate that the algorithm can provide adaptive strategies for dual aircraft to control their flight paths to detect a non-maneuvering or maneuvering target.
Keywords: Markov decision process (MDP); fuzzy Q learning; dual-aircraft coordination; path planning; passive detection