Journal Articles
218 articles found
1. Accurate Machine Learning Predictions of Sci-Fi Film Performance
Authors: Amjed Al Fahoum, Tahani A. Ghobon. Journal of New Media, 2023, No. 1, pp. 1-22.
A groundbreaking method is introduced that leverages machine learning algorithms to predict success rates for science fiction films. In the film industry, extensive research and accurate forecasting are vital to anticipating a movie's success prior to its debut. Our study aims to harness available data to estimate a film's early success rate. The internet offers a wealth of movie-related information, including actors, directors, critic reviews, user reviews, ratings, writers, budgets, genres, Facebook likes, YouTube views for trailers, and Twitter followers. The first few weeks of a film's release are crucial in determining its fate, and online reviews and evaluations profoundly impact opening-week earnings. Our research therefore employs supervised machine learning techniques to predict a film's success, using the Internet Movie Database (IMDb), a comprehensive data repository covering nearly all movies. A robust predictive classification approach is developed with various machine learning algorithms, including fine, medium, coarse, cosine, cubic, and weighted KNN. To determine the best model, each feature's performance was evaluated on composite metrics, and the significant influence of social media platforms (Twitter, Instagram, and Facebook) on shaping individuals' opinions was recognized. A hybrid success-rating prediction model is obtained by integrating the proposed prediction models with sentiment analysis from the available platforms. The findings demonstrate that the chosen algorithms offer more precise estimates, faster execution times, and higher accuracy than previous research. By integrating the features of existing prediction models with social media sentiment analysis, the proposed approach predicts a movie's success remarkably accurately, helping producers and marketers anticipate a film's performance before release and tailor promotional activities accordingly. The work also lays the foundation for still more accurate prediction models, given the ever-increasing significance of social media in shaping opinions.
Keywords: film success rate prediction, optimized feature selection, robust machine learning, nearest neighbors' algorithms
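The entry above centers on distance-weighted KNN classification. A minimal hedged sketch using scikit-learn's `KNeighborsClassifier` is shown below; the features and the "success" rule are invented stand-ins for the paper's IMDb-derived attributes, not its actual data or feature set:

```python
# Hedged sketch: distance-weighted KNN, one of the variants the paper compares.
# Synthetic features stand in for budget, trailer views, likes, ratings, etc.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # invented movie features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy "success" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=7, weights="distance")  # closer neighbors weigh more
knn.fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

With `weights="distance"`, nearer neighbors dominate the vote, which is the distinguishing feature of the "weighted KNN" family named in the abstract.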
2. Recent innovation in benchmark rates (BMR): evidence from influential factors on Turkish Lira Overnight Reference Interest Rate with machine learning algorithms (cited 2 times)
Authors: Ömer Depren, Mustafa Tevfik Kartal, Serpil Kılıç Depren. Financial Innovation, 2021, No. 1, pp. 942-961.
Some countries have announced national benchmark rates, while others have been preparing for the retirement of the London Interbank Offered Rate (LIBOR) at the end of 2021. Given that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. Three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered using daily data between December 28, 2018, and December 31, 2020, with machine learning algorithms and ordinary least squares (OLS). The empirical results show that (1) the most significant determinant is the amount of securities bought by central banks; (2) country-level macroeconomic factors have a higher impact, global factors are less important, and the pandemic does not have a significant effect; and (3) Random Forest is the most accurate prediction model. Acting on these findings can help support economic growth by keeping benchmark rates low.
Keywords: benchmark rate, determinants, machine learning algorithms, Turkey
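The study above finds Random Forest more accurate than OLS. A hedged sketch of that kind of comparison follows; the daily series here is synthetic with a deliberately nonlinear signal, not the TLREF data:

```python
# Hedged sketch: Random Forest vs. OLS on an invented nonlinear series,
# mirroring the study's model comparison (its data are not reproduced).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 9))    # stand-ins: 3 global + 5 macro + pandemic dummy
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # nonlinear target

X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)
rf_r2, ols_r2 = rf.score(X_te, y_te), ols.score(X_te, y_te)
print(f"RF R2 = {rf_r2:.2f}, OLS R2 = {ols_r2:.2f}")
```

Because the target contains a squared term, the tree ensemble captures structure the linear model cannot, which is the shape of the result the abstract reports.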
3. Adaptive Learning Rate Optimization BP Algorithm with Logarithmic Objective Function
Authors: 李春雨, 盛昭瀚. Journal of Southeast University (English Edition), EI CAS, 1997, No. 1, pp. 47-51.
This paper presents an improved BP algorithm that reduces the amount of computation by using a logarithmic objective function. The learning rate μ(k) for each iteration is determined by a dynamic optimization method to accelerate convergence. Since determining the learning rate in the proposed algorithm uses only the first-order derivatives already obtained in the standard BP algorithm (SBP), the computational and storage burden is comparable to that of SBP, while the convergence rate is remarkably accelerated. Computer simulations demonstrate the effectiveness of the proposed algorithm.
Keywords: BP algorithm, adaptive learning rate, optimization, fault diagnosis, logarithmic objective function
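The entry above adapts the learning rate per iteration from first-order information. The paper's exact dynamic-optimization rule is not reproduced here; as a stand-in, the sketch below uses the classic "bold driver" heuristic (grow the rate after a successful step, shrink it after a failed one) on a toy quadratic:

```python
# Hedged sketch: a per-iteration adaptive learning rate via the "bold driver"
# heuristic, substituting for the paper's mu(k) rule. Toy quadratic objective.
import numpy as np

def loss(w):                       # toy objective standing in for the log objective
    return 0.5 * np.dot(w, w)

w = np.array([3.0, -4.0])
mu, prev = 0.1, loss(w)
for _ in range(100):
    grad = w                       # gradient of 0.5*||w||^2 is w itself
    w_new = w - mu * grad
    cur = loss(w_new)
    if cur < prev:                 # good step: accept it and grow the rate
        w, prev, mu = w_new, cur, mu * 1.1
    else:                          # bad step: reject it and shrink the rate
        mu *= 0.5
print(f"final loss: {prev:.3e}")
```

The accept/reject test guarantees the loss never increases, so the rate can probe aggressively without destabilizing training.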
4. Fast Learning in Spiking Neural Networks by Learning Rate Adaptation (cited 2 times)
Authors: 方慧娟, 罗继亮, 王飞. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2012, No. 6, pp. 1219-1224.
To accelerate supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive-or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is combined with a momentum term, the two modifications balance each other beneficially to achieve rapid and steady convergence. Of the three methods, the delta-bar-delta rule performs best: with momentum it gives the fastest convergence, the most stable training, and the highest learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
Keywords: spiking neural networks, learning algorithm, learning rate adaptation, Tennessee Eastman process
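The delta-bar-delta rule the paper found best keeps a per-weight learning rate: grown additively when the current gradient agrees in sign with an exponential average of past gradients, shrunk multiplicatively when it disagrees. A hedged sketch on a toy quadratic (constants are illustrative, not the paper's):

```python
# Hedged sketch of the delta-bar-delta learning-rate adaptation rule,
# applied to minimising 0.5*||w - target||^2 rather than to an SNN.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
w = rng.normal(size=3)
lr = np.full(3, 0.05)                  # one learning rate per weight
bar = np.zeros(3)                      # exponentially averaged past gradient
kappa, phi, theta = 0.01, 0.5, 0.7     # grow step, shrink factor, averaging constant

for _ in range(200):
    grad = w - target                  # gradient of the toy quadratic
    agree = bar * grad                 # sign agreement with the averaged gradient
    lr = np.where(agree > 0, lr + kappa,          # agree: grow additively
         np.where(agree < 0, lr * phi, lr))       # disagree: shrink multiplicatively
    bar = (1 - theta) * grad + theta * bar
    w -= lr * grad
err = np.abs(w - target).max()
print(f"max |w - target| = {err:.3e}")
```

Additive growth with multiplicative decay is what makes the rule stable: a few sign flips quickly undo many cautious increases.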
5. Improved IChOA-Based Reinforcement Learning for Secrecy Rate Optimization in Smart Grid Communications
Authors: Mehrdad Shoeibi, Mohammad Mehdi Sharifi Nevisi, Sarvenaz Sadat Khatami, Diego Martín, Sepehr Soltani, Sina Aghakhani. Computers, Materials & Continua, SCIE EI, 2024, No. 11, pp. 2819-2843.
In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in the SG raises significant concerns about the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel reinforcement learning (RL) approach to strengthen secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, an improved chimp optimization algorithm (IChOA) is employed to update the RL parameters. By integrating IChOA into training, the RL agent is expected to learn more robust policies faster and with better convergence properties than with standard optimization algorithms, improving performance in complex SG environments where the agent's decisions must enhance both the security and the efficiency of the network. The proposed method (IChOA-RL) is compared with several state-of-the-art machine learning (ML) algorithms, including recurrent neural networks (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), support vector machines (SVM), the improved crow search algorithm (I-CSA), and the grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of the approach, showing significant improvements in secrecy capacity rates under various network conditions. IChOA-RL outperforms the other algorithms in the scalability of the NOMA communication system, accuracy, coefficient of determination (R²), root mean square error (RMSE), and convergence trend. On our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation set, together with the lowest RMSE (0.95), indicating very precise predictions with minimal error.
Keywords: smart grid communication, secrecy rate optimization, reinforcement learning, improved chimp optimization algorithm
6. Prediction of the Internal Corrosion Rate of Gas Storage Injection-Production Tubing Based on IAOA-KELM (cited 1 time)
Authors: 骆正山, 于瑶如, 骆济豪, 王小完. 《安全与环境学报》 (Journal of Safety and Environment), CAS CSCD, PKU Core, 2024, No. 3, pp. 971-977.
To predict the internal corrosion rate of injection-production tubing in underground gas storage, a model combining the Archimedes Optimization Algorithm (AOA) with a Kernel Extreme Learning Machine (KELM) is built to improve prediction accuracy. A good-point set is introduced, the density-reduction factor is improved, and the golden sine algorithm is used to narrow the search space and strengthen local exploitation; the resulting Improved Archimedes Optimization Algorithm (IAOA) then optimizes the KELM regularization coefficient (C) and kernel parameter (γ), yielding the IAOA-KELM corrosion-rate prediction model. The model was trained and tested in MATLAB on a corrosion dataset from injection-production tubing, and its prediction errors were compared with those of KELM, Particle Swarm Optimization (PSO)-KELM, and AOA-KELM. The results show that the IAOA-KELM predictions fit the measured values closely, with an RMSE of 0.65%, an MAE of 0.39%, and an R² of 99.83%, outperforming the other models. The model can therefore predict the internal corrosion rate of injection-production tubing more accurately, providing a reference for tubing operation and maintenance and for the health management of gas storage facilities.
Keywords: safety engineering, underground gas storage, injection-production tubing, kernel extreme learning machine, improved Archimedes optimization algorithm, corrosion rate
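The KELM in the entry above has a closed-form solution equivalent to kernel ridge regression, with output weights β = (K + I/C)⁻¹y; C and the RBF parameter γ are exactly the two hyperparameters the IAOA tunes. A hedged sketch on invented data (the corrosion features and target below are synthetic stand-ins):

```python
# Hedged sketch: KELM as closed-form kernel ridge regression with an RBF
# kernel. The IAOA hyperparameter search is not reproduced; C and gamma
# are fixed by hand here.
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * d2)

def kelm_fit_predict(X_tr, y_tr, X_te, C=10.0, gamma=0.5):
    K = rbf_kernel(X_tr, X_tr, gamma)
    beta = np.linalg.solve(K + np.eye(len(X_tr)) / C, y_tr)  # beta = (K + I/C)^-1 y
    return rbf_kernel(X_te, X_tr, gamma) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 3))     # invented stand-ins for corrosion features
y = X[:, 0] ** 2 + 0.3 * X[:, 1]         # toy "corrosion rate"
pred = kelm_fit_predict(X[:60], y[:60], X[60:])
rmse = float(np.sqrt(np.mean((pred - y[60:]) ** 2)))
print(f"RMSE = {rmse:.3f}")
```

Because training reduces to one linear solve, metaheuristics like IAOA can afford to evaluate many (C, γ) candidates cheaply.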
7. A Survey of Progress in Deep Learning Optimizers (cited 2 times)
Authors: 常禧龙, 梁琨, 李文涛. 《计算机工程与应用》 (Computer Engineering and Applications), CSCD, PKU Core, 2024, No. 7, pp. 1-12.
Optimizers are a key factor in the performance of deep learning models: by minimizing the loss function they drive the model parameters toward the true parameters and thereby improve performance. As large language models such as GPT have become the focus of natural language processing research, traditional optimizers centered on gradient descent have proved of limited benefit for large models, and adaptive moment estimation optimizers have emerged, clearly outperforming traditional optimizers in generalization ability and other respects. This survey analyzes the principles, strengths, and weaknesses of three families of optimizers: gradient descent, adaptive gradient, and adaptive moment estimation. The optimizers are applied within a Transformer architecture, with French-English translation as the evaluation benchmark, and their differences on this task are examined experimentally. The results show that adaptive moment estimation optimizers effectively improve model performance on machine translation. Future directions for optimizer research and application scenarios for specific tasks are also discussed.
Keywords: optimizer, machine translation, Transformer, deep learning, learning rate warm-up
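The adaptive moment estimation family the survey highlights is exemplified by the Adam update rule, sketched below on a toy quadratic rather than a Transformer:

```python
# Hedged sketch: the canonical Adam update (first/second moment EMAs with
# bias correction), representative of the adaptive moment estimation family.
import numpy as np

def adam_minimise(grad_fn, w, steps=500, lr=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    m = np.zeros_like(w)   # first-moment (mean) estimate
    v = np.zeros_like(w)   # second-moment (uncentred variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)      # bias correction for the zero init
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Minimise (w - 3)^2, whose gradient is 2*(w - 3).
w_star = adam_minimise(lambda w: 2 * (w - 3.0), np.array([0.0]))
print(w_star)
```

Dividing by the square root of the second-moment estimate gives each parameter its own effective step size, which is what distinguishes this family from plain gradient descent.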
8. A Fault Diagnosis Method for Complex Analog Circuits Based on an IMODA Adaptive Deep Belief Network
Authors: 巩彬, 安爱民, 石耀科, 杜先君. 《电子科技大学学报》 (Journal of University of Electronic Science and Technology of China), EI CAS CSCD, PKU Core, 2024, No. 3, pp. 327-344.
To address the long unsupervised pre-training time and poor diagnostic accuracy of traditional deep belief networks (DBNs), an analog-circuit fault diagnosis method based on an improved multi-objective dragonfly algorithm and adaptive deep belief network (IMODA-ADBN) is proposed. First, an adaptive learning rate based on whether successive parameter updates agree in direction is introduced to speed up network convergence. Second, because the BP algorithm used in the supervised fine-tuning stage of a traditional DBN easily falls into local optima, the improved MODA algorithm replaces BP to raise classification accuracy. The IMODA algorithm adds a logistic chaotic map and opposition-based jumping to obtain Pareto-optimal solutions, increasing diversity and improving performance. The algorithm is tested on seven multi-objective mathematical benchmark problems and compared with three metaheuristics (MODA, MOPSO, and NSGA-II), demonstrating the stability of the IMODA-ADBN model. Finally, IMODA-ADBN is applied to a diagnosis experiment on a two-stage four-op-amp biquad low-pass filter. The results show that the method converges quickly while maintaining classification accuracy, achieves a higher diagnosis rate, and can classify and locate difficult faults.
Keywords: analog circuit, MODA algorithm, adaptive learning rate, deep belief network, fault diagnosis
9. An Adaptive Gradient Descent Optimization Algorithm for Training Neural Networks (cited 3 times)
Authors: 阮乐笑. 《哈尔滨商业大学学报(自然科学版)》 (Journal of Harbin University of Commerce, Natural Sciences Edition), CAS, 2024, No. 1, pp. 25-31.
As neural networks grow, model training becomes increasingly difficult. To address this, a new adaptive optimization algorithm, Adaboundinject, is proposed. Starting from Adabound, an improved variant of Adam, dynamic learning-rate bounds are introduced to achieve a smooth transition from the adaptive algorithm to stochastic gradient descent (SGD). To avoid overshooting the minimum and reduce oscillation near it, the first moment is injected into Adabound's second moment, using short-term parameter updates as weights to control the updates. In the convex setting, the convergence of Adaboundinject is proved theoretically; in the non-convex setting, multiple experiments on different neural network models show that the algorithm outperforms other adaptive algorithms. The results indicate that Adaboundinject has substantial application value in deep learning optimization, effectively improving the efficiency and accuracy of model training.
Keywords: deep learning, adaptive optimization algorithm, neural network model, image recognition, dynamic learning-rate bounds, short-term parameter updates
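The dynamic learning-rate bounds that Adaboundinject inherits from AdaBound clip the per-parameter Adam step size between bounds that tighten toward a fixed SGD rate as training proceeds. The sketch below is an approximation under assumed bound schedules (the exact schedules and the paper's first-moment injection are not reproduced):

```python
# Hedged sketch: AdaBound-style clipping of the per-parameter step size.
# The bound schedules below are illustrative assumptions, not the paper's.
import numpy as np

def bounds(t, final_lr=0.1, gamma=1e-3):
    lower = final_lr * (1 - 1 / (gamma * t + 1))   # rises toward final_lr
    upper = final_lr * (1 + 1 / (gamma * t))       # falls toward final_lr
    return lower, upper

def adabound_step(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g                      # first-moment EMA
    v = b2 * v + (1 - b2) * g * g                  # second-moment EMA
    lo, hi = bounds(t)
    step = np.clip(lr / (np.sqrt(v) + eps), lo, hi)  # clipped per-parameter rate
    return w - step * m, m, v

# Minimise (w - 1)^2, gradient 2*(w - 1).
w, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    w, m, v = adabound_step(w, 2 * (w - 1.0), m, v, t)
print(w)
```

Early in training the bounds are loose and the update behaves like Adam; as they tighten, the update smoothly approaches SGD with a fixed rate, which is the transition the abstract describes.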
10. An Application of a Machine Learning Algorithm in a Corn Harvester Control System
Authors: 孙沛. 《农机化研究》 (Journal of Agricultural Mechanization Research), PKU Core, 2024, No. 3, pp. 190-194.
To further improve the operating performance of corn harvesters in China and reduce their grain loss rate, the harvester control system is optimized using a machine learning approach. Starting from the structure of the control system, a machine learning algorithm with clearly defined objects and objectives is chosen as the core, a control model for the system is built, and the hardware is configured so the algorithm can meet the functional requirements of harvesting; field trials of the optimized harvester are then carried out. The results show that, compared with the system before optimization, control precision improves by 7.92%, the grain loss rate falls by 2.32% while the stalk-cutting pass rate and husk-removal rate are maintained, overall operating efficiency rises by 5.12%, and harvesting control precision and grain integrity are both improved.
Keywords: corn harvester, machine learning algorithm, control precision, grain loss rate, intelligent optimization
11. A Privacy-Preserving Algorithm Based on Dynamic Learning-Rate Bounds
Authors: 钱振. 《哈尔滨商业大学学报(自然科学版)》 (Journal of Harbin University of Commerce, Natural Sciences Edition), CAS, 2024, No. 2, pp. 186-192.
Deep learning optimization algorithms can leak private information during training, and convolutional neural networks incur heavy memory overhead in private computation because per-sample gradients must be computed. To address these problems, a differentially private dynamic learning-rate-bound algorithm with mixed ghost clipping is proposed. Combining the AdaBound optimizer with differential privacy mitigates the extreme learning rates and instability seen during training and reduces the impact of injected noise on convergence during backpropagation. Mixed ghost clipping on the convolutional layers reduces the cost of computing per-sample gradients directly, allowing differentially private models to be trained efficiently. Simulation experiments comparing the method with classical differential privacy algorithms show that it achieves higher accuracy under the same privacy budget, better overall performance, and stronger privacy protection for the model.
Keywords: differential privacy, deep learning, stochastic gradient descent, image classification, adaptive algorithm, learning-rate clipping
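The differentially private step underlying the entry above clips each per-sample gradient to a norm bound C and adds Gaussian noise before averaging. A minimal hedged sketch (the AdaBound coupling and the mixed-ghost-clipping memory trick are not reproduced; data and constants are invented):

```python
# Hedged sketch: one DP-SGD step with per-sample clipping and Gaussian noise.
import numpy as np

def dp_sgd_step(w, per_sample_grads, lr=0.1, C=1.0, sigma=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / max(norm, 1e-12)))  # clip to norm C
    # Sum clipped gradients, add calibrated Gaussian noise, then average.
    noisy = np.sum(clipped, axis=0) + sigma * C * rng.normal(size=w.shape)
    return w - lr * noisy / len(per_sample_grads)

w = np.array([2.0, -1.0])
grads = [2 * (w - np.array([0.5, 0.5]))
         + 0.1 * np.random.default_rng(i).normal(size=2)
         for i in range(8)]                      # invented per-sample gradients
w_next = dp_sgd_step(w, grads)
print(w_next)
```

Clipping bounds each sample's influence on the update, which is what lets the added noise be calibrated to a privacy budget; it is also why per-sample gradients (and tricks like ghost clipping to avoid materializing them) matter.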
12. A State-of-Health Estimation Method for In-Vehicle Power Batteries Based on a RUN-GRU-attention Model
Authors: 刘定宏, 董文楷, 李召阳, 张红烛, 齐昕. 《储能科学与技术》 (Energy Storage Science and Technology), CAS CSCD, PKU Core, 2024, No. 9, pp. 3042-3058.
State-of-health (SOH) estimation for in-vehicle power batteries suffers from poor data quality, inconsistent operating conditions, and low data utilization. This paper builds a multi-source feature extraction and SOH estimation model for stepped-current-rate charging conditions. First, independent charging segments are obtained through data cleaning, segmentation, and filling. Second, capacity is computed per current stage, raising raw-data utilization to 96.9% and reducing error by more than 48.1% compared with computing capacity over a restricted SOC range alone. Health factors are then extracted along two dimensions, current operating conditions and historical accumulation: current-condition features are doubly screened via grey relational analysis and perturbation-based random forest importance, while redundancy in the historical features is reduced using Spearman correlation analysis and kernel principal component analysis (KPCA). Finally, an attention mechanism and the Runge-Kutta optimizer (RUN) are added to a gated recurrent unit (GRU) network to build the RUN-GRU-attention model. Comparison with five existing models on real-vehicle datasets shows that, for test samples containing either single-stage or multi-stage currents, the optimized model is more accurate, with error no higher than 0.0086, exhibits good error convergence as charging cycles accumulate, and can effectively predict SOH trends.
Keywords: in-vehicle power battery, stepped-rate charging, state-of-health estimation, multi-source feature extraction, Runge-Kutta optimizer, machine learning
13. An Adaptive Gradient Descent Optimization Algorithm with a Correction Term
Authors: 黄建勇, 周跃进. 《哈尔滨商业大学学报(自然科学版)》 (Journal of Harbin University of Commerce, Natural Sciences Edition), CAS, 2024, No. 2, pp. 200-207.
Batch-based stochastic gradient descent (SGD) optimization algorithms are commonly used to train convolutional neural networks (CNNs), and their quality directly affects how quickly the network converges. In recent years adaptive gradient descent algorithms such as Adam and RAdam have been proposed, but they exploit neither the gradient norms of past iterations nor the second moment of the gradient within random subsamples, which slows convergence and destabilizes performance. Combining historical gradient norms with the second moment of the gradient, a new adaptive gradient descent optimization algorithm, normEve, is proposed. Simulation experiments show that the new algorithm effectively speeds up convergence by combining these two quantities, and a practical comparison with Adam shows that the new algorithm achieves higher test accuracy, confirming its advantage.
Keywords: gradient descent, neural network, gradient norm, adaptive learning rate, classification, optimization algorithm
14. Complexity Analysis of an Accelerated Stochastic Recursive Gradient Descent Algorithm
Authors: 费经泰, 程一元, 查星星. 《萍乡学院学报》 (Journal of Pingxiang University), 2024, No. 3, pp. 5-11.
To further reduce the complexity of the classical stochastic recursive gradient descent algorithm (SARAH), the authors use an inner-loop doubling technique to propose a new algorithm, Epoch-Doubling-SARAH. By constructing a Lyapunov function, they prove that Epoch-Doubling-SARAH converges linearly under non-strongly-convex conditions and derive a complexity of O(1/ε + n log(1/ε)), which improves on the complexity of SARAH. Comparison experiments between Epoch-Doubling-SARAH and SARAH on the MNIST and Mushroom datasets show that the new algorithm converges faster, supporting the correctness of the theoretical analysis.
Keywords: machine learning, stochastic recursive gradient, descent algorithm, loop doubling, convergence rate, algorithm complexity
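SARAH anchors each outer epoch with a full gradient and then updates a recursive gradient estimate over inner steps; the entry above additionally doubles the inner-loop length each epoch. A hedged sketch on a toy least-squares problem (step size, loop counts, and data are invented):

```python
# Hedged sketch: SARAH's recursive gradient estimator with inner-loop
# doubling, on least squares f(w) = (1/n) * sum of 0.5*(a_i.w - b_i)^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 4))
b = rng.normal(size=64)
w = np.zeros(4)
lr, inner = 0.05, 4

def grad_i(w, i):                    # gradient of the i-th component f_i
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):                    # exact gradient of the average loss
    return A.T @ (A @ w - b) / len(b)

for epoch in range(6):
    v = full_grad(w)                 # anchor each epoch with the full gradient
    w_prev = w.copy()
    w = w - lr * v
    for _ in range(inner):
        i = rng.integers(len(b))
        v = grad_i(w, i) - grad_i(w_prev, i) + v   # SARAH recursive estimate
        w_prev = w.copy()
        w = w - lr * v
    inner *= 2                       # epoch-doubling: longer inner loops later
print(np.linalg.norm(full_grad(w)))
```

Doubling the inner loop spends more cheap stochastic steps per expensive full-gradient anchor as the iterate nears the optimum, which is the source of the improved complexity bound.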
15. Dynamic Scheduling Prediction for Ship Design Tasks
Authors: 李敬花, 杨易, 何沁园. 《造船技术》 (Shipbuilding Technology), 2024, No. 5, pp. 8-15.
Random new design tasks often arise during ship design, complicating the construction of scheduling plans for ship design tasks. Based on the back propagation (BP) algorithm, a Momentum and Self-Adaptive Learning Rate Back Propagation (MSBP) algorithm is introduced to predict whether a random new design task can be inserted into an existing scheduling plan, addressing the Dynamic Scheduling of Ship Design Tasks (DSSDT) problem under disturbances. To shrink the solution space and ease training, attributes with a major influence on scheduling outcomes are selected as the feature values of the MSBP algorithm. An MSBP model is built on the extracted features and trained on a large dataset. Comparative experiments show that MSBP is more accurate than the unimproved BP algorithm, and that the schedulability of a random new design task is most closely tied to its priority.
Keywords: ship, design task, random new design task, scheduling prediction, dynamic scheduling of ship design tasks, back propagation algorithm, momentum and self-adaptive learning rate back propagation algorithm
16. Online Regularized Generalized Gradient Classification Algorithms
Authors: Leilei Zhang, Baohui Sheng, Jianli Wang. Analysis in Theory and Applications, 2010, No. 3, pp. 278-300.
This paper considers online classification learning algorithms for regularized classification schemes with generalized gradient. A novel capacity-independent approach is presented. It verifies strong convergence and yields satisfactory convergence rates for polynomially decaying step sizes. Compared with gradient schemes, this algorithm needs fewer additional assumptions on the loss function and derives a stronger result with respect to the choice of step sizes and regularization parameters.
Keywords: online learning algorithm, reproducing kernel Hilbert space, generalized gradient, Clarke's directional derivative, learning rate
17. Research on three-step accelerated gradient algorithm in deep learning
Authors: Yongqiang Lian, Yincai Tang, Shirong Zhou. Statistical Theory and Related Fields, 2022, No. 1, pp. 40-57.
The gradient descent (GD) algorithm is the most widely used optimization method for training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and Nesterov accelerated gradient (NAG), we give the convergence of these algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Building on the convergence properties of the quadratic function, the two sister sequences of NAG's iteration, and parallel tangent methods in neural networks, the three-step accelerated gradient (TAG) algorithm is proposed, which has three sequences rather than two. To illustrate its performance, we compare the proposed algorithm with the three others on a quadratic function, high-dimensional quadratic functions, and a nonquadratic function. We then combine the TAG algorithm with the backpropagation algorithm and stochastic gradient descent in deep learning. To facilitate the proposed algorithms, we rewrite the R package 'neuralnet' and extend it to 'supneuralnet', which includes all the deep learning algorithms in this paper. Finally, four case studies show that our algorithms are superior to the alternatives.
Keywords: accelerated algorithm, backpropagation, deep learning, learning rate, momentum, stochastic gradient descent
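One of the two-sequence methods the TAG algorithm builds on is Nesterov accelerated gradient, whose defining feature is evaluating the gradient at a look-ahead point rather than at the current iterate. A minimal sketch on a toy quadratic (constants are illustrative; the TAG algorithm itself is not reproduced):

```python
# Hedged sketch: Nesterov accelerated gradient (NAG) on a toy quadratic.
import numpy as np

def nag(grad_fn, w, steps=200, lr=0.05, momentum=0.9):
    v = np.zeros_like(w)
    for _ in range(steps):
        lookahead = w + momentum * v          # peek ahead along the velocity
        v = momentum * v - lr * grad_fn(lookahead)
        w = w + v
    return w

# Minimise ||w - (1, -1)||^2, gradient 2*(w - target).
w_opt = nag(lambda w: 2 * (w - np.array([1.0, -1.0])), np.zeros(2))
print(w_opt)
```

Evaluating the gradient at the look-ahead point gives the velocity a partial correction before it is applied, which is what distinguishes NAG from Polyak's momentum and motivates adding a third sequence in TAG.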
18. Application of Evolutionary Algorithm for Optimal Directional Overcurrent Relay Coordination
Authors: N. M. Stenane, K. A. Folly. Journal of Computer and Communications, 2014, No. 9, pp. 103-111.
In this paper, two evolutionary algorithms (EAs), an improved genetic algorithm (GA) and population-based incremental learning (PBIL), are applied to the optimal coordination of directional overcurrent relays in an interconnected power system network. The coordination problem is formulated as an optimization problem solved via the improved GA and PBIL. The simulation results obtained with the improved GA are compared with those obtained with PBIL, and show that the improved GA proposed in this paper performs better than PBIL.
Keywords: evolutionary algorithms, GA, learning rate, optimal relay coordination, PBIL
19. Improving the accuracy of heart disease diagnosis with an augmented back propagation algorithm
Authors: 颜红梅. Journal of Chongqing University, CAS, 2003, No. 1, pp. 31-34.
A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database of 352 samples is used for training and testing, and performance is assessed by cross-validation. As the basic BP algorithm is improved step by step, the convergence speed and classification accuracy of the network increase, and the system shows great promise for supporting heart disease diagnosis.
Keywords: multilayer perceptron, back propagation algorithm, heart disease, momentum term, adaptive learning rate, forgetting mechanism, conjugate gradient method
20. Effect of a High-Rate Saline Flush Combined with High-Strength Deep Learning Image Reconstruction on "Triple-Low" Head and Neck CTA Image Quality (cited 2 times)
Authors: 樊敏, 袁元, 程巍, 廖凯, 杨行, 王思梦, 李真林. 《中国医疗设备》 (China Medical Devices), 2023, No. 7, pp. 90-95, 102.
Objective: To investigate the effect of a high-rate saline flush combined with high-strength deep learning image reconstruction (DLIR-H) on "triple-low" (low tube voltage, low contrast dose, low contrast injection rate) head and neck CT angiography (CTA) image quality. Methods: Ninety patients undergoing head and neck CTA at our hospital were prospectively enrolled and randomized into groups A, B, and C of 30 each. Group A received 50 mL of contrast at 4.5 mL/s and 40 mL of saline at 3.0 mL/s, scanned at 120 kVp and reconstructed with 60% adaptive statistical iterative reconstruction (ASIR-V). Group B received 30 mL of contrast at 3.0 mL/s and 40 mL of saline at 3.0 mL/s, scanned at 80 kVp with 60% ASIR-V. Group C received 30 mL of contrast at 3.0 mL/s and 40 mL of saline at 5.0 mL/s, scanned at 80 kVp and reconstructed with DLIR-H. CT values, image noise (standard deviation, SD), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), CT dose index volume (CTDIvol), dose-length product, and subjective image-quality scores were compared across the three groups. Results: CT values, SD, SNR, and CNR differed significantly among the groups (P<0.001). Group C had a lower CT value than groups A and B in the superior vena cava but higher CT values, SNR, and CNR in all other target vessels. Group C's SD values were lower than group B's (P<0.05) and not significantly different from group A's (P>0.05). Compared with group A, group C reduced CTDIvol, effective radiation dose, and contrast dose by 55%, 54%, and 40%, respectively. Subjective scores differed among the groups (P<0.001), with no significant difference between groups C and A (P>0.05). Conclusion: Under the triple-low protocol, a high-rate saline flush combined with DLIR-H yields head and neck CTA image quality comparable to the conventional-dose protocol while markedly reducing radiation dose, contrast dose, and contrast injection rate.
Keywords: saline, injection rate, low dose, CT angiography, high-strength deep learning image reconstruction