A groundbreaking method is introduced that leverages machine learning algorithms to predict the success rates of science fiction films. In the film industry, extensive research and accurate forecasting are vital to anticipating a movie's success prior to its debut. Our study aims to harness available data to estimate a film's early success rate. With the vast resources offered by the internet, we can access a plethora of movie-related information, including actors, directors, critic reviews, user reviews, ratings, writers, budgets, genres, Facebook likes, YouTube views for movie trailers, and Twitter followers. The first few weeks of a film's release are crucial in determining its fate, and online reviews and film evaluations profoundly impact its opening-week earnings. Hence, our research employs supervised machine learning techniques to predict a film's success. The Internet Movie Database (IMDb) is a comprehensive data repository for nearly all movies. A robust predictive classification approach is developed by employing various machine learning algorithms, such as fine, medium, coarse, cosine, cubic, and weighted KNN. To determine the best model, the performance of each feature was evaluated based on composite metrics. Moreover, the significant influence of social media platforms, including Twitter, Instagram, and Facebook, on shaping individuals' opinions was recognized. A hybrid success-rating prediction model is obtained by integrating the proposed prediction models with sentiment analysis from the available platforms. The findings of this study demonstrate that the chosen algorithms offer more precise estimations, faster execution times, and higher accuracy rates than previous research. By integrating the features of existing prediction models with social media sentiment analysis models, the proposed approach provides a remarkably accurate prediction of a movie's success. This can help movie producers and marketers anticipate a film's performance before its release and tailor their promotional activities accordingly. Furthermore, the research lays the foundation for developing even more accurate prediction models, given the ever-increasing role of social media platforms in shaping individuals' opinions. In conclusion, this study showcases the potential of machine learning algorithms in predicting the success rate of science fiction films, opening new avenues for the film industry.
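As a rough illustration of the KNN family named above, the sketch below compares fine, medium, coarse, cosine, cubic, and weighted KNN variants with scikit-learn. The mapping of these names to specific neighbor counts and distance metrics mirrors common classification-learner presets and is an assumption, as are the synthetic feature matrix and "success" labels; this is not the paper's pipeline.

```python
# Sketch only: approximating fine/medium/coarse/cosine/cubic/weighted KNN with
# scikit-learn. Features (budget, ratings, social-media counts, ...) and labels
# are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                     # e.g. budget, IMDb rating, likes, trailer views, ...
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # synthetic "hit / flop" label

variants = {
    "fine":     KNeighborsClassifier(n_neighbors=1),
    "medium":   KNeighborsClassifier(n_neighbors=10),
    "coarse":   KNeighborsClassifier(n_neighbors=100),
    "cosine":   KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "cubic":    KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
    "weighted": KNeighborsClassifier(n_neighbors=10, weights="distance"),
}

for name, knn in variants.items():
    model = make_pipeline(StandardScaler(), knn)   # scale features before distance computations
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:8s} KNN  cv accuracy = {acc:.3f}")
```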
Some countries have announced national benchmark rates, while others have been responding to the retirement of the London Interbank Offered Rate at the end of 2021. Given that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. Three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered using daily data between December 28, 2018, and December 31, 2020, with machine learning algorithms and Ordinary Least Squares. The empirical results show that (1) the most significant determinant is the amount of securities bought by Central Banks; (2) country-level macroeconomic factors have a higher impact, global factors are less important, and the pandemic does not have a significant effect; and (3) Random Forest is the most accurate prediction model. Acting on these findings can help support economic growth by keeping benchmark rates low.
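A minimal sketch of the kind of OLS-versus-Random-Forest comparison described here is shown below. The predictor names, the synthetic daily series, and the time-series cross-validation scheme are illustrative stand-ins, not the study's actual variables or data.

```python
# Sketch only: contrasting OLS with a Random Forest on daily TLREF-style data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(1)
n = 500  # roughly two years of daily observations
X = pd.DataFrame({
    "cb_securities": rng.normal(size=n).cumsum(),   # securities bought by the central bank (toy)
    "usd_try":       rng.normal(size=n).cumsum(),
    "cds_spread":    rng.normal(size=n).cumsum(),
    "vix":           rng.normal(size=n),
    "covid_dummy":   (np.arange(n) > 300).astype(float),
})
y = 0.8 * X["cb_securities"] + 0.2 * X["usd_try"] + rng.normal(scale=0.5, size=n)

cv = TimeSeriesSplit(n_splits=5)                    # respect the temporal ordering of daily data
for name, model in [("OLS", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(n_estimators=300, random_state=1))]:
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"{name:13s} out-of-sample R^2 = {r2:.3f}")

rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X, y)
print(dict(zip(X.columns, rf.feature_importances_.round(3))))  # which determinant matters most
```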
This paper presents an improved BP algorithm. The approach reduces the amount of computation by using a logarithmic objective function. The learning rate μ(k) for each iteration is determined by a dynamic optimization method to accelerate convergence. Since the determination of the learning rate in the proposed BP algorithm uses only the first-order derivatives already obtained in the standard BP algorithm (SBP), the computational and storage burden is similar to that of the SBP algorithm, while the convergence rate is remarkably accelerated. Computer simulations demonstrate the effectiveness of the proposed algorithm.
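To make the idea of a per-iteration learning rate concrete, the sketch below trains a one-hidden-layer network on a logarithmic (cross-entropy) objective and picks μ(k) each iteration by a simple backtracking rule along the already-computed gradient. The backtracking rule is a generic stand-in for the paper's dynamic optimization method, and the network sizes and data are illustrative.

```python
# Sketch only: BP with a logarithmic objective and a per-iteration step size mu(k).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(float).reshape(-1, 1)

W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(params):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    loss = -np.mean(y * np.log(P + 1e-12) + (1 - y) * np.log(1 - P + 1e-12))
    return H, P, loss

def gradients(params):
    W1, b1, W2, b2 = params
    H, P, loss = forward(params)
    dZ2 = (P - y) / len(X)                 # gradient of the log loss w.r.t. the output pre-activation
    dW2, db2 = H.T @ dZ2, dZ2.sum(0)
    dH = dZ2 @ W2.T * (1 - H ** 2)         # backpropagate through tanh
    dW1, db1 = X.T @ dH, dH.sum(0)
    return (dW1, db1, dW2, db2), loss

params = [W1, b1, W2, b2]
for k in range(200):
    grads, loss = gradients(params)
    mu = 1.0                               # "dynamic" step: shrink until the objective decreases
    while True:
        trial = [p - mu * g for p, g in zip(params, grads)]
        if forward(trial)[2] < loss or mu < 1e-6:
            break
        mu *= 0.5
    params = trial
print("final loss:", round(gradients(params)[1], 4))
```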
For accelerating supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is combined with a momentum term, the two modifications balance each other in a beneficial way to accomplish rapid and steady convergence. Of the three learning rate adaptation methods, the delta-bar-delta rule performs best: the delta-bar-delta method with momentum has the fastest convergence rate, the most stable training process, and the highest learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
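The delta-bar-delta rule itself is independent of the network that supplies the gradients, so it can be shown on a generic weight vector. The sketch below applies it to a toy quadratic objective rather than SpikeProp; the constants κ, φ, θ and the objective are illustrative choices.

```python
# Sketch only: delta-bar-delta learning-rate adaptation (one rate per weight).
import numpy as np

def grad(w):                          # gradient of a toy objective ||w - target||^2
    return 2.0 * (w - np.array([1.0, -2.0, 0.5]))

w = np.zeros(3)
lr = np.full(3, 0.05)                 # individual learning rate per weight
bar_delta = np.zeros(3)               # exponentially averaged past gradient
kappa, phi, theta = 0.01, 0.5, 0.7    # additive increase, multiplicative decrease, averaging factor

for step in range(100):
    delta = grad(w)
    lr = np.where(delta * bar_delta > 0, lr + kappa, lr)    # same sign as history: grow the rate
    lr = np.where(delta * bar_delta < 0, lr * phi, lr)      # sign flip: shrink the rate
    w -= lr * delta
    bar_delta = (1 - theta) * delta + theta * bar_delta

print("weights after adaptation:", w.round(3))
```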
In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in the SG raises significant concerns regarding the confidentiality of critical control messages, especially when they are broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel approach based on reinforcement learning (RL) to strengthen secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL agent. By integrating the IChOA into the training process, the RL agent is expected to learn more robust policies faster and with better convergence properties than with standard optimization algorithms. This can lead to improved performance in complex SG environments, where the agent must make decisions that enhance the security and efficiency of the network. We compared the performance of the proposed method (IChOA-RL) with several state-of-the-art machine learning (ML) algorithms, including a recurrent neural network (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), a support vector machine (SVM), an improved crow search algorithm (I-CSA), and the grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of the approach compared with related work, showing significant improvements in secrecy capacity rates under various network conditions. The proposed IChOA-RL outperforms the other algorithms in several respects, including the scalability of the NOMA communication system, accuracy, coefficient of determination (R2), root mean square error (RMSE), and convergence trend. On our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation set, accompanied by the lowest RMSE (0.95), indicating precise predictions with minimal error.
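The core idea of replacing gradient-based updates of a policy's fully connected layer with a population-based optimizer can be sketched generically. The snippet below uses a plain random-perturbation search as a stand-in for IChOA (whose update rules are not reproduced here), and the "secrecy reward" is a toy surrogate rather than the paper's NOMA channel model.

```python
# Sketch only: a population-based optimizer (stand-in for IChOA) tuning the
# weights of a small fully connected policy layer against a toy reward.
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, pop_size = 6, 3, 20
states = rng.normal(size=(64, state_dim))          # fixed batch of toy channel-state observations

def reward(weights):
    W = weights.reshape(state_dim, action_dim)
    actions = np.tanh(states @ W)                  # deterministic policy output
    return float(np.mean(actions[:, 0] - 0.1 * np.sum(actions ** 2, axis=1)))  # surrogate objective

best = rng.normal(scale=0.1, size=state_dim * action_dim)
best_r = reward(best)
for generation in range(50):
    population = best + rng.normal(scale=0.05, size=(pop_size, best.size))  # perturb around the elite
    rewards = np.array([reward(p) for p in population])
    if rewards.max() > best_r:                     # keep the best candidate (elitism)
        best, best_r = population[rewards.argmax()].copy(), rewards.max()
print("best surrogate reward:", round(best_r, 3))
```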
State-of-health (SOH) assessment of power batteries in real-world vehicles suffers from poor data quality, inconsistent operating conditions, and low data utilization. This paper builds a multi-source feature extraction and SOH estimation model for the stepped-current-rate charging condition. First, independent charging segments are obtained through data cleaning, segmentation, and filling. Second, capacity is computed from the different current stages, raising raw-data utilization to 96.9% and reducing error by more than 48.1% compared with computing capacity over a single restricted SOC range. Next, multiple health indicators are extracted along two dimensions, the current operating condition and the historical accumulation: the current-condition features are doubly screened by grey relational analysis and perturbation-based random forest importance, while redundancy in the historical-accumulation features is reduced with Spearman correlation analysis and kernel principal component analysis (KPCA). Finally, an attention mechanism and the Runge-Kutta optimizer (RUN) are introduced into a gated recurrent unit (GRU) network to build a RUN-GRU-attention model. Compared with five existing models on a real-vehicle dataset, the optimized model achieves better estimation accuracy for test samples containing either single-stage or multi-stage currents, with error no higher than 0.0086, shows good error convergence as the number of charging cycles increases, and can effectively predict the SOH fluctuation trend.
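A minimal GRU-with-attention regressor in the spirit of the RUN-GRU-attention model is sketched below. Adam stands in for the Runge-Kutta optimizer (RUN), and the attention pooling, feature count, sequence length, and synthetic SOH targets are all assumptions for illustration.

```python
# Sketch only: GRU encoder + attention pooling for SOH regression (Adam as a
# stand-in for the RUN optimizer).
import torch
import torch.nn as nn

class GRUAttentionSOH(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)          # attention score per time step
        self.head = nn.Linear(hidden, 1)           # SOH estimate

    def forward(self, x):                          # x: (batch, time, features)
        h, _ = self.gru(x)
        alpha = torch.softmax(self.score(h), dim=1)
        context = (alpha * h).sum(dim=1)           # attention-weighted summary of the charging segment
        return self.head(context).squeeze(-1)

model = GRUAttentionSOH()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, 50, 8)                         # 16 charging segments, 50 steps, 8 health factors
soh = torch.rand(16) * 0.2 + 0.8                   # synthetic SOH targets in [0.8, 1.0]
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), soh)
    loss.backward()
    optimizer.step()
print("training MSE:", float(loss))
```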
Random new design tasks frequently arise during ship design, which complicates the formulation of ship design task scheduling plans. Based on the back propagation (BP) algorithm, a Momentum and Self-Adaptive Learning Rate Back Propagation (MSBP) algorithm is introduced to predict whether a random new design task can be added to an established ship design task schedule, thereby addressing the Dynamic Scheduling of Ship Design Tasks (DSSDT) problem under disturbances. To reduce the solution space and the training difficulty, attributes that strongly influence the scheduling result are selected as feature values for the MSBP algorithm. An MSBP model is built on the extracted features and trained on a large amount of data. Comparative experiments show that the MSBP algorithm is more accurate than the unimproved BP algorithm, and that the schedulability of a random new design task is most closely related to its priority.
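The combination of a momentum term with a self-adaptive global learning rate can be sketched on a simple schedulability classifier, as below. The single-layer model, the loss-based rate adaptation constants, and the synthetic "priority/workload" features are illustrative assumptions, not the MSBP network itself.

```python
# Sketch only: gradient training with a momentum term and a self-adaptive
# learning rate (grow while the loss falls, back off when it rises).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                       # e.g. priority, duration, resource load, ...
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(float)     # 1 = task can join the schedule (synthetic)

w, b = np.zeros(5), 0.0
vel_w, vel_b = np.zeros(5), 0.0
lr, momentum = 0.1, 0.9
prev_loss = np.inf

def loss_and_grad(w, b):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gw, gb = X.T @ (p - y) / len(X), np.mean(p - y)
    return loss, gw, gb

for epoch in range(200):
    loss, gw, gb = loss_and_grad(w, b)
    lr = lr * 1.05 if loss < prev_loss else lr * 0.7    # self-adaptive learning rate
    prev_loss = loss
    vel_w = momentum * vel_w - lr * gw                  # momentum term smooths the updates
    vel_b = momentum * vel_b - lr * gb
    w, b = w + vel_w, b + vel_b

print("final training loss:", round(loss_and_grad(w, b)[0], 4))
```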
This paper considers online classification learning algorithms for regularized classification schemes with a generalized gradient. A novel capacity-independent approach is presented. It establishes strong convergence of the algorithm and yields satisfactory convergence rates for polynomially decaying step sizes. Compared with gradient schemes, this algorithm requires fewer additional assumptions on the loss function and derives a stronger result with respect to the choice of step sizes and regularization parameters.
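The flavor of such an online regularized update with polynomially decaying step sizes can be illustrated with a hinge-loss subgradient as the generalized gradient, as below. The data stream, the regularization parameter λ, and the step-size schedule η_t = η₀ t^(−θ) are illustrative choices, not the paper's theoretical setting.

```python
# Sketch only: online regularized classification with a (sub)gradient step and
# a polynomially decaying step size.
import numpy as np

rng = np.random.default_rng(0)
d, lam, eta0, theta = 10, 0.01, 1.0, 0.6
w = np.zeros(d)

for t in range(1, 5001):                    # a stream of (x_t, y_t) pairs
    x = rng.normal(size=d)
    y = 1.0 if x[0] + 0.5 * x[1] > 0 else -1.0
    eta = eta0 * t ** (-theta)              # polynomially decaying step size
    margin = y * (w @ x)
    subgrad = -y * x if margin < 1 else np.zeros(d)   # generalized gradient of the hinge loss
    w -= eta * (subgrad + lam * w)          # regularized online update

print("learned direction (first two weights):", w[:2].round(3))
```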
The gradient descent (GD) algorithm is a widely used optimisation method for training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and Nesterov's accelerated gradient (NAG), we give the convergence of these algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Based on the convergence property of the quadratic function, the two sister sequences of NAG's iteration, and parallel tangent methods in neural networks, the three-step accelerated gradient (TAG) algorithm is proposed, which has three sequences rather than two sister sequences. To illustrate the performance of this algorithm, we compare it with the three other algorithms on a quadratic function, high-dimensional quadratic functions, and a nonquadratic function. We then combine the TAG algorithm with the backpropagation algorithm and the stochastic gradient descent algorithm in deep learning. To facilitate the proposed algorithms, we rewrite the R package 'neuralnet' and extend it to 'supneuralnet'. All deep learning algorithms in this paper are included in the 'supneuralnet' package. Finally, we show that our algorithms are superior to the other algorithms in four case studies.
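The three baseline methods the abstract builds on can be compared on the simple quadratic f(x) = 0.5·xᵀAx − bᵀx, as in the sketch below. Step size and momentum constants are illustrative, and the paper's own TAG update is deliberately not reproduced here.

```python
# Sketch only: GD, Polyak's momentum, and Nesterov's accelerated gradient on a quadratic.
import numpy as np

A = np.diag([1.0, 10.0, 100.0])            # ill-conditioned quadratic
b = np.array([1.0, 1.0, 1.0])
x_star = np.linalg.solve(A, b)             # exact minimizer
grad = lambda x: A @ x - b
eta, beta = 1.0 / 100.0, 0.9               # 1/L step size, momentum constant

def run(method, iters=300):
    x, v = np.zeros(3), np.zeros(3)
    for _ in range(iters):
        if method == "GD":
            x = x - eta * grad(x)
        elif method == "PM":               # heavy-ball: momentum on past steps
            v = beta * v - eta * grad(x)
            x = x + v
        elif method == "NAG":              # gradient evaluated at the look-ahead point
            v = beta * v - eta * grad(x + beta * v)
            x = x + v
    return np.linalg.norm(x - x_star)

for m in ("GD", "PM", "NAG"):
    print(m, "distance to optimum:", f"{run(m):.2e}")
```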
In this paper, two Evolutionary Algorithms (EAs), an improved Genetic Algorithm (GA) and the Population-Based Incremental Learning (PBIL) algorithm, are applied to the optimal coordination of directional overcurrent relays in an interconnected power system network. The problem of coordinating directional overcurrent relays is formulated as an optimization problem that is solved via the improved GA and PBIL. The simulation results obtained using the improved GA are compared with those obtained using PBIL. The results show that the improved GA proposed in this paper performs better than PBIL.
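For readers unfamiliar with PBIL, its core loop (sample a binary population from a probability vector, then shift the vector toward the best sample) is sketched below on a toy objective standing in for the relay-coordination cost. The bit length, learning rate, and surrogate cost are illustrative assumptions.

```python
# Sketch only: the core Population-Based Incremental Learning (PBIL) loop.
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, lr, generations = 16, 30, 0.1, 100

def cost(bits):
    # Toy surrogate: decode the bits to a setting in [0, 1) and penalise its distance from 0.37
    value = bits @ (2.0 ** -np.arange(1, n_bits + 1))
    return abs(value - 0.37)

prob = np.full(n_bits, 0.5)                           # probability vector over the bit string
for g in range(generations):
    population = (rng.random((pop_size, n_bits)) < prob).astype(float)  # sample candidates
    costs = np.array([cost(ind) for ind in population])
    best = population[costs.argmin()]
    prob = (1 - lr) * prob + lr * best                # shift the model toward the best sample

print("best cost in final generation:", round(costs.min(), 4))
```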
A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database consisting of 352 samples is used for training and testing the system. The performance of the system is assessed by cross-validation. It is found that as the basic BP algorithm is improved step by step, the convergence speed and the classification accuracy of the network are enhanced, and the system has great application prospects in supporting the diagnosis of heart diseases.
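A minimal cross-validated MLP with a momentum term and an adaptive learning rate can be expressed as below. The synthetic 352-sample feature matrix and five-class labels are placeholders for the heart-disease database, and the hidden-layer size and learning-rate settings are illustrative.

```python
# Sketch only: a multilayer perceptron with momentum and an adaptive learning
# rate, scored by cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(352, 20))                        # placeholder symptom / examination features
y = rng.integers(0, 5, size=352)                      # five heart-disease classes (synthetic)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(30,), solver="sgd",
                  momentum=0.9, learning_rate="adaptive",
                  learning_rate_init=0.05, max_iter=2000, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)             # cross-validated assessment
print("cross-validated accuracy:", scores.mean().round(3))
```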