Journal Articles
1,360 articles found
1. Prediction model for corrosion rate of low-alloy steels under atmospheric conditions using machine learning algorithms (Cited: 3)
Authors: Jingou Kuang, Zhilin Long. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 337-350 (14 pages).
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as the input, while the corrosion rate was the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme gradient boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach, and the dominant descriptors that affected the corrosion rate were filtered using the recursive feature elimination (RFE) and XGBoost methods. The established ML models exhibited better prediction performance and generalization ability via property transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property transformation model could effectively help with analyzing the corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
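As a rough illustration of the descriptor-filtering step described above (not the authors' code), the sketch below wraps recursive feature elimination around a boosted-tree regressor on synthetic data, with scikit-learn's GradientBoostingRegressor standing in for XGBoost:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
# Synthetic stand-in: 6 candidate descriptors, only 3 actually drive the target
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 4] + rng.normal(scale=0.1, size=200)

# Recursive feature elimination wrapped around a boosted-tree regressor,
# mirroring the RFE-plus-XGBoost filtering described in the abstract
selector = RFE(GradientBoostingRegressor(random_state=0), n_features_to_select=3)
selector.fit(X, y)
dominant = np.flatnonzero(selector.support_)
print(dominant)  # indices of the retained descriptors
```

In the paper's pipeline the retained descriptors would then be passed to SHAP for attribution; here they are simply printed.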
Keywords: machine learning; low-alloy steel; atmospheric corrosion prediction; corrosion rate; feature fusion
2. Machine learning applications in healthcare clinical practice and research
Authors: Nikolaos-Achilleas Arkoudis, Stavros P Papadakos. World Journal of Clinical Cases (SCIE), 2025, Issue 1, pp. 16-21 (6 pages).
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD- group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD+ group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
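The MLR-versus-ML comparison the editorial highlights can be mimicked on synthetic data (a hypothetical nonlinear outcome, not the study's cohort): a flexible learner should reach a lower estimation error than multiple linear regression when the true relationship is nonlinear.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical outcome: a strong linear "age" effect plus a nonlinear term
age = rng.uniform(20, 80, size=500)
other = rng.normal(size=500)
y = 120 - 0.8 * age + 10 * np.sin(3 * other) + rng.normal(scale=2, size=500)
X = np.column_stack([age, other])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlr_err = mean_absolute_error(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))
ml_err = mean_absolute_error(y_te, GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr).predict(X_te))
print(mlr_err, ml_err)  # the nonlinear learner should show the smaller error
```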
Keywords: machine learning; artificial intelligence; clinical practice; research; glomerular filtration rate; non-alcoholic fatty liver disease; medicine
3. Federated Learning Model for Auto Insurance Rate Setting Based on Tweedie Distribution (Cited: 1)
Authors: Tao Yin, Changgen Peng, Weijie Tan, Dequan Xu, Hanlin Tang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 1, pp. 827-843 (17 pages).
In the assessment of car insurance claims, the claim rate presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining the Tweedie regression model involves training on a centralized dataset; when the data is provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos. The algorithm keeps sensitive data local and uses privacy-preserving techniques to achieve intersection operations between the two parties holding the data. After determining which entities are shared, the participants train the model locally using the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption algorithms are introduced to exchange and update the intermediate model parameters to collaboratively complete the joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of the Tweedie regression model learned from centralized data, and outperform the Tweedie regression model learned independently by a single party.
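The Tweedie modeling step itself (before any federation) can be sketched with scikit-learn's TweedieRegressor on synthetic, zero-inflated claim data; the federated and homomorphic-encryption machinery of the paper is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(2)
# Synthetic claim data: many exact zeros plus a skewed positive tail,
# the shape that motivates a compound Poisson-gamma (Tweedie) model
X = rng.normal(size=(1000, 3))
claims = np.where(rng.random(1000) < 0.8, 0.0,
                  rng.gamma(shape=2.0, scale=np.exp(0.5 * X[:, 0])))

# power in (1, 2) selects the compound Poisson-gamma family; log link, GLM-style
model = TweedieRegressor(power=1.5, link="log", alpha=0.1, max_iter=1000)
model.fit(X, claims)
print(model.coef_)  # the first coefficient should come out clearly positive
```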
Keywords: rate setting; Tweedie distribution; generalized linear models; federated learning; homomorphic encryption
4. A performance-based hybrid deep learning model for predicting TBM advance rate using Attention-ResNet-LSTM (Cited: 1)
Authors: Sihao Yu, Zixin Zhang, Shuaifeng Wang, Xin Huang, Qinghua Lei. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 1, pp. 65-80 (16 pages).
The technology of the tunnel boring machine (TBM) has been widely applied for underground construction worldwide; however, how to keep the TBM tunneling process safe and efficient remains a major concern. Advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction, for which a reliable prediction helps optimize TBM performance. Here, we develop a hybrid neural network model, called Attention-ResNet-LSTM, for accurate prediction of the TBM advance rate. A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model. The evolutionary polynomial regression method is adopted to aid the selection of input parameters. The results of numerical experiments show that our Attention-ResNet-LSTM model outperforms other commonly used intelligent models, with a lower root mean square error and a lower mean absolute percentage error. Further, parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on the prediction accuracy. A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters. The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through complex ground with variable strata. Finally, data collected from the Baimang River Tunnel Project in Shenzhen, China are used to further test the generalization of our model. The results indicate that, compared to the conventional ResNet-LSTM model, our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
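One detail the abstract touches on, the sequence length of historical data, amounts to building sliding windows over the operational log before feeding a recurrent model. A minimal sketch with toy data:

```python
import numpy as np

def make_sequences(series, seq_len):
    """Stack sliding windows of past records as model inputs,
    with the next-step value (here, advance rate) as the target."""
    X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = series[seq_len:]
    return X, y

advance_rate = np.arange(10.0)      # toy stand-in for a TBM advance-rate log
X, y = make_sequences(advance_rate, seq_len=3)
print(X.shape, y.shape)             # (7, 3) (7,)
print(X[0], y[0])                   # [0. 1. 2.] 3.0
```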
Keywords: tunnel boring machine (TBM); advance rate; deep learning; Attention-ResNet-LSTM; evolutionary polynomial regression
5. Prediction of corrosion rate for friction stir processed WE43 alloy by combining PSO-based virtual sample generation and machine learning (Cited: 1)
Authors: Annayath Maqbool, Abdul Khalad, Noor Zaman Khan. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 1518-1528 (11 pages).
The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After undergoing friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for the WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data was generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG has led to an increase in the prediction accuracy of machine learning models. A sensitivity analysis was performed using SHapley Additive exPlanations to determine the key factors affecting the corrosion rate. The shoulder diameter had a significant impact in comparison to the traverse speed. A graphical user interface (GUI) has been created to predict the corrosion rate using the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
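The particle swarm optimizer underlying the virtual sample generation step can be sketched in a few lines; this is a generic PSO minimizing a test function, not the authors' VSG procedure:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the whole swarm is pulled toward the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive (personal best) + social (global best) pulls
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

best = pso_minimize(lambda p: np.sum((p - 1.0) ** 2), dim=3)
print(best)  # should converge near [1, 1, 1]
```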
Keywords: corrosion rate; friction stir processing; virtual sample generation; particle swarm optimization; machine learning; graphical user interface
6. Machine learning-based comparison of factors influencing estimated glomerular filtration rate in Chinese women with or without nonalcoholic fatty liver (Cited: 1)
Authors: I-Chien Chen, Lin-Ju Chou, Shih-Chen Huang, Ta-Wei Chu, Shang-Sen Lee. World Journal of Clinical Cases (SCIE), 2024, Issue 15, pp. 2506-2521 (16 pages).
BACKGROUND: The prevalence of non-alcoholic fatty liver (NAFLD) has increased recently. Subjects with NAFLD are known to have a higher chance of renal function impairment. Many past studies used traditional multiple linear regression (MLR) to identify risk factors for decreased estimated glomerular filtration rate (eGFR). However, medical research is increasingly relying on emerging machine learning (Mach-L) methods. The present study enrolled healthy women to identify factors affecting eGFR in subjects with and without NAFLD (NAFLD+, NAFLD-) and to rank their importance. AIM: To use three different Mach-L methods to identify key impact factors for eGFR in healthy women with and without NAFLD. METHODS: A total of 65,535 healthy female study participants were enrolled from the Taiwan MJ cohort, accounting for 32 independent variables including demographic, biochemistry, and lifestyle parameters, while eGFR was used as the dependent variable. Aside from MLR, three Mach-L methods were applied, including stochastic gradient boosting, eXtreme gradient boosting, and elastic net. Errors of estimation were used to define method accuracy, where a smaller degree of error indicated better model performance. RESULTS: Income, albumin, eGFR, high-density lipoprotein cholesterol, phosphorus, forced expiratory volume in one second (FEV1), and sleep time were all lower in the NAFLD+ group, while other factors were all significantly higher except for smoking area. Mach-L had lower estimation errors, thus outperforming MLR. In Model 1, age, uric acid (UA), FEV1, plasma calcium level (Ca), plasma albumin level (Alb), and total bilirubin were the most important factors in the NAFLD+ group, as opposed to age, UA, FEV1, Alb, lactic dehydrogenase (LDH), and Ca for the NAFLD- group. Given that the importance percentage of age was much higher than that of the 2nd most important factor, we built Model 2 by removing age. CONCLUSION: eGFR was lower in the NAFLD+ group compared to the NAFLD- group, with age being the most important impact factor in both groups of healthy Chinese women, followed by LDH, UA, FEV1, and Alb. However, for the NAFLD- group, TSH and SBP were the 5th and 6th most important factors, as opposed to Ca and BF in the NAFLD+ group.
Keywords: non-alcoholic fatty liver; estimated glomerular filtration rate; machine learning; Chinese women
7. Improved IChOA-Based Reinforcement Learning for Secrecy Rate Optimization in Smart Grid Communications
Authors: Mehrdad Shoeibi, Mohammad Mehdi Sharifi Nevisi, Sarvenaz Sadat Khatami, Diego Martín, Sepehr Soltani, Sina Aghakhani. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2819-2843 (25 pages).
In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in SG raises significant concerns regarding the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel approach based on reinforcement learning (RL) to fortify secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL. By integrating the IChOA into the training process, the RL agent is expected to learn more robust policies faster and with better convergence properties compared to standard optimization algorithms. This can lead to improved performance in complex SG environments, where the agent must make decisions that enhance the security and efficiency of the network. We compared the performance of our proposed method (IChOA-RL) with several state-of-the-art machine learning (ML) algorithms, including the recurrent neural network (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), support vector machine (SVM), improved crow search algorithm (I-CSA), and grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of our approach compared to the related works, showcasing significant improvements in secrecy capacity rates under various network conditions. The proposed IChOA-RL exhibits superior performance compared to other algorithms in various aspects, including the scalability of the NOMA communication system, accuracy, coefficient of determination (R2), root mean square error (RMSE), and convergence trend. For our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation dataset. This was accompanied by the lowest RMSE (0.95), indicating very precise predictions with minimal error.
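The quantity being optimized, the secrecy rate, is the positive gap between the legitimate channel's rate and the eavesdropper's; a minimal sketch (the link SNR values are purely illustrative):

```python
import numpy as np

def secrecy_rate(snr_legit, snr_eve):
    """Secrecy capacity in bits/s/Hz: the legitimate link's rate minus the
    eavesdropper's rate, floored at zero."""
    return max(np.log2(1 + snr_legit) - np.log2(1 + snr_eve), 0.0)

print(secrecy_rate(15.0, 3.0))  # log2(16) - log2(4) = 2.0 bits/s/Hz
print(secrecy_rate(1.0, 3.0))   # eavesdropper stronger -> 0.0
```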
Keywords: smart grid communication; secrecy rate optimization; reinforcement learning; improved chimp optimization algorithm
8. Application of machine learning in predicting the rate-dependent compressive strength of rocks (Cited: 9)
Authors: Mingdong Wei, Wenzhao Meng, Feng Dai, Wei Wu. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2022, Issue 5, pp. 1356-1365 (10 pages).
Accurate prediction of the compressive strength of rocks relies on the rate-dependent behaviors of rocks and the correlation among their geometrical, physical, and mechanical properties. However, these properties may not be easy to control in laboratory experiments, particularly in dynamic compression experiments. By training three machine learning models based on the support vector machine (SVM), backpropagation neural network (BPNN), and random forest (RF) algorithms, we isolated different input parameters, such as static compressive strength, P-wave velocity, specimen dimension, grain size, bulk density, and strain rate, to identify their importance in the strength prediction. Our results demonstrated that the RF algorithm shows a better performance than the other two algorithms. The strain rate is a key input parameter influencing the performance of these models, while the others (e.g. static compressive strength and P-wave velocity) are less important, as their roles can be compensated by alternative parameters. The results also revealed that the effect of specimen dimension on the rock strength can be overshadowed at high strain rates, while the effect on the dynamic increase factor (i.e. the ratio of dynamic to static compressive strength) becomes significant. The dynamic increase factors for different specimen dimensions bifurcate when the strain rate reaches a relatively high value, a clue to improve our understanding of the transitional behaviors of rocks from low to high strain rates.
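The dynamic increase factor discussed above is simply the ratio of dynamic to static compressive strength; a toy computation with hypothetical strength values (not the paper's data):

```python
import numpy as np

# Dynamic increase factor (DIF): dynamic strength / quasi-static strength,
# evaluated here at several strain rates with hypothetical measurements
static_strength = 120.0                              # MPa, quasi-static reference
strain_rate = np.array([1e-4, 1e0, 1e2])             # 1/s
dynamic_strength = np.array([120.0, 138.0, 192.0])   # MPa, illustrative values

dif = dynamic_strength / static_strength
print(dif)  # [1.   1.15 1.6 ]
```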
Keywords: machine learning; rock dynamics; compressive strength; strain rate
9. Enhanced Reconfigurable Intelligent Surface Assisted mmWave Communication: A Federated Learning (Cited: 7)
Authors: Lixin Li, Donghui Ma, Huan Ren, Dawei Wang, Xiao Tang, Wei Liang, Tong Bai. China Communications (SCIE, CSCD), 2020, Issue 10, pp. 115-128 (14 pages).
Reconfigurable intelligent surface (RIS) has been proposed as a potential solution to improve the coverage and spectrum efficiency of future wireless communication. However, the privacy of users' data is often ignored in previous works, such as the user's location information during channel estimation. In this paper, we propose a privacy-preserving design paradigm combining federated learning (FL) with RIS in the mmWave communication system. Based on FL, the local models are trained and encrypted using the private data managed on each local device. Following this, a global model is generated by aggregating them at the central server. The optimal model is trained to establish the mapping function between channel state information (CSI) and the RIS configuration matrix in order to maximize the achievable rate of the received signal. Simulation results demonstrate that the proposed scheme can effectively approach the theoretical value generated by centralized machine learning (ML), while protecting users' privacy.
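The aggregation step at the central server can be sketched as federated averaging, weighting each local model by its sample count (a generic FedAvg sketch, not the paper's RIS-specific training loop):

```python
import numpy as np

def fedavg(local_weights, n_samples):
    """Federated averaging: aggregate locally trained parameter vectors into
    a global model, weighting each client by its local sample count."""
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()
    return sum(wi * lw for wi, lw in zip(w, local_weights))

# Three hypothetical clients with different data volumes
locals_ = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_w = fedavg(locals_, n_samples=[10, 10, 20])
print(global_w)  # 0.25*[1,2] + 0.25*[3,4] + 0.5*[5,6] = [3.5 4.5]
```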
Keywords: reconfigurable intelligent surface; privacy; federated learning; achievable rate
10. Deep Learning Based Signal Detection for Quadrature Spatial Modulation System
Authors: Shu Dingyun, Peng Yuyang, Yue Ming, Fawaz AL-Hazemi, Mohammad Meraj Mirza. China Communications (SCIE, CSCD), 2024, Issue 10, pp. 78-85 (8 pages).
With the development of communication systems, modulation methods are becoming more and more diverse. Among them, quadrature spatial modulation (QSM) is considered a method with less capacity and high efficiency. In QSM, the traditional signal detection methods are sometimes unable to meet the actual requirement of low system complexity. Therefore, this paper proposes a signal detection scheme for QSM systems using deep learning to solve the complexity problem. Results from the simulations show that the bit error rate performance of the proposed deep learning-based detector is better than that of the zero-forcing (ZF) and minimum mean square error (MMSE) detectors, and similar to the maximum likelihood (ML) detector. Moreover, the proposed method requires less processing time than ZF, MMSE, and ML.
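The two linear baseline detectors mentioned above, zero-forcing and MMSE, can be sketched for a toy linear channel (a generic MIMO detection sketch with an assumed well-conditioned channel; the QSM-specific symbol mapping is not modeled):

```python
import numpy as np

rng = np.random.default_rng(3)
H = np.eye(4) + 0.1 * rng.normal(size=(4, 4))  # toy, well-conditioned channel
x = np.array([1.0, -1.0, 1.0, 1.0])            # transmitted BPSK-like symbols
noise_var = 0.05
y = H @ x + rng.normal(scale=np.sqrt(noise_var), size=4)

# Zero-forcing: invert the channel, ignoring noise
x_zf = np.linalg.pinv(H) @ y
# MMSE: regularize the inversion by the noise variance
x_mmse = np.linalg.solve(H.T @ H + noise_var * np.eye(4), H.T @ y)

print(np.sign(x_zf), np.sign(x_mmse))  # both should recover the symbol signs
```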
Keywords: bit error rate; complexity; deep learning; quadrature spatial modulation
11. Understanding the creep behaviors and mechanisms of Mg-Gd-Zn alloys via machine learning
Authors: Shuxia Ouyang, Xiaobing Hu, Qingfeng Wu, Jeong Ah Lee, Jae Heung Lee, Chenjin Zhang, Chunhui Wang, Hyoung Seop Kim, Guangyu Yang, Wanqi Jie. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, Issue 8, pp. 3281-3291 (11 pages).
Mg-Gd-Zn based alloys have better creep resistance than other Mg alloys and attract more attention at elevated temperatures. However, the multiple alloying elements and various heat treatment conditions, combined with complex microstructural evolution during creep tests, bring great challenges in understanding and predicting creep behaviors. In this study, we proposed to predict the creep properties and reveal the creep mechanisms of Mg-Gd-Zn based alloys by machine learning. On the one hand, the minimum creep rates were effectively predicted by using a support vector regression model. The complex and nonmonotonic effects of test temperature, test stress, alloying elements, and heat treatment conditions on the creep properties were revealed. On the other hand, the creep stress exponents and creep activation energies were calculated by machine learning to analyze the variation of creep mechanisms, based on which the constitutive equations of Mg-Gd-Zn based alloys were obtained. This study introduces an efficient method to comprehend creep behaviors through machine learning, offering valuable insights for the future design and selection of Mg alloys.
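The creep stress exponent mentioned above is conventionally the slope of log(minimum creep rate) versus log(stress) at fixed temperature; a sketch on noise-free toy data following a Norton power law (illustrative constants, not the alloys' measured values):

```python
import numpy as np

# Norton power law at fixed temperature: rate = A * sigma**n.
# The stress exponent n is the log-log slope and is commonly used to
# infer the dominant creep mechanism.
A, n_true = 1e-12, 5.0
sigma = np.array([50.0, 80.0, 120.0, 160.0])   # MPa
min_creep_rate = A * sigma ** n_true           # 1/s, noise-free toy data

n_fit, logA_fit = np.polyfit(np.log(sigma), np.log(min_creep_rate), 1)
print(n_fit)  # recovers the stress exponent, ~5.0
```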
Keywords: Mg-Gd-Zn based alloys; machine learning; creep rate; creep mechanism; constitutive equation
12. Fast Learning in Spiking Neural Networks by Learning Rate Adaptation (Cited: 2)
Authors: Fang Huijuan, Luo Jiliang, Wang Fei. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2012, Issue 6, pp. 1219-1224 (6 pages).
For accelerating the supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop the training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods are able to speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way to accomplish rapid and steady convergence. Of the three learning rate adaptation methods, the delta-bar-delta rule performs the best: the delta-bar-delta method with momentum has the fastest convergence rate, the greatest stability of the training process, and the maximum accuracy of network learning. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
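The best-performing method, the delta-bar-delta rule, can be sketched as a per-weight learning rate update (illustrative hyperparameters; not tied to the SpikeProp implementation): grow the rate additively when the current gradient agrees in sign with an exponential average of past gradients, shrink it multiplicatively when they disagree.

```python
import numpy as np

def delta_bar_delta_step(lr, grad, grad_bar, kappa=0.01, phi=0.1, theta=0.7):
    """One delta-bar-delta update for a vector of per-weight learning rates."""
    lr = np.where(grad * grad_bar > 0, lr + kappa, lr)        # steady sign: speed up
    lr = np.where(grad * grad_bar < 0, lr * (1 - phi), lr)    # oscillation: slow down
    grad_bar = (1 - theta) * grad + theta * grad_bar          # update gradient average
    return lr, grad_bar

lr = np.array([0.1, 0.1])
grad_bar = np.array([1.0, -1.0])
lr, grad_bar = delta_bar_delta_step(lr, np.array([0.5, 0.5]), grad_bar)
print(lr)  # first weight agrees in sign -> 0.11; second disagrees -> 0.09
```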
Keywords: spiking neural networks; learning algorithm; learning rate adaptation; Tennessee Eastman process
13. Deep Learning-Based ECG Classification for Arterial Fibrillation Detection
Authors: Muhammad Sohail Irshad, Tehreem Masood, Arfan Jaffar, Muhammad Rashid, Sheeraz Akram, Abeer Aljohani. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4805-4824 (20 pages).
The application of deep learning techniques in the medical field, specifically for Atrial Fibrillation (AFib) detection through Electrocardiogram (ECG) signals, has witnessed significant interest. Accurate and timely diagnosis increases the patient's chances of recovery. However, issues like overfitting and inconsistent accuracy across datasets remain challenges. In a quest to address these challenges, this study presents two prominent deep learning architectures, ResNet-50 and DenseNet-121, to evaluate their effectiveness in AFib detection. The aim was to create a robust detection mechanism that consistently performs well. Metrics such as loss, accuracy, precision, sensitivity, and Area Under the Curve (AUC) were utilized for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories. It demonstrated a lower loss (0.0315 and 0.0305) and superior accuracy (98.77% and 98.88%), precision (98.78% and 98.89%), and sensitivity (98.76% and 98.86%) for training and validation, respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assists future researchers in selecting suitable deep learning architectures for AFib detection. Moreover, the outcomes of this study are anticipated to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and early detection of AFib, thereby fostering improved patient care and outcomes.
Keywords: convolutional neural network; atrial fibrillation; area under curve; ECG; false positive rate; deep learning; classification
14. Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite Terrestrial Networks with Deep Reinforcement Learning
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. China Communications (SCIE, CSCD), 2024, Issue 2, pp. 49-58 (10 pages).
As the demands of massive connections and vast coverage rapidly grow in the next wireless communication networks, rate splitting multiple access (RSMA) is considered to be a promising new access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, a never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme can rule out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning; energy efficiency; hybrid satellite terrestrial networks; rate splitting multiple access; traffic offloading
15. Recent innovation in benchmark rates (BMR): evidence from influential factors on Turkish Lira Overnight Reference Interest Rate with machine learning algorithms (Cited: 2)
Authors: Ömer Depren, Mustafa Tevfik Kartal, Serpil Kılıç Depren. Financial Innovation, 2021, Issue 1, pp. 942-961 (20 pages).
Some countries have announced national benchmark rates, while others have been working on the recent trend in which the London Interbank Offered Rate will be retired at the end of 2021. Considering that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. In this context, three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered, using daily data between December 28, 2018 and December 31, 2020, with machine learning algorithms and ordinary least squares. The empirical results show that (1) the most significant determinant is the amount of securities bought by central banks; (2) country-level macroeconomic factors have a higher impact, whereas global factors are less important and the pandemic does not have a significant effect; and (3) Random Forest is the most accurate prediction model. Taking action by considering the study's findings can help support economic growth by achieving low-level benchmark rates.
Keywords: benchmark rate; determinants; machine learning algorithms; Turkey
16. Learning Rates of Kernel-Based Robust Classification (Cited: 1)
Authors: Shuhua Wang, Baohuai Sheng. Acta Mathematica Scientia (SCIE, CSCD), 2022, Issue 3, pp. 1173-1190 (18 pages).
This paper considers a robust kernel regularized classification algorithm with a non-convex loss function, which is proposed to alleviate the performance deterioration caused by outliers. A comparison relationship between the excess misclassification error and the excess generalization error is provided; from this, along with convex analysis theory, a kind of learning rate is derived. The results show that the performance of the classifier is affected by the outliers, and that the extent of the impact can be controlled by choosing the homotopy parameters properly.
Keywords: support vector machine; robust classification; quasiconvex loss function; learning rate; right-sided directional derivative
17. Choice of discount rate in reinforcement learning with long-delay rewards (Cited: 1)
Authors: Lin Xiangyang, Xing Qinghua, Liu Fuxian. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2022, Issue 2, pp. 381-392 (12 pages).
In the world, most successes are the results of long-term efforts. The reward of success is extremely high, but before that, a long-term investment process is required. People who are "myopic" only value short-term rewards and are unwilling to make early-stage investments, so they hardly ever achieve ultimate success and the corresponding high rewards. Similarly, for a reinforcement learning (RL) model with long-delay rewards, the discount rate determines the strength of the agent's "farsightedness". In order to enable the trained agent to make a chain of correct choices and succeed finally, the feasible region of the discount rate is first obtained through mathematical derivation in this paper; it satisfies the "farsightedness" requirement of the agent. Afterwards, in order to avoid the complicated problem of solving implicit equations in the process of choosing feasible solutions, a simple method is explored and verified by theoretical demonstration and mathematical experiments. Then, a series of RL experiments are designed and implemented to verify the validity of the theory. Finally, the model is extended from the finite process to the infinite process, and the validity of the extended model is verified by theories and experiments. The whole research not only reveals the significance of the discount rate, but also provides a theoretical basis as well as a practical method for the choice of discount rate in future research.
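The core trade-off can be illustrated numerically: whether an agent prefers one large, long-delay reward over a stream of small immediate ones depends on the discount rate (toy values, not the paper's model):

```python
import numpy as np

def prefers_delayed(gamma, delayed_reward, delay, per_step_reward):
    """Compare the discounted value of one large reward after `delay` steps
    against a stream of small immediate rewards over the same horizon."""
    v_delayed = gamma ** delay * delayed_reward
    v_myopic = sum(gamma ** t * per_step_reward for t in range(delay))
    return v_delayed > v_myopic

# A "farsighted" discount rate values the long-delay success...
print(prefers_delayed(0.99, 100.0, delay=50, per_step_reward=1.0))  # True
# ...while a "myopic" one does not
print(prefers_delayed(0.8, 100.0, delay=50, per_step_reward=1.0))   # False
```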
Keywords: reinforcement learning (RL); discount rate; long-delay reward; Q-learning; treasure-detecting model; feasible solution
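The feasible-region idea can be illustrated numerically. In this hypothetical two-choice setting (illustrative only, not the paper's treasure-detecting model), an agent chooses between a small immediate reward and a large reward delayed by several steps; whether the discounted return favors the delayed reward depends on the discount rate γ:

```python
def discounted_return(rewards, gamma):
    # G = sum over t of gamma**t * r_t, the standard discounted return
    return sum(gamma ** t * r for t, r in enumerate(rewards))

immediate = [1.0, 0.0, 0.0, 0.0, 0.0]   # reward 1 now
delayed   = [0.0, 0.0, 0.0, 0.0, 10.0]  # reward 10 after a 4-step delay

for gamma in (0.5, 0.9):
    farsighted = discounted_return(delayed, gamma) > discounted_return(immediate, gamma)
    print(f"gamma={gamma}: prefers delayed reward -> {farsighted}")
# gamma=0.5 is "myopic"  (0.5**4 * 10 = 0.625 < 1)
# gamma=0.9 is "farsighted" (0.9**4 * 10 = 6.561 > 1)
```

The boundary value of γ where the two returns are equal is exactly the kind of feasibility condition the paper derives: a trained agent only pursues the long-delay reward when γ lies inside the feasible region.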
Design of a Multi-Stage Ensemble Model for Thyroid Prediction Using Learning Approaches
18
Authors: M.L. Maruthi Prasad, R. Santhosh. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 1-13 (13 pages)
This research models an efficient thyroid prediction approach, addressing a significant health problem faced by the women's community. The major research problem is the lack of an automated model for earlier prediction, and some existing models fail to give good prediction accuracy. Here, a novel clinical decision support system is framed to support proper decisions in complex cases. The proposed framework follows multiple stages, each playing a substantial role in thyroid prediction: i) data acquisition, ii) outlier prediction, and iii) a multi-stage weight-based ensemble learning process (MS-WEL). The weighted analysis of the base classifier and the other classifier models helps bridge the gap encountered with any single classifier model: various classifiers are merged so that each handles the issues identified in the others, with the aim of enhancing the prediction rate. The proposed model provides superior outcomes with a good prediction rate. The simulation is done in the MATLAB 2020a environment and establishes a better trade-off than various existing approaches, achieving a prediction accuracy of 97.28% compared to other models.
Keywords: thyroid; machine learning; pre-processing; classification; prediction rate
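The weighted combination of base classifiers described above can be sketched as weighted hard voting. This is a generic sketch: the MS-WEL weighting scheme itself is not specified in the abstract, so the weights and predictions below are purely illustrative.

```python
import numpy as np

def weighted_vote(predictions, weights):
    # predictions: (n_classifiers, n_samples) array of hard class labels;
    # each classifier's vote counts in proportion to its weight
    preds = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    classes = np.unique(preds)
    # score[c, j] = total weight of classifiers predicting class c for sample j
    score = np.array([(weights[:, None] * (preds == c)).sum(axis=0)
                      for c in classes])
    return classes[np.argmax(score, axis=0)]

# three base classifiers, four samples: the two stronger classifiers
# overrule the weakest one wherever they agree
preds = [[1, 0, 1, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 0]]
weights = [0.4, 0.4, 0.2]
print(weighted_vote(preds, weights))  # [1 0 1 0]
```

The point of the weighting is visible in the last sample: the lone high-weight vote for class 1 (0.4) is outvoted by the combined 0.6 for class 0, so no single classifier dominates the ensemble.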
Machine Learning-based USD/PKR Exchange Rate Forecasting Using Sentiment Analysis of Twitter Data (cited: 1)
19
Authors: Samreen Naeem, Wali Khan Mashwani, Aqib Ali, M. Irfan Uddin, Marwan Mahmoud, Farrukh Jamal, Christophe Chesneau. Computers, Materials & Continua (SCIE, EI), 2021, Issue 6, pp. 3451-3461 (11 pages)
This study proposes a machine learning approach to forecasting currency exchange rates by applying sentiment analysis to messages on Twitter (tweets). A dataset of the exchange rates between the United States Dollar (USD) and the Pakistani Rupee (PKR) was formed by collecting information from a forex website, together with a collection of tweets from the business community in Pakistan containing finance-related words. The dataset was collected in raw form and subjected to natural language processing by way of data preprocessing. Response variable labeling was then applied to the standardized dataset, with the response variable divided into two classes: "1" indicated an increase in the exchange rate and "-1" indicated a decrease. To better represent the dataset, linear discriminant analysis and principal component analysis were used to visualize the data in three-dimensional vector space. Clusters obtained using a sampling approach were then used for data optimization. Five machine learning classifiers were applied to the optimized dataset: the simple logistic classifier, the random forest, bagging, naive Bayes, and the support vector machine. The results show that the simple logistic classifier yielded the highest accuracy, 82.14%, for USD/PKR exchange rate forecasting.
Keywords: machine learning; exchange rate; sentiment analysis; linear discriminant analysis; principal component analysis; simple logistic
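The response-variable labeling step described in the abstract ("1" for an increase, "-1" for a decrease) can be sketched directly from a rate series. The daily closing values below are made up for illustration, and an unchanged rate is mapped to the "-1" class here since the abstract defines only two classes:

```python
def label_direction(rates):
    # label each step by the sign of the change from the previous value:
    # +1 for an increase in the exchange rate, -1 otherwise
    return [1 if curr > prev else -1 for prev, curr in zip(rates, rates[1:])]

usd_pkr = [154.2, 154.9, 154.9, 155.3, 154.8]  # hypothetical daily closes
print(label_direction(usd_pkr))  # [1, -1, 1, -1]
```

These labels would then serve as the binary targets for the five classifiers the study compares.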
Adaptive learning rate GMM for moving object detection in outdoor surveillance for sudden illumination changes (cited: 1)
20
Authors: HOCINE Labidi, 曹伟, 丁庸, 张笈, 罗森林. Journal of Beijing Institute of Technology (EI, CAS), 2016, Issue 1, pp. 145-151 (7 pages)
A dynamic learning rate Gaussian mixture model (GMM) algorithm is proposed to deal with the slow adaptation of the GMM in moving object detection for outdoor surveillance, especially in the presence of sudden illumination changes. The GMM is widely used for detecting objects in complex scenes in intelligent monitoring systems. To solve this problem, a mixture Gaussian model is built for each pixel in the video frame, and the learning rate of the GMM is dynamically adjusted according to the scene change measured by the frame difference. Experiments show that the proposed method with an adaptive GMM learning rate gives good results compared with a GMM using a fixed learning rate. The method was tested on a dataset, and tests under sudden natural light changes show that it achieves better accuracy and a lower false alarm rate.
Keywords: object detection; background modeling; Gaussian mixture model (GMM); learning rate; frame difference
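The core idea — raising the learning rate when the frame difference signals a global scene change — can be sketched for a single background model. This is a simplified single-Gaussian mean update rather than the full per-pixel mixture, and the `base` and `gain` parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

def adaptive_rate(frame, prev_frame, base=0.01, gain=0.5):
    # scale the learning rate by the normalized mean frame difference:
    # a sudden illumination change produces a large difference and
    # therefore faster background adaptation
    change = float(np.mean(np.abs(frame - prev_frame))) / 255.0
    return min(1.0, base + gain * change)

def update_background(background, frame, alpha):
    # exponential moving average: the mean-update step of each Gaussian
    return (1.0 - alpha) * background + alpha * frame

prev = np.full((4, 4), 100.0)
bright = np.full((4, 4), 220.0)  # sudden global illumination jump
alpha = adaptive_rate(bright, prev)
print(round(alpha, 4))           # far above the base rate of 0.01
```

With a static scene the rate stays at `base`, so the background remains stable; under a sudden brightness jump the rate grows, letting the model re-absorb the new illumination quickly instead of flagging the whole frame as foreground.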