Journal Articles
9,235 articles found
1. STRONGLY CONVERGENT INERTIAL FORWARD-BACKWARD-FORWARD ALGORITHM WITHOUT ON-LINE RULE FOR VARIATIONAL INEQUALITIES
Authors: 姚永红, Abubakar ADAMU, Yekini SHEHU. Acta Mathematica Scientia (SCIE, CSCD), 2024, Issue 2, pp. 551-566 (16 pages)
This paper studies a strongly convergent inertial forward-backward-forward algorithm for the variational inequality problem in Hilbert spaces. In our convergence analysis, we do not assume the on-line rule on the inertial parameters and the iterates, which has been assumed by several authors whenever a strongly convergent algorithm with an inertial extrapolation step is proposed for a variational inequality problem. Consequently, our proof arguments differ from those found in the relevant literature. Finally, we give numerical tests to confirm the theoretical analysis and show that the proposed algorithm is superior to related ones in the literature.
Keywords: forward-backward-forward algorithm; inertial extrapolation; variational inequality; on-line rule
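For orientation, here is a minimal sketch of a Tseng-type forward-backward-forward iteration with a fixed inertial extrapolation factor, applied to a monotone affine operator over a box constraint. The step size, inertia factor, and toy operator are illustrative assumptions; the paper's strongly convergent variant additionally involves anchoring terms and parameter conditions not shown here.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def inertial_fbf(F, lo, hi, x0, lam=0.1, theta=0.3, iters=500):
    # Tseng-style forward-backward-forward step with inertial extrapolation
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        w = x + theta * (x - x_prev)             # inertial extrapolation
        y = project_box(w - lam * F(w), lo, hi)  # forward-backward step
        x_prev, x = x, y + lam * (F(w) - F(y))   # forward correction
    return x

# toy monotone operator F(x) = Ax + b on the box [-5, 5]^2 (assumption)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
print(inertial_fbf(lambda v: A @ v + b, -5.0, 5.0, np.zeros(2)))
```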
2. Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression
Authors: Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 3, pp. 2519-2547 (29 pages)
Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
Keywords: computer-aided diagnosis; deep learning; evolutionary algorithms; deep compression; transfer learning
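As a rough illustration of the evolutionary design-and-compression idea, the sketch below evolves a toy CNN "genome" (per-block filter counts and kernel sizes, plus a pruning ratio) with mutation and truncation selection. The fitness function is a placeholder stand-in; in the paper's setting it would involve training each candidate network on chest X-rays and scoring accuracy against model size.

```python
import random

random.seed(0)
FILTERS, KERNELS = [16, 32, 64], [3, 5]

def random_genome():
    # a CNN "genome": per-block (filters, kernel size) plus a pruning ratio
    depth = random.randint(2, 5)
    blocks = [(random.choice(FILTERS), random.choice(KERNELS)) for _ in range(depth)]
    return {"blocks": blocks, "prune": random.uniform(0.0, 0.5)}

def mutate(g):
    child = {"blocks": list(g["blocks"]), "prune": g["prune"]}
    i = random.randrange(len(child["blocks"]))
    child["blocks"][i] = (random.choice(FILTERS), random.choice(KERNELS))
    child["prune"] = min(0.5, max(0.0, child["prune"] + random.uniform(-0.1, 0.1)))
    return child

def fitness(g):
    # stand-in: a real run would train the candidate CNN and score
    # validation accuracy minus a model-size penalty
    params = sum(f * k * k for f, k in g["blocks"])
    return -abs(params * (1 - g["prune"]) - 2000) / 2000.0

population = [random_genome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                    # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]
best = max(population, key=fitness)
print(best, fitness(best))
```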
3. An Online Fake Review Detection Approach Using Famous Machine Learning Algorithms
Authors: Asma Hassan Alshehri. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2767-2786 (20 pages)
Online review platforms are becoming increasingly popular, encouraging dishonest merchants and service providers to deceive customers by creating fake reviews for their goods or services. Using Sybil accounts, bot farms, and real account purchases, immoral actors demonize rivals and advertise their goods. Most academic and industry efforts have been aimed at detecting fake/fraudulent product or service evaluations for years. The primary hurdle to identifying fraudulent reviews is the lack of a reliable means to distinguish fraudulent reviews from real ones. This paper adopts a semi-supervised machine learning method to detect fake reviews on any website. Online reviews are classified using a semi-supervised approach (PU-learning), since labeled data are scarce and reviews are dynamic. Classification is then performed using the machine learning techniques Support Vector Machine (SVM) and Naïve Bayes. The performance of the suggested system has been compared with standard works, and experimental findings are assessed using several assessment metrics.
Keywords: security; fake review; semi-supervised learning; ML algorithms; review detection
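A compact sketch of the two-step PU-learning pattern mentioned in the abstract, using scikit-learn's Naive Bayes and SVM: treat the unlabeled pool as negative, then retrain on the "reliable negatives" the first model scores least positive. The feature matrices, the reliable-negative quantile, and the two-step heuristic itself are illustrative assumptions; the paper's exact PU procedure may differ.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def two_step_pu(X_pos, X_unlabeled, neg_quantile=0.2):
    # step 1: treat all unlabeled points as negative and fit Naive Bayes
    X = np.vstack([X_pos, X_unlabeled])
    y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_unlabeled))]
    nb = GaussianNB().fit(X, y)
    # step 2: keep the least-positive unlabeled points as reliable negatives
    scores = nb.predict_proba(X_unlabeled)[:, 1]
    X_neg = X_unlabeled[scores <= np.quantile(scores, neg_quantile)]
    X2 = np.vstack([X_pos, X_neg])
    y2 = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
    return SVC(probability=True).fit(X2, y2)   # final SVM classifier

# synthetic stand-ins for review feature vectors (assumption)
rng = np.random.default_rng(0)
clf = two_step_pu(rng.normal(2, 1, (50, 5)), rng.normal(0, 1, (200, 5)))
print(clf.predict(rng.normal(0, 1, (3, 5))))
```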
4. Data-Driven Learning Control Algorithms for Unachievable Tracking Problems
Authors: Zeyi Zhang, Hao Jiang, Dong Shen, Samer S. Saab. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 205-218 (14 pages)
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Keywords: data-driven algorithms; incomplete information; iterative learning control; gradient information; unachievable problems
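The P-type learning update at the core of the study can be sketched in a few lines: after each trial, the input is corrected in proportion to the tracking error. The toy first-order plant and gain below are assumptions for illustration (here the reference is achievable, so the error vanishes); the unachievable case is precisely where the paper's gradient-information analysis and extended scheme come in.

```python
import numpy as np

def p_type_ilc(plant, u0, y_ref, gain=0.5, trials=30):
    # P-type iterative learning: u_{k+1}(t) = u_k(t) + L * e_k(t)
    u = u0.copy()
    for _ in range(trials):
        e = y_ref - plant(u)      # tracking error over the whole trial
        u = u + gain * e          # proportional correction
    return u

def plant(u):
    # toy first-order lag: y[t] = 0.5*y[t-1] + 0.5*u[t] (assumption)
    y = np.zeros_like(u)
    for t in range(len(u)):
        y[t] = 0.5 * (y[t - 1] if t else 0.0) + 0.5 * u[t]
    return y

y_ref = np.r_[np.zeros(10), np.ones(10)]        # step reference
u_final = p_type_ilc(plant, np.zeros(20), y_ref)
print(np.abs(y_ref - plant(u_final)).max())     # residual tracking error
```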
5. Uniaxial Compressive Strength Prediction for Rock Material in Deep Mine Using Boosting-Based Machine Learning Methods and Optimization Algorithms
Authors: Junjie Zhao, Diyuan Li, Jingtai Jiang, Pingkuang Luo. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 275-304 (30 pages)
Traditional laboratory tests for measuring rock uniaxial compressive strength (UCS) are tedious and time-consuming. There is a pressing need for more effective methods to determine rock UCS, especially in deep mining environments under high in-situ stress. Thus, this study aims to develop an advanced model for predicting the UCS of rock material in deep mining environments by combining three boosting-based machine learning methods with four optimization algorithms. For this purpose, the Lead-Zinc mine in Southwest China is considered as the case study. Rock density, P-wave velocity, and point load strength index are used as input variables, and UCS is regarded as the output. Subsequently, twelve hybrid predictive models are obtained. Root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R^2), and the proportion of the mean absolute percentage error less than 20% (A-20) are selected as the evaluation metrics. Experimental results showed that the hybrid model consisting of the extreme gradient boosting method and the artificial bee colony algorithm (XGBoost-ABC) achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset. The values of R^2, A-20, RMSE, and MAE on the training dataset are 0.98, 1.0, 3.11 MPa, and 2.23 MPa, respectively. The highest values of R^2 and A-20 (0.93 and 0.96), and the smallest RMSE and MAE values of 4.78 MPa and 3.76 MPa, are observed on the testing dataset. The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.
Keywords: uniaxial compression strength; strength prediction; machine learning; optimization algorithm
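A simplified sketch of metaheuristic hyperparameter tuning in the spirit of XGBoost-ABC. To keep the snippet dependency-free, scikit-learn's GradientBoostingRegressor stands in for XGBoost, the data are synthetic stand-ins for (density, P-wave velocity, point load index) → UCS, and only the employed-bee phase of the artificial bee colony search is kept.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# synthetic stand-ins for (density, P-wave velocity, point load index) -> UCS
X = rng.normal(size=(120, 3))
y = 30 + 15 * X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 2, 120)

def fitness(p):
    model = GradientBoostingRegressor(n_estimators=int(p[0]),
                                      learning_rate=p[1], max_depth=int(p[2]))
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# each "food source" is a hyperparameter vector; employed bees probe a random
# neighbour and keep it greedily (onlooker/scout phases omitted for brevity)
lo, hi = np.array([50, 0.01, 2]), np.array([300, 0.3, 6])
sources = rng.uniform(lo, hi, size=(5, 3))
scores = np.array([fitness(s) for s in sources])
for _ in range(5):
    for i in range(len(sources)):
        k = rng.integers(3)                       # perturb one dimension
        partner = sources[rng.integers(len(sources))]
        neighbour = sources[i].copy()
        neighbour[k] += rng.uniform(-1, 1) * (sources[i][k] - partner[k])
        neighbour = np.clip(neighbour, lo, hi)
        f = fitness(neighbour)
        if f > scores[i]:                         # greedy replacement
            sources[i], scores[i] = neighbour, f
print(sources[scores.argmax()], scores.max())
```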
6. Quantification of the concrete freeze–thaw environment across the Qinghai–Tibet Plateau based on machine learning algorithms
Authors: QIN Yanhui, MA Haoyuan, ZHANG Lele, YIN Jinshuai, ZHENG Xionghui, LI Shuo. Journal of Mountain Science (SCIE, CSCD), 2024, Issue 1, pp. 322-334 (13 pages)
The reasonable quantification of the concrete freezing environment on the Qinghai–Tibet Plateau (QTP) is the primary issue in frost-resistant concrete design, and one of the challenges that QTP engineering managers must take into account. In this paper, we propose a more realistic method to calculate the number of concrete freeze–thaw cycles (NFTCs) on the QTP. The calculated results show that the NFTCs increase as the altitude of the meteorological station increases, with the average NFTCs being 208.7. Four machine learning methods, i.e., the random forest (RF) model, generalized boosting method (GBM), generalized linear model (GLM), and generalized additive model (GAM), are used to fit the NFTCs. The root mean square error (RMSE) values of the RF, GBM, GLM, and GAM are 32.3, 4.3, 247.9, and 161.3, respectively. The R^2 values of the RF, GBM, GLM, and GAM are 0.93, 0.99, 0.48, and 0.66, respectively. The GBM method thus performs best among the four, as shown by the RMSE and R^2 values. The quantitative results from the GBM method indicate that the lowest, medium, and highest NFTC values are distributed in the northern, central, and southern parts of the QTP, respectively. The annual NFTCs in the QTP region are mainly concentrated at 160 and above, and the average NFTC is 200 across the QTP. Our results can provide scientific guidance and a theoretical basis for the freezing-resistance design of concrete in various projects on the QTP.
Keywords: freeze–thaw cycles; quantification; machine learning algorithms; Qinghai–Tibet Plateau; concrete
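The model-comparison workflow maps directly onto scikit-learn, as in this sketch on synthetic stand-in data: RF and GBM regressors, with a plain linear model standing in for the GLM (a GAM would need an extra package such as pygam, so it is omitted). The covariates and targets are invented for illustration, not QTP station data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-ins for station covariates (altitude, temperature, ...) -> NFTCs
X = rng.normal(size=(400, 4))
y = 200 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 10, 400)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {"RF": RandomForestRegressor(random_state=0),
          "GBM": GradientBoostingRegressor(random_state=0),
          "GLM": LinearRegression()}   # GAM omitted (needs e.g. pygam)
for name, model in models.items():
    pred = model.fit(Xtr, ytr).predict(Xte)
    rmse = mean_squared_error(yte, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.1f}, R2={r2_score(yte, pred):.2f}")
```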
7. Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 2, pp. 1341-1364 (24 pages)
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0. Specifically, various modernized industrial processes have been equipped with numerous sensors to collect process-based data, to find faults arising or prevailing in processes, and to monitor the status of processes. Fault diagnosis of rotating machines plays a main role in the engineering field and industrial production. Due to the disadvantages of existing fault diagnosis approaches, which depend greatly on professional experience and human knowledge, intellectual fault diagnosis based on deep learning (DL) has attracted researchers' interest. DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) for the industrial environment. The presented GOAHDL-FDC technique initially applies the continuous wavelet transform (CWT) for preprocessing the actual vibrational signals of the rotating machinery. Next, a residual network (ResNet18) model is exploited to extract features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental result analysis of the GOAHDL-FDC algorithm takes place using a series of simulations, and the experimentation outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
Keywords: fault detection; Industry 4.0; gradient optimizer algorithm; deep learning; rotating machineries; artificial intelligence
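The CWT preprocessing step can be illustrated with a hand-rolled Morlet transform: each vibration signal becomes a scale-time scalogram "image" that a CNN such as ResNet18 can consume. The wavelet parameters, scales, and toy two-tone signal below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform via direct convolution with a Morlet
    wavelet; returns the scalogram magnitude (an image a CNN can consume)."""
    half = len(signal) // 2
    t = np.arange(-half, half)
    rows = []
    for s in scales:
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        rows.append(np.convolve(signal, wavelet, mode="same"))
    return np.abs(np.array(rows))

# toy vibration signal: two tones plus noise, as from rotating machinery
fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
       + 0.1 * np.random.default_rng(0).normal(size=fs))
scalogram = morlet_cwt(sig, scales=np.arange(2, 64))
print(scalogram.shape)   # (n_scales, n_samples), ready for a ResNet-style CNN
```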
8. Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
Authors: M. Adimoolam, K. Maithili, N. M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 33-55 (23 pages)
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, false positives, and false negatives, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, with respective graphical illustrations shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive (3.012%), and false negative (3.182%). For the CNN, the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive 5.328%, and mean false negative 4.756%. These results show that the proposed EDLA method outperformed existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus the EDLA algorithm introduces novelty concerning its performance and particular activation function. The proposed method can be applied to brain tumor detection in a precise and accurate manner, and could be involved in various other medical diagnoses after modification. If the quantity of dataset records is enormous, the method's computational capacity has to be upgraded.
Keywords: brain tumor; extended deep learning algorithm; convolutional neural network; tumor detection; deep learning
9. Research on a Path Planning Algorithm Based on Improved Q-Learning
Authors: 宋丽君, 周紫瑜, 李云龙, 侯佳杰, 何星. Journal of Chinese Computer Systems (小型微型计算机系统) (CSCD, Peking University Core), 2024, Issue 4, pp. 823-829 (7 pages)
To address the Q-Learning algorithm's low learning efficiency, slow convergence, and poor path planning performance in environments with dynamic obstacles, this paper proposes a mobile robot path planning algorithm based on improved Q-Learning. The algorithm introduces an exploration factor, based on the abruptness of probability changes, to balance exploration and exploitation and speed up learning; a deep learning factor is designed into the update function to guarantee the algorithm's exploration probability; a genetic algorithm is fused in to avoid local path optima while exploring the optimal number of iteration steps stage by stage, reducing the repetition rate of dynamic map exploration; finally, the key nodes of the output optimal path are extracted and smoothed with Bézier curves to further ensure path smoothness and feasibility. In the experiments, maps are built with the grid method. The comparative results show that, relative to the traditional algorithm, the improved algorithm achieves substantial gains in both iteration count and path quality, and performs path planning well on dynamic maps, further verifying the effectiveness and practicality of the proposed method.
Keywords: mobile robot; path planning; Q-learning algorithm; smoothing; dynamic obstacle avoidance
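A minimal grid-world Q-learning loop showing the decaying-exploration idea that the improved algorithm builds on; the genetic-algorithm fusion and Bézier smoothing from the paper are omitted, and the grid size, rewards, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, GOAL = 8, (7, 7)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
Q = np.zeros((GRID, GRID, 4))

def step(s, a):
    ns = (min(max(s[0] + a[0], 0), GRID - 1),
          min(max(s[1] + a[1], 0), GRID - 1))
    reward = 10.0 if ns == GOAL else -1.0         # step cost favors short paths
    return ns, reward

for episode in range(300):
    eps = max(0.05, 1.0 - episode / 200)          # exploration factor decays
    s = (0, 0)
    while s != GOAL:
        a = rng.integers(4) if rng.random() < eps else int(Q[s].argmax())
        ns, r = step(s, ACTIONS[a])
        # standard Q-learning update (learning rate 0.1, discount 0.9)
        Q[s][a] += 0.1 * (r + 0.9 * Q[ns].max() - Q[s][a])
        s = ns
print(Q[(0, 0)].argmax())   # greedy first move from the start cell
```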
10. Dynamic plugging regulating strategy of pipeline robot based on reinforcement learning
Authors: Xing-Yuan Miao, Hong Zhao. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 597-608 (12 pages)
The pipeline isolation plugging robot (PIPR) is an important tool in pipeline maintenance operations. During the plugging process, violent vibration induced by the flow field can cause serious damage to the pipeline and the PIPR. In this paper, we propose a dynamic regulating strategy to reduce the plugging-induced vibration by regulating the spoiler angle and plugging velocity. Firstly, dynamic plugging simulations and experiments are performed to study the flow field changes during dynamic plugging, and the pressure difference is proposed to evaluate the degree of flow field vibration. Secondly, mathematical models of the pressure difference with respect to plugging states and spoiler angles are established based on an extreme learning machine (ELM) optimized by an improved sparrow search algorithm (ISSA). Finally, a modified Q-learning algorithm based on simulated annealing is applied to determine the optimal strategy for the spoiler angle and plugging velocity in real time. The results show that the proposed method can reduce the plugging-induced vibration by 19.9% and 32.7% on average, compared with single-regulating methods. This study can effectively ensure the stability of the plugging process.
Keywords: pipeline isolation plugging robot; plugging-induced vibration; dynamic regulating strategy; extreme learning machine; improved sparrow search algorithm; modified Q-learning algorithm
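One way to picture the simulated-annealing modification to Q-learning is in the action selection: a random candidate action can displace the greedy one with a Metropolis probability that shrinks as the temperature cools. The pressure-difference reward, state/action bins, and cooling schedule below are toy stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sa_action(q_row, temperature):
    # Metropolis-style acceptance of a random action over the greedy one
    greedy = int(q_row.argmax())
    candidate = int(rng.integers(len(q_row)))
    if q_row[candidate] >= q_row[greedy]:
        return candidate
    p = np.exp((q_row[candidate] - q_row[greedy]) / max(temperature, 1e-8))
    return candidate if rng.random() < p else greedy

# toy setting: states are coarse vibration levels, actions are bins of
# (spoiler angle, plugging velocity); action bin 2 is secretly best
Q = np.zeros((5, 6))
for step in range(2000):
    T = 1.0 * 0.998 ** step              # cooling schedule
    s = rng.integers(5)
    a = sa_action(Q[s], T)
    reward = -abs(a - 2) + rng.normal(0, 0.1)
    Q[s, a] += 0.1 * (reward - Q[s, a])  # bandit-style update per state
print(Q.argmax(axis=1))                  # learned action bin per state
```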
11. A Web Application Fingerprint Recognition Method Based on Machine Learning
Authors: Yanmei Shi, Wei Yu, Yanxia Zhao, Yungang Jia. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 887-906 (20 pages)
Web application fingerprint recognition is an effective security technology designed to identify and classify web applications, thereby enhancing the detection of potential threats and attacks. Traditional fingerprint recognition methods, which rely on pre-annotated feature matching, face inherent limitations due to the ever-evolving nature and diverse landscape of web applications. In response to these challenges, this work proposes an innovative web application fingerprint recognition method founded on clustering techniques. The method involves extensive data collection from the Tranco List, employing adjusted feature selection built upon Wappalyzer, and noise reduction through truncated SVD dimensionality reduction. The core of the methodology lies in the application of the unsupervised OPTICS clustering algorithm, eliminating the need for pre-annotated labels. By transforming web applications into feature vectors and leveraging clustering algorithms, our approach accurately categorizes diverse web applications, providing comprehensive and precise fingerprint recognition. The experimental results, obtained on a dataset featuring various web application types, affirm the efficacy of the method, demonstrating its ability to achieve high accuracy and broad coverage. This novel approach not only distinguishes between different web application types effectively but also demonstrates superiority in terms of classification accuracy and coverage, offering a robust solution to the challenges of web application fingerprint recognition.
Keywords: web application fingerprint recognition; unsupervised learning; clustering algorithm; feature extraction; automated testing; network security
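The described pipeline maps almost directly onto scikit-learn, as this sketch shows: binary technology-feature vectors (as a Wappalyzer-style extractor might emit), truncated-SVD noise reduction, then OPTICS clustering without any pre-annotated labels. The synthetic feature matrix is an assumption standing in for crawled Tranco List data.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# toy stand-in: rows are web apps, columns are binary technology features
features = (rng.random((300, 50)) < 0.1).astype(float)

reduced = TruncatedSVD(n_components=10).fit_transform(features)  # denoise
labels = OPTICS(min_samples=5).fit_predict(reduced)              # -1 = noise
print(np.unique(labels, return_counts=True))
```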
12. A Two-Layer Encoding Learning Swarm Optimizer Based on Frequent Itemsets for Sparse Large-Scale Multi-Objective Optimization
Authors: Sheng Qi, Rui Wang, Tao Zhang, Xu Yang, Ruiqing Sun, Ling Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 6, pp. 1342-1357 (16 pages)
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions when optimizing the binary variables Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent terms of multiple particles with better target values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
Keywords: evolutionary algorithms; learning swarm optimization; sparse large-scale optimization; sparse large-scale multi-objective problems; two-layer encoding
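The frequent-itemset idea can be sketched with a simple counter over variable pairs that co-occur in the Masks of better-scoring particles; mined combinations would then seed new Mask vectors. The swarm, fitness function, and support threshold below are toy assumptions, far simpler than TELSO's actual mining procedure.

```python
import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(0)
# toy swarm: binary Masks over 20 decision variables; variables 0-4 are the
# "truly useful" ones, so good particles tend to switch them on together
probs = np.r_[np.full(5, 0.6), np.full(15, 0.1)]
masks = rng.random((30, 20)) < probs
fitness = masks[:, :5].sum(axis=1) - 0.5 * masks[:, 5:].sum(axis=1)

elite = masks[np.argsort(fitness)[-15:]]          # better half of the swarm
counts = Counter()
for row in elite:
    on = np.flatnonzero(row).tolist()
    counts.update(combinations(on, 2))            # co-occurring variable pairs
frequent = [pair for pair, c in counts.items() if c >= 5]
print(sorted(frequent)[:10])   # pairs that would seed new Mask combinations
```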
13. Constrained Multi-Objective Optimization With Deep Reinforcement Learning Assisted Operator Selection
Authors: Fei Ming, Wenyin Gong, Ling Wang, Yaochu Jin. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 919-931 (13 pages)
Solving constrained multi-objective optimization problems with evolutionary algorithms has attracted considerable attention. Various constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been developed with the use of different algorithmic strategies, evolutionary operators, and constraint-handling techniques. The performance of CMOEAs may be heavily dependent on the operators used; however, it is usually difficult to select suitable operators for the problem at hand. Hence, improving operator selection is promising and necessary for CMOEAs. This work proposes an online operator selection framework assisted by deep reinforcement learning. The dynamics of the population, including convergence, diversity, and feasibility, are regarded as the state; the candidate operators are considered as actions; and the improvement of the population state is treated as the reward. By using a Q-network to learn a policy that estimates the Q-values of all actions, the proposed approach can adaptively select an operator that maximizes the improvement of the population according to the current state, thereby improving algorithmic performance. The framework is embedded into four popular CMOEAs and assessed on 42 benchmark problems. The experimental results reveal that the proposed deep reinforcement learning-assisted operator selection significantly improves the performance of these CMOEAs, and the resulting algorithm obtains better versatility compared to nine state-of-the-art CMOEAs.
Keywords: constrained multi-objective optimization; deep Q-learning; deep reinforcement learning (DRL); evolutionary algorithms; evolutionary operator selection
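A tabular stand-in for the state-action-reward loop described above: a coarse population state indexes a Q-table over candidate operators, and the measured population improvement is the reward. The paper uses a Q-network over continuous state features; the discrete states, toy rewards, and random transitions here are assumptions to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_OPS = 4, 3          # coarse population states x candidate operators
Q = np.zeros((N_STATES, N_OPS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def run_generation(op):
    # stand-in for applying the chosen evolutionary operator for one
    # generation and measuring the population improvement (the reward);
    # operator 1 is secretly best in this toy setting
    return rng.normal([0.1, 0.3, 0.2][op], 0.1)

state = 0
for gen in range(500):
    op = int(rng.integers(N_OPS)) if rng.random() < eps else int(Q[state].argmax())
    reward = run_generation(op)
    next_state = int(rng.integers(N_STATES))   # toy state transition
    Q[state, op] += alpha * (reward + gamma * Q[next_state].max() - Q[state, op])
    state = next_state
print(Q.argmax(axis=1))   # learned operator choice per population state
```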
14. Relevance of sleep for wellness: New trends in using artificial intelligence and machine learning
Authors: Deb Sanjay Nag, Amlan Swain, Seelora Sahu, Abhishek Chatterjee, Bhanu Pratap Swain. World Journal of Clinical Cases (SCIE), 2024, Issue 7, pp. 1196-1199 (4 pages)
Sleep and well-being are intricately linked, and sleep hygiene is paramount for developing mental well-being and resilience. Although widespread, sleep disorders require an elaborate polysomnography laboratory and overnight patient stays in unfamiliar environments. Current technologies have allowed various devices to diagnose sleep disorders at home. However, these devices are in various validation stages, with many already receiving approvals from competent authorities. This has captured vast patient-related physiologic data for advanced analytics using artificial intelligence through machine and deep learning applications. These data are expected to be integrated with patients' Electronic Health Records and, in the future, to support individualized prescriptive therapy for sleep disorders.
Keywords: sleep initiation and maintenance disorders; obstructive sleep apnea; machine learning; artificial intelligence; algorithms
15. Research on classification method of high myopic maculopathy based on retinal fundus images and optimized ALFA-Mix active learning algorithm (cited by 1)
Authors: Shao-Jun Zhu, Hao-Dong Zhan, Mao-Nian Wu, Bo Zheng, Bang-Quan Liu, Shao-Chong Zhang, Wei-Hua Yang. International Journal of Ophthalmology (English edition) (SCIE, CAS), 2023, Issue 7, pp. 995-1004 (10 pages)
AIM: To conduct a classification study of high myopic maculopathy (HMM) using limited datasets, including tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy, while minimizing annotation costs, and to optimize the ALFA-Mix active learning algorithm and apply it to HMM classification. METHODS: The optimized ALFA-Mix algorithm (ALFA-Mix+) was compared with five algorithms, including ALFA-Mix. Four models, including ResNet18, were established, and each algorithm was combined with each model for experiments on the HMM dataset. Each experiment consisted of 20 active learning rounds, with 100 images selected per round. The algorithms were evaluated by comparing the number of rounds in which ALFA-Mix+ outperformed the others. Finally, this study employed six models, including EfficientFormer, to classify HMM; the best-performing model was selected as the baseline and combined with the ALFA-Mix+ algorithm to achieve satisfactory classification results with a small dataset. RESULTS: ALFA-Mix+ outperforms the other algorithms by an average of 16.6, 14.75, 16.8, and 16.7 rounds in terms of accuracy, sensitivity, specificity, and Kappa value, respectively. With a complete training set of 4252 images, EfficientFormer achieved the best results among the advanced deep learning models tested, with an accuracy, sensitivity, specificity, and Kappa value of 0.8821, 0.8334, 0.9693, and 0.8339, respectively. By combining ALFA-Mix+ with EfficientFormer, this study achieved an accuracy, sensitivity, specificity, and Kappa value of 0.8964, 0.8643, 0.9721, and 0.8537, respectively. CONCLUSION: The ALFA-Mix+ algorithm reduces the required samples without compromising accuracy. Compared to other algorithms, ALFA-Mix+ wins in more rounds of experiments and effectively selects valuable samples. In HMM classification, combining ALFA-Mix+ with EfficientFormer enhances model performance, further demonstrating the effectiveness of ALFA-Mix+.
Keywords: high myopic maculopathy; deep learning; active learning; image classification; ALFA-Mix algorithm
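A generic pool-based active learning round in the style described, with least-confidence uncertainty standing in for the ALFA-Mix+ acquisition rule (the actual rule interpolates labeled and unlabeled feature representations). The data, model, seed set, and per-round budget of 100 are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, :2].sum(axis=1) > 0).astype(int)               # toy "image" labels
labeled = list(rng.choice(1000, size=50, replace=False))  # small seed set

for round_ in range(5):                                # active learning rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    pool = np.setdiff1d(np.arange(1000), labeled)
    proba = clf.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)                # least-confidence score
    picked = pool[np.argsort(uncertainty)[-100:]]      # query 100 per round
    labeled.extend(picked.tolist())                    # "oracle" supplies labels
print(len(labeled), clf.score(X, y))
```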
16. Trading in Fast-Changing Markets with Meta-Reinforcement Learning
Authors: Yutong Tian, Minghan Gao, Qiang Gao, Xiao-Hong Peng. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 175-188 (14 pages)
How to find an effective trading policy is still an open question, mainly due to the nonlinear and non-stationary dynamics of financial markets. Deep reinforcement learning, which has recently been used to develop trading strategies by automatically extracting complex features from a large amount of data, struggles to deal with fast-changing markets due to sample inefficiency. This paper applies meta-reinforcement learning, for the first time, to tackle the trading challenges faced by conventional reinforcement learning (RL) approaches in non-stationary markets. In our work, the historical trading data are divided into multiple task datasets, within each of which the market condition is relatively stationary. Then a model-agnostic meta-learning (MAML)-based trading method involving a meta-learner and a normal learner is proposed. A trading policy is learned by the meta-learner across multiple task datasets; it is then fine-tuned by the normal learner on a small amount of data from a new market task before trading in it. To improve the adaptability of the MAML-based method, an ordered multiple-step updating mechanism is also proposed to explore the changing dynamics within a task market. The simulation results demonstrate that the proposed MAML-based trading methods can increase the annualized return rate by approximately 180%, 200%, and 160%, increase the Sharpe ratio by 180%, 90%, and 170%, and decrease the maximum drawdown by 30%, 20%, and 40%, compared to the traditional RL approach in three stock index futures markets, respectively.
Keywords: algorithmic trading; reinforcement learning; fast-changing market; meta-reinforcement learning
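A first-order MAML sketch on toy linear "market regimes" illustrates the meta-learner/normal-learner split: the inner gradient step is the per-task adaptation, and the outer update accumulates post-adaptation losses across tasks. The task family, loss, and first-order simplification are assumptions; the paper trains a full RL trading agent rather than a regression model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # each "market regime" is a toy linear task y = a*x + b with noise
    a, b = rng.uniform(-2, 2, 2)
    x = rng.uniform(-1, 1, 20)
    return x, a * x + b + rng.normal(0, 0.05, 20)

def loss_grad(w, x, y):
    # gradient of mean squared error for the linear model w[0]*x + w[1]
    err = w[0] * x + w[1] - y
    return 2 * np.array([(err * x).mean(), err.mean()])

w = np.zeros(2)            # meta-parameters shared across regimes
alpha, beta = 0.1, 0.05    # inner (adaptation) / outer (meta) learning rates
for it in range(500):
    meta_grad = np.zeros(2)
    for _ in range(5):                                     # batch of tasks
        x, y = sample_task()
        w_task = w - alpha * loss_grad(w, x[:10], y[:10])  # inner adaptation
        meta_grad += loss_grad(w_task, x[10:], y[10:])     # first-order MAML
    w -= beta * meta_grad / 5
print(w)   # a starting point that adapts quickly to a new regime
```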
17. Genetic algorithm-optimized backpropagation neural network establishes a diagnostic prediction model for diabetic nephropathy: Combined machine learning and experimental validation in mice (cited by 1)
Authors: WEI LIANG, ZONGWEI ZHANG, KEJU YANG, HONGTU HU, QIANG LUO, ANKANG YANG, LI CHANG, YUANYUAN ZENG. BIOCELL (SCIE), 2023, Issue 6, pp. 1253-1263 (11 pages)
Background: Diabetic nephropathy (DN) is the most common complication of type 2 diabetes mellitus and the main cause of end-stage renal disease worldwide. Diagnostic biomarkers may allow early diagnosis and treatment of DN to reduce its prevalence and delay its development. Kidney biopsy is the gold standard for diagnosing DN; however, its invasive character is its primary limitation. The machine learning approach provides a non-invasive and specific criterion for diagnosing DN, although traditional machine learning algorithms need to be improved to enhance diagnostic performance. Methods: We applied high-throughput RNA sequencing to obtain the genes related to DN tubular tissues and normal tubular tissues of mice. Machine learning algorithms (random forest, LASSO logistic regression, and principal component analysis) were then used to identify key genes (CES1G, CYP4A14, NDUFA4, ABCC4, ACE). A genetic algorithm-optimized backpropagation neural network (GA-BPNN) was then used to improve the DN diagnostic model. Results: The AUC value of the GA-BPNN model was 0.83 in the training dataset and 0.81 in the validation dataset, while the AUC values of the SVM model in the training dataset and external validation dataset were 0.756 and 0.650, respectively. Thus, the GA-BPNN gave better values than the traditional SVM model. This diagnostic model may enable personalized diagnosis and treatment of patients with DN. Immunohistochemical staining further confirmed that tissue and cell expression of NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 (NDUFA4L2) in tubular tissue of DN mice was decreased. Conclusion: The GA-BPNN model has better accuracy than the traditional SVM model and may provide an effective tool for diagnosing DN.
Keywords: diabetic nephropathy; renal tubule; machine learning; diagnostic model; genetic algorithm
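The genetic-algorithm stage can be sketched as evolution over the flattened weights of a small 5-4-1 network, with five inputs matching the five selected genes. Fitness is classification accuracy on synthetic stand-in data; the paper's pipeline additionally refines the GA result with backpropagation, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for the five key genes -> DN / control label
X = rng.normal(size=(200, 5))
y = (X @ np.array([1.0, -0.8, 0.5, 0.3, -0.4]) > 0).astype(float)

def unpack(v):                       # 5-4-1 network weights from a flat vector
    return v[:20].reshape(5, 4), v[20:24], v[24:28], v[28]

def forward(v, X):
    W1, b1, w2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))

def fitness(v):                      # accuracy of the encoded network
    return ((forward(v, X) > 0.5) == y).mean()

pop = rng.normal(size=(30, 29))
for gen in range(40):
    scores = np.array([fitness(v) for v in pop])
    parents = pop[np.argsort(scores)[-10:]]              # selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 29)
        child = np.r_[a[:cut], b[cut:]]                  # one-point crossover
        child += rng.normal(0, 0.1, 29) * (rng.random(29) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])
print(max(fitness(v) for v in pop))
```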
18. A Whale Optimization Algorithm with Distributed Collaboration and Reverse Learning Ability (cited by 1)
Authors: Zhedong Xu, Yongbo Su, Fang Yang, Ming Zhang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5965-5986 (22 pages)
Due to the development of digital transformation, intelligent algorithms are receiving more and more attention. The whale optimization algorithm (WOA) is a swarm intelligence optimization algorithm that is widely used to solve practical engineering optimization problems. However, as dimensionality increases, higher requirements are put forward for algorithm performance. The double-population whale optimization algorithm with distributed collaboration and reverse learning ability (DCRWOA) is proposed to address the slow convergence speed and unstable search accuracy of the WOA in optimization problems. In the DCRWOA algorithm, a novel double-population search strategy is constructed. Meanwhile, the reverse learning strategy is adopted in the population search process to help individuals quickly jump out of non-ideal search areas. Numerical experiments are carried out using standard test functions with different dimensions (10, 50, 100, 200). An optimization case of shield construction parameters is also used to test the practical application performance of the proposed algorithm. The results show that the DCRWOA algorithm has higher optimization accuracy and stability, and its convergence speed is significantly improved. Therefore, the proposed DCRWOA algorithm provides a better method for solving practical optimization problems.
Keywords: whale optimization algorithm; double population cooperation; distribution; reverse learning; convergence speed
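A single-population WOA sketch with the reverse (opposition-based) learning step bolted on: after each position update, the mirrored point lo + hi - x is also evaluated and kept if better. The double-population distributed collaboration of DCRWOA is omitted, the exploration branch of standard WOA is simplified away, and the sphere objective and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                     # test objective to minimize (assumption)
    return float((x ** 2).sum())

dim, n, iters, lo, hi = 10, 20, 100, -5.0, 5.0
X = rng.uniform(lo, hi, (n, dim))
best = min(X, key=sphere).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                          # a decreases linearly 2 -> 0
    for i in range(n):
        r, l = rng.random(dim), rng.uniform(-1, 1)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:
            X[i] = best - A * np.abs(C * best - X[i])        # encircling prey
        else:
            D = np.abs(best - X[i])                          # spiral update
            X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        X[i] = np.clip(X[i], lo, hi)
        # reverse learning: also try the opposition point and keep the better
        opp = lo + hi - X[i]
        if sphere(opp) < sphere(X[i]):
            X[i] = opp
        if sphere(X[i]) < sphere(best):
            best = X[i].copy()
print(sphere(best))
```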
19. Improved Bat Algorithm with Deep Learning-Based Biomedical ECG Signal Classification Model
Authors: Marwa Obayya, Nadhem NEMRI, Lubna A. Alharbi, Mohamed K. Nour, Mrim M. Alnfiai, Mohammed Abdullah Al-Hagery, Nermin M. Salem, Mesfer Al Duhayyim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 3151-3166 (16 pages)
With new developments in Internet of Things (IoT), wearable, and sensing technology, the value of healthcare services has been enhanced. This evolution has brought significant changes from conventional medicine-based healthcare to real-time observation-based healthcare. Biomedical electrocardiogram (ECG) signals are generally utilized in the examination and diagnosis of cardiovascular diseases (CVDs), since acquisition is quick and non-invasive in nature. With the increasing number of patients in recent years, classifier efficiency is reduced by the high variance observed in ECG signal patterns obtained from patients. In such a scenario, computer-assisted automated diagnostic tools are important for the classification of ECG signals. The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECG Signal Classification (IBADL-BECGC) approach. To accomplish this, the proposed IBADL-BECGC model initially pre-processes the input signals. Besides, the IBADL-BECGC model applies the NasNet model to derive the features from test ECG signals. In addition, the Improved Bat Algorithm (IBA) is employed to optimally fine-tune the hyperparameters related to the NasNet approach. Finally, the Extreme Learning Machine (ELM) classification algorithm is executed to perform ECG classification. The presented IBADL-BECGC model was experimentally validated on a benchmark dataset. The comparison study outcomes established the improved performance of the IBADL-BECGC model over other existing methodologies, as the former achieved a maximum accuracy of 97.49%.
Keywords: data science; ECG signals; improved bat algorithm; deep learning; biomedical data; data classification; machine learning
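The ELM classifier stage is compact enough to show end to end: a random hidden layer followed by a closed-form least-squares solve for the output weights. The synthetic "features" below stand in for NasNet embeddings of ECG signals, and the improved-bat-algorithm hyperparameter tuning is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=64):
    # random input-to-hidden weights; only the output weights are learned
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form solve
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy stand-in for extracted ECG features -> one-hot class scores
X = rng.normal(size=(200, 16))
y = np.eye(2)[(X[:, 0] > 0).astype(int)]
model = elm_train(X, y)
acc = (elm_predict(model, X).argmax(1) == y.argmax(1)).mean()
print(acc)
```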
20. Research on optimal intelligent routing algorithm for IoV with machine learning and smart contract
Authors: Baofeng Ji, Mingkun Zhang, Ling Xing, Xiaoli Li, Chunguo Li, Congzheng Han, Hong Wen. Digital Communications and Networks (SCIE, CSCD), 2023, Issue 1, pp. 47-55 (9 pages)
The huge increase in communication network rates has made the application fields and scenarios of vehicular ad hoc networks more abundant and diversified, and has raised the requirements for the efficiency and quality of data transmission. To improve the limited communication distance and poor communication quality of the Internet of Vehicles (IoV), an optimal intelligent routing algorithm is proposed in this paper. It combines a multi-weight decision algorithm with the greedy perimeter stateless routing (GPSR) protocol, and designs and evaluates a standardized function for link stability. Linear additive weighting is used to jointly optimize link stability and distance so as to improve the packet delivery rate of the IoV. A blockchain system is used as the storage structure for relay data, and a smart contract incentive algorithm based on machine learning is used to encourage relay vehicles to provide more communication bandwidth for data packet transmission. The proposed scheme is simulated and analyzed under different scenarios and different parameters. The experimental results demonstrate that the proposed scheme can effectively reduce the packet loss rate and improve system performance.
Keywords: IoV; machine learning; smart contract; routing algorithm; GPSR; relay transmission
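The linear additive weighting for next-hop choice can be sketched directly: normalize each neighbor's progress toward the destination, combine it with a link-stability score, and pick the neighbor with the best weighted sum. The field names, weight values, and stability scores are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def next_hop(neighbors, dest, w_dist=0.6, w_stab=0.4):
    # linear additive weighting over normalized progress and link stability,
    # in the spirit of multi-weight GPSR (weights are illustrative)
    pos = np.array([n["pos"] for n in neighbors], dtype=float)
    dist = np.linalg.norm(pos - dest, axis=1)
    progress = 1 - (dist - dist.min()) / (np.ptp(dist) + 1e-9)  # closer = better
    stab = np.array([n["stability"] for n in neighbors])        # in [0, 1]
    score = w_dist * progress + w_stab * stab
    return neighbors[int(score.argmax())]

# hypothetical neighbor table of a forwarding vehicle
neighbors = [
    {"id": "v1", "pos": (120, 40), "stability": 0.9},
    {"id": "v2", "pos": (150, 35), "stability": 0.4},
    {"id": "v3", "pos": (140, 60), "stability": 0.7},
]
print(next_hop(neighbors, dest=np.array([200, 50]))["id"])
```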