Journal Articles
64 articles found
1. Application of Time Scale to Parameters Tuning of Active Disturbance Rejection Controller for Induction Motor (Cited: 2)
Authors: 邵立伟, 廖晓钟, 张宇河. Journal of Beijing Institute of Technology, EI CAS, 2007, No. 4, pp. 419-423.
Active disturbance rejection controller (ADRC) has good performance in induction motor (IM) control systems, but its parameters are difficult to tune. A method of tuning ADRC parameters by time scale is analyzed. The IM time scale is obtained through theoretical analysis. By combining the relations between the time scale and the ADRC parameters, ADRC parameter tuning for stator-flux-oriented IM vector control is obtained. This tuning method is validated by simulations and provides a new technique for tuning ADRC parameters for IM.
Keywords: time scale; active disturbance rejection controller; parameter tuning
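The abstract does not give the tuning formulas themselves. As an illustration only, the sketch below shows one common way such a time-scale relation can look for a linear ADRC, where the observer and controller bandwidths are scaled from an estimated motor time scale T; the scaling constants k_o and k_c are hypothetical placeholders, not the paper's values.

```python
# Hypothetical sketch: deriving linear-ADRC gains from a plant time scale T.
# The scaling factors k_o, k_c are illustrative assumptions, not the paper's rules.

def ladrc_gains_from_time_scale(T, k_o=10.0, k_c=3.0):
    """Map a plant time scale T [s] to linear ADRC parameters.

    Observer bandwidth wo and controller bandwidth wc are taken proportional
    to 1/T; the ESO gains follow the standard bandwidth parameterization for
    a second-order plant (third-order extended state observer).
    """
    wo = k_o / T                                     # observer bandwidth
    wc = k_c / T                                     # controller bandwidth
    beta1, beta2, beta3 = 3 * wo, 3 * wo**2, wo**3   # ESO gains
    kp, kd = wc**2, 2 * wc                           # PD gains of the ADRC control law
    return {"wo": wo, "wc": wc, "beta": (beta1, beta2, beta3), "kp": kp, "kd": kd}

if __name__ == "__main__":
    print(ladrc_gains_from_time_scale(T=0.05))       # e.g., a 50 ms motor time scale
```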
2. Tuning PID Parameters Based on a Combination of the Expert System and the Improved Genetic Algorithms (Cited: 3)
Authors: Zuo Xin, Zhang Junfeng, Luo Xionglin. Petroleum Science, SCIE CAS CSCD, 2005, No. 4, pp. 71-76.
A new strategy combining an expert system and improved genetic algorithms is presented for tuning proportional-integral-derivative (PID) parameters for petrochemical processes. It retains the advantages of genetic algorithms, namely rapid convergence and attainment of the global optimum. An orthogonal experiment method is used to determine the genetic factors, and the combination with an expert system makes best use of the actual experience of plant operators. Simulation results on typical process system examples show good control performance and robustness.
Keywords: PID parameter tuning; orthogonal experiment method; genetic algorithm; expert system
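The paper's expert-system rules and orthogonal-experiment settings are not reproduced here. The minimal sketch below only illustrates the general idea of rating a PID candidate by a time-domain cost (ITAE) on an assumed first-order-plus-dead-time process and letting a simple genetic algorithm minimize it; the plant model, bounds, and GA settings are all assumptions.

```python
import numpy as np

# Illustrative only: rank PID candidates by ITAE on an assumed FOPDT plant
# tau*y' = -y + K*u(t - L); the expert-system layer of the paper is omitted.

def itae_cost(kp, ki, kd, K=1.0, tau=10.0, L=2.0, T=100.0, dt=0.05):
    n, delay = int(T / dt), int(L / dt)
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    u_hist = np.zeros(n + delay)               # delayed control inputs
    for k in range(n):
        e = 1.0 - y                            # unit setpoint
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u_hist[k + delay] = kp * e + ki * integ + kd * deriv
        y += dt * (-y + K * u_hist[k]) / tau   # explicit Euler step of the plant
        cost += (k * dt) * abs(e) * dt         # integral of time-weighted absolute error
    return cost

def ga_tune(pop_size=30, generations=40, bounds=((0.1, 10), (0.0, 2), (0.0, 10))):
    rng = np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([itae_cost(*ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=3)
            child = w * a + (1 - w) * b                       # arithmetic crossover
            child += rng.normal(0, 0.05, size=3) * (hi - lo)  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    return pop[np.argmin([itae_cost(*ind) for ind in pop])]

if __name__ == "__main__":
    kp, ki, kd = ga_tune()
    print(f"tuned PID: kp={kp:.3f}, ki={ki:.3f}, kd={kd:.3f}")
```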
3. A Two-stage Tuning Method of Servo Parameters for Feed Drives in Machine Tools (Cited: 2)
Authors: ZHOU Yong, PENG Fang-yu, CHEN Ji-hong, LI Bin. International Journal of Plant Engineering and Management, 2007, No. 3, pp. 171-180.
Based on the evaluation of dynamic performance of feed drives in machine tools, this paper presents a two-stage tuning method for servo parameters. In the first stage, evaluation of dynamic performance, parameter tuning and optimization are performed on a mechatronic integrated system simulation platform of the feed drive, yielding a servo parameter combination. In the second stage, this parameter combination is applied and tuned further on a real machine tool, whose dynamic performance is measured and evaluated using the cross grid encoder developed by Heidenhain GmbH. A case study shows that this method simplifies the test process effectively and achieves good dynamic performance on a real machine tool.
Keywords: two-stage tuning method; feed drive; servo parameter tuning; evaluation of dynamic performance
4. A method of tuning PID parameters for P-GMAW based on physical experiments
Authors: 林放, 黄文超, 魏仲华, 高理文, 薛家祥. China Welding, EI CAS, 2011, No. 1, pp. 59-63.
To improve welding quality, a method of proportional-integral-derivative (PID) parameter tuning for pulsed gas metal arc welding (P-GMAW) control was put forward. To meet the dynamic responsiveness requirement of P-GMAW constant-current control, a self-developed welding waveform wavelet analyzer was employed. By tuning the proportional parameter, integration time and differential time in sequence, the optimal PID parameters could be achieved. The results showed that, with the PID parameters tuned by this method, the welding process was stable and the weld bead appearance was good. The requirement of dynamic responsiveness of P-GMAW constant-current control was fully met.
Keywords: pulsed gas metal arc welding (P-GMAW); PID controller; parameter tuning
5. Controller Parameter Tuning of Delta Robot Based on Servo Identification (Cited: 6)
Authors: ZHAO Qing, WANG Panfeng, MEI Jiangping. Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2015, No. 2, pp. 267-275.
A high-speed pick-and-place parallel robot is a system in which the inertia imposed on the motor shafts changes in real time with the system configuration. High-quality computer control with proper controller parameters helps overcome this problem and has a significant effect on reducing the robot's tracking error. Taking the Delta robot as an example, a method for parameter tuning of a fixed-gain motion controller is presented. Having identified the parameters of the servo system in the frequency domain by sinusoidal excitation, a PD plus feedforward control strategy is proposed to adapt to the varying inertia loads, allowing the controller parameters to be tuned by minimizing the mean square tracking error along a typical trajectory. A set of optimum parameters is obtained through computer simulations, and the effectiveness of the proposed approach is validated by experiments on a real prototype machine. With the traveling plate undergoing a specific trajectory, the results show that the tracking error can be reduced by at least 50% in comparison with the conventional auto-tuning and Ziegler-Nichols (Z-N) methods. The proposed approach is a whole-workspace optimization and can be applied to the parameter tuning of fixed-gain motion controllers.
Keywords: parallel robot; servo system identification; parameter tuning; mean square error
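The paper's identified servo model and Delta-robot trajectory are not given in the abstract. The sketch below only shows the shape of such an objective: an assumed single-axis inertia-plus-damping model is simulated under PD plus acceleration feedforward along a placeholder trajectory, and the mean square tracking error is returned for any optimizer (here a crude grid search) to minimize.

```python
import numpy as np

# Illustrative objective only: the axis model (inertia J, damping b) and the
# test trajectory are assumptions standing in for the identified servo dynamics.

def mean_square_tracking_error(kp, kd, kff, J=0.01, b=0.05, T=2.0, dt=1e-3):
    t = np.arange(0.0, T, dt)
    ref = 0.5 * (1 - np.cos(2 * np.pi * t / T))      # smooth point-to-point move
    ref_vel = np.gradient(ref, dt)
    ref_acc = np.gradient(ref_vel, dt)
    pos, vel = 0.0, 0.0
    err = np.empty_like(t)
    for k in range(len(t)):
        e = ref[k] - pos
        torque = kp * e + kd * (ref_vel[k] - vel) + kff * J * ref_acc[k]
        acc = (torque - b * vel) / J                 # simple rigid-axis dynamics
        vel += acc * dt
        pos += vel * dt
        err[k] = e
    return float(np.mean(err ** 2))

if __name__ == "__main__":
    # crude grid search as a stand-in for the paper's optimization along the trajectory
    best = min(((kp, kd) for kp in (50, 100, 200) for kd in (1, 5, 10)),
               key=lambda g: mean_square_tracking_error(g[0], g[1], kff=1.0))
    print("best (kp, kd):", best)
```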
6. Parameter Tuning Method for Dither Compensation of a Pneumatic Proportional Valve with Friction (Cited: 5)
Authors: WANG Tao, SONG Yang, HUANG Leisheng, FAN Wei. Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2016, No. 3, pp. 607-614.
In practical applications of pneumatic control devices, the nonlinearity of a pneumatic control valve becomes the main factor affecting the control effect, and it comes mainly from the dynamic friction force. The dynamic friction inside the valve may cause hysteresis and a dead zone. In this paper, a dither compensation mechanism is proposed to reduce these negative effects on the basis of analyzing the friction mechanism. A specific dither signal (a sinusoidal signal) is superimposed on the control signal of the valve. Based on the relationship between the parameters of the dither signal and the inherent characteristics of the proportional servo valve, a parameter tuning method is proposed, which uses a displacement sensor to measure the maximum static friction inside the valve. According to the experimental results, proper amplitude ranges are determined for different pressures. To obtain the optimal parameters of the dither signal, dither compensation experiments were carried out under different signal amplitude and gas pressure conditions, and optimal parameters were determined under two pressure conditions. A valve spool displacement experiment with the tuned parameters shows that the hysteresis of the proportional servo valve is significantly reduced, and simulation and experiments show that the cut-off frequency of the proportional valve is also widened. Therefore, after adding the dither signal, both the static and dynamic characteristics of the proportional valve are improved to a certain degree. This research proposes a parameter tuning method for the dither signal, and its validity is verified experimentally.
Keywords: proportional valve; hysteresis; dither compensation; parameter tuning method
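The pressure-dependent amplitude ranges reported in the paper are not reproduced here. The fragment below is only a generic illustration of superimposing a sinusoidal dither on a valve command, with the amplitude scaled from a measured static-friction equivalent; all numbers are placeholders.

```python
import numpy as np

# Generic dither superposition: u_cmd is the controller output; the dither amplitude
# is chosen from a measured static-friction equivalent (placeholder values).

def add_dither(u_cmd, t, f_dither=200.0, static_friction_equiv=0.08, scale=1.2):
    """Superimpose a sinusoidal dither on the valve command signal.

    f_dither should lie above the spool's mechanical bandwidth but within the
    drive electronics' bandwidth; the amplitude is taken proportional to the
    measured maximum static friction, expressed in command units.
    """
    amplitude = scale * static_friction_equiv
    return u_cmd + amplitude * np.sin(2 * np.pi * f_dither * t)

t = np.arange(0.0, 0.05, 1e-4)            # 50 ms at a 10 kHz update rate
u = add_dither(np.full_like(t, 0.5), t)   # constant 50 % command plus dither
```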
7. Proportion integral-type active disturbance rejection generalized predictive control for distillation process based on grey wolf optimization parameter tuning (Cited: 1)
Authors: Jia Ren, Zengqiang Chen, Mingwei Sun, Qinglin Sun, Zenghui Wang. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2022, No. 9, pp. 234-244.
The high-purity distillation column system is strongly nonlinear and coupled, which makes it difficult to control. Active disturbance rejection control (ADRC) has been widely used in distillation systems, but it has limitations in controlling distillation systems with large time delays, since ADRC employs an extended state observer (ESO) and a feedback control law to estimate the total disturbance of the system without considering the large time delays. This paper designs a proportion integral-type active disturbance rejection generalized predictive control (PI-ADRGPC) algorithm to control a distillation column system with large time delay. It replaces the PD controller in ADRC with a proportion integral-type generalized predictive control (PI-GPC), thereby improving the performance of control systems with large time delays. Since the proposed controller has many parameters and is difficult to tune, this paper proposes using the grey wolf optimization (GWO) algorithm to tune these parameters; the same tuning structure can also be used with other intelligent optimization algorithms. The performance of the GWO-tuned PI-ADRGPC is compared with that of the GWO-tuned ADRC, the multi-verse optimizer (MVO)-tuned PI-ADRGPC and the MVO-tuned ADRC. The simulation results show that the proposed strategy tracks the reference well and has good disturbance rejection performance.
Keywords: proportion integral-type active disturbance rejection generalized predictive control; grey wolf optimization; parameter tuning; distillation; process control; prediction
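As a reference for how such a tuner searches the parameter space, here is a minimal grey wolf optimizer over a generic cost function. The distillation model, the PI-ADRGPC closed-loop cost and the parameter bounds are not from the paper and would have to be supplied; the quadratic cost below is only a placeholder.

```python
import numpy as np

# Minimal grey wolf optimizer (GWO) for tuning a controller-parameter vector.
# In the paper's setting, `cost` would run the PI-ADRGPC closed loop and
# return a tracking/disturbance-rejection index; here it is a placeholder.

def gwo(cost, lb, ub, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    fitness = np.array([cost(w) for w in wolves])
    for it in range(n_iter):
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best wolves
        a = 2.0 * (1 - it / n_iter)                            # decreases from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new += (leader - A * D) / 3.0                  # average of the three pulls
            wolves[i] = np.clip(new, lb, ub)
            fitness[i] = cost(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], float(fitness[best])

best_params, best_cost = gwo(lambda x: float(np.sum((x - 1.5) ** 2)),
                             lb=[0, 0, 0], ub=[5, 5, 5])
print(best_params, best_cost)
```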
8. A Novel Tuning Method for Predictive Control of VAV Air Conditioning System Based on Machine Learning and Improved PSO
Authors: Ning He, Kun Xi, Mengrui Zhang, Shang Li. Journal of Beijing Institute of Technology, EI CAS, 2022, No. 4, pp. 350-361.
The variable air volume (VAV) air conditioning system exhibits strong coupling and large time delay, for which model predictive control (MPC) is normally used to pursue performance improvement. To address the difficulty of selecting VAV MPC controller parameters that give the system a desired response, a novel tuning method based on machine learning and improved particle swarm optimization (PSO) is proposed. In this method, the relationship between the MPC controller parameters and time-domain performance indices is established via machine learning. PSO is then used to optimize the MPC controller parameters to obtain better performance in terms of the time-domain indices. In addition, the PSO algorithm is further modified under the principles of population attenuation and event triggering to tune the MPC parameters and reduce the computation time of the tuning method. Finally, the effectiveness of the proposed method is validated via a hardware-in-the-loop VAV system.
Keywords: model predictive control (MPC); parameter tuning; machine learning; improved particle swarm optimization (PSO)
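The learned mapping from MPC parameters to time-domain indices is sketched here with a generic regressor on placeholder data; in the paper the training pairs would come from closed-loop simulations or experiments of the VAV system, and an improved PSO (rather than the direct evaluation shown) would then search the surrogate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Surrogate-based tuning sketch: learn (MPC parameters) -> (time-domain indices),
# then minimize a weighted cost on the surrogate. All data here are placeholders.

rng = np.random.default_rng(0)
params = rng.uniform([5, 1, 0.1], [50, 20, 10.0], size=(200, 3))   # e.g. horizons, weight
indices = rng.uniform(size=(200, 2))                               # e.g. overshoot, settling time

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(params, indices)

def surrogate_cost(p):
    overshoot, settling = surrogate.predict(p.reshape(1, -1))[0]
    return 0.5 * overshoot + 0.5 * settling        # weighted time-domain cost

# A PSO (or any optimizer) would now minimize `surrogate_cost` over the bounds.
print(surrogate_cost(np.array([20.0, 5.0, 1.0])))
```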
9. Adaptive Parallel Particle Swarm Optimization Algorithm Based on Dynamic Exchange of Control Parameters
Authors: Masaaki Suzuki. American Journal of Operations Research, 2016, No. 5, pp. 401-413.
Updating the velocity in particle swarm optimization (PSO) consists of three terms: the inertia term, the cognitive term and the social term. The balance of these terms determines the balance of the global and local search abilities, and therefore the performance of PSO. In this work, an adaptive parallel PSO algorithm, based on the dynamic exchange of control parameters between adjacent swarms, has been developed. The proposed PSO algorithm adaptively optimizes the inertia factors, learning factors and swarm activity. Simulations searching for the global minimum of a benchmark multimodal function show that the proposed PSO provides appropriate control parameter values and thus good global optimization performance.
Keywords: swarm intelligence; particle swarm optimization; global optimization; metaheuristics; adaptive parameter tuning
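The three-term velocity update the abstract refers to is the standard one shown below; the adaptive inter-swarm exchange of the control parameters described in the paper is not reproduced, so w, c1 and c2 are simply typical fixed values.

```python
import numpy as np

# Standard PSO velocity update: inertia + cognitive + social terms.
# w, c1, c2 are typical fixed values here; the paper adapts and exchanges
# them dynamically between parallel swarms.

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = (w * v                        # inertia term
             + c1 * r1 * (pbest - x)      # cognitive term (particle's own best)
             + c2 * r2 * (gbest - x))     # social term (swarm best)
    return x + v_new, v_new
```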
10. The Analysis of Peculiar Control Parameters of Artificial Bee Colony Algorithm on the Numerical Optimization Problems
Authors: Mustafa Servet Kiran, Mesut Gündüz. Journal of Computer and Communications, 2014, No. 4, pp. 127-136.
Artificial bee colony (ABC) is one of the popular swarm intelligence algorithms. ABC was developed in 2005, inspired by the foraging and waggle-dance behaviors of real bee colonies. Since its invention, many ABC variants have been proposed to solve different optimization problems. In all the models proposed, only one scout bee and a constant limit value are used as control parameters for the bee population. In this study, the performance of the ABC algorithm on numerical optimization problems is analyzed using different numbers of scout bees and different limit values. Experimental results show that using more than one scout bee and different limit values yields better results than the basic ABC. Therefore, the control parameters of the basic ABC should be tuned according to the given class of optimization problems. In this paper, we propose reasonable value ranges of the control parameters for the basic ABC in order to obtain better results on numerical optimization problems.
Keywords: artificial bee colony; effects of the parameters; parameter tuning; number of scout bees; limit value
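To make the two control parameters concrete, the fragment below sketches only the ABC scout phase with a configurable limit value and a configurable number of scout bees per cycle. The employed and onlooker phases and the objective are assumed to exist elsewhere, and the variable names are illustrative, not the paper's.

```python
import numpy as np

# Scout phase of ABC with the two control parameters studied in the paper:
# `limit` (abandonment threshold) and `n_scouts` (sources re-initialized per cycle).
# foods, trials, fitness are NumPy arrays maintained by the surrounding ABC loop.

def scout_phase(foods, trials, fitness, objective, lb, ub, limit=100, n_scouts=1, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    exhausted = np.where(trials > limit)[0]                  # sources that stagnated
    for idx in exhausted[np.argsort(-trials[exhausted])][:n_scouts]:
        foods[idx] = rng.uniform(lb, ub)                     # scout: random re-initialization
        fitness[idx] = objective(foods[idx])
        trials[idx] = 0
    return foods, trials, fitness
```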
11. Squirrel Search Optimization with Deep Convolutional Neural Network for Human Pose Estimation (Cited: 1)
Authors: K. Ishwarya, A. Alice Nithya. Computers, Materials & Continua, SCIE EI, 2023, No. 3, pp. 6081-6099.
Human pose estimation (HPE) is a procedure for determining the structure of the body pose, and it is considered a challenging issue in the computer vision (CV) community. HPE finds applications in several fields, namely activity recognition and human-computer interfaces. Despite its benefits, HPE remains challenging due to variations in visual appearance, lighting, occlusions, dimensionality, etc. To resolve these issues, this paper presents a squirrel search optimization with a deep convolutional neural network for HPE (SSDCNN-HPE) technique. The major intention of the SSDCNN-HPE technique is to identify the human pose accurately and efficiently. Primarily, the video frame conversion process is performed and pre-processing takes place via a bilateral-filtering-based noise removal process. Then, the EfficientNet model is applied to identify the body points of a person with no problem constraints. Besides, the hyperparameter tuning of the EfficientNet model takes place by the use of the squirrel search algorithm (SSA). In the final stage, the multiclass support vector machine (M-SVM) technique is utilized for the identification and classification of human poses. The design of bilateral filtering followed by the SSA-based EfficientNet model for HPE depicts the novelty of the work. To demonstrate the enhanced outcomes of the SSDCNN-HPE approach, a series of simulations is executed. The experimental results report the betterment of the SSDCNN-HPE system over recent existing techniques in terms of different measures.
Keywords: parameter tuning; human pose estimation; deep learning; squirrel search algorithm; activity recognition
12. Task Offloading and Resource Allocation in IoT Based Mobile Edge Computing Using Deep Learning (Cited: 1)
Authors: Ilyas Abdullaev, Natalia Prodanova, K. Aruna Bhaskar, E. Laxmi Lydia, Seifedine Kadry, Jungeun Kim. Computers, Materials & Continua, SCIE EI, 2023, No. 8, pp. 1463-1477.
Recently, computation offloading has become an effective method for overcoming the constraints of a mobile device (MD) by offloading computation-intensive and delay-sensitive application tasks to a remote cloud-based data center. Smart cities benefit from offloading to edge points. Consider a mobile edge computing (MEC) network spanning multiple regions, comprising N MDs and many access points, in which every MD has M independent real-time tasks. This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision that minimizes the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms the existing models in reducing client overhead in MEC systems, with a maximum reward of 0.8967.
Keywords: mobile edge computing; seagull optimization; deep belief network; resource management; parameter tuning
13. Hyperspectral Remote Sensing Image Classification Using Improved Metaheuristic with Deep Learning (Cited: 1)
Authors: S. Rajalakshmi, S. Nalini, Ahmed Alkhayyat, Rami Q. Malik. Computer Systems Science & Engineering, SCIE EI, 2023, No. 8, pp. 1673-1688.
Remote sensing image (RSI) classification plays a vital role in earth observation technology, and remote sensing (RS) data are extensively exploited in both military and civil fields. More recently, as novel deep learning (DL) approaches develop, techniques for RSI classification with DL have attained important breakthroughs, providing a new opportunity for the research and development of RSI classifiers. This study introduces an Improved Slime Mould Optimization with a graph convolutional network for hyperspectral remote sensing image classification (ISMOGCN-HRSC) model. The ISMOGCN-HRSC model concentrates on identifying and classifying distinct kinds of RSIs. In the presented ISMOGCN-HRSC model, the synergic deep learning (SDL) model is exploited to produce feature vectors. The GCN model is utilized for image classification to identify the proper class labels of the RSIs. The ISMO algorithm, derived by integrating chaotic concepts into the SMO algorithm, is used to enhance the classification efficiency of the GCN method. The experimental assessment of the ISMOGCN-HRSC method is conducted using a benchmark dataset.
Keywords: deep learning; remote sensing images; image classification; slime mould optimization; parameter tuning
14. Automated Video-Based Face Detection Using Harris Hawks Optimization with Deep Learning
Authors: Latifah Almuqren, Manar Ahmed Hamza, Abdullah Mohamed, Amgad Atta Abdelmageed. Computers, Materials & Continua, SCIE EI, 2023, No. 6, pp. 4917-4933.
Face recognition technology automatically identifies an individual from image or video sources. The detection process can be done by attaining facial characteristics from the image of a subject's face. Recent developments in deep learning (DL) and computer vision (CV) techniques enable the design of automated face recognition and tracking methods. This study presents a novel Harris Hawks Optimization with deep learning-empowered automated face detection and tracking (HHODL-AFDT) method. The proposed HHODL-AFDT model involves a Faster region-based convolutional neural network (RCNN) face detection model and an HHO-based hyperparameter optimization process. The presented optimal Faster RCNN model precisely recognizes the face and is passed into the face-tracking model using a regression network (REGN). The face tracking using the REGN model uses the features from neighboring frames and foresees the location of the target face in succeeding frames. The application of the HHO algorithm for optimal hyperparameter selection shows the novelty of the work. The experimental validation of the presented HHODL-AFDT algorithm is conducted using two datasets, and the outcomes highlight the superior performance of the HHODL-AFDT model over current methodologies, with maximum accuracies of 90.60% and 88.08% on the PICS and VTB datasets, respectively.
Keywords: face detection; face tracking; deep learning; computer vision; video surveillance; parameter tuning
15. Automated Deep Learning Driven Crop Classification on Hyperspectral Remote Sensing Images
Authors: Mesfer Al Duhayyim, Hadeel Alsolai, Siwar Ben Haj Hassine, Jaber S. Alzahrani, Ahmed S. Salama, Abdelwahed Motwakel, Ishfaq Yaseen, Abu Sarwar Zamani. Computers, Materials & Continua, SCIE EI, 2023, No. 2, pp. 3167-3181.
Hyperspectral remote sensing/imaging spectroscopy is a novel approach to obtaining a spectrum from every location in a large array of spatial positions, so that several spectral wavelengths are utilized for making coherent images. Hyperspectral remote sensing involves the acquisition of digital images in several narrow, contiguous spectral bands throughout the visible, Thermal Infrared (TIR), Near Infrared (NIR), and Mid-Infrared (MIR) regions of the electromagnetic spectrum. For application to agricultural regions, remote sensing approaches are studied and executed for their benefit of continuous and quantitative monitoring. In particular, hyperspectral images (HSI) are considered precise for agriculture as they can offer chemical and physical data on vegetation. With this motivation, this article presents a novel Hurricane Optimization Algorithm with Deep Transfer Learning Driven Crop Classification (HOADTL-CC) model on hyperspectral remote sensing images. The presented HOADTL-CC model focuses on the identification and categorization of crops on hyperspectral remote sensing images. To accomplish this, the presented HOADTL-CC model involves the design of HOA with a capsule network (CapsNet) model for generating a set of useful feature vectors. Besides, the Elman neural network (ENN) model is applied to allot proper class labels to the input HSI. Finally, the glowworm swarm optimization (GSO) algorithm is exploited to fine-tune the ENN parameters involved in this article. The experimental results of the HOADTL-CC method are tested with the help of a benchmark dataset and assessed under distinct aspects. Extensive comparative studies show the enhanced performance of the HOADTL-CC model over recent approaches, with a maximum accuracy of 99.51%.
Keywords: hyperspectral images; remote sensing; deep learning; hurricane optimization algorithm; crop classification; parameter tuning
16. Automated Arabic Text Classification Using Hyperparameter Tuned Hybrid Deep Learning Model
Authors: Badriyya B. Al-onazi, Saud S. Alotaibi, Saeed Masoud Alshahrani, Najm Alotaibi, Mrim M. Alnfiai, Ahmed S. Salama, Manar Ahmed Hamza. Computers, Materials & Continua, SCIE EI, 2023, No. 3, pp. 5447-5465.
The text classification process has been extensively investigated in various languages, especially English. Text classification models are vital in several Natural Language Processing (NLP) applications. The Arabic language has a lot of significance; for instance, it is the fourth most used language on the internet and the sixth official language of the United Nations. However, there are few studies on the text classification process in Arabic, and only a few such studies have been published earlier. In general, researchers face two challenges in the Arabic text classification process: low accuracy and high dimensionality of the features. In this study, an Automated Arabic Text Classification using Hyperparameter Tuned Hybrid Deep Learning (AATC-HTHDL) model is proposed. The major goal of the proposed AATC-HTHDL method is to identify different class labels for Arabic text. The first step in the proposed model is to pre-process the input data to transform it into a useful format. The Term Frequency-Inverse Document Frequency (TF-IDF) model is applied to extract the feature vectors. Next, the Convolutional Neural Network with Recurrent Neural Network (CRNN) model is utilized to classify the Arabic text. In the final stage, the Crow Search Algorithm (CSA) is applied to fine-tune the CRNN model's hyperparameters, showing the work's novelty. The proposed AATC-HTHDL model was experimentally validated under different parameters, and the outcomes established the supremacy of the proposed AATC-HTHDL model over other approaches.
Keywords: hybrid deep learning; natural language processing; Arabic language; text classification; parameter tuning
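As a small illustration of the TF-IDF step only (the CRNN classifier and the crow search tuning are not shown, and the sample documents are placeholders), one common way to extract such feature vectors is:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder Arabic documents; in the paper these are the pre-processed corpus.
docs = ["النص الأول بعد المعالجة", "النص الثاني بعد المعالجة"]

vectorizer = TfidfVectorizer()         # term frequency-inverse document frequency
X = vectorizer.fit_transform(docs)     # sparse document-term matrix fed to the classifier
print(X.shape)
```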
17. Chimp Optimization Algorithm Based Feature Selection with Machine Learning for Medical Data Classification
Authors: Firas Abedi, Hayder M. A. Ghanimi, Abeer D. Algarni, Naglaa F. Soliman, Walid El-Shafai, Ali Hashim Abbas, Zahraa H. Kareem, Hussein Muhi Hariz, Ahmed Alkhayyat. Computer Systems Science & Engineering, SCIE EI, 2023, No. 12, pp. 2791-2814.
Data mining plays a crucial role in extracting meaningful knowledge from large-scale data repositories, such as data warehouses and databases. Association rule mining, a fundamental process in data mining, involves discovering correlations, patterns, and causal structures within datasets. In the healthcare domain, association rules offer valuable opportunities for building knowledge bases, enabling intelligent diagnoses, and extracting invaluable information rapidly. This paper presents a novel approach called the Machine Learning based Association Rule Mining and Classification for Healthcare Data Management System (MLARMC-HDMS). The MLARMC-HDMS technique integrates classification and association rule mining (ARM) processes. Initially, the chimp optimization algorithm-based feature selection (COAFS) technique is employed within MLARMC-HDMS to select relevant attributes; inspired by the foraging behavior of chimpanzees, the COA algorithm mimics their search strategy for food. Subsequently, the classification process utilizes stochastic gradient descent with a multilayer perceptron (SGD-MLP) model, while the Apriori algorithm determines attribute relationships. We propose a COA-based feature selection approach for medical data classification using machine learning techniques, which selects pertinent features from medical datasets through COA and trains machine learning models on the reduced feature set. We evaluate the performance of our approach on various medical datasets employing diverse machine learning classifiers. Experimental results demonstrate that the proposed approach surpasses alternative feature selection methods, achieving higher accuracy and precision in medical data classification tasks, and showcase its effectiveness and efficiency in identifying relevant features, thereby enhancing the diagnosis and treatment of various diseases. To provide further validation, detailed experiments on a benchmark medical dataset reveal the superiority of the MLARMC-HDMS model over other methods, with a maximum accuracy of 99.75%. This research therefore contributes to the advancement of feature selection techniques in medical data classification and highlights the potential for improving healthcare outcomes through accurate and efficient data analysis. The presented MLARMC-HDMS framework and COA-based feature selection approach offer valuable insights for researchers and practitioners working in healthcare data mining and machine learning.
Keywords: association rule mining; data classification; healthcare data; machine learning; parameter tuning; data mining; feature selection; MLARMC-HDMS; COA; stochastic gradient descent; Apriori algorithm
18. Optimal Sparse Autoencoder Based Sleep Stage Classification Using Biomedical Signals
Authors: Ashit Kumar Dutta, Yasser Albagory, Manal Al Faraj, Yasir A. M. Eltahir, Abdul Rahaman Wahab Sait. Computer Systems Science & Engineering, SCIE EI, 2023, No. 2, pp. 1517-1529.
Recently developed machine learning (ML) models can achieve high detection rates using biomedical signals. Therefore, this article develops an Optimal Sparse Autoencoder based Sleep Stage Classification Model on Electroencephalography (EEG) Biomedical Signals, named the OSAE-SSCEEG technique. The major intention of the OSAE-SSCEEG technique is to find sleep stage disorders using the EEG biomedical signals. The OSAE-SSCEEG technique primarily undergoes pre-processing using a min-max data normalization approach. Moreover, the classification of sleep stages takes place using the Sparse Autoencoder with Smoothed Regularization (SAE-SR) with softmax (SM) approach. Finally, the parameter optimization of the SAE-SR technique is carried out by the use of the Coyote Optimization Algorithm (COA), leading to boosted classification efficiency. In order to ensure the enhanced performance of the OSAE-SSCEEG technique, a wide-ranging simulation analysis is performed, and the obtained results demonstrate the betterment of the OSAE-SSCEEG technique over recent methods.
Keywords: biomedical signals; EEG; sleep stage classification; machine learning; autoencoder; softmax; parameter tuning
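The min-max normalization step mentioned in the abstract is the standard rescaling below; the EEG array here is synthetic and the per-channel axis choice is an assumption.

```python
import numpy as np

# Standard min-max normalization to [0, 1], applied per EEG channel (axis choice assumed).
def min_max_normalize(x, axis=-1, eps=1e-12):
    x_min = x.min(axis=axis, keepdims=True)
    x_max = x.max(axis=axis, keepdims=True)
    return (x - x_min) / (x_max - x_min + eps)   # eps avoids division by zero on flat channels

eeg = np.random.randn(4, 3000)                   # 4 synthetic channels, 3000 samples
scaled = min_max_normalize(eeg)
print(scaled.min(), scaled.max())
```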
19. Red Deer Optimization with Artificial Intelligence Enabled Image Captioning System for Visually Impaired People
Authors: Anwer Mustafa Hilal, Fadwa Alrowais, Fahd N. Al-Wesabi, Radwa Marzouk. Computer Systems Science & Engineering, SCIE EI, 2023, No. 8, pp. 1929-1945.
The problem of producing a natural language description of an image's visual content has gained more attention in natural language processing (NLP) and computer vision (CV). It can be driven by applications like image retrieval or indexing, virtual assistants, image understanding, and support of visually impaired people (VIP). Though VIP use other senses, such as touch and hearing, for recognizing objects and events, their quality of life is lower than the standard level. Automatic image captioning generates captions that can be read aloud to the VIP, conveying what is happening around them. This article introduces a Red Deer Optimization with Artificial Intelligence Enabled Image Captioning System (RDOAI-ICS) for visually impaired people. The presented RDOAI-ICS technique aids in generating image captions for VIP. It utilizes a neural architecture search network (NASNet) model to produce image representations and uses the radial basis function neural network (RBFNN) method to generate a textual description. To enhance the performance of the RDOAI-ICS method, the parameter optimization process takes place using the RDO algorithm for NASNet and the butterfly optimization algorithm (BOA) for the RBFNN model, showing the novelty of the work. The experimental evaluation of the RDOAI-ICS method is carried out using a benchmark dataset, and the outcomes show the enhancements of the RDOAI-ICS method over other recent image captioning approaches.
Keywords: machine learning; image captioning; visually impaired people; parameter tuning; artificial intelligence; metaheuristics
20. Stacked Gated Recurrent Unit Classifier with CT Images for Liver Cancer Classification
Authors: Mahmoud Ragab, Jaber Alyami. Computer Systems Science & Engineering, SCIE EI, 2023, No. 3, pp. 2309-2322.
Liver cancer is one of the major diseases with increased mortality in recent years across the globe. Manual detection of liver cancer is a tedious and laborious task, due to which Computer Aided Diagnosis (CAD) models have been developed to detect the presence of liver cancer accurately and classify its stages. Besides, the liver cancer segmentation outcome, obtained from medical images, is employed in the assessment of tumor volume, further treatment planning, and response monitoring. Hence, there is a need to develop automated tools for precise liver cancer detection. With this motivation, the current study introduces an Intelligent Artificial Intelligence with Equilibrium Optimizer based Liver cancer Classification (IAIEO-LCC) model. The proposed IAIEO-LCC technique initially performs Median Filtering (MF)-based pre-processing and a data augmentation process. Besides, Kapur's entropy-based segmentation technique is used to identify the affected regions in the liver. Moreover, a VGG-19 based feature extractor and an Equilibrium Optimizer (EO)-based hyperparameter tuning process are involved to derive the feature vectors. At last, the Stacked Gated Recurrent Unit (SGRU) classifier is exploited to detect and classify liver cancer effectively. To demonstrate the superior performance of the proposed IAIEO-LCC technique, a wide range of simulations was conducted and the results were inspected under different measures. The comparison study results infer that the proposed IAIEO-LCC technique achieved an improved accuracy of 98.52%.
Keywords: liver cancer; image segmentation; artificial intelligence; deep learning; CT images; parameter tuning