Journal Articles
951 articles found.
1. Swarm-Based Extreme Learning Machine Models for Global Optimization
Authors: Mustafa Abdul Salam, Ahmad Taher Azar, Rana Hussien. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 6339-6363 (25 pages).
Extreme Learning Machine (ELM) is popular in batch learning, sequential learning, and progressive learning due to its speed, easy integration, and generalization ability. However, the traditional ELM cannot train massive data rapidly and efficiently because of its memory residency and high time and space complexity. In ELM, the hidden layer typically requires a huge number of nodes, and there is no certainty that the arrangement of weights and biases within the hidden layer is optimal. To solve this problem, the traditional ELM has been hybridized with swarm intelligence optimization techniques. This paper presents five hybrid algorithms: Salp Swarm Algorithm (SSA-ELM), Grasshopper Algorithm (GOA-ELM), Grey Wolf Algorithm (GWO-ELM), Whale Optimization Algorithm (WOA-ELM), and Moth Flame Optimization (MFO-ELM). These five optimizers are hybridized with the standard ELM methodology to solve tumor-type classification using gene expression data. The proposed models are also applied to the prediction of electricity loading data, which describes the energy use of a single residence over a four-year period. In the hidden layer, the swarm algorithms are used to pick a smaller number of nodes to speed up the execution of ELM, and they compute the best weights and biases for the hidden layer. Experimental results demonstrate that the proposed MFO-ELM achieved 98.13% accuracy, the highest among the models on the tumor-type classification gene expression data, while in prediction the proposed GOA-ELM achieved an RMSE of 0.397, the lowest compared with the other models.
Keywords: extreme learning machine; salp swarm optimization algorithm; grasshopper optimization algorithm; grey wolf optimization algorithm; moth flame optimization algorithm; bio-inspired optimization; classification model; whale optimization algorithm
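For readers unfamiliar with the core ELM procedure these entries build on (hidden-layer weights drawn at random, output weights obtained by a pseudo-inverse solve), a minimal NumPy sketch follows. It is illustrative only, not code from any of the listed papers; the tanh activation and the function names are assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden, seed=0):
    # Hidden-layer weights and biases are drawn at random and never updated.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y     # output weights via Moore-Penrose pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The swarm-hybrid variants surveyed in these entries replace the random draw of `W` and `b` with positions found by an optimizer, which is also how they can shrink the hidden layer.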
2. Improved PSO-Extreme Learning Machine Algorithm for Indoor Localization
Authors: Qiu Wanqing, Zhang Qingmiao, Zhao Junhui, Yang Lihua. China Communications (SCIE, CSCD), 2024, Issue 5, pp. 113-122 (10 pages).
WiFi fingerprinting localization has been a hot topic in indoor positioning because of its universality and location-related features. The basic assumption of fingerprinting localization is that the received signal strength indication (RSSI) distance accords with the physical distance, so efficiently matching the current RSSI of the user with the RSSI in the fingerprint database is the key to achieving high-accuracy localization. In this paper, a particle swarm optimization-extreme learning machine (PSO-ELM) algorithm is proposed on the basis of the original fingerprinting localization. Firstly, we collect the RSSI of the experimental area to construct the fingerprint database, and the ELM algorithm is applied in the online stage to determine the corresponding relation between the location of the terminal and the RSSI it receives. Secondly, the PSO algorithm is used to improve the biases and weights of the ELM neural network, and the globally optimal results are obtained. Finally, extensive simulation results are presented, showing that the proposed algorithm can effectively reduce the mean localization error and improve positioning accuracy compared with the K-Nearest Neighbor (KNN), K-means, and Back-Propagation (BP) algorithms.
Keywords: extreme learning machine; fingerprinting localization; indoor localization; machine learning; particle swarm optimization
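The PSO step used above to tune the ELM weights and biases can be sketched generically. This is a textbook global-best PSO minimizing an arbitrary objective, not the authors' implementation; the inertia and acceleration coefficients (0.7 and 1.5) and the initialization range are common defaults assumed here.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```

To tune an ELM as in the entry above, the particle vector would encode the flattened hidden weights and biases, with validation error as the objective.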
3. Aero-engine Thrust Estimation Based on Ensemble of Improved Wavelet Extreme Learning Machine (Cited: 3)
Authors: Zhou Jun, Zhang Tianhong. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2018, Issue 2, pp. 290-299 (10 pages).
Aero-engine direct thrust control can not only improve thrust control precision but also save operating cost by reducing the reserved design margin and making full use of the engine's potential performance. However, estimating engine thrust accurately is a big challenge. To tackle this problem, this paper proposes an ensemble of improved wavelet extreme learning machines (EW-ELM) for aircraft engine thrust estimation. Extreme learning machine (ELM) has been proved an emerging learning technique with high efficiency. Since the combination of ELM and wavelet theory retains the excellent properties of both, wavelet activation functions are used in the hidden nodes to enhance the ability to handle non-linearity. Besides, as the original ELM may suffer ill-conditioning and robustness problems due to the random determination of the hidden-node parameters, the particle swarm optimization (PSO) algorithm is adopted to select the input weights and hidden biases. Furthermore, an ensemble of the improved wavelet ELMs is used to construct the relationship between the sensor measurements and thrust. The simulation results verify the effectiveness and efficiency of the developed method and show that aero-engine thrust estimation using EW-ELM can satisfy the requirements of direct thrust control in terms of estimation accuracy and computation time.
Keywords: aero-engine; thrust estimation; wavelet; extreme learning machine; particle swarm optimization; neural network; ensemble
4. A Double-Weighted Deterministic Extreme Learning Machine Based on Sparse Denoising Autoencoder and Its Applications
Authors: Liang Luo, Bolin Liao, Cheng Hua, Rongbo Lu. Journal of Computer and Communications, 2022, Issue 11, pp. 138-153 (16 pages).
Extreme learning machine (ELM) is a feedforward neural network-based machine learning method with short training time and strong generalization capability, and it does not fall into local minima. However, because of the traditional ELM's shallow architecture, it requires a large number of hidden nodes to ensure its classification performance when dealing with high-dimensional data sets. On the other hand, its classification performance degrades easily in the face of interference from noisy data. To address these problems, this paper proposes a double pseudo-inverse extreme learning machine (DPELM) based on a sparse denoising autoencoder (SDAE), namely SDAE-DPELM. The algorithm can directly determine the input weights and output weights of the network by the pseudo-inverse method. As a result, it requires only a few hidden-layer nodes to produce superior classification results, and its combination with SDAE effectively improves classification performance and noise resistance. Extensive numerical experiments show that the algorithm has high classification accuracy and good robustness when dealing with both high-dimensional noisy data and high-dimensional noiseless data. Furthermore, applying the algorithm to Miao character recognition substantiates its excellent performance and further illustrates its practicability.
Keywords: extreme learning machine; sparse denoising autoencoder; pseudo-inverse method; Miao character recognition
5. Constrained Voting Extreme Learning Machine and Its Application (Cited: 5)
Authors: MIN Mengcan, CHEN Xiaofang, XIE Yongfang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2021, Issue 1, pp. 209-219 (11 pages).
Extreme learning machine (ELM) has been proved an effective pattern classification and regression learning mechanism. However, its good performance relies on a large number of hidden-layer nodes, and the computation cost grows greatly as the number of hidden nodes increases. In this paper, we propose a novel algorithm named constrained voting extreme learning machine (CV-ELM). Compared with the traditional ELM, CV-ELM determines the input weights and biases based on the differences of between-class samples. At the same time, voting selection is introduced to improve the accuracy of the proposed method. The proposed method is evaluated on public benchmark datasets, and the experimental results show that it is superior to the original ELM algorithm. Further, we apply CV-ELM to the classification of the superheat degree (SD) state in the aluminum electrolysis industry, where the recognition accuracy reaches 87.4%, demonstrating that the proposed method is more robust than existing state-of-the-art identification methods.
Keywords: extreme learning machine (ELM); majority voting; ensemble method; sample-based learning; superheat degree (SD)
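The voting step in ensembles such as CV-ELM aggregates the class predictions of several independently trained ELMs. A minimal majority-vote sketch follows; the helper name is hypothetical and this is not the paper's code.

```python
import numpy as np

def majority_vote(predictions):
    # predictions: (n_models, n_samples) array of integer class labels.
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (one bincount per column).
    counts = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    # Ties resolve toward the lower class index (argmax convention).
    return counts.argmax(axis=0)
```

Each row of `predictions` would come from one ELM in the ensemble; the returned vector is the ensemble's per-sample decision.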
6. Extreme Learning with Chemical Reaction Optimization for Stock Volatility Prediction (Cited: 2)
Authors: Sarat Chandra Nayak, Bijan Bihari Misra. Financial Innovation, 2020, Issue 1, pp. 290-312 (23 pages).
Extreme learning machine (ELM) allows for fast learning and better generalization performance than conventional gradient-based learning. However, the possible inclusion of non-optimal weights and biases due to random selection, and the need for more hidden neurons, adversely influence network usability. Further, choosing the optimal number of hidden nodes for a network usually requires intensive human intervention, which may lead to an ill-conditioned situation. In this context, chemical reaction optimization (CRO) is a meta-heuristic paradigm with increasing success in a large number of application areas; it is characterized by faster convergence and requires fewer tunable parameters. This study develops a learning framework combining the advantages of ELM and CRO, called extreme learning with chemical reaction optimization (ELCRO). ELCRO simultaneously optimizes the weight and bias vector and the number of hidden neurons of a single-layer feed-forward neural network without compromising prediction accuracy. We evaluate its performance by predicting the daily volatility and closing prices of BSE indices, and compare it with three other similarly developed models (ELM based on particle swarm optimization, genetic algorithm, and gradient descent), finding the performance of the proposed algorithm superior. Wilcoxon signed-rank and Diebold-Mariano tests are then conducted to verify the statistical significance of the proposed model. Hence, this model can be used as a promising tool for financial forecasting.
Keywords: extreme learning machine; single-layer feed-forward network; artificial chemical reaction optimization; stock volatility prediction; financial time series forecasting; artificial neural network; genetic algorithm; particle swarm optimization
7. Power Transformer Fault Diagnosis Using Random Forest and Optimized Kernel Extreme Learning Machine (Cited: 1)
Authors: Tusongjiang Kari, Zhiyang He, Aisikaer Rouzi, Ziwei Zhang, Xiaojing Ma, Lin Du. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 7, pp. 691-705 (15 pages).
The power transformer is one of the most crucial devices in the power grid, so it is significant to determine incipient faults of power transformers fast and accurately, and input features play a critical role in fault diagnosis accuracy. To further improve the fault diagnosis performance of power transformers, a random forest feature selection method coupled with an optimized kernel extreme learning machine is presented in this study. Firstly, the random forest feature selection approach is adopted to rank 42 related input features derived from gas concentration, gas ratio, and energy-weighted dissolved gas analysis. Afterwards, a kernel extreme learning machine tuned by the Aquila optimization algorithm is implemented to adjust crucial parameters and select the optimal feature subsets, with diagnosis accuracy used to assess the fault diagnosis capability of the candidate feature subsets. Finally, the optimal feature subsets are applied to establish the fault diagnosis model. According to experimental results based on two public datasets and comparison with 5 conventional approaches, the average accuracy of the proposed method is up to 94.5%, which is superior to that of the other conventional approaches. These results verify that the optimum feature subset obtained by the presented method can dramatically improve power transformer fault diagnosis accuracy.
Keywords: power transformer fault diagnosis; kernel extreme learning machine; Aquila optimization; random forest
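The kernel ELM at the heart of this diagnosis model replaces the random hidden layer with a kernel matrix over the training samples; the output weights then come from a regularized linear solve. A hedged NumPy sketch with an RBF kernel follows (the parameter names `C` and `gamma` and the function names are assumptions, not the paper's code).

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, Y, C=100.0, gamma=1.0):
    # Solve (K + I/C) alpha = Y; 1/C acts as the ridge regularizer.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, Y)

def kelm_predict(Xnew, X, alpha, gamma=1.0):
    # Kernel evaluated between new samples and the training set.
    return rbf_kernel(Xnew, X, gamma) @ alpha
```

Optimizers such as the Aquila algorithm mentioned above typically search over `C` and `gamma` to maximize cross-validated accuracy.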
8. State of Health Estimation for Lithium-Ion Battery Based on Particle Swarm Optimization Algorithm and Extreme Learning Machine
Authors: Kui Chen, Jiali Li, Kai Liu, Changshan Bai, Jiamin Zhu, Guoqiang Gao, Guangning Wu, Salah Laghrouche. Green Energy and Intelligent Transportation, 2024, Issue 1, pp. 46-54 (9 pages).
Lithium-ion battery state of health (SOH) estimation is an essential issue in battery management systems. To better estimate battery SOH, an Extreme Learning Machine (ELM) is used to establish a model for estimating lithium-ion battery SOH, and the Particle Swarm Optimization (PSO) algorithm is used to automatically adjust and optimize the parameters of the ELM to improve estimation accuracy. Firstly, cyclic aging data of the battery are collected, five characteristic quantities related to battery capacity are extracted from the battery charging curve and the incremental capacity curve, and the Grey Relational Analysis (GRA) method is used to analyze the correlation between battery capacity and the five characteristic quantities. Then, an ELM is used to build the capacity estimation model of the lithium-ion battery based on the five characteristics, and PSO is introduced to optimize the parameters of the capacity estimation model. The proposed method is validated by degradation experiments of the lithium-ion battery under different conditions. The results show that the battery capacity estimation model based on ELM and PSO has better accuracy and stability in capacity estimation, with an average absolute percentage error of less than 1%.
Keywords: lithium-ion battery; state of health estimation; grey relational analysis method; particle swarm optimization algorithm; extreme learning machine
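The grey relational analysis step described above scores how closely each candidate feature sequence tracks the capacity sequence. A compact sketch of the standard GRA grade computation follows (rho = 0.5 is the customary distinguishing coefficient; this is a generic textbook formulation, not the authors' code, and it assumes non-constant sequences).

```python
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    # Min-max normalize every sequence (reference and candidates) row-wise.
    data = np.vstack([reference, candidates]).astype(float)
    data = (data - data.min(axis=1, keepdims=True)) / np.ptp(data, axis=1, keepdims=True)
    # Absolute deviations of each candidate from the reference sequence.
    delta = np.abs(data[1:] - data[0])
    # Grey relational coefficients, then the grade as their mean per candidate.
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean(axis=1)
```

A grade near 1 indicates a feature that moves almost in lockstep with capacity, which is how the five characteristic quantities would be vetted.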
9. Software Defect Prediction Based on Stacked Contractive Autoencoder and Multi-Objective Optimization (Cited: 2)
Authors: Nana Zhang, Kun Zhu, Shi Ying, Xu Wang. Computers, Materials & Continua (SCIE, EI), 2020, Issue 10, pp. 279-308 (30 pages).
Software defect prediction plays an important role in software quality assurance. However, the performance of a prediction model is susceptible to irrelevant and redundant features. In addition, previous studies mostly regard software defect prediction as a single-objective optimization problem, and multi-objective software defect prediction has not been thoroughly investigated. For these two reasons, we propose the following solutions in this paper: (1) we leverage an advanced deep neural network, the Stacked Contractive AutoEncoder (SCAE), to extract robust deep semantic features from the original defect features, with stronger discrimination capacity for the two classes (defective or non-defective); (2) we propose a novel multi-objective defect prediction model named SMONGE that utilizes the multi-objective NSGA-II algorithm to optimize an extreme learning machine (ELM) based on state-of-the-art Pareto optimal solutions, according to the features extracted by SCAE. We mainly consider two objectives: one is to maximize the performance of the ELM, which refers to the benefit of the SMONGE model; the other is to minimize the output weight norm of the ELM, which is related to the cost of the SMONGE model. We compare SCAE with six state-of-the-art feature extraction methods, and compare the SMONGE model with multiple baseline models comprising four classic defect predictors and the MONGE model without SCAE, across 20 open-source software projects. The experimental results verify the superiority of SCAE and SMONGE on seven evaluation metrics.
Keywords: software defect prediction; deep neural network; stacked contractive autoencoder; multi-objective optimization; extreme learning machine
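Multi-objective optimizers such as NSGA-II rank candidate solutions by Pareto dominance over the two objectives above (maximizing ELM performance versus minimizing the output weight norm, the latter typically recast as two minimizations). A minimal non-dominated filter, assuming both objectives are minimized, is sketched below; it is illustrative, not the SMONGE code.

```python
import numpy as np

def pareto_front(costs):
    # Boolean mask of non-dominated rows; all objectives are minimized.
    costs = np.asarray(costs, dtype=float)
    nd = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        if nd[i]:
            # Points that row i dominates (<= everywhere, < somewhere) leave the front.
            dominated = np.all(costs[i] <= costs, axis=1) & np.any(costs[i] < costs, axis=1)
            nd[dominated] = False
    return nd
```

NSGA-II additionally sorts the population into successive fronts and breaks ties by crowding distance; this sketch shows only the core dominance test.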
10. A Transfer Learning-Enabled Optimized Extreme Deep Learning Paradigm for Diagnosis of COVID-19 (Cited: 1)
Authors: Ahmed Reda, Sherif Barakat, Amira Rezk. Computers, Materials & Continua (SCIE, EI), 2022, Issue 1, pp. 1381-1399 (19 pages).
Many respiratory infections around the world have been caused by coronaviruses. COVID-19 is one of the most serious due to its rapid spread between people and its low survival rate. There is a high need for computer-assisted diagnostics (CAD) in the area of artificial intelligence to help doctors and radiologists identify COVID-19 patients in cloud systems, and machine learning (ML) has been used to examine chest X-ray frames. In this paper, a new transfer learning-based optimized extreme deep learning paradigm is proposed to classify a chest X-ray picture into one of three classes: pneumonia patient, COVID-19 patient, or normal person. First, three different pre-trained Convolutional Neural Network (CNN) models (resnet18, resnet25, densenet201) are employed for deep feature extraction. Second, each feature vector is passed through the binary Butterfly Optimization Algorithm (bBOA) to reduce redundant features, extract the most representative ones, and enhance the performance of the CNN models. These selected features are then passed to an improved extreme learning machine (ELM) using a BOA to classify the chest X-ray images. The proposed paradigm achieves 99.48% accuracy in detecting COVID-19 cases.
Keywords: butterfly optimization algorithm (BOA); COVID-19; chest X-ray images; convolutional neural network (CNN); extreme learning machine (ELM); feature selection
11. Gas–Solid Reactor Optimization Based on EMMS-DPM Simulation and Machine Learning (Cited: 1)
Authors: Haolei Zhang, Aiqi Zhu, Ji Xu, Wei Ge. Particuology (SCIE, EI, CAS, CSCD), 2024, Issue 6, pp. 131-143 (13 pages).
The design, scaling-up, and optimization of industrial reactors mainly depend on step-by-step experiments and engineering experience, which are usually time-consuming, high cost, and high risk. Although numerical simulation can reproduce high-resolution details of the hydrodynamics, thermal transfer, and reaction processes in reactors, it is still challenging for industrial reactors due to the huge computational cost. In this study, by combining numerical simulation with the artificial intelligence (AI) technology of machine learning (ML), a method is proposed to efficiently predict and optimize the performance of industrial reactors. A gas-solid fluidization reactor for the methanol-to-olefins process is taken as an example. 1500 cases under different conditions are simulated with the coarse-grain discrete particle method based on the Energy-Minimization Multi-Scale (EMMS) model, and the reactor performance data set is thus constructed. To develop an efficient reactor performance prediction model influenced by multiple factors, an ML method is established that includes an ensemble learning strategy and an automatic hyperparameter optimization technique, and it performs better than methods based on the artificial neural network. Furthermore, the operating conditions for the highest yield of ethylene and propylene or the lowest pressure drop are searched with the particle swarm optimization algorithm, owing to its strength in solving non-linear optimization problems. Results show that decreasing the methanol inflow rate and increasing the catalyst inventory maximize the yield, while decreasing the methanol inflow rate and reducing the catalyst inventory minimize the pressure drop. The two objectives are thus conflicting, and practical operations need to be compromised under different circumstances.
Keywords: discrete particle method; artificial intelligence; machine learning; particle swarm optimization; industrial reactor optimization
12. A Data-Driven Rutting Depth Short-Time Prediction Model With Metaheuristic Optimization for Asphalt Pavements Based on RIOHTrack
Authors: Zhuoxuan Li, Iakov Korovin, Xinli Shi, Sergey Gorbachev, Nadezhda Gorbacheva, Wei Huang, Jinde Cao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, Issue 10, pp. 1918-1932 (15 pages).
Rutting of asphalt pavements is a crucial design criterion in various pavement design guides, and a good road transportation base can provide security for the transportation of oil and gas. This study attempts to develop a robust artificial intelligence model that estimates the rutting depth of different asphalt pavements, using rutting depth clips, temperature, and load axes as primary characteristics. The experimental data were obtained from 19 asphalt pavements with different crude oil sources on a 2.038 km long full-scale field accelerated pavement test track (RIOHTrack, Road Track Institute) in Tongzhou, Beijing. In addition, this paper proposes building complex networks from the rutting depths of the different pavements through complex network methods and the Louvain algorithm for community detection, so that the most critical structural elements can be selected from the rutting data and similar structural elements can be found. An extreme learning machine algorithm with residual correction (RELM) is designed and optimized using an independent adaptive particle swarm algorithm. The experimental results of the proposed method are compared with several classical machine learning algorithms; predictions over the 19 asphalt pavements reach an average root mean squared error (RMSE) of 1.742, an average mean absolute error (MAE) of 1.363, and an average mean absolute percentage error (MAPE) of 1.94%. The experiments demonstrate that the RELM algorithm has an advantage over classical machine learning methods in dealing with non-linear problems in road engineering. Notably, the method ensures the adaptation of the simulated environment to different levels of abstraction through cognitive analysis of the production environment parameters. It is a promising alternative method that facilitates the rapid assessment of pavement conditions and could be applied in the future to production processes in the oil and gas industry.
Keywords: extreme learning machine algorithm with residual correction (RELM); metaheuristic optimization; oil-gas transportation; RIOHTrack; rutting depth
13. An Overview of Recently Developed Coupled Simulation Optimization Approaches for Reliability Based Minimum Cost Design of Water Retaining Structures
Authors: Muqdad Al-Juboori, Bithin Datta. Open Journal of Optimization, 2018, Issue 4, pp. 79-112 (34 pages).
This paper reviews several recently developed techniques for the minimum-cost optimal design of water-retaining structures (WRSs) which integrate the effects of seepage, including the incorporation of uncertainty in heterogeneous soil parameter estimates and the quantification of reliability. The review is limited to methods based on coupled simulation-optimization (S-O) models. In this context, the design of WRSs is mainly affected by hydraulic design variables such as seepage quantities, which are difficult to determine from closed-form solutions or approximation theories. An S-O model is built by integrating numerical seepage modeling responses into an optimization algorithm based on efficient surrogate models; the surrogate models (meta-models) are trained on simulated data obtained from finite element numerical code solutions. The proposed methodology is applied using several machine learning techniques and optimization solvers to optimize the design of WRSs by incorporating different design variables and boundary conditions. Additionally, the effects of several scenarios of flow-domain hydraulic conductivity are integrated into the S-O model, and reliability-based optimum design concepts are incorporated to quantify uncertainty in seepage quantities due to uncertainty in hydraulic conductivity estimates. We conclude that the S-O model can efficiently optimize WRS designs, and that ANN, SVM, and GPR machine learning technique-based surrogate models are efficiently and expeditiously incorporated into the S-O models to imitate the numerical responses of simulations of various problems.
Keywords: linked simulation-optimization; water-retaining structures; machine learning technique; reliability-based optimum design; multi-realization optimization model; heterogeneous hydraulic conductivity
14. Deep Kernel Extreme Learning Machine Classifier Based on the Improved Sparrow Search Algorithm
Authors: Zhao Guangyuan, Lei Yu. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2024, Issue 3, pp. 15-29 (15 pages).
In classification problems, the deep kernel extreme learning machine (DKELM) offers efficient processing and superior performance, but its parameter optimization is difficult. To improve the classification accuracy of DKELM, this paper proposes a DKELM algorithm optimized by an improved sparrow search algorithm (ISSA), named ISSA-DKELM. To address the parameter selection problem of DKELM, the DKELM classifier is constructed using the optimal parameters obtained by ISSA optimization. To make up for the shortcomings of the basic sparrow search algorithm (SSA), a chaotic transformation is first applied to initialize the sparrow positions. Then, the position of the discoverer sparrow population is dynamically adjusted, and a learning operator from the teaching-learning-based optimization algorithm is fused in to improve the position update of the joiners. Finally, a Gaussian mutation strategy is added in the later iterations of the algorithm to help the sparrows jump out of local optima. The experimental results show that the proposed DKELM classifier is feasible and effective, and that compared with other classification algorithms, the proposed algorithm achieves better test accuracy.
Keywords: deep kernel extreme learning machine (DKELM); improved sparrow search algorithm (ISSA); classifier; parameter optimization
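The chaotic initialization mentioned above is commonly implemented with a logistic map whose iterates are rescaled onto the search bounds, spreading the initial population more deterministically than uniform pseudo-random draws. A hedged sketch follows; the map parameter mu = 4 and the seed value are conventional choices assumed here, not taken from the paper.

```python
import numpy as np

def chaotic_init(n_agents, dim, lower, upper, mu=4.0, z0=0.7):
    # Logistic map: z_{k+1} = mu * z_k * (1 - z_k), chaotic for mu = 4,
    # with iterates in (0, 1) that are then mapped onto [lower, upper].
    z = np.empty((n_agents, dim))
    val = z0
    for i in range(n_agents):
        for j in range(dim):
            val = mu * val * (1.0 - val)
            z[i, j] = val
    return lower + z * (upper - lower)
```

The resulting array would replace the uniform random initialization of the sparrow population before the SSA update rules take over.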
15. Enhancing the Linearity Characteristics of Photoelectric Displacement Sensor Based on Extreme Learning Machine Method (Cited: 2)
Authors: Murugan SETHURAMALINGAM, Umayal SUBBIAH. Photonic Sensors (SCIE, EI, CAS, CSCD), 2015, Issue 1, pp. 24-31 (8 pages).
Photoelectric displacement sensors rarely possess a perfectly linear transfer characteristic and always have some degree of non-linearity over their range of operation; a nonlinear sensor output produces a whole assortment of problems. This paper presents a method to compensate the nonlinearity of the photoelectric displacement sensor based on the extreme learning machine (ELM) method, which significantly reduces the time needed to train a neural network with the output voltage of the optical displacement sensor and the measured input displacement, eliminating the nonlinear errors in the training process. The proposed method was demonstrated through computer simulation with experimental data from the sensor. The results revealed that the proposed method compensated the nonlinearity in the sensor with very low training time, the lowest mean squared error (MSE) value, and better linearity. The approach involves less computational complexity, performs well for nonlinearity compensation of the photoelectric displacement sensor, and has good application prospects.
Keywords: photoelectric displacement sensor; nonlinearity; extreme learning machine method
16. Hybrid Method Integrating Machine Learning and Particle Swarm Optimization for Smart Chemical Process Operations (Cited: 6)
Authors: Haoqin Fang, Jianzhao Zhou, Zhenyu Wang, Ziqi Qiu, Yihua Sun, Yue Lin, Ke Chen, Xiantai Zhou, Ming Pan. Frontiers of Chemical Science and Engineering (SCIE, EI, CSCD), 2022, Issue 2, pp. 274-287 (14 pages).
Modeling and optimization are crucial to smart chemical process operations. However, a large number of nonlinearities must be considered in a typical chemical process owing to its complex unit operations, chemical reactions, and separations, which makes implementing mechanistic models in industrial-scale problems a great challenge due to the resulting computational complexity. Thus, this paper presents an efficient hybrid framework that integrates machine learning and particle swarm optimization to overcome these difficulties. An industrial propane dehydrogenation process is used to demonstrate the validity and efficiency of the method. Firstly, a data set is generated based on a process mechanistic simulation validated by industrial data, which provides sufficient and reasonable samples for model training and testing. Secondly, four well-known machine learning methods, namely K-nearest neighbors, decision tree, support vector machine, and artificial neural network, are compared and used to obtain prediction models of the process operation. All of these methods achieve highly accurate models by adjusting the model parameters on the basis of high-coverage data and properly selected features. Finally, optimal process operations are obtained using the particle swarm optimization approach.
Keywords: smart chemical process operations; data generation; hybrid method; machine learning; particle swarm optimization
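The framework in this abstract replaces an expensive mechanistic simulation with a cheap trained surrogate and then optimizes over the surrogate with particle swarm optimization. A compact sketch of that pipeline, with an invented one-variable "simulator" and a polynomial fit standing in for the paper's KNN/DT/SVM/ANN models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the mechanistic simulator: expensive to call in practice.
def simulator(x):
    return (x - 2.0) ** 2 + 1.0

# Step 1: generate a data set from the simulator (the paper's "data generation").
xs = np.linspace(-5, 5, 50)
ys = simulator(xs)

# Step 2: train a cheap surrogate on the data set.
coeffs = np.polyfit(xs, ys, deg=4)
surrogate = lambda x: np.polyval(coeffs, x)

# Step 3: basic particle swarm optimization over the surrogate only.
n, iters = 20, 60
pos = rng.uniform(-5, 5, n)                   # particle positions
vel = np.zeros(n)                             # particle velocities
pbest, pbest_val = pos.copy(), surrogate(pos) # personal bests
gbest = pbest[np.argmin(pbest_val)]           # global best
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5, 5)
    val = surrogate(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]
```

Every PSO evaluation hits the surrogate instead of the simulator, which is where the computational savings come from; the final `gbest` would then be verified with a single call to the real model.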
A novel hybrid estimation of distribution algorithm for solving hybrid flowshop scheduling problem with unrelated parallel machine (Cited by 9)
17
Authors: 孙泽文, 顾幸生 《Journal of Central South University》 SCIE EI CAS CSCD, 2017, No. 8, pp. 1779-1788 (10 pages)
The hybrid flow shop scheduling problem with unrelated parallel machines is a typical NP-hard combinatorial optimization problem that arises widely in the chemical, manufacturing, and pharmaceutical industries. In this work, a novel mathematical model for the hybrid flow shop scheduling problem with unrelated parallel machines (HFSPUPM) was proposed. Additionally, an effective hybrid estimation of distribution algorithm (EDA) was proposed to solve the HFSPUPM, taking advantage of the features of the mathematical model. In the optimization algorithm, a new individual representation method was adopted. The EDA structure was used for global search, while a teaching-learning-based optimization (TLBO) strategy was used for local search. Based on the structure of the HFSPUPM, this work also presents a series of discrete operations. Simulation results show the effectiveness of the proposed hybrid algorithm compared with other algorithms.
Keywords: hybrid estimation of distribution algorithm; teaching-learning-based optimization strategy; hybrid flow shop; unrelated parallel machine scheduling
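An estimation of distribution algorithm, unlike a genetic algorithm, uses no crossover or mutation: it repeatedly samples candidate schedules from an explicit probability model and re-estimates that model from the elite samples. A toy sketch on a two-machine, four-job permutation flowshop (the processing times are invented, and the paper's TLBO local search is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
proc = np.array([[5, 3, 6, 2],
                 [4, 6, 2, 5]])  # processing times: 2 machines x 4 jobs (toy data)

def makespan(seq):
    """Completion time of a permutation flowshop schedule."""
    m = proc.shape[0]
    t = np.zeros(m)
    for j in seq:
        t[0] += proc[0, j]
        for i in range(1, m):
            t[i] = max(t[i], t[i - 1]) + proc[i, j]
    return t[-1]

def sample_perm(p):
    """Sample a job permutation from a job-by-position probability matrix."""
    n = p.shape[0]
    seq, free = [], list(range(n))
    for pos in range(n):
        w = p[free, pos]
        j = rng.choice(free, p=w / w.sum())
        seq.append(j)
        free.remove(j)
    return seq

n = proc.shape[1]
p = np.full((n, n), 1.0 / n)          # uniform initial probability model
best_val = float("inf")
for _ in range(30):
    pop = sorted((sample_perm(p) for _ in range(30)), key=makespan)
    elite = pop[:10]
    # Re-estimate the model from the elite individuals (the EDA step),
    # with a small smoothing floor to keep sampling diverse.
    p = np.full((n, n), 0.1)
    for s in elite:
        for pos, j in enumerate(s):
            p[j, pos] += 1.0
    p /= p.sum(axis=0, keepdims=True)
    best_val = min(best_val, makespan(pop[0]))
```

The `p[j, pos]` entry is the model's probability that job `j` occupies position `pos`; in the paper's HFSPUPM setting the representation also encodes machine assignments, which this sketch leaves out.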
Real-time human blood pressure measurement based on laser self-mixing interferometry with extreme learning machine (Cited by 3)
18
Authors: WANG Xiu-lin, LÜ Li-ping, HU Lu, HUANG Wen-cai 《Optoelectronics Letters》 EI, 2020, No. 6, pp. 467-470 (4 pages)
In this paper, we present a method based on self-mixing interferometry combined with an extreme learning machine for real-time human blood pressure measurement. A signal processing method based on the wavelet transform is applied to extract the reversion points in the self-mixing interference signal, from which the pulse wave profile is successfully reconstructed. Because blood pressure values are intrinsically related to characteristic parameters of the pulse wave, 80 samples from the MIMIC-II database are used to train the extreme learning machine blood pressure model. In the experiment, 15 measured pulse wave samples are used as the prediction set. The results show that the errors of both systolic and diastolic blood pressure are within 5 mmHg compared with the Coriolis method.
Keywords: pulse wave profile; laser self-mixing interferometry; extreme learning machine; real-time blood pressure measurement
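The paper locates reversion points in the self-mixing signal with a wavelet transform; as a rough illustration of the downstream idea, the sketch below detects one characteristic point per cardiac cycle with plain local-maximum detection on a synthetic pulse-like wave and derives the beat rate from the point spacing (the waveform, sampling rate, and threshold are all invented for the example):

```python
import numpy as np

fs = 200.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t) ** 3   # synthetic 72 bpm pulse-like wave

# Crude stand-in for the wavelet-based reversion-point detection:
# strict local maxima above a threshold mark one point per cardiac cycle.
mid = pulse[1:-1]
peaks = np.where((mid > pulse[:-2]) & (mid > pulse[2:]) & (mid > 0.5))[0] + 1

intervals = np.diff(peaks) / fs            # beat-to-beat intervals in seconds
heart_rate = 60.0 / intervals.mean()       # beats per minute
```

In the paper, features of the reconstructed pulse profile (not just the rate) feed the ELM regression that outputs systolic and diastolic pressure.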
Adaptive meta-learning extreme learning machine with golden eagle optimization and logistic map for forecasting the incomplete data of solar irradiance (Cited by 1)
19
Authors: Sarunyoo Boriratrit, Pradit Fuangfoo, Chitchai Srithapon, Rongrit Chatthaworn 《Energy and AI》 2023, No. 3, pp. 36-51 (16 pages)
Solar energy has become crucial in producing electrical energy because it is inexhaustible and sustainable. However, its uncertain generation causes problems in power system operation. Solar irradiance forecasting is therefore significant for suitably controlling power system operation, organizing transmission expansion planning, and dispatching power system generation. Nonetheless, forecasting performance can degrade owing to an unfitted prediction model and a lack of preprocessing. To deal with these issues, this paper proposes a Meta-Learning Extreme Learning Machine optimized with Golden Eagle Optimization and a Logistic Map (MGEL-ELM), together with the Same Datetime Interval Averaged Imputation algorithm (SAME), to improve forecasting performance on incomplete solar irradiance time series datasets. The proposed method thus not only imputes incomplete forecasting data but also improves forecasting accuracy. Experimental results on a solar irradiance dataset from Thailand indicate that the proposed method achieves a coefficient of determination of up to 0.9307, the highest among the compared state-of-the-art models. Furthermore, the proposed method consumes less forecasting time than the deep learning model.
Keywords: data imputation; golden eagle optimization; logistic maps; meta-learning extreme learning machine; renewable energy forecasting
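The SAME imputation algorithm named in the abstract fills a missing value with the average of the observed values at the same datetime interval (e.g. the same hour of day) across the rest of the series. A small sketch of that idea; the 24-hour period, the synthetic irradiance profile, and the gap positions are invented for illustration:

```python
import numpy as np

def same_impute(series, period):
    """Fill NaNs with the mean of observed values at the same interval index
    (e.g. the same hour of day) across all periods."""
    out = series.copy()
    for k in range(period):
        slot = out[k::period]            # every value at interval index k
        miss = np.isnan(slot)
        if miss.any():
            slot[miss] = np.nanmean(slot)  # writes back into `out` via the view
    return out

# Hourly irradiance for 3 days (period = 24), with two gaps at hour 10.
day = np.concatenate([np.zeros(6), np.sin(np.linspace(0, np.pi, 12)), np.zeros(6)]) * 800
x = np.tile(day, 3)
x[10] = np.nan   # missing value on day 1, hour 10
x[58] = np.nan   # missing value on day 3, hour 10

filled = same_impute(x, period=24)
```

Averaging within the same time-of-day slot preserves the strong diurnal shape of irradiance, which is why it tends to beat naive interpolation across the gap.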
Optimization of Interval Type-2 Fuzzy Logic System Using Grasshopper Optimization Algorithm
20
Authors: Saima Hassan, Mojtaba Ahmadieh Khanesar, Nazar Kalaf Hussein, Samir Brahim Belhaouari, Usman Amjad, Wali Khan Mashwani 《Computers, Materials & Continua》 SCIE EI, 2022, No. 5, pp. 3513-3531 (19 pages)
The estimation of the fuzzy membership function parameters for an interval type-2 fuzzy logic system (IT2-FLS) is a challenging task in the presence of uncertainty and imprecision. The grasshopper optimization algorithm (GOA) is a recent population-based meta-heuristic that mimics the swarming behavior of grasshoppers in nature and has good convergence toward optima. The main objective of this paper is to apply GOA to estimate the optimal parameters of the Gaussian membership functions in an IT2-FLS. The antecedent-part parameters (the Gaussian membership function parameters) are encoded as a population of artificial grasshoppers and optimized by the algorithm, while tuning of the consequent-part parameters is accomplished using an extreme learning machine. The optimized IT2-FLS (GOAIT2FELM) obtains the optimal premise parameters based on the tuned consequent-part parameters and is then applied to Australian national electricity market data for forecasting electricity loads and prices. The forecasting performance of the proposed model is compared with other population-based optimized IT2-FLS variants, including the genetic algorithm and the artificial bee colony optimization algorithm. Analysis of the performance on the same datasets reveals that the proposed GOAIT2FELM is a better approach for improving the accuracy of the IT2-FLS than the other optimized variants.
Keywords: parameter optimization; grasshopper optimization algorithm; interval type-2 fuzzy logic system; extreme learning machine; electricity market forecasting
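The antecedent parameters being optimized here belong to interval type-2 Gaussian membership functions, commonly parameterized by an uncertain mean in [m1, m2] with a fixed standard deviation; the footprint of uncertainty is the band between the upper and lower membership functions. A short sketch of that standard construction (the parameter values are illustrative):

```python
import numpy as np

def it2_gaussian(x, m1, m2, sigma):
    """Upper/lower membership grades of an interval type-2 Gaussian MF
    with an uncertain mean in [m1, m2] and fixed standard deviation sigma."""
    g = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: 1 inside the mean interval, nearest-mean Gaussian outside it.
    upper = np.where(x < m1, g(m1), np.where(x > m2, g(m2), 1.0))
    # Lower MF: the smaller of the two boundary Gaussians.
    lower = np.minimum(g(m1), g(m2))
    return upper, lower

x = np.linspace(-4, 6, 101)
upper, lower = it2_gaussian(x, m1=0.5, m2=1.5, sigma=1.0)
```

GOA would search over (m1, m2, sigma) for every antecedent set, scoring each candidate by forecasting error after the ELM has tuned the consequent parameters.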