Extreme Learning Machine (ELM) is popular in batch learning, sequential learning, and progressive learning due to its speed, easy integration, and generalization ability. However, traditional ELM cannot train massive data rapidly and efficiently because of its memory residence and high time and space complexity. In ELM, the hidden layer typically requires a huge number of nodes, and there is no certainty that the arrangement of weights and biases within the hidden layer is optimal. To solve this problem, the traditional ELM has been hybridized with swarm intelligence optimization techniques. This paper presents five proposed hybrid algorithms: Salp Swarm Algorithm (SSA-ELM), Grasshopper Optimization Algorithm (GOA-ELM), Grey Wolf Optimizer (GWO-ELM), Whale Optimization Algorithm (WOA-ELM), and Moth Flame Optimization (MFO-ELM). These five optimizers are hybridized with the standard ELM methodology to solve tumor type classification using gene expression data. The proposed models are also applied to the prediction of electricity load data that describes the energy use of a single residence over a four-year period. The swarm algorithms are used to pick a smaller number of hidden-layer nodes to speed up the execution of ELM, and they compute the best weights and biases for the hidden layer. Experimental results demonstrate that the proposed MFO-ELM achieved 98.13% accuracy, the highest among the models for tumor type classification on gene expression data, while in prediction the proposed GOA-ELM achieved an RMSE of 0.397, the lowest compared with the other models.
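The core ELM training step that all five hybrids build on can be sketched as follows (a minimal illustrative implementation, not the authors' code; the swarm optimizers replace the random draw of `W` and `b` with searched values):

```python
import numpy as np

def train_elm(X, y, n_hidden=40, seed=0):
    """Minimal single-hidden-layer ELM: draw input weights and biases
    at random, then solve the output weights by pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # output weights, least squares
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = x1 + x2 on random points in [-1, 1]^2.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] + X[:, 1]
W, b, beta = train_elm(X, y)
max_err = np.max(np.abs(predict_elm(X, W, b, beta) - y))
```

Because only `beta` is solved and the hidden layer is never iterated, training is a single least-squares solve, which is what gives ELM its speed.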
WiFi fingerprinting localization has been a hot topic in indoor positioning because of its universality and location-related features. The basic assumption of fingerprinting localization is that distance in received signal strength indication (RSSI) space is consistent with physical distance. Therefore, how to efficiently match the user's current RSSI with the RSSI in the fingerprint database is the key to achieving high-accuracy localization. In this paper, a particle swarm optimization-extreme learning machine (PSO-ELM) algorithm is proposed on the basis of the original fingerprinting localization. Firstly, we collect the RSSI of the experimental area to construct the fingerprint database, and the ELM algorithm is applied in the online stage to determine the correspondence between the location of the terminal and the RSSI it receives. Secondly, the PSO algorithm is used to improve the biases and weights of the ELM neural network, and the globally optimal results are obtained. Finally, extensive simulation results are presented. It is shown that the proposed algorithm can effectively reduce the mean localization error and improve positioning accuracy compared with the K-Nearest Neighbor (KNN), K-means, and Back-Propagation (BP) algorithms.
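The PSO-over-ELM idea described above (each particle encodes candidate hidden-layer weights and biases; fitness is the ELM training error after solving the output weights in closed form) can be sketched as below. This is an illustrative reconstruction, not the paper's implementation, and the inertia and acceleration constants are common defaults rather than values from the paper:

```python
import numpy as np

def elm_fitness(params, X, y, n_hidden):
    """Decode a flat particle into ELM input weights/biases and
    return training RMSE (output weights solved by pseudo-inverse)."""
    d = X.shape[1]
    W = params[: d * n_hidden].reshape(d, n_hidden)
    b = params[d * n_hidden :]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return np.sqrt(np.mean((H @ beta - y) ** 2))

def pso_elm(X, y, n_hidden=8, n_particles=15, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pcost = np.array([elm_fitness(p, X, y, n_hidden) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard PSO velocity update: inertia + cognitive + social pull
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([elm_fitness(p, X, y, n_hidden) for p in pos])
        improved = cost < pcost
        pbest[improved], pcost[improved] = pos[improved], cost[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Toy surrogate for the RSSI-to-position regression.
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, (100, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1]
best, cost = pso_elm(X, y)
```

Personal bests only ever improve, so the returned cost is monotonically non-increasing over the iterations.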
Aero-engine direct thrust control can not only improve thrust control precision but also save operating cost by reducing the reserved margin in design and making full use of the engine's potential performance. However, it is a big challenge to estimate engine thrust accurately. To tackle this problem, this paper proposes an ensemble of improved wavelet extreme learning machines (EW-ELM) for aircraft engine thrust estimation. Extreme learning machine (ELM) has been proved to be an emerging learning technique with high efficiency. Since the combination of ELM and wavelet theory inherits the excellent properties of both, wavelet activation functions are used in the hidden nodes to enhance the ability to handle non-linearity. Besides, as the original ELM may suffer from ill-conditioning and robustness problems due to the random determination of the hidden-node parameters, the particle swarm optimization (PSO) algorithm is adopted to select the input weights and hidden biases. Furthermore, an ensemble of the improved wavelet ELMs is utilized to construct the relationship between the sensor measurements and thrust. The simulation results verify the effectiveness and efficiency of the developed method and show that aero-engine thrust estimation using EW-ELM can satisfy the requirements of direct thrust control in terms of estimation accuracy and computation time.
Extreme learning machine (ELM) is a feedforward neural network-based machine learning method that has the benefits of short training times and strong generalization capability, and it does not fall into local minima. However, due to its shallow architecture, traditional ELM requires a large number of hidden nodes when dealing with high-dimensional data sets to ensure classification performance. On the other hand, its classification performance degrades easily under interference from noisy data. To address these problems, this paper proposes a double pseudo-inverse extreme learning machine (DPELM) based on a Sparse Denoising AutoEncoder (SDAE), namely SDAE-DPELM. The algorithm can directly determine both the input weights and output weights of the network by the pseudo-inverse method. As a result, it requires only a few hidden-layer nodes to produce superior classification results, and its combination with SDAE effectively improves classification performance and noise resistance. Extensive numerical experiments show that the algorithm has high classification accuracy and good robustness when dealing with both high-dimensional noisy data and high-dimensional noiseless data. Furthermore, applying the algorithm to Miao character recognition substantiates its excellent performance, which further illustrates its practicability.
Extreme learning machine (ELM) has been proved by researchers to be an effective pattern classification and regression learning mechanism. However, its good performance relies on a large number of hidden-layer nodes, and as the number of hidden nodes increases, the computation cost grows greatly. In this paper, we propose a novel algorithm named constrained voting extreme learning machine (CV-ELM). Compared with the traditional ELM, CV-ELM determines the input weights and biases based on the differences between samples of different classes. At the same time, to improve the accuracy of the proposed method, voting selection is introduced. The proposed method is evaluated on public benchmark datasets, and the experimental results show that it is superior to the original ELM algorithm. Further, we apply CV-ELM to the classification of the superheat degree (SD) state in the aluminum electrolysis industry, where the recognition accuracy reaches 87.4%; the experimental results demonstrate that the proposed method is more robust than existing state-of-the-art identification methods.
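The voting step can be illustrated in isolation as a generic majority vote over class labels from independently trained classifiers (the constrained weight-selection part of CV-ELM is simplified away here):

```python
import numpy as np

def majority_vote(predictions):
    """Combine integer class labels from several classifiers
    (rows = models, columns = samples) by majority vote.
    Ties are broken toward the lowest class index by argmax."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # bincount each column -> (n_classes, n_samples) vote tallies
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)

# Three models vote on three samples.
votes = majority_vote([[0, 1, 2],
                       [0, 1, 1],
                       [1, 1, 2]])
```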
Extreme learning machine (ELM) allows for fast learning and better generalization performance than conventional gradient-based learning. However, the possible inclusion of non-optimal weights and biases due to random selection, and the need for more hidden neurons, adversely influence network usability. Further, choosing the optimal number of hidden nodes for a network usually requires intensive human intervention, which may lead to an ill-conditioned situation. In this context, chemical reaction optimization (CRO) is a meta-heuristic paradigm with increased success in a large number of application areas; it is characterized by faster convergence and requires fewer tunable parameters. This study develops a learning framework combining the advantages of ELM and CRO, called extreme learning with chemical reaction optimization (ELCRO). ELCRO simultaneously optimizes the weight and bias vector and the number of hidden neurons of a single-layer feed-forward neural network without compromising prediction accuracy. We evaluate its performance by predicting the daily volatility and closing prices of BSE indices. Additionally, its performance is compared with three other similarly developed models (ELM based on particle swarm optimization, genetic algorithm, and gradient descent), and the performance of the proposed algorithm is found to be superior. Wilcoxon signed-rank and Diebold-Mariano tests are then conducted to verify the statistical significance of the proposed model. Hence, this model can be used as a promising tool for financial forecasting.
The power transformer is one of the most crucial devices in the power grid, and it is significant to determine incipient faults of power transformers fast and accurately. Input features play a critical role in fault diagnosis accuracy. In order to further improve the fault diagnosis performance of power transformers, a random forest feature selection method coupled with an optimized kernel extreme learning machine is presented in this study. Firstly, the random forest feature selection approach is adopted to rank 42 related input features derived from gas concentration, gas ratio, and energy-weighted dissolved gas analysis. Afterwards, a kernel extreme learning machine tuned by the Aquila optimization algorithm is implemented to adjust crucial parameters and select the optimal feature subsets; diagnosis accuracy is used to assess the fault diagnosis capability of the candidate feature subsets. Finally, the optimal feature subsets are applied to establish the fault diagnosis model. According to the experimental results based on two public datasets and comparison with five conventional approaches, the average accuracy of the proposed method is up to 94.5%, which is superior to that of the other conventional approaches. The fault diagnosis performance verifies that the optimal feature subset obtained by the presented method can dramatically improve power transformer fault diagnosis accuracy.
Lithium-ion battery State of Health (SOH) estimation is an essential issue in battery management systems. In order to better estimate battery SOH, an Extreme Learning Machine (ELM) is used to establish a model to estimate lithium-ion battery SOH, and the particle swarm optimization (PSO) algorithm is used to automatically adjust and optimize the parameters of the ELM to improve estimation accuracy. Firstly, cyclic aging data of the battery are collected, and five characteristic quantities related to battery capacity are extracted from the battery charging curve and incremental capacity curve; the Grey Relational Analysis (GRA) method is used to analyze the correlation between battery capacity and the five characteristic quantities. Then, an ELM is used to build the capacity estimation model of the lithium-ion battery based on the five characteristics, and PSO is introduced to optimize the parameters of the capacity estimation model. The proposed method is validated by degradation experiments of the lithium-ion battery under different conditions. The results show that the battery capacity estimation model based on ELM and PSO has better accuracy and stability in capacity estimation, with an average absolute percentage error of less than 1%.
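The grey relational analysis step (ranking how closely each candidate feature series tracks the capacity series) can be sketched as follows; the distinguishing coefficient rho = 0.5 is the conventional default, and the toy numbers are illustrative, not battery data:

```python
import numpy as np

def grey_relational_grade(reference, features, rho=0.5):
    """Grey relational analysis: degree of correlation between a
    reference series (e.g. battery capacity) and candidate feature
    series. Series are min-max normalised first; rho is the
    distinguishing coefficient."""
    def norm(x):
        return (x - x.min()) / (x.max() - x.min())
    r = norm(reference)
    grades = []
    for f in features:
        d = np.abs(r - norm(f))                       # absolute differences
        coeff = (d.min() + rho * d.max()) / (d + rho * d.max())
        grades.append(coeff.mean())                   # relational grade in (0, 1]
    return np.array(grades)

capacity = np.array([2.0, 1.9, 1.8, 1.7, 1.6])        # fading capacity
feat_good = np.array([10.0, 9.6, 9.1, 8.5, 8.0])      # tracks capacity closely
feat_bad = np.array([1.0, 5.0, 2.0, 6.0, 3.0])        # unrelated series
g = grey_relational_grade(capacity, [feat_good, feat_bad])
```

A higher grade indicates a feature whose trend follows the capacity curve more closely, which is the criterion used to keep or discard candidate features.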
Software defect prediction plays an important role in software quality assurance. However, the performance of the prediction model is susceptible to irrelevant and redundant features. In addition, previous studies mostly regard software defect prediction as a single-objective optimization problem, and multi-objective software defect prediction has not been thoroughly investigated. For these two reasons, we propose the following solutions in this paper: (1) we leverage an advanced deep neural network, the Stacked Contractive AutoEncoder (SCAE), to extract robust deep semantic features from the original defect features, which have stronger discrimination capacity for the different classes (defective or non-defective); (2) we propose a novel multi-objective defect prediction model named SMONGE that utilizes the multi-objective NSGA-II algorithm to optimize an extreme learning machine (ELM) based on state-of-the-art Pareto optimal solutions according to the features extracted by SCAE. We mainly consider two objectives. One objective is to maximize the performance of the ELM, which refers to the benefit of the SMONGE model. The other objective is to minimize the output weight norm of the ELM, which is related to the cost of the SMONGE model. We compare SCAE with six state-of-the-art feature extraction methods and compare the SMONGE model with multiple baseline models, comprising four classic defect predictors and the MONGE model without SCAE, across 20 open source software projects. The experimental results verify the superiority of SCAE and SMONGE on seven evaluation metrics.
Many respiratory infections around the world have been caused by coronaviruses. COVID-19 is one of the most serious coronaviruses due to its rapid spread between people and its low survival rate. There is a high need for computer-assisted diagnostics (CAD) in the area of artificial intelligence to help doctors and radiologists identify COVID-19 patients in cloud systems. Machine learning (ML) has been used to examine chest X-ray frames. In this paper, a new transfer learning-based optimized extreme deep learning paradigm is proposed to classify a chest X-ray picture into one of three classes: a pneumonia patient, a COVID-19 patient, or a normal person. First, three different pre-trained Convolutional Neural Network (CNN) models (resnet18, resnet25, densenet201) are employed for deep feature extraction. Second, each feature vector is passed through the binary Butterfly Optimization Algorithm (bBOA) to reduce redundant features, extract the most representative ones, and enhance the performance of the CNN models. These selected features are then passed to an improved extreme learning machine (ELM) using a BOA to classify the chest X-ray images. The proposed paradigm achieves 99.48% accuracy in detecting COVID-19 cases.
Design, scaling-up, and optimization of industrial reactors mainly depend on step-by-step experiments and engineering experience, which is usually time-consuming, high-cost, and high-risk. Although numerical simulation can reproduce high-resolution details of hydrodynamics, thermal transfer, and the reaction process in reactors, it is still challenging for industrial reactors due to the huge computational cost. In this study, by combining numerical simulation and the artificial intelligence (AI) technology of machine learning (ML), a method is proposed to efficiently predict and optimize the performance of industrial reactors. A gas–solid fluidization reactor for the methanol-to-olefins process is taken as an example. 1500 cases under different conditions are simulated by the coarse-grain discrete particle method based on the Energy-Minimization Multi-Scale model, and thus the reactor performance data set is constructed. To develop an efficient reactor performance prediction model influenced by multiple factors, an ML method is established that includes an ensemble learning strategy and an automatic hyperparameter optimization technique, and it performs better than methods based on the artificial neural network. Furthermore, the operating conditions for the highest yield of ethylene and propylene or the lowest pressure drop are searched with the particle swarm optimization algorithm, chosen for its strength in solving non-linear optimization problems. The results show that decreasing the methanol inflow rate and increasing the catalyst inventory can maximize the yield, while decreasing the methanol inflow rate and reducing the catalyst inventory can minimize the pressure drop. The two objectives are thus conflicting, and practical operations need to be compromised under different circumstances.
Rutting of asphalt pavements is a crucial design criterion in various pavement design guides, and a good road transportation base can provide security for the transportation of oil and gas. This study attempts to develop a robust artificial intelligence model to estimate the rutting depth of different asphalt pavements, using rutting depth clips, temperature, and load axes as the primary characteristics. The experimental data were obtained from 19 asphalt pavements with different crude oil sources on a 2.038 km long full-scale field accelerated pavement test track (RIOHTrack, Research Institute of Highway) in Tongzhou, Beijing. In addition, this paper also proposes to build complex networks from the rutting depths of different pavements using complex network methods and the Louvain algorithm for community detection, so that the most critical structural elements can be selected from the rutting data of different asphalt pavements and similar structural elements can be found. An extreme learning machine algorithm with residual correction (RELM) is designed and optimized using an independent adaptive particle swarm algorithm. The experimental results of the proposed method are compared with several classical machine learning algorithms, with predictions of average root mean squared error (RMSE), average mean absolute error (MAE), and average mean absolute percentage error (MAPE) over the 19 asphalt pavements reaching 1.742, 1.363, and 1.94%, respectively. The experiments demonstrate that the RELM algorithm has an advantage over classical machine learning methods in dealing with non-linear problems in road engineering. Notably, the method ensures the adaptation of the simulated environment to different levels of abstraction through cognitive analysis of the production environment parameters. It is a promising alternative method that facilitates the rapid assessment of pavement conditions and could be applied in the future to production processes in the oil and gas industry.
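The residual-correction idea behind RELM can be illustrated with a stripped-down sketch: two plain ELMs, the second trained on the residuals of the first, with its output added as a correction term. The independent adaptive particle swarm tuning from the paper is omitted, and the hidden-layer sizes and toy target function are arbitrary choices for illustration:

```python
import numpy as np

def _elm(X, y, n_hidden, rng):
    """Plain ELM fit; returns a prediction closure."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return lambda Z: np.tanh(Z @ W + b) @ beta

def residual_corrected_elm(X, y, n_hidden=25, seed=0):
    """Fit a base ELM, then fit a second ELM on its residuals
    and add the correction at prediction time."""
    rng = np.random.default_rng(seed)
    base = _elm(X, y, n_hidden, rng)
    corr = _elm(X, y - base(X), n_hidden, rng)  # learn what the base missed
    return lambda Z: base(Z) + corr(Z)

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (150, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = residual_corrected_elm(X, y)
rmse = np.sqrt(np.mean((model(X) - y) ** 2))
```

The second network only has to model the (typically small and smoother) error signal, which is why residual correction tends to help on strongly non-linear targets.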
This paper reviews several recently developed techniques for the minimum-cost optimal design of water-retaining structures (WRSs) that integrate the effects of seepage, including the incorporation of uncertainty in heterogeneous soil parameter estimates and the quantification of reliability. The review is limited to methods based on coupled simulation-optimization (S-O) models. In this context, the design of WRSs is mainly affected by hydraulic design variables such as seepage quantities, which are difficult to determine from closed-form solutions or approximation theories. An S-O model is built by integrating numerical seepage modeling responses into an optimization algorithm based on efficient surrogate models. The surrogate models (meta-models) are trained on simulated data obtained from finite element numerical code solutions. The proposed methodology is applied using several machine learning techniques and optimization solvers to optimize the design of WRSs, incorporating different design variables and boundary conditions. Additionally, the effects of several scenarios of flow domain hydraulic conductivity are integrated into the S-O model, and reliability-based optimum design concepts are incorporated to quantify uncertainty in seepage quantities due to uncertainty in hydraulic conductivity estimates. We conclude that the S-O model can efficiently optimize WRS designs, and that ANN, SVM, and GPR machine learning technique-based surrogate models can be efficiently and expeditiously incorporated into S-O models to imitate the numerical responses of simulations of various problems.
In classification problems, the deep kernel extreme learning machine (DKELM) has the characteristics of efficient processing and superior performance, but its parameter optimization is difficult. To improve the classification accuracy of DKELM, a DKELM algorithm optimized by an improved sparrow search algorithm (ISSA), named ISSA-DKELM, is proposed in this paper. Aiming at the parameter selection problem of DKELM, the DKELM classifier is constructed using the optimal parameters obtained by ISSA optimization. In order to make up for the shortcomings of the basic sparrow search algorithm (SSA), a chaotic transformation is first applied to initialize the sparrow positions. Then, the positions of the discoverer sparrow population are dynamically adjusted, and a learning operator from the teaching-learning-based optimization algorithm is fused in to improve the position update of the joiners. Finally, a Gaussian mutation strategy is added in the later iterations of the algorithm to make the sparrows jump out of local optima. The experimental results show that the proposed DKELM classifier is feasible and effective, and compared with other classification algorithms, the proposed algorithm achieves better test accuracy.
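The chaotic initialization mentioned above is typically a logistic-map sequence scaled onto the search bounds. A generic sketch (not the paper's exact variant, and `r = 4.0` is the standard fully-chaotic setting) looks like this:

```python
import numpy as np

def logistic_map_init(n_agents, dim, lb, ub, r=4.0, seed=0):
    """Chaotic population initialisation with the logistic map
    x_{k+1} = r * x_k * (1 - x_k), scaled into [lb, ub].
    Chaotic sequences cover the search space more evenly than a
    plain random draw, which is the usual motivation in improved
    swarm-algorithm variants."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99, dim)      # seeds strictly inside (0, 1)
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        x = r * x * (1 - x)               # one chaotic iteration per agent
        pop[i] = lb + x * (ub - lb)       # map from [0, 1] onto the bounds
    return pop

pop = logistic_map_init(30, 5, lb=-1.0, ub=1.0)
```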
Photoelectric displacement sensors rarely possess a perfectly linear transfer characteristic, but always have some degree of non-linearity over their range of operation, and a nonlinear sensor output produces a whole assortment of problems. This paper presents a method to compensate the nonlinearity of the photoelectric displacement sensor based on the extreme learning machine (ELM), which significantly reduces the time needed to train a neural network on the output voltage of the optical displacement sensor and the measured input displacement in order to eliminate the nonlinear errors. The use of the proposed method was demonstrated through computer simulation with experimental data from the sensor. The results revealed that the proposed method compensated for the nonlinearity of the sensor with very low training time, the lowest mean squared error (MSE) value, and better linearity. This research work involves low computational complexity, performs well for nonlinearity compensation of the photoelectric displacement sensor, and has good application prospects.
Modeling and optimization are crucial to smart chemical process operations. However, a large number of nonlinearities must be considered in a typical chemical process on account of its complex unit operations, chemical reactions, and separations. This poses a great challenge for applying mechanistic models to industrial-scale problems due to the resulting computational complexity. Thus, this paper presents an efficient hybrid framework that integrates machine learning and particle swarm optimization to overcome these difficulties. An industrial propane dehydrogenation process was studied to demonstrate the validity and efficiency of the method. Firstly, a data set was generated based on a process mechanistic simulation validated by industrial data, which provides sufficient and reasonable samples for model training and testing. Secondly, four well-known machine learning methods, namely K-nearest neighbors, decision tree, support vector machine, and artificial neural network, were compared and used to obtain prediction models of the process operation. All of these methods achieved highly accurate models by adjusting model parameters on the basis of high-coverage data and proper features. Finally, optimal process operations were obtained using the particle swarm optimization approach.
The hybrid flow shop scheduling problem with unrelated parallel machines is a typical NP-hard combinatorial optimization problem that exists widely in the chemical, manufacturing, and pharmaceutical industries. In this work, a novel mathematical model for the hybrid flow shop scheduling problem with unrelated parallel machines (HFSPUPM) was proposed. Additionally, an effective hybrid estimation of distribution algorithm was proposed to solve the HFSPUPM, taking advantage of the features of the mathematical model. In the optimization algorithm, a new individual representation method was adopted; the estimation of distribution algorithm (EDA) structure was used for global search, while a teaching-learning-based optimization (TLBO) strategy was used for local search. Based on the structure of the HFSPUPM, this work presents a series of discrete operations. Simulation results show the effectiveness of the proposed hybrid algorithm compared with other algorithms.
In this paper, we present a method based on self-mixing interferometry combined with an extreme learning machine for real-time human blood pressure measurement. A signal processing method based on the wavelet transform is applied to extract reversion points in the self-mixing interference signal, from which the pulse wave profile is successfully reconstructed. Since blood pressure values are intrinsically related to the characteristic parameters of the pulse wave, 80 samples from the MIMIC-II database are used to train the extreme learning machine blood pressure model. In the experiment, 15 measured samples of the pulse wave signal are used as the prediction set. The results show that the errors of systolic and diastolic blood pressure are both within 5 mmHg compared with the Coriolis method.
Solar energy has become crucial in producing electrical energy because it is inexhaustible and sustainable. However, its uncertain generation causes problems in power system operation. Therefore, solar irradiance forecasting is significant for suitably controlling power system operation, organizing transmission expansion planning, and dispatching power system generation. Nonetheless, forecasting performance can be degraded by an unfitted prediction model and a lack of preprocessing. To deal with these issues, this paper proposes the Meta-Learning Extreme Learning Machine optimized with Golden Eagle Optimization and a Logistic Map (MGEL-ELM) and the Same Datetime Interval Averaged Imputation algorithm (SAME) for improving the forecasting performance on incomplete solar irradiance time series datasets. The proposed method thus not only imputes incomplete forecasting data but also achieves forecasting accuracy. Experimental results on a solar irradiance dataset from Thailand indicate that the proposed method can achieve a coefficient of determination of up to 0.9307, the highest compared with state-of-the-art models. Furthermore, the proposed method consumes less forecasting time than a deep learning model.
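The same-datetime-interval averaging idea (fill a gap with the average of the observations taken at the same time slot in other periods, e.g. the same hour on other days) can be sketched as below. This is a reading of the algorithm's name and description, not the authors' code, and the period length and toy series are invented for illustration:

```python
import numpy as np

def same_impute(series, period):
    """Same-datetime-interval averaged imputation (sketch): replace
    each missing value with the mean of the observed values at the
    same position within the period."""
    x = np.asarray(series, dtype=float)
    out = x.copy()
    for slot in range(period):
        vals = x[slot::period]                 # all samples at this time slot
        observed = vals[~np.isnan(vals)]
        if observed.size:
            idx = np.arange(slot, x.size, period)
            out[idx[np.isnan(vals)]] = observed.mean()
    return out

# Toy series with period 4 and two gaps.
s = [1.0, 10.0, 5.0, 2.0,
     3.0, np.nan, 5.0, 2.0,
     5.0, 14.0, np.nan, 2.0]
filled = same_impute(s, period=4)
```

Unlike forward-fill or global-mean imputation, this respects the daily periodicity of irradiance: a missing noon reading is filled from other noons, not from neighbouring night-time values.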
The estimation of the fuzzy membership function parameters for an interval type-2 fuzzy logic system (IT2-FLS) is a challenging task in the presence of uncertainty and imprecision. The grasshopper optimization algorithm (GOA) is a recent population-based meta-heuristic algorithm that mimics the swarming behavior of grasshoppers in nature and has good convergence towards optima. The main objective of this paper is to apply GOA to estimate the optimal parameters of the Gaussian membership functions in an IT2-FLS. The antecedent-part parameters (Gaussian membership function parameters) are encoded as a population of an artificial swarm of grasshoppers and optimized using the algorithm, while tuning of the consequent-part parameters is accomplished using an extreme learning machine. The optimized IT2-FLS (GOAIT2FELM) obtains the optimal premise parameters based on the tuned consequent-part parameters and is then applied to Australian national electricity market data for forecasting electricity loads and prices. The forecasting performance of the proposed model is compared with other population-based optimized IT2-FLSs, including those using a genetic algorithm and the artificial bee colony optimization algorithm. Analysis of the performance on the same data sets reveals that the proposed GOAIT2FELM could be a better approach for improving the accuracy of the IT2-FLS compared to other variants of the optimized IT2-FLS.
Funding: Supported in part by the National Natural Science Foundation of China (U2001213 and 61971191); in part by the Beijing Natural Science Foundation under Grants L182018 and L201011; in part by the National Key Research and Development Project (2020YFB1807204); in part by the Key Project of the Natural Science Foundation of Jiangxi Province (20202ACBL202006); and in part by the Innovation Fund Designated for Graduate Students of Jiangxi Province (YC2020-S321).
Abstract: Wi-Fi fingerprinting localization methods have been a hot topic in indoor positioning because of their universality and location-related features. The basic assumption of fingerprinting localization is that distance in received signal strength indication (RSSI) accords with physical distance. Therefore, how to efficiently match the user's current RSSI with the RSSI in the fingerprint database is the key to achieving high-accuracy localization. In this paper, a particle swarm optimization-extreme learning machine (PSO-ELM) algorithm is proposed on the basis of the original fingerprinting localization. Firstly, we collect the RSSI of the experimental area to construct the fingerprint database, and the ELM algorithm is applied in the online stage to determine the correspondence between the location of the terminal and the RSSI it receives. Secondly, the PSO algorithm is used to improve the biases and weights of the ELM neural network, and globally optimal results are obtained. Finally, extensive simulation results are presented. It is shown that the proposed algorithm can effectively reduce the mean localization error and improve positioning accuracy compared with the K-Nearest Neighbor (KNN), K-means, and Back-Propagation (BP) algorithms.
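The PSO wrapper can be pictured as follows: each particle encodes one full set of ELM input weights and biases, its fitness is the training RMSE after the closed-form output-weight solve, and the usual velocity/position updates move the swarm toward the best-performing network. This is a generic sketch under standard PSO settings (inertia 0.7, cognitive and social factors 1.5), not the paper's exact configuration.

```python
import numpy as np

def elm_rmse(params, X, Y, n_hidden):
    """Decode one particle into ELM input weights and biases, solve the
    output weights by pseudo-inverse, and return the training RMSE (fitness)."""
    d = X.shape[1]
    W = params[:d * n_hidden].reshape(d, n_hidden)
    b = params[d * n_hidden:]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ Y
    return np.sqrt(np.mean((H @ beta - Y) ** 2))

def pso_elm(X, Y, n_hidden=8, n_particles=15, iters=20, seed=0):
    """PSO over the ELM hidden-layer parameters (inertia 0.7, c1 = c2 = 1.5)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([elm_rmse(p, X, Y, n_hidden) for p in pos])
    gbest, gbest_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([elm_rmse(p, X, Y, n_hidden) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        if f.min() < gbest_f:
            gbest, gbest_f = pos[np.argmin(f)].copy(), f.min()
    return gbest, gbest_f

# Toy stand-in for the fingerprint problem: RSSI-like inputs, position-like target.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (100, 2))
Y = np.sin(X[:, 0]) * X[:, 1]
best_params, best_rmse = pso_elm(X, Y)
```

In the real system the fitness would be evaluated against the fingerprint database rather than a synthetic target, but the loop structure is the same.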
Funding: Supported by the National Natural Science Foundation of China (Nos. 51176075, 51576097) and the Funding of the Jiangsu Innovation Program for Graduate Education (No. KYLX_0305).
Abstract: Aero-engine direct thrust control can not only improve thrust control precision but also save operating cost by reducing the reserved design margin and making full use of the engine's potential performance. However, it is a big challenge to estimate engine thrust accurately. To tackle this problem, this paper proposes an ensemble of improved wavelet extreme learning machines (EW-ELM) for aircraft engine thrust estimation. Extreme learning machine (ELM) has been proved to be an emerging learning technique with high efficiency. Since the combination of ELM and wavelet theory inherits the excellent properties of both, wavelet activation functions are used in the hidden nodes to enhance the ability to handle non-linearity. Besides, as the original ELM may suffer from ill-conditioning and robustness problems due to the random determination of the hidden-node parameters, the particle swarm optimization (PSO) algorithm is adopted to select the input weights and hidden biases. Furthermore, the ensemble of improved wavelet ELMs is utilized to construct the relationship between the sensor measurements and thrust. The simulation results verify the effectiveness and efficiency of the developed method and show that aero-engine thrust estimation using EW-ELM can satisfy the requirements of direct thrust control in terms of estimation accuracy and computation time.
Abstract: Extreme learning machine (ELM) is a feedforward neural network-based machine learning method that has the benefits of short training times and strong generalization capability, and it does not fall into local minima. However, due to its traditional shallow architecture, ELM requires a large number of hidden nodes when dealing with high-dimensional data sets to ensure classification performance. Moreover, its classification performance easily degrades under interference from noisy data. To address these problems, this paper proposes a double pseudo-inverse extreme learning machine (DPELM) based on a Sparse Denoising AutoEncoder (SDAE), namely SDAE-DPELM. The algorithm can directly determine both the input weights and the output weights of the network by the pseudo-inverse method. As a result, it requires only a few hidden-layer nodes to produce superior classification results, and its combination with SDAE effectively improves classification performance and noise resistance. Extensive numerical experiments show that the algorithm has high classification accuracy and good robustness when dealing with high-dimensional noisy data and high-dimensional noiseless data. Furthermore, applying the algorithm to Miao character recognition substantiates its excellent performance, which further illustrates its practicability.
Funding: Supported by the National Natural Science Foundation of China (61773405, 61751312) and the Major Scientific and Technological Innovation Projects of Shandong Province (2019JZZY020123).
Abstract: Extreme learning machine (ELM) has been proved by researchers to be an effective pattern classification and regression learning mechanism. However, its good performance relies on a large number of hidden-layer nodes, and as the number of hidden nodes increases, the computation cost grows greatly. In this paper, we propose a novel algorithm named constrained voting extreme learning machine (CV-ELM). Compared with the traditional ELM, CV-ELM determines the input weights and biases based on the differences between samples of different classes. At the same time, voting selection is introduced to improve the accuracy of the proposed method. The method is evaluated on public benchmark datasets, and the experimental results show that it is superior to the original ELM algorithm. Further, we apply CV-ELM to the classification of the superheat degree (SD) state in the aluminum electrolysis industry, where the recognition accuracy reaches 87.4%; the experimental results demonstrate that the proposed method is more robust than existing state-of-the-art identification methods.
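The voting part of CV-ELM can be illustrated generically: train several independently initialized ELM classifiers and let them vote on each test sample. The sketch below is plain majority voting over vanilla ELMs; it omits the paper's constrained, between-class weight initialization, and all names are illustrative.

```python
import numpy as np

def elm_classifier(X, y, n_hidden, seed):
    """One ELM classifier trained on one-hot targets; returns a predictor."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    T = np.eye(int(y.max()) + 1)[y]                 # one-hot target matrix
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ T
    return lambda Z: np.argmax(np.tanh(Z @ W + b) @ beta, axis=1)

def voting_elm(X, y, n_models=7, n_hidden=15):
    """Ensemble of ELMs; the final label is the majority vote."""
    models = [elm_classifier(X, y, n_hidden, seed=s) for s in range(n_models)]
    def predict(Z):
        votes = np.stack([m(Z) for m in models])    # shape (n_models, n_samples)
        return np.array([np.bincount(v).argmax() for v in votes.T])
    return predict

# Two well-separated Gaussian blobs as a toy classification task.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
acc = np.mean(voting_elm(X, y)(X) == y)
```

Because each member sees a different random hidden layer, the vote averages out individual bad initializations, which is the motivation for the voting selection in the abstract.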
Abstract: Extreme learning machine (ELM) allows for fast learning and better generalization performance than conventional gradient-based learning. However, the possible inclusion of non-optimal weights and biases due to random selection, and the need for more hidden neurons, adversely influence network usability. Further, choosing the optimal number of hidden nodes for a network usually requires intensive human intervention, which may lead to an ill-conditioned situation. In this context, chemical reaction optimization (CRO) is a meta-heuristic paradigm with increasing success in a large number of application areas; it is characterized by faster convergence capability and requires fewer tunable parameters. This study develops a learning framework combining the advantages of ELM and CRO, called extreme learning with chemical reaction optimization (ELCRO). ELCRO simultaneously optimizes the weight and bias vector and the number of hidden neurons of a single-layer feed-forward neural network without compromising prediction accuracy. We evaluate its performance by predicting the daily volatility and closing prices of BSE indices. Additionally, its performance is compared with three similarly developed models, ELM based on particle swarm optimization, genetic algorithm, and gradient descent, and the proposed algorithm is found superior. Wilcoxon signed-rank and Diebold-Mariano tests are then conducted to verify the statistical significance of the proposed model. Hence, this model can be used as a promising tool for financial forecasting.
Funding: Supported by the National Natural Science Foundation of China (No. 52067021); the Natural Science Foundation of Xinjiang (2022D01C35); the Excellent Youth Scientific and Technological Talents Plan of Xinjiang (No. 2019Q012); and the Major Science and Technology Special Project of Xinjiang Uygur Autonomous Region (2022A01002-2).
Abstract: The power transformer is one of the most crucial devices in the power grid, and it is significant to determine incipient faults of power transformers fast and accurately. Input features play a critical role in fault diagnosis accuracy. In order to further improve the fault diagnosis performance of power transformers, a random forest feature selection method coupled with an optimized kernel extreme learning machine is presented in this study. Firstly, the random forest feature selection approach is adopted to rank 42 related input features derived from gas concentration, gas ratio, and energy-weighted dissolved gas analysis. Afterwards, a kernel extreme learning machine tuned by the Aquila optimization algorithm is implemented to adjust crucial parameters and select the optimal feature subsets, with diagnosis accuracy used to assess the fault diagnosis capability of the concerned feature subsets. Finally, the optimal feature subsets are applied to establish the fault diagnosis model. According to experimental results based on two public datasets and comparison with five conventional approaches, the average accuracy of the proposed method is up to 94.5%, superior to that of the other conventional approaches. The fault diagnosis performance verifies that the optimum feature subset obtained by the presented method can dramatically improve power transformer fault diagnosis accuracy.
Funding: This work was supported by the State Grid Corporation Headquarters Management Technology Project (SGTYHT/19-JS-215) and the Southwest Jiaotong University New Interdisciplinary Cultivation Project (YH1500112432273).
Abstract: Lithium-ion battery State of Health (SOH) estimation is an essential issue in battery management systems. In order to better estimate battery SOH, an Extreme Learning Machine (ELM) is used to establish the SOH estimation model, and the Particle Swarm Optimization (PSO) algorithm is used to automatically adjust and optimize the ELM parameters to improve estimation accuracy. Firstly, cyclic aging data of the battery are collected, and five characteristic quantities related to battery capacity are extracted from the battery charging curve and the incremental capacity curve; the Grey Relational Analysis (GRA) method is used to analyze the correlation between battery capacity and the five characteristic quantities. Then, an ELM is used to build the capacity estimation model of the lithium-ion battery based on the five characteristics, and PSO is introduced to optimize the parameters of the capacity estimation model. The proposed method is validated by degradation experiments on the lithium-ion battery under different conditions. The results show that the battery capacity estimation model based on ELM and PSO has better accuracy and stability in capacity estimation, with an average absolute percentage error of less than 1%.
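The Grey Relational Analysis step can be sketched as follows: each candidate feature sequence is min-max normalized, compared point-wise against the normalized reference (here, capacity), and scored by the mean grey relational coefficient with a distinguishing coefficient rho. This is a hedged, generic implementation of textbook GRA, not the paper's code.

```python
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence against a reference
    sequence (e.g. battery capacity). Sequences are min-max normalized first;
    rho is the distinguishing coefficient. Assumes non-constant sequences."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    x0 = norm(np.asarray(reference, float))
    grades = []
    for xi in candidates:
        diff = np.abs(x0 - norm(np.asarray(xi, float)))
        coeff = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
        grades.append(coeff.mean())
    return np.array(grades)

# A feature that tracks the reference scores higher than one that opposes it.
capacity = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
features = [np.array([1.0, 2.1, 2.9, 4.2, 5.0]),   # similar trend
            np.array([5.0, 4.0, 3.0, 2.0, 1.0])]   # inverse trend
grades = grey_relational_grade(capacity, features)
```

Features with grades close to 1 are the strongest capacity indicators and would be the ones fed to the ELM model.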
Funding: This work is supported in part by the National Science Foundation of China (Grant Nos. 61672392, 61373038) and in part by the National Key Research and Development Program of China (Grant No. 2016YFC1202204).
Abstract: Software defect prediction plays an important role in software quality assurance. However, the performance of the prediction model is susceptible to irrelevant and redundant features. In addition, previous studies mostly regard software defect prediction as a single-objective optimization problem, and multi-objective software defect prediction has not been thoroughly investigated. For these two reasons, we propose the following solutions: (1) we leverage an advanced deep neural network, the Stacked Contractive AutoEncoder (SCAE), to extract robust deep semantic features from the original defect features, which have stronger discrimination capacity between classes (defective or non-defective); (2) we propose a novel multi-objective defect prediction model named SMONGE that utilizes the multi-objective NSGA-II algorithm to optimize an Extreme Learning Machine (ELM) based on state-of-the-art Pareto optimal solutions, according to the features extracted by SCAE. We mainly consider two objectives: one is to maximize the performance of the ELM, which refers to the benefit of the SMONGE model; the other is to minimize the output weight norm of the ELM, which is related to the cost of the SMONGE model. We compare SCAE with six state-of-the-art feature extraction methods, and compare the SMONGE model with multiple baseline models comprising four classic defect predictors and the MONGE model without SCAE, across 20 open-source software projects. The experimental results verify the superiority of SCAE and SMONGE on seven evaluation metrics.
Abstract: Many respiratory infections around the world have been caused by coronaviruses. COVID-19 is one of the most serious coronavirus diseases due to its rapid spread between people and its low survival rate. There is a high need for computer-assisted diagnostics (CAD) in the area of artificial intelligence to help doctors and radiologists identify COVID-19 patients in cloud systems. Machine learning (ML) has been used to examine chest X-ray frames. In this paper, a new transfer learning-based optimized extreme deep learning paradigm is proposed to classify a chest X-ray picture into one of three classes: a pneumonia patient, a COVID-19 patient, or a normal person. First, three different pre-trained Convolutional Neural Network (CNN) models (resnet18, resnet25, densenet201) are employed for deep feature extraction. Second, each feature vector is passed through the binary Butterfly Optimization Algorithm (bBOA) to reduce redundant features, extract the most representative ones, and enhance the performance of the CNN models. These selected features are then passed to an improved Extreme Learning Machine (ELM) using a BOA to classify the chest X-ray images. The proposed paradigm achieves 99.48% accuracy in detecting COVID-19 cases.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 22293024, 22293021, and 22078330) and the Youth Innovation Promotion Association, Chinese Academy of Sciences (Grant No. 2019050).
Abstract: The design, scale-up, and optimization of industrial reactors mainly depend on step-by-step experiments and engineering experience, which is usually time-consuming, high-cost, and high-risk. Although numerical simulation can reproduce high-resolution details of the hydrodynamics, thermal transfer, and reaction processes in reactors, it is still challenging for industrial reactors due to the huge computational cost. In this study, by combining numerical simulation with the artificial intelligence (AI) technology of machine learning (ML), a method is proposed to efficiently predict and optimize the performance of industrial reactors. A gas-solid fluidization reactor for the methanol-to-olefins process is taken as an example. 1500 cases under different conditions are simulated by the coarse-grain discrete particle method based on the Energy-Minimization Multi-Scale model, and thus the reactor performance data set is constructed. To develop an efficient reactor performance prediction model influenced by multiple factors, an ML method is established that includes an ensemble learning strategy and an automatic hyperparameter optimization technique, and it performs better than methods based on artificial neural networks. Furthermore, the operating conditions for the highest yield of ethylene and propylene, or for the lowest pressure drop, are searched with the particle swarm optimization algorithm owing to its strength in solving non-linear optimization problems. Results show that decreasing the methanol inflow rate and increasing the catalyst inventory maximize the yield, while decreasing the methanol inflow rate and reducing the catalyst inventory minimize the pressure drop. The two objectives are thus conflicting, and practical operations need to be compromised under different circumstances.
Funding: Supported by the Analytical Center for the Government of the Russian Federation (IGK 000000D730321P5Q0002) and Agreement No. 70-2021-00141.
Abstract: Rutting of asphalt pavements is a crucial design criterion in various pavement design guides, and a good road transportation base can provide security for the transportation of oil and gas. This study attempts to develop a robust artificial intelligence model to estimate the rutting depth of different asphalt pavements, with temperature and load axes as primary characteristics. The experimental data were obtained from 19 asphalt pavements with different crude oil sources on a 2.038 km long full-scale field accelerated pavement test track (Road Track Institute, RIOHTrack) in Tongzhou, Beijing. In addition, this paper proposes building complex networks from the rutting depths of the different pavements through complex network methods and the Louvain algorithm for community detection, so that the most critical structural elements can be selected from the rutting data of different asphalt pavements and similar structural elements can be found. An extreme learning machine algorithm with residual correction (RELM) is designed and optimized using an independent adaptive particle swarm algorithm. The experimental results of the proposed method are compared with several classical machine learning algorithms, with predictions of average root mean squared error (RMSE), average mean absolute error (MAE), and average mean absolute percentage error (MAPE) for the 19 asphalt pavements reaching 1.742, 1.363, and 1.94%, respectively. The experiments demonstrate that the RELM algorithm has an advantage over classical machine learning methods in dealing with non-linear problems in road engineering. Notably, the method ensures the adaptation of the simulated environment to different levels of abstraction through cognitive analysis of the production environment parameters. It is a promising alternative method that facilitates rapid assessment of pavement conditions and could be applied in the future to production processes in the oil and gas industry.
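The residual-correction idea behind RELM can be sketched generically: fit one ELM to the target, then fit a second ELM to the first model's residuals, and sum the two predictions. This minimal version uses plain random-weight ELMs rather than the paper's adaptive-PSO-tuned variant; all names and settings are assumptions.

```python
import numpy as np

def elm_fit(X, Y, n_hidden, seed):
    """Plain ELM regressor; returns a prediction function."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ Y
    return lambda Z: np.tanh(Z @ W + b) @ beta

def relm_fit(X, Y, n_hidden=20):
    """ELM with residual correction: a second ELM models the first
    model's residuals, and the two predictions are summed."""
    base = elm_fit(X, Y, n_hidden, seed=0)
    corr = elm_fit(X, Y - base(X), n_hidden, seed=1)
    return lambda Z: base(Z) + corr(Z)

# Toy check: the corrected model fits the training data at least as well.
rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, (150, 1))
Y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 0] ** 2
base = elm_fit(X, Y, 20, seed=0)
relm = relm_fit(X, Y, 20)
rmse = lambda p: np.sqrt(np.mean((p - Y) ** 2))
rmse_base, rmse_relm = rmse(base(X)), rmse(relm(X))
```

Because the second least-squares solve minimizes the residual norm (and zero coefficients are always feasible), the corrected training error can never exceed the base model's.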
Abstract: This paper reviews several recently developed techniques for the minimum-cost optimal design of water-retaining structures (WRSs) that integrate the effects of seepage, including the incorporation of uncertainty in heterogeneous soil parameter estimates and the quantification of reliability. The review is limited to methods based on coupled simulation-optimization (S-O) models. In this context, the design of WRSs is mainly affected by hydraulic design variables such as seepage quantities, which are difficult to determine from closed-form solutions or approximation theories. An S-O model is built by integrating numerical seepage modeling responses into an optimization algorithm based on efficient surrogate models. The surrogate models (meta-models) are trained on simulated data obtained from finite element numerical code solutions. The proposed methodology is applied using several machine learning techniques and optimization solvers to optimize the design of WRSs by incorporating different design variables and boundary conditions. Additionally, the effects of several scenarios of flow domain hydraulic conductivity are integrated into the S-O model, and reliability-based optimum design concepts are incorporated to quantify the uncertainty in seepage quantities due to uncertainty in hydraulic conductivity estimates. We conclude that the S-O model can efficiently optimize WRS designs, and that ANN, SVM, and GPR machine learning technique-based surrogate models are efficiently and expeditiously incorporated into the S-O models to imitate the numerical responses of simulations of various problems.
Abstract: In classification problems, the deep kernel extreme learning machine (DKELM) has the characteristics of efficient processing and superior performance, but its parameter optimization is difficult. To improve the classification accuracy of DKELM, a DKELM algorithm optimized by an improved sparrow search algorithm (ISSA), named ISSA-DKELM, is proposed in this paper. Aiming at the parameter selection problem of DKELM, the DKELM classifier is constructed using the optimal parameters obtained by ISSA optimization. In order to make up for the shortcomings of the basic sparrow search algorithm (SSA), a chaotic transformation is first applied to initialize the sparrow positions. Then, the positions of the discoverer sparrow population are dynamically adjusted, and a learning operator from the teaching-learning-based optimization algorithm is fused in to improve the position update of the joiners. Finally, a Gaussian mutation strategy is added in the later iterations of the algorithm to make the sparrows jump out of local optima. The experimental results show that the proposed DKELM classifier is feasible and effective, and compared with other classification algorithms, the proposed DKELM algorithm achieves better test accuracy.
Abstract: Photoelectric displacement sensors rarely possess a perfectly linear transfer characteristic, but always have some degree of non-linearity over their range of operation, and a nonlinear sensor output produces a whole assortment of problems. This paper presents a method to compensate the nonlinearity of the photoelectric displacement sensor based on the extreme learning machine (ELM) method, which significantly reduces the time needed to train a neural network on the output voltage of the optical displacement sensor and the measured input displacement, eliminating the nonlinear errors in the training process. The use of the proposed method was demonstrated through computer simulation with experimental data from the sensor. The results revealed that the proposed method compensated the nonlinearity of the sensor with very low training time, the lowest mean squared error (MSE) value, and better linearity. This approach involves less computational complexity, performs well in nonlinearity compensation for the photoelectric displacement sensor, and has a good application prospect.
Funding: This work was supported by the "Zhujiang Talent Program" High Talent Project of Guangdong Province (Grant No. 2017GC010614) and the National Natural Science Foundation of China (Grant No. 22078372).
Abstract: Modeling and optimization are crucial to smart chemical process operations. However, a large number of nonlinearities must be considered in a typical chemical process owing to complex unit operations, chemical reactions, and separations. This poses a great challenge to implementing mechanistic models in industrial-scale problems due to the resulting computational complexity. Thus, this paper presents an efficient hybrid framework that integrates machine learning and particle swarm optimization to overcome these difficulties. An industrial propane dehydrogenation process was carried out to demonstrate the validity and efficiency of our method. Firstly, a data set was generated based on a process mechanistic simulation validated by industrial data, which provides sufficient and reasonable samples for model training and testing. Secondly, four well-known machine learning methods, namely K-nearest neighbors, decision tree, support vector machine, and artificial neural network, were compared and used to obtain prediction models of the process operation. All of these methods achieved highly accurate models by adjusting model parameters on the basis of high-coverage data and proper features. Finally, optimal process operations were obtained using the particle swarm optimization approach.
Funding: Projects 61573144, 61773165, 61673175, and 61174040 supported by the National Natural Science Foundation of China; Project 222201717006 supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: The hybrid flow shop scheduling problem with unrelated parallel machines is a typical NP-hard combinatorial optimization problem, and it exists widely in the chemical, manufacturing, and pharmaceutical industries. In this work, a novel mathematical model for the hybrid flow shop scheduling problem with unrelated parallel machines (HFSPUPM) was proposed. Additionally, an effective hybrid estimation of distribution algorithm (EDA) was proposed to solve the HFSPUPM, taking advantage of the features of the mathematical model. In the optimization algorithm, a new individual representation method was adopted: the EDA structure was used for global search, while the teaching-learning-based optimization (TLBO) strategy was used for local search. Based on the structure of the HFSPUPM, this work also presents a series of discrete operations. Simulation results show the effectiveness of the proposed hybrid algorithm compared with other algorithms.
基金supported by the National Natural Science Foundation of China (No.61675174)the Natural Science Foundation of Fujian Province (No.2020J01705)。
Abstract: In this paper, we present a method based on self-mixing interferometry combined with an extreme learning machine for real-time human blood pressure measurement. A signal processing method based on the wavelet transform is applied to extract reversion points in the self-mixing interference signal, so that the pulse wave profile is successfully reconstructed. Considering that blood pressure values are intrinsically related to the characteristic parameters of the pulse wave, 80 samples from the MIMIC-II database are used to train the extreme learning machine blood pressure model. In the experiment, 15 measured samples of the pulse wave signal are used as the prediction set. The results show that the errors of systolic and diastolic blood pressure are both within 5 mmHg compared with those obtained by the Coriolis method.
Abstract: Solar energy has become crucial in producing electrical energy because it is inexhaustible and sustainable. However, its uncertain generation causes problems in power system operation. Therefore, solar irradiance forecasting is significant for suitably controlling power system operation, organizing transmission expansion planning, and dispatching power system generation. Nonetheless, forecasting performance can be degraded by an unfitted prediction model and a lack of preprocessing. To deal with these issues, this paper proposes the Meta-Learning Extreme Learning Machine optimized with Golden Eagle Optimization and Logistic Map (MGEL-ELM) and the Same Datetime Interval Averaged Imputation algorithm (SAME) for improving the forecasting performance on incomplete solar irradiance time series datasets. Thus, the proposed method not only imputes incomplete forecasting data but also achieves forecasting accuracy. The experimental results of forecasting a solar irradiance dataset in Thailand indicate that the proposed method can achieve a coefficient of determination of up to 0.9307, the highest compared to state-of-the-art models. Furthermore, the proposed method consumes less forecasting time than a deep learning model.
Abstract: The estimation of the fuzzy membership function parameters for an interval type-2 fuzzy logic system (IT2-FLS) is a challenging task in the presence of uncertainty and imprecision. The grasshopper optimization algorithm (GOA) is a recent population-based meta-heuristic algorithm that mimics the swarming behavior of grasshoppers in nature and has good convergence ability towards optima. The main objective of this paper is to apply GOA to estimate the optimal parameters of the Gaussian membership function in an IT2-FLS. The antecedent part parameters (Gaussian membership function parameters) are encoded as a population of an artificial swarm of grasshoppers and optimized using the algorithm; tuning of the consequent part parameters is accomplished using an extreme learning machine. The optimized IT2-FLS (GOAIT2FELM) obtains the optimal premise parameters based on the tuned consequent part parameters and is then applied to Australian national electricity market data for the forecasting of electricity loads and prices. The forecasting performance of the proposed model is compared with that of other population-based optimized IT2-FLS variants, including the genetic algorithm and the artificial bee colony optimization algorithm. Analysis of the performance on the same data sets reveals that the proposed GOAIT2FELM could be a better approach for improving the accuracy of the IT2-FLS compared to other variants of the optimized IT2-FLS.