The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, and it directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back propagation neural network, and the DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a learning rate of 0.1 has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of the oxygen consumption volume within an error of ±300 m^3 is 96.67%; the determination coefficient (R^2) and root mean square error (RMSE) are 0.6984 and 150.03 m^3, respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R^2 and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
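As a rough illustration of the hybrid model's final two steps, the sketch below integrates the two volume estimates with a weighted average and converts the result to a blowing time; the weight w and all numeric values are illustrative assumptions, not values from the paper.

```python
def blowing_time(v_obm, v_dnn, supply_intensity, w=0.5):
    """Return the predicted oxygen blowing time (min) for one heat.

    v_obm, v_dnn     : oxygen consumption volume predictions (m^3)
    supply_intensity : oxygen supply rate of the heat (m^3/min)
    w                : hypothetical weight balancing the two models
    """
    v_hybrid = w * v_obm + (1.0 - w) * v_dnn  # step 2: integrated volume estimate
    return v_hybrid / supply_intensity        # step 3: time = volume / rate

print(blowing_time(v_obm=9800.0, v_dnn=10150.0, supply_intensity=620.0))
```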
This study describes improving network security by implementing and assessing an intrusion detection system (IDS) based on deep neural networks (DNNs). The paper investigates contemporary technical ways of enhancing intrusion detection performance, given the vital relevance of safeguarding computer networks against harmful activity. The DNN-based IDS is trained and validated using the NSL-KDD dataset, a popular benchmark for IDS research. The model performs well in both the training and validation stages, with 91.30% training accuracy and 94.38% validation accuracy. Thus, the model shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation. Furthermore, for both macro and micro averages across class 0 (normal) and class 1 (anomalous) data, the study evaluates the model using a variety of assessment measures, such as accuracy, precision, recall, and F1 scores. The macro-average recall is 0.9422, the macro-average precision is 0.9482, and the accuracy is 0.942. Furthermore, macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model's ability to precisely identify anomalies. The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved by DNN-based intrusion detection systems, which can significantly improve network security. The study underscores the critical function of DNN-based IDS in contemporary cybersecurity practice and sets the foundation for further developments in this field. Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.
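A minimal sketch of the kind of DNN binary classifier described, trained on placeholder data standing in for preprocessed NSL-KDD features; the 41-feature input and layer sizes are assumptions, not the paper's exact architecture.

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 41).astype("float32")  # stand-in for NSL-KDD features
y = np.random.randint(0, 2, size=(1000,))       # 0 = normal, 1 = anomalous

model = tf.keras.Sequential([
    tf.keras.Input(shape=(41,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # anomaly probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)  # train/validation split
```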
The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through the free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, where a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization direction of the input distorted beam is adopted as the feature for classification through the DDNN. The numerical simulations and experimental results demonstrate that the proposed scheme achieves high accuracy in classification tasks. The energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy can remain above 95% for various strengths of turbulence. The scheme converges faster and is more accurate than one based on a convolutional neural network.
The accurate prediction of the bearing capacity of ring footings, which is crucial for civil engineering projects, has historically posed significant challenges. Previous research in this area has been constrained by considering only a limited number of parameters or utilizing relatively small datasets. To overcome these limitations, a comprehensive finite element limit analysis (FELA) was conducted to predict the bearing capacity of ring footings. The study considered a range of effective parameters, including the undrained shear strength of clay, the heterogeneity factor of clay, the soil friction angle of the sand layer, the radius ratio of the ring footing, the sand layer thickness, and the interface between the ring footing and the soil. An extensive dataset comprising 80,000 samples was assembled, exceeding the limitations of previous research. The availability of this dataset enabled more robust and statistically significant analyses and predictions of ring footing bearing capacity. In light of the time-intensive nature of gathering such a substantial dataset, a customized deep neural network (DNN) was developed specifically to predict the bearing capacity rapidly. Both computational and comparative results indicate that the proposed DNN (i.e., DNN-4) can accurately predict the bearing capacity, with an R^2 value greater than 0.99 and a mean squared error (MSE) below 0.009, in a fraction of a second, reflecting the effectiveness and efficiency of the proposed method.
Facial beauty analysis is an important topic in human society. It may be used as a guide for face beautification applications such as cosmetic surgery. Deep neural networks (DNNs) have recently been adopted for facial beauty analysis and have achieved remarkable performance. However, most existing DNN-based models regard facial beauty analysis as a normal classification task. They ignore important prior knowledge from traditional machine learning models, which illustrates the significant contribution of geometric features to facial beauty analysis. To be specific, landmarks of the whole face and of facial organs are introduced to extract geometric features for the decision. Inspired by this, we introduce a novel dual-branch network for facial beauty analysis: one branch takes the Swin Transformer as the backbone to model the full face and global patterns, and the other branch focuses on the masked facial organs with a residual network to model the local patterns of certain facial parts. Additionally, the designed multi-scale feature fusion module can further facilitate our network to learn complementary semantic information between the two branches. In model optimisation, we propose a hybrid loss function in which geometric regularisation is introduced by regressing the facial landmarks, forcing the extracted features to convey facial geometric information. Experiments performed on the SCUT-FBP5500 dataset and the SCUT-FBP dataset demonstrate that our model outperforms state-of-the-art convolutional neural network models, which proves the effectiveness of the proposed geometric regularisation and the dual-branch structure with the hybrid loss function. To the best of our knowledge, this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
It is important to calculate the reachable domain (RD) of a manned lunar mission to evaluate whether a lunar landing site can be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via classification and regression neural networks. An efficient database-generation method is developed for obtaining eight types of free return orbits, and the RD is then defined by the orbit's inclination and right ascension of ascending node (RAAN) at the perilune. A classification neural network and a regression network are trained respectively: the former is built to classify the type of the RD, and the latter is built to calculate the inclination and RAAN of the RD. The simulation results show that the two neural networks are well trained. The classification model has an accuracy of more than 99%, and the mean square error of the regression model is less than 0.01° on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models, and a recognition tool is built to evaluate whether a lunar site can be reached. The proposed deep learning method shows superior computation efficiency compared with the traditional double two-body model.
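The serial strategy can be sketched as a classification network followed by a regression network, as below; the feature dimension, network sizes, and training data are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 6))                   # stand-in orbit design parameters
y_type = rng.integers(0, 8, 500)           # eight types of free return orbits
y_angles = rng.random((500, 2)) * 360.0    # [inclination, RAAN] at perilune (deg)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y_type)
reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y_angles)

x_new = rng.random((1, 6))
orbit_type = clf.predict(x_new)            # step 1: classify the RD type
inc_raan = reg.predict(x_new)              # step 2: regress inclination and RAAN
```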
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects, but timely and accurate defect prediction remains a major challenge. To improve it, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics by measuring similarity with the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted by Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with improvements of 3%, 3%, 2%, and 3% in accuracy and related metrics, and reductions of 13% and 15% in time and space, compared with two state-of-the-art methods.
Blades are essential components of wind turbines. Reducing their fatigue loads during operation helps to extend their lifespan, but it is difficult to quickly and accurately calculate the fatigue loads of blades. To solve this problem, this paper designs a data-driven blade load modeling method based on a deep learning framework, covering mechanism analysis, feature selection, and model construction. In the mechanism analysis part, the generation mechanism of blade loads and the theoretical load calculation method based on material damage theory are analyzed, and four measurable operating state parameters related to blade loads are screened. In the feature extraction part, 15 characteristic indicators of each screened parameter are extracted in the time and frequency domains, and feature selection is completed through correlation analysis with blade loads to determine the input parameters of the data-driven model. In the model construction part, a deep neural network based on feedforward and feedback propagation is designed to construct the nonlinear coupling relationship between the unit operating parameter characteristics and blade loads. The results show that the proposed method mines the wind turbine operating state characteristics highly correlated with the blade load, such as the standard deviation of wind speed. The model built using these characteristics has reasonable calculation and fitting capabilities for the blade load and fits untrained out-of-sample data better than the traditional scheme. Based on the mean absolute percentage error, the modeling accuracy for the two blade loads reaches more than 90% and 80%, respectively, providing a good foundation for subsequent optimization control to suppress the blade load.
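A minimal sketch of the feature-extraction and correlation-screening steps, using a small illustrative subset of the 15 indicators and random stand-in data.

```python
import numpy as np

def extract_features(x, fs=50.0):
    """Compute a few time/frequency-domain indicators of an operating-state signal."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return [
        np.mean(x),                                   # time domain: mean
        np.std(x),                                    # time domain: std (e.g., of wind speed)
        np.sqrt(np.mean(x ** 2)),                     # time domain: RMS
        np.sum(freqs * spectrum) / np.sum(spectrum),  # frequency domain: spectral centroid
    ]

rng = np.random.default_rng(1)
segments = rng.random((200, 1024))  # stand-in operating-state signal segments
loads = rng.random(200)             # stand-in equivalent blade load per segment

F = np.array([extract_features(s) for s in segments])
# screen indicators by absolute correlation with the blade load
corr = [abs(np.corrcoef(F[:, j], loads)[0, 1]) for j in range(F.shape[1])]
selected = np.argsort(corr)[::-1][:2]  # keep the most load-correlated indicators
```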
The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the large-scale grid connection of wind power. Therefore, wind power forecasting plays a key role in improving the safety and economic benefits of the power grid. This paper proposes a wind power prediction method based on a convolutional graph attention deep neural network with multi-wind-farm data. Based on the graph attention network and attention mechanism, the method extracts spatial-temporal characteristics from the data of multiple wind farms. Then, combined with a deep neural network, a convolutional graph attention deep neural network model is constructed. Finally, the model is trained with the quantile regression loss function to achieve deterministic and probabilistic wind power prediction based on multi-wind-farm spatial-temporal data. A wind power dataset in the U.S. is taken as an example to demonstrate the efficacy of the proposed model. Compared with the selected baseline methods, the proposed model achieves the best prediction performance. The point prediction errors (i.e., root mean square error (RMSE) and normalized mean absolute percentage error (NMAPE)) are 0.304 MW and 1.177%, respectively, and the comprehensive performance of probabilistic prediction (i.e., continuously ranked probability score (CRPS)) is 0.580. Thus, the significance of the multi-wind-farm data and the spatial-temporal feature extraction module is self-evident.
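The quantile regression (pinball) loss at the core of the probabilistic prediction can be written compactly; the sketch below is a generic formulation, with the quantile levels chosen as assumptions rather than the paper's settings.

```python
import torch

def pinball_loss(pred, target, q):
    """Quantile (pinball) loss for quantile level q in (0, 1)."""
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

pred = torch.randn(32)    # stand-in wind power predictions (MW)
target = torch.randn(32)  # stand-in measurements
# training on several quantiles yields the predictive distribution
loss = sum(pinball_loss(pred, target, q) for q in (0.1, 0.5, 0.9))
```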
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated Sentinel 2A vegetation indices and LAI with a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with R^2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R^2 = 0.89, RMSE = 2.27 t ha−1, and RRMSE = 30.55%). Biomass estimated from 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R^2 = 0.83, RMSE = 1.96 t ha−1, and RRMSE = 26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R^2 = 0.91, RMSE = 1.49 t ha−1, and RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from those between the simulated Sentinel 2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices (R^2 = 0.87, RMSE = 1.84 t ha−1, and RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass. The study provides a guideline for estimating maize biomass with remote sensing technology and the DNN algorithm in this region.
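A minimal sketch of the regression setup described, mapping the 15 vegetation indices plus LAI to biomass with an MLP standing in for the DNN and evaluating with R^2 and RMSE; all data are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
X = rng.random((300, 16))   # 15 vegetation indices + LAI per sample
y = rng.random(300) * 20.0  # stand-in biomass (t/ha)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000)
model.fit(X[:240], y[:240])                 # train on 80% of the samples
pred = model.predict(X[240:])               # evaluate on the held-out 20%
print("R2  =", r2_score(y[240:], pred))
print("RMSE =", mean_squared_error(y[240:], pred) ** 0.5)
```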
Icing is an important factor threatening aircraft flight safety. According to the requirements of airworthiness regulations, aircraft icing safety assessment must be carried out based on the ice shapes formed under different icing conditions. Due to the complexity of the icing process, the rapid assessment of ice shape remains an important challenge. In this paper, an efficient prediction model of aircraft icing is established based on the deep belief network (DBN) and the stacked auto-encoder (SAE), both deep neural networks. The detailed network structures are designed, and the networks are then trained with samples obtained by icing numerical computation. The model is then applied to the ice shape evaluation of the NACA0012 airfoil. The results show that the model can accurately capture the nonlinear behavior of aircraft icing and thus make excellent ice shape predictions. The model provides an important tool for aircraft icing analysis.
The composition control of molten steel is one of the main functions of the ladle furnace (LF) refining process. In this study, a feasible model was established to predict the alloying element yield using principal component analysis (PCA) and a deep neural network (DNN). PCA was used to eliminate collinearity and reduce the dimension of the input variables, and the data processed by PCA were then used to establish the DNN model. The prediction hit ratios for the Si element yield in the error ranges of ±1%, ±3%, and ±5% are 54.0%, 93.8%, and 98.8%, respectively, whereas those of the Mn element yield in the error ranges of ±1%, ±2%, and ±3% are 77.0%, 96.3%, and 99.5%, respectively, in the PCA-DNN model. The results demonstrate that the PCA-DNN model performs better than known models such as the reference heat method, multiple linear regression, modified backpropagation, and the plain DNN model. Meanwhile, accurate prediction of the alloying element yield can greatly contribute to realizing "narrow window" control of composition in molten steel. The prediction model for the element yield can also provide a reference for the development of an alloying control model in LF intelligent refining in the modern iron and steel industry.
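The PCA-DNN pipeline can be sketched with standard tooling, as below; the variable counts, retained variance, and network sizes are assumptions, with an MLP regressor standing in for the DNN.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((500, 20))  # stand-in LF process variables
y = rng.random(500)        # stand-in Si (or Mn) element yield

model = make_pipeline(
    PCA(n_components=0.95),  # drop collinearity: keep 95% of the variance
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000),
).fit(X, y)
yield_pred = model.predict(X[:5])  # predicted element yields for new heats
```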
Optical deep learning based on diffractive optical elements offers unique advantages in parallel processing, computational speed, and power efficiency. One landmark method is the diffractive deep neural network (D^2NN) based on three-dimensional printing technology and operated in the terahertz spectral range. Since the terahertz bandwidth involves limited interparticle coupling and material losses, this paper extends the D^2NN to visible wavelengths. A general theory including a revised formula is proposed to resolve the contradictions among wavelength, neuron size, and fabrication limitations. A novel visible-light D^2NN classifier is used to recognize unchanged targets (handwritten digits ranging from 0 to 9) and changed targets (i.e., targets that have been covered or altered) at a visible wavelength of 632.8 nm. The obtained experimental classification accuracy (84%) and numerical classification accuracy (91.57%) quantify the match between the theoretical design and the fabricated system performance. The presented framework can be used to apply the D^2NN to various practical applications and to design new applications.
Based on a CNN-LSTM fusion deep neural network, this paper proposes a seismic velocity model building method that can simultaneously estimate the root mean square (RMS) velocity and the interval velocity from the common-midpoint (CMP) gather. In the proposed method, a convolutional neural network (CNN) encoder and two long short-term memory networks (LSTMs) are used to extract spatial and temporal features from seismic signals, respectively, and a CNN decoder is used to recover the RMS velocity and interval velocity of underground media from the extracted feature vectors. To address the problems of unstable gradients and easy trapping in local minima during deep neural network training, we propose to use Kaiming normal initialization with zero negative slopes of rectified units and to adjust the learning process by optimizing the mean square error (MSE) loss function with the introduction of a freezing factor. Experiments on the testing dataset show that the CNN-LSTM fusion deep neural network can predict both RMS velocity and interval velocity more accurately, and its inversion accuracy is superior to that of single neural network models. The predictions on complex structures and the Marmousi model are consistent with the true velocity variation trends, and the predictions on field data can effectively correct the phase axis, improving the lateral continuity of the phase axis and the quality of the stack section, indicating the effectiveness and decent generalization capability of the proposed method.
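A minimal sketch of the two training adjustments mentioned, Kaiming normal initialization with zero negative slope (i.e., for plain ReLU units) and an MSE loss scaled by a freezing factor; the freezing schedule shown is a hypothetical placeholder, since the paper's exact factor is not given here.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

# Kaiming normal initialization with zero negative slope of the rectified units
for m in net.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, a=0.0, nonlinearity="relu")
        nn.init.zeros_(m.bias)

def frozen_mse(pred, target, epoch, freeze_after=50):
    """MSE loss scaled by a freezing factor late in training (schedule assumed)."""
    factor = 0.1 if epoch >= freeze_after else 1.0
    return factor * nn.functional.mse_loss(pred, target)
```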
The evolution and expansion of IoT devices have reduced human efforts, increased resource utilization, and saved time; however, IoT devices create significant challenges, such as a lack of security and privacy, making them more vulnerable to IoT-based botnet attacks. There is a need to develop efficient and fast models that can work in real time with stability. The present investigation developed two novel deep neural network (DNN) models, DNNBoT1 and DNNBoT2, to detect and classify well-known IoT botnet attacks such as Mirai and BASHLITE from nine compromised industrial-grade IoT devices. PCA was used for feature extraction and to improve effective and accurate botnet classification in IoT environments. The models were designed based on rigorous hyperparameter tuning with GridSearchCV. Early stopping was utilized to avoid the effects of overfitting and underfitting for both DNN models. The in-depth assessment and evaluation demonstrated that the developed models are among the best performing in accuracy and efficiency. The novelty of the present investigation is that the developed models bridge existing gaps by using a real dataset with high accuracy and a significantly lower false alarm rate. The results were evaluated against earlier studies and deemed efficient at detecting botnet attacks using the real dataset.
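A minimal sketch of the model-selection setup described (PCA features, GridSearchCV tuning, and early stopping), with an sklearn MLP standing in for the DNNBoT models; the grid values, feature counts, and data are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(4)
X = rng.random((600, 115))   # stand-in IoT device traffic features
y = rng.integers(0, 3, 600)  # e.g., benign / Mirai / BASHLITE

pipe = Pipeline([
    ("pca", PCA(n_components=10)),                           # feature extraction
    ("mlp", MLPClassifier(early_stopping=True, max_iter=500)),  # early stopping on
])
grid = GridSearchCV(
    pipe,
    {"mlp__hidden_layer_sizes": [(32,), (64, 32)], "mlp__alpha": [1e-4, 1e-3]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)  # hyperparameters chosen by cross-validated search
```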
Sheet metal forming technologies have been intensively studied for decades to meet the increasing demand for lightweight metal components. To surmount the springback occurring in sheet metal forming processes, numerous studies have been performed to develop compensation methods. However, for most existing methods, the development cycle is still considerably time-consuming and demands high computational or capital cost. In this paper, a novel theory-guided regularization method for training deep neural networks (DNNs), implanted in a learning system, is introduced to learn the intrinsic relationship between the workpiece shape after springback and the required process parameter, e.g., the loading stroke, in sheet metal bending processes. By directly bridging the workpiece shape to the process parameter, issues concerning springback in the process design are circumvented. The novel regularization method utilizes a well-recognized theory in material mechanics, Swift's law, by penalizing divergence from this law throughout the network training process. The regularization is implemented by a multi-task learning network architecture, with the learning of extra tasks regularized during training. The stress-strain curve describing the material properties and the prior knowledge used to guide learning are stored in the database and the knowledge base, respectively. One can obtain the predicted loading stroke for a new workpiece shape by importing the target geometry through the user interface. In this research, the neural models were found to outperform a traditional machine learning model, the support vector regression model, in experiments with different amounts of training data. Through a series of studies with varying training data structure and amount, workpiece material, and applied bending processes, the theory-guided DNN has been shown to achieve superior generalization and learning consistency compared with data-driven DNNs, especially when only scarce and scattered experimental data are available for training, which is often the case in practice. The theory-guided DNN could also be applied to other sheet metal forming processes. It provides an alternative method for compensating springback with a significantly shorter development cycle and less capital cost and computational requirement than traditional compensation methods in the sheet metal forming industry.
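The theory-guided regularization can be sketched as a task loss plus a penalty for divergence from Swift's hardening law, sigma = K(eps0 + eps)^n, on an auxiliary stress prediction of the multi-task network; the material constants and weighting below are illustrative assumptions, not the paper's values.

```python
import torch

K, EPS0, N_EXP, LAM = 500.0, 0.01, 0.2, 0.1  # hypothetical material constants

def swift_stress(strain):
    """Flow stress from Swift's law, sigma = K * (eps0 + eps)^n."""
    return K * (EPS0 + strain) ** N_EXP

def theory_guided_loss(stroke_pred, stroke_true, stress_pred, strain):
    task = torch.nn.functional.mse_loss(stroke_pred, stroke_true)     # main task
    theory = torch.mean((stress_pred - swift_stress(strain)) ** 2)    # divergence
    return task + LAM * theory  # penalize divergence from Swift's law

strain = torch.rand(16)
loss = theory_guided_loss(torch.rand(16), torch.rand(16),
                          swift_stress(strain) * 1.05, strain)
```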
Recently, due to the availability of big data and the rapid growth of computing power, artificial intelligence (AI) has regained tremendous attention and investment. Machine learning (ML) approaches have been successfully applied to solve many problems in academia and in industry. Although the explosion of big data applications is driving the development of ML, it also imposes severe challenges of data processing speed and scalability on conventional computer systems. Computing platforms that are dedicatedly designed for AI applications have been considered, ranging from a complement to von Neumann platforms to a "must-have" and stand-alone technical solution. These platforms, which belong to a larger category named "domain-specific computing," focus on specific customization for AI. In this article, we focus on summarizing the recent advances in accelerator designs for deep neural networks (DNNs), that is, DNN accelerators. We discuss various architectures that support DNN execution in terms of computing units, dataflow optimization, targeted network topologies, architectures on emerging technologies, and accelerators for emerging applications. We also provide our vision of future trends in AI chip design.
Training deep neural networks (DNNs) requires a significant amount of time and resources to obtain acceptable results, which severely limits its deployment in resource-limited platforms. This paper proposes DarkFPGA, a novel customizable framework to efficiently accelerate the entire DNN training on a single FPGA platform. First, we explore batch-level parallelism to enable efficient FPGA-based DNN training. Second, we devise a novel hardware architecture optimised by a batch-oriented data pattern and tiling techniques to effectively exploit parallelism. Moreover, an analytical model is developed to determine the optimal design parameters for the DarkFPGA accelerator with respect to a specific network specification and FPGA resource constraints. Our results show that the accelerator is able to perform about 10 times faster than CPU training, with about one-third of the energy consumption of GPU training, when using 8-bit integers for training VGG-like networks on the CIFAR dataset on the Maxeler MAX5 platform.
An anomaly-based intrusion detection system (A-IDS) provides a critical aspect of a modern computing infrastructure, since new types of attacks can be discovered. It prevalently utilizes several machine learning (ML) algorithms for detecting and classifying network traffic. To date, many algorithms have been proposed to improve the detection performance of A-IDS, using either individual or ensemble learners. In particular, ensemble learners have shown remarkable performance over individual learners in many applications, including the cybersecurity domain. However, most existing works still suffer from unsatisfactory results due to improper ensemble design. The aim of this study is to demonstrate the effectiveness of a stacking ensemble-based model for A-IDS, where deep learning (e.g., a deep neural network [DNN]) is used as the base learner model. The effectiveness of the proposed model and the base DNN model are benchmarked empirically in terms of several performance metrics, i.e., the Matthews correlation coefficient, accuracy, and false alarm rate. The results indicate that the proposed model is superior to the base DNN model as well as other existing ML algorithms found in the literature.
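A minimal sketch of a stacking ensemble for A-IDS with MLPs as the DNN-style base learners and logistic regression as the meta-learner; the particular base/meta choices and data are assumptions, with the Matthews correlation coefficient computed as one of the study's metrics.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(5)
X = rng.random((400, 30))    # stand-in network traffic features
y = rng.integers(0, 2, 400)  # 0 = normal, 1 = attack

stack = StackingClassifier(
    estimators=[("dnn1", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
                ("dnn2", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500))],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
).fit(X[:300], y[:300])
print("MCC:", matthews_corrcoef(y[300:], stack.predict(X[300:])))
```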
This paper presents an innovative data-integration method that uses an iterative-learning approach: a deep neural network (DNN) coupled with a stacked autoencoder (SAE) to solve issues encountered in many-objective history matching. The proposed method consists of a DNN-based inverse model with SAE-encoded static data, and iterative updates of the supervised-learning data are based on distance-based clustering schemes. The DNN functions as an inverse model operating on encoded flattened data, while the SAE, as a pre-trained neural network, successfully reduces dimensionality and reliably reconstructs geomodels. The iterative-learning method improves the training data for the DNN, showing the error reduction achieved with each iteration step. The proposed workflow shows a small mean absolute percentage error, below 4% for all objective functions, while a typical multi-objective evolutionary algorithm fails to significantly reduce the initial population uncertainty. Iterative learning-based many-objective history matching estimates the trends in water cuts that are not reliably included in dynamic-data matching. This confirms that the proposed workflow constructs more plausible geomodels. The workflow would be a reliable alternative to the less-convergent Pareto-based multi-objective evolutionary algorithm in the presence of geological uncertainty and varying objective functions.
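The SAE's role can be sketched as a pre-trained autoencoder that compresses static geomodel data into a low-dimensional code for the DNN inverse model and reconstructs geomodels from it; the layer sizes and data below are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(4096, 512), nn.ReLU(), nn.Linear(512, 64))
decoder = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 4096))

grid = torch.rand(8, 4096)  # stand-in flattened static geomodels
code = encoder(grid)        # low-dimensional encoding fed to the inverse model
recon = decoder(code)       # reconstructed geomodels from the code
loss = nn.functional.mse_loss(recon, grid)  # autoencoder pre-training objective
```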
基金financially supported by the National Natural Science Foundation of China (Nos.51974023 and52374321)the funding of State Key Laboratory of Advanced Metallurgy,University of Science and Technology Beijing,China (No.41620007)。
文摘The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based on oxygen balance mechanism (OBM) and deep neural network (DNN) was established for predicting oxygen blowing time in converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using the actual data collected from an integrated steel plant in China, and compared with multiple linear regression model, OBM model, and neural network model including extreme learning machine, back propagation neural network, and DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layer layers, 32-16-8 neurons per hidden layer, and 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with other models. The predicted hit ratio of oxygen consumption volume within the error±300 m^(3)is 96.67%;determination coefficient (R^(2)) and root mean square error (RMSE) are0.6984 and 150.03 m^(3), respectively. The oxygen blow time prediction hit ratio within the error±0.6 min is 89.50%;R2and RMSE are0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
基金Princess Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project(PNURSP2024R319)funded by the Prince Sultan University,Riyadh,Saudi Arabia.
文摘This study describes improving network security by implementing and assessing an intrusion detection system(IDS)based on deep neural networks(DNNs).The paper investigates contemporary technical ways for enhancing intrusion detection performance,given the vital relevance of safeguarding computer networks against harmful activity.The DNN-based IDS is trained and validated by the model using the NSL-KDD dataset,a popular benchmark for IDS research.The model performs well in both the training and validation stages,with 91.30%training accuracy and 94.38%validation accuracy.Thus,the model shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation.Furthermore,for both macro and micro averages across class 0(normal)and class 1(anomalous)data,the study evaluates the model using a variety of assessment measures,such as accuracy scores,precision,recall,and F1 scores.The macro-average recall is 0.9422,the macro-average precision is 0.9482,and the accuracy scores are 0.942.Furthermore,macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model’s ability to precisely identify anomalies precisely.The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved byDNN-based intrusion detection systems,which can significantly improve network security.The study underscores the critical function ofDNN-based IDS in contemporary cybersecurity procedures by setting the foundation for further developments in this field.Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.
基金Project supported by the National Natural Science Foundation of China(Grant Nos.62375140 and 62001249)the Open Research Fund of National Laboratory of Solid State Microstructures(Grant No.M36055).
文摘The vector vortex beam(VVB)has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications.However,a VVB is unavoidably affected by atmospheric turbulence(AT)when it propagates through the free-space optical communication environment,which results in detection errors at the receiver.In this paper,we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT,where a diffractive deep neural network(DDNN)is designed and trained to classify the intensity distribution of the input distorted VVBs,and the horizontal direction of polarization of the input distorted beam is adopted as the feature for the classification through the DDNN.The numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks.The energy distribution percentage remains above 95%from weak to medium AT,and the classification accuracy can remain above 95%for various strengths of turbulence.It has a faster convergence and better accuracy than that based on a convolutional neural network.
文摘The accurate prediction of the bearing capacity of ring footings,which is crucial for civil engineering projects,has historically posed significant challenges.Previous research in this area has been constrained by considering only a limited number of parameters or utilizing relatively small datasets.To overcome these limitations,a comprehensive finite element limit analysis(FELA)was conducted to predict the bearing capacity of ring footings.The study considered a range of effective parameters,including clay undrained shear strength,heterogeneity factor of clay,soil friction angle of the sand layer,radius ratio of the ring footing,sand layer thickness,and the interface between the ring footing and the soil.An extensive dataset comprising 80,000 samples was assembled,exceeding the limitations of previous research.The availability of this dataset enabled more robust and statistically significant analyses and predictions of ring footing bearing capacity.In light of the time-intensive nature of gathering a substantial dataset,a customized deep neural network(DNN)was developed specifically to predict the bearing capacity of the dataset rapidly.Both computational and comparative results indicate that the proposed DNN(i.e.DNN-4)can accurately predict the bearing capacity of a soil with an R2 value greater than 0.99 and a mean squared error(MSE)below 0.009 in a fraction of 1 s,reflecting the effectiveness and efficiency of the proposed method.
基金Shenzhen Science and Technology Program,Grant/Award Number:ZDSYS20211021111415025Shenzhen Institute of Artificial Intelligence and Robotics for SocietyYouth Science and Technology Talents Development Project of Guizhou Education Department,Grant/Award Number:QianJiaoheKYZi[2018]459。
文摘Facial beauty analysis is an important topic in human society.It may be used as a guidance for face beautification applications such as cosmetic surgery.Deep neural networks(DNNs)have recently been adopted for facial beauty analysis and have achieved remarkable performance.However,most existing DNN-based models regard facial beauty analysis as a normal classification task.They ignore important prior knowledge in traditional machine learning models which illustrate the significant contribution of the geometric features in facial beauty analysis.To be specific,landmarks of the whole face and facial organs are introduced to extract geometric features to make the decision.Inspired by this,we introduce a novel dual-branch network for facial beauty analysis:one branch takes the Swin Transformer as the backbone to model the full face and global patterns,and another branch focuses on the masked facial organs with the residual network to model the local patterns of certain facial parts.Additionally,the designed multi-scale feature fusion module can further facilitate our network to learn complementary semantic information between the two branches.In model optimisation,we propose a hybrid loss function,where especially geometric regulation is introduced by regressing the facial landmarks and it can force the extracted features to convey facial geometric features.Experiments performed on the SCUT-FBP5500 dataset and the SCUT-FBP dataset demonstrate that our model outperforms the state-of-the-art convolutional neural networks models,which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network.To the best of our knowledge,this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
基金supported by the National Natural Science Foundation of China (12072365)the Natural Science Foundation of Hunan Province of China (2020JJ4657)。
文摘It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via the classification and regression neural networks. An efficient databasegeneration method is developed for obtaining eight types of free return orbits and then the RD is defined by the orbit’s inclination and right ascension of ascending node(RAAN) at the perilune. A classify neural network and a regression network are trained respectively. The former is built for classifying the type of the RD, and the latter is built for calculating the inclination and RAAN of the RD. The simulation results show that two neural networks are well trained. The classification model has an accuracy of more than 99% and the mean square error of the regression model is less than 0.01°on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models and a recognition tool is built to evaluate whether a lunar site could be reached. The proposed deep learning method shows the superiority in computation efficiency compared with the traditional double two-body model.
文摘The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
基金supported by Science and Technology Project funding from China Southern Power Grid Corporation No.GDKJXM20230245(031700KC23020003).
文摘Blades are essential components of wind turbines.Reducing their fatigue loads during operation helps to extend their lifespan,but it is difficult to quickly and accurately calculate the fatigue loads of blades.To solve this problem,this paper innovatively designs a data-driven blade load modeling method based on a deep learning framework through mechanism analysis,feature selection,and model construction.In the mechanism analysis part,the generation mechanism of blade loads and the load theoretical calculationmethod based on material damage theory are analyzed,and four measurable operating state parameters related to blade loads are screened;in the feature extraction part,15 characteristic indicators of each screened parameter are extracted in the time and frequency domain,and feature selection is completed through correlation analysis with blade loads to determine the input parameters of data-driven modeling;in the model construction part,a deep neural network based on feedforward and feedback propagation is designed to construct the nonlinear coupling relationship between the unit operating parameter characteristics and blade loads.The results show that the proposed method mines the wind turbine operating state characteristics highly correlated with the blade load,such as the standard deviation of wind speed.The model built using these characteristics has reasonable calculation and fitting capabilities for the blade load and shows a better fitting level for untrained out-of-sample data than the traditional scheme.Based on the mean absolute percentage error calculation,the modeling accuracy of the two blade loads can reach more than 90%and 80%,respectively,providing a good foundation for the subsequent optimization control to suppress the blade load.
基金supported by the Science and Technology Project of State Grid Corporation of China(4000-202122070A-0-0-00).
文摘The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the grid connection of wind power on a large scale.Therefore,wind power forecasting plays a key role in improving the safety and economic benefits of the power grid.This paper proposes a wind power predicting method based on a convolutional graph attention deep neural network with multi-wind farm data.Based on the graph attention network and attention mechanism,the method extracts spatial-temporal characteristics from the data of multiple wind farms.Then,combined with a deep neural network,a convolutional graph attention deep neural network model is constructed.Finally,the model is trained with the quantile regression loss function to achieve the wind power deterministic and probabilistic prediction based on multi-wind farm spatial-temporal data.A wind power dataset in the U.S.is taken as an example to demonstrate the efficacy of the proposed model.Compared with the selected baseline methods,the proposed model achieves the best prediction performance.The point prediction errors(i.e.,root mean square error(RMSE)and normalized mean absolute percentage error(NMAPE))are 0.304 MW and 1.177%,respectively.And the comprehensive performance of probabilistic prediction(i.e.,con-tinuously ranked probability score(CRPS))is 0.580.Thus,the significance of multi-wind farm data and spatial-temporal feature extraction module is self-evident.
基金supported by the National Natural Science Foundation of China(41601369)the Young Talents Program of Institute of Crop Sciences,Chinese Academy of Agricultural Sciences(S2019YC04)
文摘Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield.Biomass is also a key trait in increasing grain yield by crop breeding.The aims of this study were(i)to identify the best vegetation indices for estimating maize biomass,(ii)to investigate the relationship between biomass and leaf area index(LAI)at several growth stages,and(iii)to evaluate a biomass model using measured vegetation indices or simulated vegetation indices of Sentinel 2A and LAI using a deep neural network(DNN)algorithm.The results showed that biomass was associated with all vegetation indices.The three-band water index(TBWI)was the best vegetation index for estimating biomass and the corresponding R2,RMSE,and RRMSE were 0.76,2.84 t ha−1,and 38.22%respectively.LAI was highly correlated with biomass(R2=0.89,RMSE=2.27 t ha−1,and RRMSE=30.55%).Estimated biomass based on 15 hyperspectral vegetation indices was in a high agreement with measured biomass using the DNN algorithm(R2=0.83,RMSE=1.96 t ha−1,and RRMSE=26.43%).Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices(R2=0.91,RMSE=1.49 t ha−1,and RRMSE=20.05%).Relationships between the hyperspectral vegetation indices and biomass differed from relationships between simulated Sentinel 2A vegetation indices and biomass.Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices(R2=0.87,RMSE=1.84 t ha−1,and RRMSE=24.76%).The DNN algorithm was effective in improving the estimation accuracy of biomass.It provides a guideline for estimating biomass of maize using remote sensing technology and the DNN algorithm in this region.
基金supported in part by the National Natural Science Foundation of China(No.51606213)the National Major Science and Technology Projects(No.J2019-III-0010-0054)。
文摘Icing is an important factor threatening aircraft flight safety.According to the requirements of airworthiness regulations,aircraft icing safety assessment is needed to be carried out based on the ice shapes formed under different icing conditions.Due to the complexity of the icing process,the rapid assessment of ice shape remains an important challenge.In this paper,an efficient prediction model of aircraft icing is established based on the deep belief network(DBN)and the stacked auto-encoder(SAE),which are all deep neural networks.The detailed network structures are designed and then the networks are trained according to the samples obtained by the icing numerical computation.After that the model is applied on the ice shape evaluation of NACA0012 airfoil.The results show that the model can accurately capture the nonlinear behavior of aircraft icing and thus make an excellent ice shape prediction.The model provides an important tool for aircraft icing analysis.
基金supported by the National Natural Science Foundation of China(No.51974023)State Key Laboratory of Advanced Metallurgy,University of Science and Technology Beijing(No.41621005)。
文摘The composition control of molten steel is one of the main functions in the ladle furnace(LF)refining process.In this study,a feasible model was established to predict the alloying element yield using principal component analysis(PCA)and deep neural network(DNN).The PCA was used to eliminate collinearity and reduce the dimension of the input variables,and then the data processed by PCA were used to establish the DNN model.The prediction hit ratios for the Si element yield in the error ranges of±1%,±3%,and±5%are 54.0%,93.8%,and98.8%,respectively,whereas those of the Mn element yield in the error ranges of±1%,±2%,and±3%are 77.0%,96.3%,and 99.5%,respectively,in the PCA-DNN model.The results demonstrate that the PCA-DNN model performs better than the known models,such as the reference heat method,multiple linear regression,modified backpropagation,and DNN model.Meanwhile,the accurate prediction of the alloying element yield can greatly contribute to realizing a“narrow window”control of composition in molten steel.The construction of the prediction model for the element yield can also provide a reference for the development of an alloying control model in LF intelligent refining in the modern iron and steel industry.
基金This research was supported in part by National Natural Science Foundation of China(61675056 and 61875048).
文摘Optical deep learning based on diffractive optical elements offers unique advantages for parallel processing,computational speed,and power efficiency.One landmark method is the diffractive deep neural network(D^(2) NN)based on three-dimensional printing technology operated in the terahertz spectral range.Since the terahertz bandwidth involves limited interparticle coupling and material losses,this paper extends D^(2) NN to visible wavelengths.A general theory including a revised formula is proposed to solve any contradictions between wavelength,neuron size,and fabrication limitations.A novel visible light D^(2) NN classifier is used to recognize unchanged targets(handwritten digits ranging from 0 to 9)and targets that have been changed(i.e.,targets that have been covered or altered)at a visible wavelength of 632.8 nm.The obtained experimental classification accuracy(84%)and numerical classification accuracy(91.57%)quantify the match between the theoretical design and fabricated system performance.The presented framework can be used to apply a D^(2) NN to various practical applications and design other new applications.
基金financially supported by the Key Project of National Natural Science Foundation of China (No. 41930431)the Project of National Natural Science Foundation of China (Nos. 41904121, 41804133, and 41974116)Joint Guidance Project of Natural Science Foundation of Heilongjiang Province (No. LH2020D006)
文摘Based on the CNN-LSTM fusion deep neural network,this paper proposes a seismic velocity model building method that can simultaneously estimate the root mean square(RMS)velocity and interval velocity from the common-midpoint(CMP)gather.In the proposed method,a convolutional neural network(CNN)Encoder and two long short-term memory networks(LSTMs)are used to extract spatial and temporal features from seismic signals,respectively,and a CNN Decoder is used to recover RMS velocity and interval velocity of underground media from various feature vectors.To address the problems of unstable gradients and easily fall into a local minimum in the deep neural network training process,we propose to use Kaiming normal initialization with zero negative slopes of rectifi ed units and to adjust the network learning process by optimizing the mean square error(MSE)loss function with the introduction of a freezing factor.The experiments on testing dataset show that CNN-LSTM fusion deep neural network can predict RMS velocity as well as interval velocity more accurately,and its inversion accuracy is superior to that of single neural network models.The predictions on the complex structures and Marmousi model are consistent with the true velocity variation trends,and the predictions on fi eld data can eff ectively correct the phase axis,improve the lateral continuity of phase axis and quality of stack section,indicating the eff ectiveness and decent generalization capability of the proposed method.
Funding: The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2021-220.
Abstract: The evolution and expansion of IoT devices have reduced human effort, increased resource utilization, and saved time; however, IoT devices also create significant challenges, such as a lack of security and privacy, that make them more vulnerable to IoT-based botnet attacks. Efficient, fast models are needed that can work in real time with stability. The present investigation developed two novel deep neural network (DNN) models, DNNBoT1 and DNNBoT2, to detect and classify well-known IoT botnet attacks such as Mirai and BASHLITE from nine compromised industrial-grade IoT devices. PCA was used for feature extraction to improve the effectiveness and accuracy of botnet classification in IoT environments. The models were designed through rigorous hyperparameter tuning with GridSearchCV, and early stopping was used to avoid overfitting and underfitting in both DNN models. In-depth assessment and evaluation demonstrated that the developed models are among the best performing in terms of accuracy and efficiency. The novelty of the present investigation lies in bridging existing gaps by using a real dataset while achieving high accuracy and a significantly lower false-alarm rate. Evaluated against earlier studies, the models were deemed efficient at detecting botnet attacks on the real dataset.
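A hedged scikit-learn sketch of the PCA + DNN + GridSearchCV workflow with early stopping; the parameter grid, component count, and the placeholder feature matrix are assumptions, not the paper's tuned configuration.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder traffic features and binary attack labels (dimensions assumed)
X, y = np.random.rand(300, 115), np.random.randint(0, 2, 300)

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=10)),                 # feature extraction step
    ('dnn', MLPClassifier(early_stopping=True,     # guards against over-/underfitting
                          validation_fraction=0.1,
                          max_iter=500, random_state=0)),
])
grid = GridSearchCV(
    pipe,
    param_grid={'dnn__hidden_layer_sizes': [(64, 32), (128, 64, 32)],
                'dnn__alpha': [1e-4, 1e-3]},
    cv=3, n_jobs=-1,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```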
Funding: Supported by the Aviation Industry Corporation of China (AVIC) Manufacturing Technology Institute (MTI) and in part by the China Scholarship Council (CSC) (201908060236).
Abstract: Sheet metal forming technologies have been intensively studied for decades to meet the increasing demand for lightweight metal components. To surmount the springback occurring in sheet metal forming processes, numerous compensation methods have been developed. However, for most existing methods, the development cycle remains considerably time-consuming and demands high computational or capital cost. In this paper, a novel theory-guided regularization method for training deep neural networks (DNNs), implanted in a learning system, is introduced to learn the intrinsic relationship between the workpiece shape after springback and the required process parameter, e.g., the loading stroke, in sheet metal bending processes. By directly bridging the workpiece shape to the process parameter, issues concerning springback in the process design are circumvented. The regularization method draws on a well-recognized theory in material mechanics, Swift's law, by penalizing divergence from this law throughout network training. It is implemented through a multi-task learning architecture, with the learning of the extra tasks regularized during training. The stress-strain curve describing the material properties and the prior knowledge used to guide learning are stored in the database and the knowledge base, respectively. The predicted loading stroke for a new workpiece shape is obtained by importing the target geometry through the user interface. In this research, the neural models outperformed a traditional machine learning model, support vector regression, in experiments with different amounts of training data. Through a series of studies with varying training data structure and amount, workpiece material, and applied bending process, the theory-guided DNN achieved superior generalization and learning consistency compared with data-driven DNNs, especially when only scarce and scattered experimental data are available for training, as is often the case in practice. The theory-guided DNN could also be applicable to other sheet metal forming processes. It provides an alternative springback-compensation method with a significantly shorter development cycle and lower capital and computational cost than traditional compensation methods in the sheet metal forming industry.
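One way to read "penalizing divergence from Swift's law" is as an auxiliary-task loss term. The PyTorch sketch below illustrates that reading under stated assumptions: the material constants K, eps0, n, the regularization weight, and the idea that the network emits an auxiliary stress prediction are all assumptions, not the paper's exact formulation.

```python
# Theory-guided loss: data loss on the main task (loading stroke) plus a
# penalty for auxiliary stress predictions diverging from Swift's law,
# sigma = K * (eps0 + eps)^n.
import torch

K, eps0, n_exp = 500.0, 0.01, 0.2   # assumed material constants
lam = 0.1                           # assumed regularization weight

def theory_guided_loss(stroke_pred, stroke_true, sigma_pred, eps):
    data_loss = torch.mean((stroke_pred - stroke_true) ** 2)    # main task
    sigma_swift = K * (eps0 + eps) ** n_exp                     # theory target
    theory_loss = torch.mean((sigma_pred - sigma_swift) ** 2)   # auxiliary penalty
    return data_loss + lam * theory_loss
```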
Funding: The National Science Foundation (NSF) (1822085, 1725456, 1816833, 1500848, 1719160, and 1725447); the NSF Computing and Communication Foundations (1740352); the Nanoelectronics COmputing REsearch Program in the Semiconductor Research Corporation (NC-2766-A); and the Center for Research in Intelligent Storage and Processing-in-Memory, one of six centers in the Joint University Microelectronics Program, an SRC program sponsored by the Defense Advanced Research Projects Agency.
Abstract: Recently, owing to the availability of big data and the rapid growth of computing power, artificial intelligence (AI) has regained tremendous attention and investment. Machine learning (ML) approaches have been successfully applied to solve many problems in academia and in industry. Although the explosion of big data applications is driving the development of ML, it also imposes severe challenges of data-processing speed and scalability on conventional computer systems. Computing platforms dedicated to AI applications have therefore been considered, ranging from complements to von Neumann platforms to "must-have," stand-alone technical solutions. These platforms belong to a larger category named "domain-specific computing" and focus on specific customization for AI. In this article, we summarize recent advances in accelerator designs for deep neural networks (DNNs), that is, DNN accelerators. We discuss various architectures that support DNN execution in terms of computing units, dataflow optimization, targeted network topologies, architectures on emerging technologies, and accelerators for emerging applications. We also provide our vision of future trends in AI chip design.
Abstract: Training deep neural networks (DNNs) requires a significant amount of time and resources to obtain acceptable results, which severely limits deployment on resource-limited platforms. This paper proposes DarkFPGA, a novel customizable framework that efficiently accelerates the entire DNN training process on a single FPGA platform. First, we explore batch-level parallelism to enable efficient FPGA-based DNN training. Second, we devise a novel hardware architecture, optimised by a batch-oriented data pattern and tiling techniques, to effectively exploit this parallelism. Moreover, an analytical model is developed to determine the optimal design parameters for the DarkFPGA accelerator with respect to a specific network specification and the FPGA resource constraints. Our results show that the accelerator performs about 10 times faster than CPU training and consumes about a third of the energy of GPU training when using 8-bit integers to train VGG-like networks on the CIFAR dataset on the Maxeler MAX5 platform.
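To make the idea of an "analytical model for optimal design parameters" concrete, here is a deliberately toy design-space search over tile sizes under resource constraints; the resource figures, candidate tile sizes, and cost model are invented placeholders and bear no relation to DarkFPGA's actual model.

```python
# Enumerate tiling parameters, discard choices that exceed on-chip resources,
# and keep the one with the lowest (crude) invocation-count cost.
def best_tile(layer_rows, layer_cols, batch, bram_words=2**20, pes=1024):
    best = None
    for tr in (8, 16, 32, 64):
        for tc in (8, 16, 32, 64):
            words = tr * tc + tr * batch + tc * batch   # tiles resident on chip
            if words > bram_words or tr * tc > pes:
                continue                                # violates FPGA resources
            # crude cost proxy: number of tile invocations per layer
            cost = ((layer_rows + tr - 1) // tr) * ((layer_cols + tc - 1) // tc)
            if best is None or cost < best[0]:
                best = (cost, tr, tc)
    return best

print(best_tile(512, 512, batch=128))   # -> (cost, tile_rows, tile_cols)
```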
Funding: The National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1059346). This work was also supported by the 2020 Research Fund (Project No. 1.180090.01) of UNIST (Ulsan National Institute of Science and Technology).
Abstract: An anomaly-based intrusion detection system (A-IDS) is a critical component of modern computing infrastructure because it can discover new types of attacks. Such systems commonly employ machine learning (ML) algorithms to detect and classify network traffic. To date, many algorithms have been proposed to improve the detection performance of A-IDS, using either individual or ensemble learners. Ensemble learners in particular have shown remarkable performance over individual learners in many applications, including the cybersecurity domain. However, most existing works still suffer from unsatisfactory results due to improper ensemble design. This study emphasizes the effectiveness of a stacking ensemble-based model for A-IDS, where deep learning (e.g., a deep neural network [DNN]) is used as the base learner model. The effectiveness of the proposed model and the base DNN model is benchmarked empirically in terms of several performance metrics, i.e., Matthews correlation coefficient, accuracy, and false-alarm rate. The results indicate that the proposed model is superior to the base DNN model as well as other existing ML algorithms found in the literature.
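A minimal stacking sketch with DNN base learners in scikit-learn, evaluated with the Matthews correlation coefficient as in the abstract; the estimator mix, network sizes, and placeholder data are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef, accuracy_score

# Placeholder traffic features and binary labels (normal vs. anomalous)
X, y = np.random.rand(400, 20), np.random.randint(0, 2, 400)

stack = StackingClassifier(
    estimators=[('dnn1', MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
                ('dnn2', MLPClassifier(hidden_layer_sizes=(128,), max_iter=500))],
    final_estimator=LogisticRegression(),   # meta-learner combines base outputs
    cv=3,
)
stack.fit(X, y)
pred = stack.predict(X)
print(matthews_corrcoef(y, pred), accuracy_score(y, pred))
```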
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) (2020R1F1A1073395) and the basic research projects of the Korea Institute of Geoscience and Mineral Resources (KIGAM) (GP2021-011, GP2020-031, 21-3117) funded by the Ministry of Science and ICT, Korea.
Abstract: This paper presents an innovative data-integration workflow that uses an iterative-learning method, a deep neural network (DNN) coupled with a stacked autoencoder (SAE), to solve the issues encountered in many-objective history matching. The proposed method consists of a DNN-based inverse model with SAE-encoded static data, together with iterative updates of the supervised-learning data based on distance-based clustering schemes. The DNN functions as the inverse model operating on encoded, flattened data, while the SAE, as a pre-trained neural network, successfully reduces dimensionality and reliably reconstructs geomodels. The iterative-learning method improves the DNN training data, with the error reduced at each iteration step. The proposed workflow achieves a mean absolute percentage error below 4% for all objective functions, whereas a typical multi-objective evolutionary algorithm fails to significantly reduce the initial population uncertainty. Iterative-learning-based many-objective history matching also estimates trends in water cuts that are not reliably captured by dynamic-data matching alone, confirming that the proposed workflow constructs more plausible geomodels. The workflow is a reliable alternative to the poorly converging Pareto-based multi-objective evolutionary algorithm in the presence of geological uncertainty and varying objective functions.
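A schematic of the SAE + DNN inverse-modeling loop in PyTorch; the architectures, data dimensions, and the clustering-based update step are simplified assumptions meant only to show how the pieces connect.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

latent = 16
encoder = nn.Sequential(nn.Linear(1024, 128), nn.ReLU(), nn.Linear(128, latent))
decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, 1024))
inverse = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, latent))

static = torch.rand(200, 1024)   # flattened static geomodels (placeholder)
dynamic = torch.rand(200, 64)    # simulated dynamic responses (placeholder)

# 1) Pre-train the autoencoder so the decoder can reconstruct geomodels
#    from the low-dimensional latent code.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(static)), static)
    loss.backward()
    opt.step()

# 2) Iterative learning: cluster models in latent space, then (in a full
#    implementation) retrain the inverse model `inverse` on the clusters
#    closest to the observed dynamic data, refreshing the subset each round.
for it in range(3):
    with torch.no_grad():
        z = encoder(static)
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(z.numpy())
    # ... distance-based selection and inverse-model retraining go here ...
```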