Journal Articles
1,480 articles found
1. Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
Authors: M. Adimoolam, K. Maithili, N.M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 33–55.
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some range, some concerns still need enhancement, particularly accuracy, sensitivity, false positive and false negative, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, and respective graphical illustrations were shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive (3.012%), and false negative (3.182%), whereas for the CNN algorithm the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive 5.328%, and mean false negative 4.756%. These results show that the proposed EDLA method outperformed existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus the EDLA algorithm introduces novelty concerning its performance and particular activation function. This proposed method can be utilized effectively for precise and accurate brain tumor detection, and could be applied to various other medical diagnoses after modification. If the quantity of dataset records is enormous, then the method's computational power has to be updated.
Keywords: brain tumor; extended deep learning algorithm; convolutional neural network; tumor detection; deep learning
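The abstract reports mean performance measures over ten iterations. A minimal sketch of that aggregation step (the per-iteration numbers are illustrative placeholders, not the paper's data):

```python
# Average per-iteration performance measures into the mean values a
# study like this reports. The runs below are hypothetical examples.
def mean_metrics(runs):
    """runs: one dict of measures per iteration; returns their means."""
    keys = runs[0].keys()
    return {k: sum(r[k] for r in runs) / len(runs) for k in keys}

edla_runs = [
    {"accuracy": 97.5, "sensitivity": 97.8, "false_pos": 3.1, "false_neg": 3.2},
    {"accuracy": 97.8, "sensitivity": 98.1, "false_pos": 2.9, "false_neg": 3.1},
]
print(mean_metrics(edla_runs))
```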
2. Some Features of Neural Networks as Nonlinearly Parameterized Models of Unknown Systems Using an Online Learning Algorithm
Authors: Leonid S. Zhiteckii, Valerii N. Azarskov, Sergey A. Nikolaienko, Klaudia Yu. Solovchuk. Journal of Applied Mathematics and Physics, 2018, No. 1, pp. 247–263.
This paper derives the properties of an updated neural network model that is exploited to identify an unknown nonlinear system via the standard gradient learning algorithm. The convergence of this algorithm for online training of three-layer neural networks in a stochastic environment is studied. A special case where an unknown nonlinearity can be exactly approximated by some neural network with a nonlinear activation function for its output layer is considered. To analyze the asymptotic behavior of the learning processes, the so-called Lyapunov-like approach is utilized. As the Lyapunov function, the expected value of the squared approximation error depending on the network parameters is chosen. Within this approach, sufficient conditions guaranteeing the convergence of the learning algorithm with probability 1 are derived. Simulation results are presented to support the theoretical analysis.
Keywords: neural network; nonlinear model; online learning algorithm; Lyapunov function; probabilistic convergence
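The abstract describes online gradient learning of a network model identifying an unknown nonlinear system. A toy one-parameter version of such an identification loop (the target map, the one-neuron model, and the learning rate are illustrative assumptions, far simpler than the paper's three-layer network):

```python
import math
import random

# Online gradient identification of an unknown scalar map y = f(u) by a
# one-neuron model y_hat = tanh(w * u). Hypothetical toy setup.
def online_identify(f, steps=5000, lr=0.05, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        u = rng.uniform(-1.0, 1.0)      # stochastic input sample
        y_hat = math.tanh(w * u)
        e = y_hat - f(u)                # approximation error
        # Gradient of 0.5 * e^2 with respect to w:
        w -= lr * e * (1.0 - y_hat * y_hat) * u
    return w

w_star = 1.5                            # "unknown" system parameter
w_learned = online_identify(lambda u: math.tanh(w_star * u))
print(round(w_learned, 3))
```

Because the nonlinearity is exactly realizable by the model (the special case the paper analyzes), the stochastic gradient noise vanishes at the optimum and the parameter converges.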
3. A Review of Computing with Spiking Neural Networks
Authors: Jiadong Wu, Yinan Wang, Zhiwei Li, Lun Lu, Qingjiang Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 2909–2939.
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive power consumption. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there is already much literature on SNNs, there is still a lack of comprehensive review of SNNs from the perspective of improving performance and practicality while incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, datasets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that in the fields of machine learning and intelligent computing, SNNs have comparable network scale and performance to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show a broad development prospect for brain-like computing.
Keywords: spiking neural networks; neural networks; brain-like computing; artificial intelligence; learning algorithm
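SNN reviews such as this typically build on the leaky integrate-and-fire (LIF) neuron as the basic computing unit. A minimal simulation sketch (all parameter values are illustrative, not taken from the paper):

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates input current, and emits a spike on threshold crossing.
# Parameters below are hypothetical illustrative values.
def lif_simulate(current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Return the spike-time indices for an input-current sequence."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Euler step of dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:       # threshold crossing -> spike
            spikes.append(t)
            v = v_rest          # reset after the spike
    return spikes

spikes = lif_simulate([1.2] * 100)   # constant supra-threshold drive
print(len(spikes), spikes[:3])
```

With a constant supra-threshold input the neuron fires periodically, which is the basis of rate coding in SNNs.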
4. Hybrid Deep Learning-Improved BAT Optimization Algorithm for Soil Classification Using Hyperspectral Features
Authors: S. Prasanna Bharathi, S. Srinivasan, G. Chamundeeswari, B. Ramesh. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 4, pp. 579–594.
Nowadays, Remote Sensing (RS) techniques are used for earth observation and for detection of soil types with high accuracy and reliability. This technique provides a perspective view of spatial resolution and aids in the instantaneous measurement of soil minerals and their characteristics. A few challenges are present in soil classification using image enhancement, such as locating and plotting soil boundaries, slopes, hazardous areas, drainage conditions, land use, and vegetation. Traditional approaches involve drawbacks such as manual involvement, which results in inaccuracy due to human interference, time consumption, and inconsistent prediction. To overcome these drawbacks and to improve the predictive analysis of soil characteristics, we propose a Hybrid Deep Learning improved BAT optimization algorithm (HDIB) for soil classification using remote sensing hyperspectral features. In HDIB, we propose a spontaneous BAT optimization algorithm for extraction of both spectral and spatial features by choosing pure pixels from the Hyperspectral (HS) image. Spectral-spatial vectors used as training illustrations are attained by merging spatial and spectral vectors by means of a priority stacking methodology. Then, a recurrent Deep Learning (DL) Neural Network (NN) is used for classifying the HS images, considering the datasets of Pavia University, Salinas, and the Tamil Nadu Hill Scene, which in turn improves the reliability of classification. Finally, the performance of the proposed HDIB-based soil classifier is compared and analyzed with existing methodologies such as Single Layer Perceptron (SLP), Convolutional Neural Networks (CNN), and Deep Metric Learning (DML), and it shows an improved classification accuracy of 99.87%, 98.34%, and 99.9% for the Tamil Nadu Hills, Pavia University, and Salinas scene datasets respectively.
Keywords: HDIB; BAT optimization algorithm; recurrent deep learning neural network; convolutional neural network; single layer perceptron; hyperspectral images; deep metric learning
5. Application of Depth Learning Algorithm in Automatic Processing and Analysis of Sports Images
Author: Kai Yang. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 10, pp. 317–332.
With the rapid development of sports, the number of sports images has increased dramatically. Intelligent and automatic processing and analysis of moving images are significant: they not only allow users to quickly search and access moving images, but also help staff to store and manage moving image data, contributing to the intellectual development of the sports industry. In this paper, a method of table tennis identification and positioning based on a convolutional neural network is proposed, which solves the problem that identification and positioning methods based on color features and contour features are not adaptable to various environments. At the same time, the learning methods and techniques of table tennis detection, positioning, and trajectory prediction are studied. A deep learning framework for recognition learning of rotating flying table tennis balls is put forward. The mechanisms and methods of positioning, trajectory prediction, and intelligent automatic processing of moving images are studied, and self-built datasets are trained and verified.
Keywords: deep learning algorithm; convolutional neural network; moving image; trajectory; intelligent processing
6. Applying Neural-Network-Based Machine Learning to Additive Manufacturing: Current Applications, Challenges, and Future Perspectives (cited: 19)
Authors: Xinbo Qi, Guofeng Chen, Yong Li, Xuan Cheng, Changpeng Li. Engineering (SCIE, EI), 2019, No. 4, pp. 721–729.
Additive manufacturing (AM), also known as three-dimensional printing, is gaining increasing attention from academia and industry due to its unique advantages in comparison with traditional subtractive manufacturing. However, AM processing parameters are difficult to tune, since they can exert a huge impact on the printed microstructure and on the performance of the resulting products. It is a difficult task to build a process-structure-property-performance (PSPP) relationship for AM using traditional numerical and analytical models. Today, the machine learning (ML) method has been demonstrated to be a valid way to perform complex pattern recognition and regression analysis without an explicit need to construct and solve the underlying physical models. Among ML algorithms, the neural network (NN) is the most widely used model due to the large datasets currently available, strong computational power, and sophisticated algorithm architectures. This paper reviews the progress of applying the NN algorithm to several aspects of the AM whole chain, including model design, in situ monitoring, and quality evaluation. Current challenges in applying NNs to AM and potential solutions for these problems are then outlined. Finally, future trends are proposed in order to provide an overall discussion of this interdisciplinary area.
Keywords: additive manufacturing; 3D printing; neural network; machine learning; algorithm
7. Studies of the Dynamic Behaviors of a Class of Learning Associative Neural Networks
Author: Zeng Huanglin (曾黄麟). Journal of Electronics (China), 1994, No. 3, pp. 208–216.
This paper investigates the exponential stability and trajectory bounds of motions of equilibria of a class of associative neural networks under the structural variations that occur when learning a new pattern. Conditions for the maximum possible estimate of the domain of structural exponential stability are determined. The filtering ability of associative neural networks contaminated by input noise is analyzed. Employing the obtained results as guidelines, a systematic synthesis procedure can be developed for constructing a dynamical associative neural network that stores a given set of vectors as stable equilibrium points and also learns new patterns. Some new concepts defined here are expected to guide further studies of learning associative neural networks.
Keywords: associative neural network; learning algorithm; dynamic characteristics; structure; exponential stability
8. Parameter Self-Learning of Generalized Predictive Control Using BP Neural Network
Authors: Chen Zengqiang (陈增强), Yuan Zhuzhi (袁著祉), Wang Qunxian (王群仙). Journal of China Textile University (English Edition) (EI, CAS), 2000, No. 3, pp. 54–56.
This paper describes the self-adjustment of some tuning knobs of the generalized predictive controller (GPC). A three-layer feedforward neural network was utilized to learn online two key tuning knobs of GPC, and the BP algorithm was used for training the connection weights of the neural network. This removes the difficulty of choosing these tuning knobs manually and eases the wide application of GPC to industrial plants. Simulation results illustrate the effectiveness of the method.
Keywords: generalized predictive control; self-tuning control; self-learning control; neural networks; BP algorithm
9. An Optimized Convolutional Neural Network Architecture Based on Evolutionary Ensemble Learning
Authors: Qasim M. Zainel, Murad B. Khorsheed, Saad Darwish, Amr A. Ahmed. Computers, Materials & Continua (SCIE, EI), 2021, No. 12, pp. 3813–3828.
Convolutional Neural Network (CNN) models have succeeded in vast domains, and CNNs are available in a variety of topologies and sizes. The challenge in this area is to develop the optimal CNN architecture for a particular issue in order to achieve high accuracy while using minimal computational resources to train the architecture. Our proposed framework for automated design is aimed at resolving this problem. The framework is based on a genetic algorithm that evolves a population of CNN models in order to find the best-fitting architecture. In comparison to related work, our proposed framework is concerned with creating lightweight architectures with a limited number of parameters while retaining a high degree of validation accuracy by utilizing an ensemble learning technique. Such architectures are intended to operate on low-resource machines, rendering them ideal for implementation in a number of environments. Four common benchmark image datasets are used to test the proposed framework, and it is compared to peer competitors' work using a range of parameters, including accuracy, the number of model parameters used, the number of GPUs used, and the number of GPU days needed to complete the method. Our experimental findings demonstrate a significant advantage in terms of GPU days, accuracy, and the number of parameters in the discovered model.
Keywords: convolutional neural networks; genetic algorithm; automatic model design; ensemble learning
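The abstract describes a genetic algorithm evolving a population of CNN architectures. A toy sketch of such a loop (the architecture encoding and the fitness function are stand-ins; in the paper, fitness would come from training and validating each candidate network):

```python
import random

# Toy genetic-algorithm search over CNN architecture encodings (lists of
# layer widths). The fitness function is a hypothetical stand-in that
# rewards compact models, not real validation accuracy.
def evolve(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)

    def random_arch():
        return [rng.choice([16, 32, 64, 128]) for _ in range(rng.randint(2, 6))]

    def fitness(arch):
        # Penalize depth and distance of each width from 64 (illustrative).
        return -len(arch) - sum(abs(w - 64) for w in arch) / 64.0

    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randint(1, min(len(a), len(b)) - 1)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                         # mutate one width
                child[rng.randrange(len(child))] = rng.choice([16, 32, 64, 128])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```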
10. Predicting the daily return direction of the stock market using hybrid machine learning algorithms (cited: 10)
Authors: Xiao Zhong, David Enke. Financial Innovation, 2019, No. 1, pp. 435–454.
Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when using powerful machine learning techniques, such as deep neural networks (DNNs), to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern in the classification accuracy of the DNNs is detected and demonstrated as the number of hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis testing procedures is implemented on the classification, and the simulation results show that the DNNs using the two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks.
Keywords: daily stock return forecasting; return direction classification; data representation; hybrid machine learning algorithms; deep neural networks (DNNs); trading strategies
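The PCA-represented datasets described above come from projecting the features onto their top principal components. A sketch of that transformation (the random matrix below stands in for the paper's 60 financial and economic features):

```python
import numpy as np

# PCA-based data representation: center the features, then project onto
# the leading eigenvectors of the covariance matrix. Toy data only.
def pca_transform(X, n_components):
    Xc = X - X.mean(axis=0)                   # center each feature
    cov = np.cov(Xc, rowvar=False)            # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1]            # sort by explained variance
    return Xc @ vecs[:, order[:n_components]]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # 200 samples, 6 toy features
X[:, 1] = 2.0 * X[:, 0]                       # one deliberately redundant feature
Z = pca_transform(X, 2)
print(Z.shape)
```

The reduced representation `Z` would then feed a classifier in place of the raw feature matrix.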
11. Structural Damage Identification Using Ensemble Deep Convolutional Neural Network Models
Authors: Mohammad Sadegh Barkhordari, Danial Jahed Armaghani, Panagiotis G. Asteris. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 2, pp. 835–855.
The existing strategy for evaluating the damage condition of structures mostly relies on feedback supplied by traditional visual methods, which may result in an unreliable damage characterization due to inspector subjectivity or an insufficient level of expertise. As a result, a robust, reliable, and repeatable method of damage identification is required. This article evaluates ensemble learning algorithms for identifying structural damage using deep convolutional neural networks, including simple averaging, integrated stacking, separate stacking, and hybrid weighted averaging ensemble and differential evolution (WAE-DE) ensemble models. Damage identification is carried out on three types of damage. The proposed algorithms are used to analyze the damage in 4,585 structural images. The effectiveness of the ensemble learning techniques is evaluated using the confusion matrix. For the testing dataset, the best model (WAE-DE) achieved an accuracy of 94 percent and a minimum recall of 92 percent in distinguishing damage types as flexural, shear, combined, or undamaged.
Keywords: machine learning; ensemble learning algorithms; convolutional neural network; damage assessment; structural damage
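The weighted-averaging ensembles described above fuse per-model class probabilities. A minimal sketch (the weights and probabilities are fixed illustrative values; in the WAE-DE model the weights are tuned by differential evolution):

```python
# Weighted average of per-model class-probability vectors. All numbers
# below are hypothetical, not results from the paper.
def weighted_average_ensemble(prob_lists, weights):
    total = sum(weights)
    n_classes = len(prob_lists[0])
    return [
        sum(w * probs[c] for w, probs in zip(weights, prob_lists)) / total
        for c in range(n_classes)
    ]

# Three models scoring classes (flexural, shear, combined, undamaged):
model_probs = [
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.30, 0.40, 0.20, 0.10],
]
fused = weighted_average_ensemble(model_probs, weights=[0.5, 0.3, 0.2])
print([round(p, 2) for p in fused], fused.index(max(fused)))
```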
12. Signal Conducting System with Effective Optimization Using Deep Learning for Schizophrenia Classification
Authors: V. Divya, S. Sendil Kumar, V. Gokula Krishnan, Manoj Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 5, pp. 1869–1886.
Signal-processing-based research with the Electroencephalogram (EEG) has been adopted for predicting abnormality and cerebral activity. The proposed research work is intended to provide an automatic diagnostic system that classifies EEG signals by brain function, showing whether a person is affected by schizophrenia or not. Early detection and intervention are vital for a better prognosis; however, the diagnosis of schizophrenia still depends on clinical observation to date. Without reliable biomarkers, schizophrenia is difficult to detect in its early phase, and hence we have proposed this idea. In this work, the EEG signal series are processed through non-linear feature mining, classification and validation, and a t-test integrated feature selection process. For this work, 19-channel EEG signals from the schizophrenia class and normal patterns are utilized. The datasets initially undergo a splitting process based on the raw 19-channel EEG into sequences of 6,250 sample points. With this process, 1,142 features of normal and schizophrenia class patterns can be obtained. On the other hand, 157 features from each EEG pattern are utilized in the non-linear feature extraction process, from which 14 principal features can be identified as the essential features. Finally, a Deep Learning (DL) technique incorporating an effective optimization technique is adopted for the classification process: a Deep Convolutional Neural Network (DCNN) with the mayfly optimization algorithm. The proposed technique is implemented on the MATLAB platform and is analyzed using a performance analysis framework comprising accuracy, Signal to Noise Ratio (SNR), Mean Square Error, Normalized Mean Square Error (NMSE), and loss. Through comparison, the proposed technique is shown to be better than other existing techniques.
Keywords: deep learning; optimization algorithm; signal conducting system; schizophrenia; convolutional neural network; mayfly optimization algorithm
13. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (cited: 2)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, No. 1, pp. 11–15.
The performance of deep learning (DL) networks has been increased by elaborating their network structures. However, DL networks have many parameters, which greatly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
14. Volterra Feedforward Neural Networks: Theory and Algorithms (cited: 3)
Authors: Jiao Licheng, Liu Fang, Xie Qin (National Lab for Radar Signal Processing and Center for Neural Networks, Xidian University, Xi'an 710071, P.R. China). Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 1996, No. 4, pp. 1–12.
The Volterra feedforward neural network with nonlinear interconnections and a related homotopy learning algorithm are proposed in this paper. It is shown that the Volterra neural network and the homotopy learning algorithm have significant potential advantages in nonlinear approximation ability, convergence speed, and global optimization over classical neural networks and the standard BP algorithm; supporting computer simulations and theoretical analysis are given as well.
Keywords: Volterra neural networks; homotopy learning algorithm
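A Volterra-style unit extends the usual weighted sum of inputs with second-order (pairwise-product) interconnections. A sketch of a single such unit (the weights are arbitrary illustrative values, not the paper's formulation):

```python
# One Volterra-style unit: linear terms plus quadratic cross-terms.
# All weight values here are hypothetical.
def volterra_unit(x, w0, w1, w2):
    """w0: bias; w1[i]: linear weights; w2[i][j]: weight on x[i]*x[j]."""
    y = w0 + sum(w1[i] * x[i] for i in range(len(x)))
    y += sum(
        w2[i][j] * x[i] * x[j]
        for i in range(len(x))
        for j in range(len(x))
    )
    return y

x = [1.0, 2.0]
y = volterra_unit(x, w0=0.5, w1=[1.0, -1.0], w2=[[0.1, 0.2], [0.2, 0.0]])
print(y)
```

The quadratic cross-terms are what let a single unit capture input interactions that a plain weighted sum cannot.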
15. Automatic Image Annotation Using Adaptive Convolutional Deep Learning Model
Authors: R. Jayaraj, S. Lokesh. Intelligent Automation & Soft Computing (SCIE), 2023, No. 4, pp. 481–497.
Every day, websites and personal archives create more and more photos, and the size of these archives is immense. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these folders deliver relevant indexing information, making it difficult to discover the data a user may be interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most problematic domains in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems consisting of some labelled images (for the training phase) and some unlabeled images (Corel 5K, MSRC v2). After that, the images are sent to preprocessing steps such as color space quantization and texture color class mapping. The preprocessed images are then segmented for efficient labelling using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, which is a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabeled images are labelled. The proposed methodology is implemented in MATLAB, and its performance is evaluated with metrics such as accuracy, precision, recall, and F1-measure.
Keywords: deep learning model; J-image segmentation; honey badger algorithm; convolutional neural network; image annotation
16. Weed Classification Using Particle Swarm Optimization and Deep Learning Models
Authors: M. Manikandakumar, P. Karthikeyan. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 1, pp. 913–927.
Weeds grow along with nearly all field crops, including rice, wheat, cotton, millets, and sugar cane, affecting crop yield and quality. Classification and accurate identification of all types of weeds is a challenging task for farmers in the early stages of crop growth because of their similarity. To address this issue, an efficient weed classification model is proposed with a deep Convolutional Neural Network (CNN) that implements automatic feature extraction and performs complex feature learning for image classification. Throughout this work, weed images were trained using the proposed CNN model with an evolutionary computing approach to classify weeds based on two publicly available weed datasets. The Tamil Nadu Agricultural University (TNAU) dataset, used as the first dataset, consists of 40 classes of weed images; the other dataset, from the Indian Council of Agricultural Research-Directorate of Weed Research (ICAR-DWR), contains 50 classes of weed images. An effective Particle Swarm Optimization (PSO) technique is applied in the proposed CNN to automatically evolve and improve its classification accuracy. The proposed model was evaluated and compared with pre-trained transfer learning models such as GoogLeNet, AlexNet, Residual neural Network (ResNet), and Visual Geometry Group Network (VGGNet) for weed classification. This work shows that the PSO-assisted CNN model significantly improved the success rate, reaching 98.58% for the TNAU and 97.79% for the ICAR-DWR weed datasets.
Keywords: deep learning; convolutional neural network; weed classification; transfer learning; particle swarm optimization; evolutionary computing
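The abstract applies PSO to improve a CNN's classification accuracy. A sketch of the underlying PSO update loop, minimizing a toy function in place of the network's validation error (the inertia and acceleration constants are common illustrative choices, not the paper's settings):

```python
import random

# Particle swarm optimization on a toy 2-D sphere function, standing in
# for tuning CNN hyperparameters against validation error.
def pso(objective, dim=2, n_particles=15, iters=100, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=objective)[:]         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda p: sum(x * x for x in p))
print([round(x, 3) for x in best])
```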
Improved prediction of clay soil expansion using machine learning algorithms and meta-heuristic dichotomous ensemble classifiers 被引量:1
17
作者 E.U.Eyo S.J.Abbey +1 位作者 T.T.Lawrence F.K.Tetteh 《Geoscience Frontiers》 SCIE CAS CSCD 2022年第1期268-284,共17页
Soil swelling-related disasters are considered among the most devastating geo-hazards in modern history. Hence, proper determination of a soil's ability to expand is vital for achieving secure and safe ground for infrastructure. Accordingly, this study provides a novel and intelligent approach that enables improved estimation of swelling by using kernelised machines (Bayesian linear regression (BLR), Bayes point machine (BPM), support vector machine (SVM) and deep support vector machine (D-SVM)); multiple linear regressor (REG), logistic regressor (LR) and artificial neural network (ANN); and tree-based algorithms such as random decision forest (RDF) and boosted decision tree (BDT). Also, and for the first time, meta-heuristic classifiers incorporating the techniques of voting (VE) and stacking (SE) were utilised. Different independent scenarios of combinations of the explanatory features that influence soil swelling were investigated. Preliminary results indicated that BLR possessed the highest deviation from the predicted variable (the actual swell-strain). REG and BLR performed slightly better than ANN, while the meta-heuristic learners (VE and SE) produced the best overall performance (the greatest R² value of 0.94 and RMSE of 0.06% were exhibited by VE). CEC, plasticity index and moisture content were the features considered to have the highest level of importance. Kernelised binary classifiers (SVM, D-SVM and BPM) gave better accuracy (average accuracy and recall rates of 0.93 and 0.60) than ANN, LR and RDF. A sensitivity-driven diagnostic test indicated that the meta-heuristic models performed best when ML training was conducted using the k-fold cross-validation technique. Finally, it is recommended that the concepts developed herein be deployed during the preliminary phases of a geotechnical or geological site characterisation, using the best-performing meta-heuristic models via their background coding resource.
Keywords: artificial neural networks; machine learning; clays; algorithm; soil swelling; soil plasticity
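The voting and stacking meta-learners described above can be sketched with scikit-learn stand-ins. The abstract does not name its implementation, so the estimator choices, the synthetic data, and the 5-fold setting below are illustrative assumptions, not the authors' configuration:

```python
# Hedged sketch: voting (VE) and stacking (SE) ensembles over a few of the
# base learners named in the abstract, scored with k-fold cross-validation.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor, VotingRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Stand-in for the swell-strain data (synthetic; three explanatory features)
X, y = make_regression(n_samples=200, n_features=3, noise=5.0, random_state=0)

base = [("reg", LinearRegression()), ("blr", BayesianRidge()), ("svm", SVR())]
voter = VotingRegressor(estimators=base)                       # averages base predictions
stacker = StackingRegressor(estimators=base,                   # learns how to combine them
                            final_estimator=RandomForestRegressor(random_state=0))

# k-fold validation, the training regime the sensitivity diagnostics favoured
for name, model in [("VE", voter), ("SE", stacker)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R2 = {r2:.3f}")
```

Stacking fits a second-stage learner on the base models' out-of-fold predictions, whereas voting simply averages them; either way the ensemble can outperform its weakest member, which is the behaviour the study reports for VE and SE.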
Power System Resiliency and Wide Area Control Employing Deep Learning Algorithm (Cited by 1)
18
Authors: Pandia Rajan Jeyaraj, Aravind Chellachi Kathiresan, Siva Prakash Asokan, Edward Rajan Samuel Nadar, Hegazy Rezk, Thanikanti Sudhakar Babu. Computers, Materials & Continua (SCIE, EI), 2021, No. 7, pp. 553-567 (15 pages)
The power transfer capability of smart transmission grid-connected networks can be reduced by inter-area oscillations, because inter-area modes of oscillation persist and destabilise power transmission networks. This effect is more noticeable in smart grid-connected systems, whose infrastructure has more renewable energy resources installed. To overcome this problem, a deep-learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area modes of oscillation. The proposed Deep Wide-Area Controller (DWAC) uses a Deep Belief Network (DBN), whose weights are updated based on real-time data from phasor measurement units. Resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962 and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep-learning algorithm's efficiency in damping inter-area and local oscillations under a two-channel attack as well, and offer robust management of power system resilience and timely control of the operating conditions.
Keywords: neural network; deep learning algorithm; low-frequency oscillation; resiliency assessment; smart grid; wide-area control
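The inter-area damping ξ quoted in the abstract is, for a complex mode σ ± jω of the linearised system, ξ = -σ/√(σ² + ω²). A minimal sketch of that computation (the 0.5 Hz mode and its real part are illustrative numbers, not values from the IEEE 39-bus study):

```python
import math

def damping_ratio(lam: complex) -> float:
    """Damping ratio xi = -sigma / sqrt(sigma^2 + omega^2) of a mode sigma + j*omega."""
    return -lam.real / math.hypot(lam.real, lam.imag)

# A lightly damped 0.5 Hz inter-area mode (illustrative eigenvalue)
mode = complex(-0.016, 2 * math.pi * 0.5)
print(f"xi = {damping_ratio(mode):.4f}")  # → xi = 0.0051
```

A ξ this close to zero marks a poorly damped inter-area mode, which is precisely the operating condition the wide-area controller is designed to correct.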
Machine Learning Algorithms and Their Application to Ore Reserve Estimation of Sparse and Imprecise Data (Cited by 2)
19
Authors: Sridhar Dutta, Sukumar Bandopadhyay, Rajive Ganguli, Debasmita Misra. Journal of Intelligent Learning Systems and Applications, 2010, No. 2, pp. 86-96 (11 pages)
Traditional geostatistical estimation techniques have been used predominantly by the mining industry for ore reserve estimation. Determination of mineral reserves has posed a considerable challenge to mining engineers due to the geological complexities of ore body formation. Extensive research over the years has resulted in the development of several state-of-the-art methods for predictive spatial mapping, which could be used for ore reserve estimation; and recent advances in the use of machine learning algorithms (MLA) have provided a new approach for solving this problem. The focus of the present study was on the use of two MLA for estimating ore reserves: namely, neural networks (NN) and support vector machines (SVM). The application of MLA, and the various issues involved in using them for reserve estimation, are elaborated with the help of a complex drill-hole dataset that exhibits the typical sparseness and impreciseness that might be associated with a mining dataset. To investigate the accuracy and applicability of MLA for ore reserve estimation, the generalization ability of NN and SVM was compared with the geostatistical ordinary kriging (OK) method.
Keywords: machine learning algorithms; neural networks; support vector machine; genetic algorithms; supervised learning
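The NN/SVM-versus-kriging comparison can be sketched as follows, with scikit-learn's `GaussianProcessRegressor` standing in for ordinary kriging (an RBF kernel plus a white-noise term plays the role of a variogram with a nugget). The synthetic drill-hole grades and all model settings are assumptions for illustration, not the paper's dataset:

```python
# Hedged sketch: compare a kriging-like GP baseline against SVM and NN
# learners on a sparse, noisy spatial dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(80, 2))                    # sparse drill-hole coordinates
y = np.sin(X[:, 0] / 15) + 0.1 * rng.normal(size=80)     # grade with nugget-like noise
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "OK (GP surrogate)": GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01)),
    "SVM": SVR(C=10.0),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, m in models.items():
    print(f"{name}: R2 = {m.fit(Xtr, ytr).score(Xte, yte):.2f}")
```

On data this sparse, the interesting question is exactly the one the study asks: whether the learners' generalization to unseen locations matches or beats the geostatistical baseline.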
下载PDF
A Second Order Training Algorithm for Multilayer Feedforward Neural Networks
20
Authors: Tan Ying (谭营), He Zhenya (何振亚), Deng Chao (邓超). Journal of Southeast University (English Edition) (EI, CAS), 1997, No. 1, pp. 32-36 (5 pages)
A Second Order Training Algorithm for Multilayer Feedforward Neural Networks. Tan Ying (谭营), He Zhenya (何振亚) (Department of Radio Engineering, Sou...)
Keywords: multilayer feedforward neural networks; second order training algorithm; BP algorithm; learning factors; XOR problem
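The abstract text is too garbled to recover the algorithm's details, but the keywords (second-order training, BP, XOR) admit a small illustration: a quasi-Newton optimiser (BFGS), which exploits approximate curvature information, training a tiny feedforward network on the XOR problem. The 2-2-1 tanh architecture, the loss, and the restart scheme are assumptions for illustration, not the authors' method:

```python
# Hedged sketch: quasi-Newton (curvature-aware) training of a 2-2-1 tanh
# network on XOR, with random restarts to escape local minima.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0., 1., 1., 0.])

def loss(w):
    # Unpack the 9 parameters: hidden weights/biases, output weights/bias
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    y = np.tanh(X @ W1 + b1) @ W2 + b2     # tanh hidden layer, linear output
    return 0.5 * np.sum((y - t) ** 2)      # sum-of-squared-errors loss

rng = np.random.default_rng(0)
best = min((minimize(loss, rng.normal(size=9), method="BFGS") for _ in range(10)),
           key=lambda r: r.fun)
print(f"final SSE = {best.fun:.6f}")
```

Curvature-aware updates typically reach a small training error in far fewer iterations than plain first-order BP with hand-tuned learning factors, which is the practical appeal of second-order training schemes.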