Journal Articles
12 articles found
1. Automated machine learning for rainfall-induced landslide hazard mapping in Luhe County of Guangdong Province, China
Authors: Tao Li, Chen-chen Xie, Chong Xu, Wen-wen Qi, Yuan-dong Huang, Lei Li. China Geology (CAS, CSCD), 2024, Issue 2, pp. 315-329.
Landslide hazard mapping is essential for regional landslide hazard management. The main objective of this study is to construct a rainfall-induced landslide hazard map of Luhe County, China based on an automated machine learning framework (AutoGluon). A total of 2241 landslides were identified from satellite images before and after the rainfall event, and 10 impact factors including elevation, slope, aspect, normalized difference vegetation index (NDVI), topographic wetness index (TWI), lithology, land cover, distance to roads, distance to rivers, and rainfall were selected as indicators. The WeightedEnsemble model, an ensemble of 13 basic machine learning models weighted together, was used to output the landslide hazard assessment results. The results indicate that landslides mainly occurred in the central part of the study area, especially in Hetian and Shanghu. In total, 102.44 s were spent training all the models, and the WeightedEnsemble model achieved an Area Under the Curve (AUC) value of 92.36% on the test set. In addition, 14.95% of the study area was determined to be at very high hazard, with a landslide density of 12.02 per square kilometer. This study serves as a significant reference for the prevention and mitigation of geological hazards and for land use planning in Luhe County.
Keywords: Landslide hazard; Heavy rainfall; Hazard mapping; Hazard assessment; Automated machine learning; Shallow landslide; Visual interpretation; Luhe County; Geological hazards survey engineering
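A minimal sketch of the AutoGluon tabular workflow this entry describes, assuming the 10 impact factors and a binary landslide label sit in pandas DataFrames; the file and column names are hypothetical, and the time limit merely echoes the ~102 s training budget reported above:

```python
import pandas as pd
from autogluon.tabular import TabularPredictor

# Rows are map units; columns are the impact factors plus a binary
# "landslide" label (1 = landslide occurred). File names are assumed.
train_df = pd.read_csv("luhe_train.csv")
test_df = pd.read_csv("luhe_test.csv")

# AutoGluon trains a pool of base learners and stacks a WeightedEnsemble
# on top, which is the model the study reports results for.
predictor = TabularPredictor(label="landslide", eval_metric="roc_auc").fit(
    train_df,
    time_limit=120,  # seconds; the paper reports ~102 s total training
)

# Per-unit landslide probability, usable as a hazard score for mapping.
hazard_prob = predictor.predict_proba(test_df.drop(columns=["landslide"]))[1]
print(predictor.leaderboard(test_df).head())
```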
2. Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
Authors: Ying Su, Morgan C. Wang, Shuai Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3529-3549.
Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of Recurrent Neural Networks, harnessing memory cells and gating mechanisms to facilitate long-term time series prediction. However, the forecasting accuracy of particular neural networks and traditional models can degrade significantly when addressing long-term time series tasks. Our research demonstrates that this approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we evaluated both ARIMA and the proposed method on simulated and real time series data over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including Fully Connected Neural Networks, Convolutional Neural Networks, and Non-pooling Convolutional Neural Networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and can be widely applied to various domains, particularly in business and finance.
Keywords: Automated machine learning; Autoregressive integrated moving average; Neural networks; Time series analysis
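A sketch of the core LSTM mechanism this entry relies on — memory cells plus a recursive multi-step forecast — assuming a simple sliding-window setup; the window size, layer width, and synthetic series are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=24):
    """Slice a 1-D series into (lookback -> next value) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i : i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

series = np.sin(np.linspace(0, 40, 1000))  # stand-in for real data
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 1)),  # memory cells + gates
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Long-term forecast: recursively feed each prediction back as input.
window = series[-24:].reshape(1, 24, 1)
forecast = []
for _ in range(48):
    nxt = float(model.predict(window, verbose=0))
    forecast.append(nxt)
    window = np.concatenate([window[:, 1:, :], [[[nxt]]]], axis=1)
```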
3. AID4I: An Intrusion Detection Framework for Industrial Internet of Things Using Automated Machine Learning
Authors: Anil Sezgin, Aytug Boyacı. Computers, Materials & Continua (SCIE, EI), 2023, Issue 8, pp. 2121-2143.
By identifying and responding to any malicious behavior that could endanger the system, the Intrusion Detection System (IDS) is crucial for preserving the security of the Industrial Internet of Things (IIoT) network. The benefit of anomaly-based IDS is the ability to recognize zero-day attacks, because they do not rely on a signature database to identify abnormal activity. To improve control over datasets and the process, this study proposes an automated machine learning (AutoML) technique to automate the machine learning processes for IDS. Our architecture, known as AID4I, uses automated machine learning methods for intrusion detection. Through automation of preprocessing, feature selection, model selection, and hyperparameter tuning, the objective is to identify an appropriate machine learning model for intrusion detection. Experimental studies demonstrate that the AID4I framework successfully proposes a suitable model. The integrity, security, and confidentiality of data transmitted across the IIoT network can be ensured by automating machine learning processes in the IDS to enhance its capacity to identify and stop threatening activities. With a comprehensive solution that takes advantage of the latest advances in automated machine learning methods to improve network security, AID4I is a powerful and effective instrument for intrusion detection. In the preprocessing module, three distinct imputation methods are utilized to handle missing data, ensuring the robustness of the intrusion detection system in the presence of incomplete information. The feature selection module adopts a hybrid approach that combines Shapley values and a genetic algorithm. The parameter optimization module encompasses a diverse set of 14 classification methods, allowing for thorough exploration and optimization of the parameters associated with each algorithm. By carefully tuning these parameters, the framework enhances its adaptability and accuracy in identifying potential intrusions. Experimental results demonstrate that AID4I improves detection accuracy by up to 14.39% on public datasets, outperforming traditional intrusion detection methods while reducing the elapsed time for training and testing.
Keywords: Automated machine learning; Intrusion detection system; Industrial Internet of Things; Parameter optimization
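A sketch of the Shapley-value half of AID4I's hybrid feature selection (the genetic-algorithm half is omitted); the dataset, model, and median-importance threshold are assumptions for illustration:

```python
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# Stand-in for a labeled IIoT traffic dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# Mean |SHAP value| per feature = global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)

# Keep features above the median importance; a genetic algorithm would
# then search subsets of these survivors, as the hybrid approach describes.
keep = importance > np.median(importance)
X_selected = X[:, keep]
print(f"kept {keep.sum()} of {len(keep)} features")
```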
4. Automated Machine Learning for Epileptic Seizure Detection Based on EEG Signals
Authors: Jian Liu, Yipeng Du, Xiang Wang, Wuguang Yue, Jim Feng. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 1995-2011.
Epilepsy is a common neurological disease that severely affects the daily life of patients. Automatic detection and diagnosis of epilepsy based on electroencephalogram (EEG) signals is of great significance in helping patients with epilepsy return to normal life. With the development of deep learning technology and the increase in the amount of EEG data, the performance of deep learning based automatic detection algorithms for epilepsy EEG has gradually surpassed traditional hand-crafted approaches. However, neural architecture design for epilepsy EEG analysis is time-consuming and laborious, and the designed structures are difficult to adapt to changing EEG collection environments, which limits the application of automatic epilepsy EEG detection systems. In this paper, we explore the possibility of Automated Machine Learning (AutoML) playing a role in the task of epilepsy EEG detection. We apply the neural architecture search (NAS) algorithm in the AutoKeras platform to design the model for epilepsy EEG analysis and utilize feature interpretability methods to ensure the reliability of the searched model. The experimental results show that the model obtained through NAS outperforms the baseline model: the searched model improves classification accuracy, F1-score, and Cohen's kappa coefficient by 7.68%, 7.82%, and 9.60%, respectively. Furthermore, the NAS-based model is capable of extracting EEG features related to seizures for classification.
Keywords: Deep learning; Automated machine learning; EEG; Seizure detection
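A minimal sketch of running AutoKeras NAS on windowed EEG feature vectors, the platform this entry names; the input shape, trial budget, and random stand-in data are assumptions, not the paper's configuration:

```python
import numpy as np
import autokeras as ak

# X: (n_windows, n_features) EEG feature matrix; y: 1 = seizure, 0 = normal.
X = np.random.rand(200, 178).astype("float32")  # stand-in for real EEG
y = np.random.randint(0, 2, size=200)

# NAS explores candidate architectures within the trial budget.
clf = ak.StructuredDataClassifier(max_trials=5, overwrite=True)
clf.fit(X, y, epochs=10, verbose=0)

best = clf.export_model()  # the searched architecture as a Keras model
best.summary()
```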
5. Groundwater contaminant source identification considering unknown boundary condition based on an automated machine learning surrogate
Authors: Yaning Xu, Wenxi Lu, Zidong Pan, Chengming Luo, Yukun Bai, Shuwei Qiu. Geoscience Frontiers (SCIE, CAS, CSCD), 2024, Issue 1, pp. 402-416.
Groundwater contamination source identification (GCSI) is a prerequisite for contamination risk evaluation and efficient groundwater contamination remediation programs. The boundary condition is generally set as a known variable in previous GCSI studies. However, in many practical cases, the boundary condition is complicated and cannot be estimated accurately in advance. Setting the boundary condition as a known variable may deviate seriously from the actual situation and lead to distorted identification results. Moreover, the results of GCSI are affected by multiple factors, including contaminant source information, model parameters, and the boundary condition; if the boundary condition is not estimated accurately, the other factors will also be estimated inaccurately. This study focuses on the unknown boundary condition and proposes to identify three types of unknown variables simultaneously: contaminant source information, model parameters, and the boundary condition. When the simulation-optimization (S-O) method is applied to GCSI, the huge computational load is usually reduced by building surrogate models. However, when building surrogate models, researchers need to select the models and optimize the hyperparameters to make the models powerful, which can be a lengthy process. The automated machine learning (AutoML) method was used to build the surrogate model; it automates model selection and hyperparameter optimization, largely reducing manual operations and saving time. The accuracy of the AutoML surrogate model is compared with surrogate models built with the eXtreme Gradient Boosting method (XGBoost), the random forest method (RF), the extra trees regressor method (ETR), and the elastic net method (EN), respectively, which are automatically selected in AutoML engineering. The results show that the surrogate model constructed by the AutoML method has the best accuracy of the five. This study provides reliable and strong support for GCSI.
Keywords: Groundwater contamination source; Boundary condition; Automated machine learning; Surrogate model
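A sketch of the surrogate-comparison loop this entry describes — scoring the same candidate regressors (XGBoost, RF, ETR, EN) that AutoML selects among — on toy data standing in for the groundwater simulator's input-output samples:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Inputs: candidate source/boundary/parameter values; output: simulated
# observations. Toy data, assumed dimensions.
X, y = make_regression(n_samples=400, n_features=12, noise=5.0, random_state=0)

candidates = {
    "XGBoost": XGBRegressor(n_estimators=200),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ETR": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "EN": ElasticNet(alpha=0.1),
}
for name, reg in candidates.items():
    r2 = cross_val_score(reg, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
# An AutoML engine automates exactly this loop, plus hyperparameter search.
```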
6. AutoRhythmAI: A Hybrid Machine and Deep Learning Approach for Automated Diagnosis of Arrhythmias
Authors: S. Jayanthi, S. Prasanna Devi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2137-2158.
In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into automating detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning for the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we rigorously tested AutoRhythmAI on a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification using deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and solid performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
Keywords: Automated machine learning; Neural networks; Deep learning; Arrhythmias
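A minimal sketch of the weighted amalgamation step this entry describes — combining per-model class probabilities with per-model weights. The weights are assumed to come from validation performance; the values and class count here are illustrative:

```python
import numpy as np

# probs[m]: (n_samples, n_classes) softmax output of model m (random stand-ins).
probs = [np.random.dirichlet(np.ones(4), size=10) for _ in range(3)]
weights = np.array([0.5, 0.3, 0.2])  # e.g. validation-accuracy based

# Weighted average of the model outputs, then pick the top class.
ensemble = sum(w * p for w, p in zip(weights, probs))
prediction = ensemble.argmax(axis=1)  # final arrhythmia class per sample
```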
7. Automated deep learning system for power line inspection image analysis and processing: architecture and design issues
Authors: Daoxing Li, Xiaohui Wang, Jie Zhang, Zhixiang Ji. Global Energy Interconnection (EI, CSCD), 2023, Issue 5, pp. 614-633.
The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their excellent performance, electric power UAV inspection image processing models based on deep learning face several problems, such as a small application scope, the need for constant retraining and optimization, and high R&D monetary and time costs, due to the black-box and scene-data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to these problems. The system design is based on three critical design principles: generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed, and experiments were conducted on two tasks: insulator self-detonation and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP for insulator self-detonation and bird nest recognition, respectively, demonstrating that the system design concept is reasonable and the system architecture is feasible.
Keywords: Transmission line inspection; Deep learning; Automated machine learning; Image analysis and processing
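A sketch of the pre-train/fine-tune pattern the system reviews, applied to an inspection-style classification head; the backbone choice and the three-class setup (e.g. insulator self-detonation vs. bird nest vs. background) are assumptions for illustration:

```python
import torch
import torchvision

# Start from an ImageNet-pre-trained backbone ...
model = torchvision.models.resnet50(
    weights=torchvision.models.ResNet50_Weights.DEFAULT
)

# ... freeze it, and adapt only the head to the downstream inspection task.
for p in model.parameters():
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 3)  # 3 hypothetical classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# A training loop over labeled UAV inspection images would follow here.
```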
8. AIPerf: Automated Machine Learning as an AI-HPC Benchmark
Authors: Zhixiang Ren, Yongheng Liu, Tianhui Shi, Lei Xie, Yue Zhou, Jidong Zhai, Youhui Zhang, Yunquan Zhang, Wenguang Chen. Big Data Mining and Analytics (EI), 2021, Issue 3, pp. 208-220.
The plethora of complex Artificial Intelligence (AI) algorithms and available High-Performance Computing (HPC) power stimulates the expeditious development of AI components with heterogeneous designs. Consequently, the need for cross-stack performance benchmarking of AI-HPC systems has rapidly emerged. In particular, the de facto HPC benchmark, LINPACK, cannot reflect AI computing power and input/output performance without a representative workload. Current popular AI benchmarks, such as MLPerf, have a fixed problem size and therefore limited scalability. To address these issues, we propose an end-to-end benchmark suite utilizing automated machine learning, which not only represents real AI scenarios, but also is auto-adaptively scalable to various scales of machines. We implement the algorithms in a highly parallel and flexible way to ensure efficiency and optimization potential on diverse systems with customizable configurations. We utilize Operations Per Second (OPS), measured in an analytical and systematic approach, as the major metric to quantify AI performance. We perform evaluations on various systems to ensure the benchmark's stability and scalability, from 4 nodes with 32 NVIDIA Tesla T4 GPUs (56.1 Tera-OPS measured) up to 512 nodes with 4096 Huawei Ascend 910 chips (194.53 Peta-OPS measured), and the results show near-linear weak scalability. With a flexible workload and a single metric, AIPerf can easily scale on and rank AI-HPC systems, providing a powerful benchmark suite for the coming supercomputing era.
Keywords: High-Performance Computing (HPC); Artificial Intelligence (AI); Automated machine learning
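To make the OPS metric concrete, here is a hedged sketch of measuring sustained operations per second for one kernel — the kind of per-workload measurement a benchmark like this aggregates analytically. The matrix size and repeat count are arbitrary:

```python
import time
import numpy as np

n, reps = 2048, 10
a = np.random.rand(n, n).astype("float32")
b = np.random.rand(n, n).astype("float32")

start = time.perf_counter()
for _ in range(reps):
    a @ b
elapsed = time.perf_counter() - start

# An n x n matmul performs ~2*n^3 floating-point operations (multiply-adds).
ops = 2 * n**3 * reps
print(f"{ops / elapsed / 1e12:.2f} Tera-OPS sustained")
```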
9. Interpretable machine learning analysis and automated modeling to simulate fluid-particle flows
Authors: Bo Ouyang, Litao Zhu, Zhenghong Luo. Particuology (SCIE, EI, CAS, CSCD), 2023, Issue 9, pp. 42-52.
The present study extracts human-understandable insights from machine learning (ML)-based mesoscale closures in fluid-particle flows via several novel data-driven analysis approaches, i.e., the maximal information coefficient (MIC), interpretable ML, and automated ML. It was previously shown that the solid volume fraction has the greatest effect on the drag force. The present study aims to quantitatively investigate the influence of flow properties on the mesoscale drag correction (H_d). The MIC results show strong correlations between the features (i.e., slip velocity (u*_sy) and particle volume fraction (ε_s)) and the label H_d. The interpretable ML analysis confirms this conclusion, and quantifies the contributions of u*_sy, ε_s, and the gas pressure gradient to the model as 71.9%, 27.2%, and 0.9%, respectively. Automated ML, without the need to select the model structure and hyperparameters, is used for modeling, improving the prediction accuracy over our previous model (Zhu et al., 2020; Ouyang, Zhu, Su, & Luo, 2021).
Keywords: Filtered two-fluid model; Fluid-particle flow; Mesoscale closure; Interpretable machine learning; Automated machine learning; Maximal information coefficient
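A sketch of scoring feature-label dependence with the maximal information coefficient, as this entry does for slip velocity versus the drag correction H_d; it assumes the minepy implementation of MINE, and the synthetic data merely stands in for the real flow samples:

```python
import numpy as np
from minepy import MINE

rng = np.random.default_rng(0)
u_sy = rng.random(1000)  # stand-in for scaled slip velocity samples
# A nonlinear dependence plus noise, standing in for the real H_d label.
H_d = np.tanh(3 * u_sy) + 0.05 * rng.standard_normal(1000)

mine = MINE(alpha=0.6, c=15)  # default MINE settings
mine.compute_score(u_sy, H_d)
print(f"MIC(u_sy, H_d) = {mine.mic():.3f}")  # near 1 => strong dependence
```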
10. An AutoML based trajectory optimization method for long-distance spacecraft pursuit-evasion game
Authors: Yang Fuyunxiang, Yang Leping, Zhu Yanwei. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, Issue 3, pp. 754-765.
Current successes in the artificial intelligence domain have revitalized interest in the spacecraft pursuit-evasion game, an interception problem with a non-cooperative maneuvering target. This paper presents an automated machine learning (AutoML) based method to generate optimal trajectories in long-distance scenarios. Compared with conventional deep neural network (DNN) methods, the proposed method dramatically reduces the reliance on manual intervention and machine learning expertise. Firstly, based on differential game theory and a costate normalization technique, the trajectory optimization problem is formulated under the assumption of continuous thrust. Secondly, the AutoML technique based on the sequential model-based optimization (SMBO) framework is introduced to automate DNN design in the deep learning process. If a recommended DNN architecture exists, the tree-structured Parzen estimator (TPE) is used; otherwise, efficient neural architecture search (NAS) with network morphism is used. Thus, a novel trajectory optimization method with high computational efficiency is achieved. Finally, numerical results demonstrate the feasibility and efficiency of the proposed method.
Keywords: Pursuit-evasion; Differential game; Trajectory optimization; Automated machine learning (AutoML)
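A minimal sketch of TPE-driven search inside an SMBO loop, the mechanism this entry uses to tune its DNN, using the hyperopt library; the search space and the toy objective (standing in for training the DNN and returning its validation loss) are assumptions:

```python
from hyperopt import fmin, tpe, hp

space = {
    "layers": hp.quniform("layers", 2, 8, 1),  # DNN depth
    "lr": hp.loguniform("lr", -10, -2),        # learning rate
}

def objective(params):
    # Real use: build and train the DNN with these params, return
    # validation loss. Toy quadratic stand-in here.
    return (params["layers"] - 5) ** 2 + (params["lr"] - 1e-3) ** 2

# TPE proposes each new trial from a model of past (params, loss) pairs.
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```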
11. Effective Model Compression via Stage-wise Pruning
Authors: Ming-Yang Zhang, Xin-Yi Yu, Lin-Lin Ou. Machine Intelligence Research (EI, CSCD), 2023, Issue 6, pp. 937-951.
Automated machine learning (AutoML) pruning methods aim at searching for a pruning strategy automatically to reduce the computational complexity of deep convolutional neural networks (deep CNNs). However, previous work found that the results of many AutoML pruning methods cannot even surpass those of the uniform pruning method. In this paper, the ineffectiveness of AutoML pruning, which is caused by unfull and unfair training of the supernet, is shown. A deep supernet suffers from unfull training because it contains too many candidates. To overcome the unfull training, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and utilizes in-place distillation to supervise the stage training. Besides, a wide supernet is hit by unfair training since the sampling probability of each channel is unequal. Therefore, the fullnet and the tinynet are sampled in each training iteration to ensure that every channel is adequately trained. Remarkably, the proxy performance of the subnets trained with SWP is closer to the actual performance than that of most previous AutoML pruning work. Furthermore, experiments show that SWP achieves state-of-the-art results on both CIFAR-10 and ImageNet under the mobile setting.
Keywords: Automated machine learning (AutoML); Channel pruning; Model compression; Distillation; Convolutional neural networks (CNN)
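A hedged sketch of the "sample the fullnet and the tinynet every iteration" idea: a layer that can run with only its first k output channels, trained at both its widest and narrowest widths so every channel receives gradient signal. The layer shape and widths are illustrative, not the paper's implementation:

```python
import torch
import torch.nn as nn

class SlimmableConv(nn.Module):
    """Conv layer that can run with only its first k output channels."""
    def __init__(self, in_ch, max_out):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, max_out, 3, padding=1)

    def forward(self, x, k):
        w = self.conv.weight[:k]  # first k filters only
        b = self.conv.bias[:k]
        return nn.functional.conv2d(x, w, b, padding=1)

layer = SlimmableConv(3, 64)
x = torch.randn(2, 3, 32, 32)
for k in (64, 16):            # fullnet width, then tinynet width
    out = layer(x, k)         # both paths contribute gradients to the
    out.mean().backward()     # shared weights, so no channel is starved
```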
12. Graph Neural Architecture Search: A Survey (Cited by 1)
Authors: Babatounde Moctard Oloulade, Jianliang Gao, Jiamin Chen, Tengfei Lyu, Raeed Al-Sabri. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022, Issue 4, pp. 692-708.
In academia and industry, graph neural networks (GNNs) have emerged as a powerful approach to graph data processing, ranging from node classification and link prediction to graph clustering tasks. GNN models are usually handcrafted. However, building handcrafted GNN models is difficult and requires expert experience, because GNN model components are complex and sensitive to variations. The complexity of GNN model components has brought significant challenges to the existing efficiencies of GNNs. Hence, many studies have focused on building automated machine learning frameworks to search for the best GNN models for targeted tasks. In this work, we provide a comprehensive review of automatic GNN model building frameworks to summarize the status of the field and facilitate future progress. We categorize the components of automatic GNN model building frameworks into three dimensions according to the challenges of building them. After reviewing the representative works for each dimension, we discuss promising future research directions in this rapidly growing field.
Keywords: Graph neural network; Neural architecture search; Automated machine learning; Geometric deep learning