In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into automating their detection with machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed limitations, notably in feature generalization and automation efficiency. This research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to improve the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we rigorously tested AutoRhythmAI on a multimodal dataset, surpassing the accuracy achieved with a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification using deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel architecture for binary-class arrhythmias. An ensemble-based approach then combines all models according to their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved strong results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy.
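The weighted ensembling step described above can be illustrated with a minimal sketch; the model outputs and weights below are hypothetical, since the abstract does not specify the actual models or weighting scheme:

```python
# Weighted soft-voting ensemble: each model outputs class probabilities,
# and the ensemble averages them using per-model weights.
def weighted_ensemble(prob_lists, weights):
    """prob_lists: one list of class-probability vectors per model."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    combined = [0.0] * n_classes
    for probs, w in zip(prob_lists, weights):
        for i, p in enumerate(probs):
            combined[i] += (w / total) * p
    return combined

# Hypothetical example: three models scoring a beat as [normal, arrhythmia]
model_probs = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
weights = [0.5, 0.3, 0.2]  # e.g. proportional to each model's validation accuracy
print(weighted_ensemble(model_probs, weights))  # blended class probabilities
```

Weighting by validation performance lets stronger models dominate while weaker ones still contribute.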
Long-term time series forecasting is a crucial research domain within automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks makes complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of recurrent neural networks that harnesses memory cells and gating mechanisms to facilitate long-term time series prediction. The forecasting accuracy of particular neural networks and traditional models can degrade significantly on long-term time-series tasks; our research demonstrates that the proposed approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality, competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we evaluated both ARIMA and the proposed method on simulated and real time-series data over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including fully connected neural networks, convolutional neural networks, and non-pooling convolutional neural networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting and can be widely applied to various domains, particularly business and finance.
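The multi-metric evaluation mentioned above can be sketched as follows. The abstract does not list its exact metric set, so MAE, RMSE, and MAPE are assumed here as common forecasting-accuracy choices:

```python
import math

# Common point-forecast accuracy metrics used to compare models such as
# ARIMA and LSTM on the same held-out series.
def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    # Undefined when an actual value is zero; assumes strictly nonzero actuals.
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual = [10.0, 12.0, 11.0, 13.0]
forecast = [9.0, 12.5, 11.5, 12.0]
print(mae(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
```

Reporting several metrics matters because RMSE penalizes large errors more heavily, while MAPE is scale-free and eases comparison across series.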
Landslide hazard mapping is essential for regional landslide hazard management. The main objective of this study is to construct a rainfall-induced landslide hazard map of Luhe County, China, based on an automated machine learning framework (AutoGluon). A total of 2241 landslides were identified from satellite images taken before and after the rainfall event, and 10 impact factors were selected as indicators: elevation, slope, aspect, normalized difference vegetation index (NDVI), topographic wetness index (TWI), lithology, land cover, distance to roads, distance to rivers, and rainfall. The WeightedEnsemble model, an ensemble of 13 basic machine learning models weighted together, was used to output the landslide hazard assessment results. The results indicate that landslides occurred mainly in the central part of the study area, especially in Hetian and Shanghu. In total, 102.44 s were spent training all the models, and the WeightedEnsemble model achieved an Area Under the Curve (AUC) value of 92.36% on the test set. In addition, 14.95% of the study area was determined to be at very high hazard, with a landslide density of 12.02 per square kilometer. This study serves as a significant reference for the prevention and mitigation of geological hazards and for land use planning in Luhe County.
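The AUC reported above can be computed directly from predicted hazard scores and observed labels; a small self-contained sketch (with made-up scores, not the study's outputs):

```python
# AUC via the rank-sum (Mann-Whitney U) formulation: the probability that a
# randomly chosen positive receives a higher score than a randomly chosen negative.
def auc(labels, scores):
    pairs = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    i = 0
    rank_sum_pos = 0.0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0   # average 1-based rank over tied scores
        for k in range(i, j):
            if pairs[k][1] == 1:
                rank_sum_pos += avg_rank
        i = j
    return (rank_sum_pos - pos * (pos + 1) / 2.0) / (pos * neg)

labels = [1, 1, 0, 0, 1, 0]            # 1 = landslide cell, 0 = stable cell
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(auc(labels, scores))
```

An AUC of 92.36% would mean a landslide cell outranks a stable cell roughly 92 times out of 100.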
File labeling techniques have a long history in analyzing anthological trends in computational linguistics. The situation is worse for files downloaded from the Internet. Currently, most users either have to rename files manually or leave them with meaningless names, which increases the time needed to search for required files and results in redundancy and duplication of user files. No significant work has been done on automated file labeling during the organization of heterogeneous user files. A few attempts have been made using topic modeling; however, one major drawback of current topic modeling approaches is that better results depend on specific language types and on domain similarity of the data. In this research, machine learning approaches are employed to analyze and extract information from a heterogeneous corpus. A distinct file labeling technique is also used to obtain meaningful and cohesive topics for the files. The results show that the proposed methodology can generate relevant and context-sensitive names for heterogeneous data files and provides additional insight into automated file labeling in operating systems.
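As an illustration of the kind of labeling pipeline described (not the paper's actual implementation), a TF-IDF-style keyword extractor can propose a name for a file from its most distinctive terms:

```python
import math
import re
from collections import Counter

# Toy TF-IDF labeler: pick the terms most distinctive of one document relative
# to the rest of the corpus and join them into a candidate file name.
def suggest_label(doc_index, corpus, n_terms=2):
    docs = [re.findall(r"[a-z]+", text.lower()) for text in corpus]
    n_docs = len(docs)
    df = Counter()                      # document frequency of each term
    for words in docs:
        df.update(set(words))
    words = docs[doc_index]
    tf = Counter(words)
    scores = {w: (tf[w] / len(words)) * math.log(n_docs / df[w]) for w in tf}
    top = sorted(scores, key=scores.get, reverse=True)[:n_terms]
    return "_".join(top)

corpus = [
    "invoice payment total amount due invoice",
    "meeting agenda notes action items meeting",
    "payment schedule amount interest loan loan",
]
print(suggest_label(0, corpus))  # distinctive terms from the first document
```

Terms shared across documents (e.g. "payment", "amount") are down-weighted, so the suggested name reflects what makes each file unique.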
By identifying and responding to any malicious behavior that could endanger the system, the Intrusion Detection System (IDS) is crucial for preserving the security of the Industrial Internet of Things (IIoT) network. The benefit of anomaly-based IDSs is that they can recognize zero-day attacks because they do not rely on a signature database to identify abnormal activity. To improve control over datasets and the process, this study proposes using an automated machine learning (AutoML) technique to automate the machine learning processes for an IDS. Our architecture, known as AID4I, makes use of automated machine learning methods for intrusion detection. Through automation of preprocessing, feature selection, model selection, and hyperparameter tuning, the objective is to identify an appropriate machine learning model for intrusion detection. Experimental studies demonstrate that the AID4I framework successfully proposes a suitable model. The integrity, security, and confidentiality of data transmitted across the IIoT network can be ensured by automating machine learning processes in the IDS to enhance its capacity to identify and stop threatening activities. As a comprehensive solution that takes advantage of the latest advances in automated machine learning to improve network security, AID4I is a powerful and effective instrument for intrusion detection. In the preprocessing module, three distinct imputation methods are used to handle missing data, ensuring the robustness of the intrusion detection system in the presence of incomplete information. The feature selection module adopts a hybrid approach that combines Shapley values and a genetic algorithm. The parameter optimization module encompasses a diverse set of 14 classification methods, allowing thorough exploration and optimization of the parameters associated with each algorithm. By carefully tuning these parameters, the framework enhances its adaptability and accuracy in identifying potential intrusions. Experimental results demonstrate that the AID4I framework achieves high levels of accuracy in detecting network intrusions (up to 14.39% on public datasets), outperforming traditional intrusion detection methods while reducing the elapsed time for training and testing.
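The abstract does not name the three imputation methods in the preprocessing module; a hedged sketch of three common choices (mean, median, and mode imputation) over a feature column with missing entries:

```python
from collections import Counter

# Replace None entries in a numeric column using one of three classic strategies.
def impute(column, strategy="mean"):
    observed = [v for v in column if v is not None]
    if strategy == "mean":
        fill = sum(observed) / len(observed)
    elif strategy == "median":
        s = sorted(observed)
        mid = len(s) // 2
        fill = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    elif strategy == "mode":
        fill = Counter(observed).most_common(1)[0][0]
    else:
        raise ValueError("unknown strategy")
    return [fill if v is None else v for v in column]

col = [3.0, None, 5.0, 3.0, None, 9.0]
print(impute(col, "mean"))    # fills gaps with 5.0
print(impute(col, "median"))  # fills gaps with 4.0
print(impute(col, "mode"))    # fills gaps with 3.0
```

Median and mode are more robust to outliers in traffic features than the mean, which is one reason to keep several strategies available.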
Machine tool technologies, especially Computer Numerical Control (CNC) High Speed Machining (HSM), have emerged as effective mechanisms for rapid tooling and manufacturing applications. These technologies are attractive for competitive manufacturing because of their technical advantages, i.e., a significant reduction in lead time, high product accuracy, and good surface finish. However, HSM not only stimulates advancements in cutting tools and materials; it also demands increasingly sophisticated CAD/CAM software and powerful CNC controllers that require more support technologies. This paper explores the computational requirements and impact of HSM on CNC controllers, wear detection, look-ahead programming, simulation, and tool management.
The paper shows how the quality of workpieces to be heat treated can be improved by using fixture-quenching technology. Different fixture systems, such as fixed mandrels, expanding mandrels, and lamellar mandrels, are described. The next section compares manual operation of hardening machines with automated lines. Since there are applications for both manual and fully automated hardening systems, HEESS has not only focused on developing automated lines but has also refined manually operated hardening machines (SP-Series). These take advantage of the latest technology, for example quick tool change and PLC control with a workpiece parameter database. An overview of the different machine types is given.
Epilepsy is a common neurological disease that severely affects the daily life of patients. An automatic detection and diagnosis system for epilepsy based on electroencephalogram (EEG) signals is of great significance in helping patients with epilepsy return to normal life. With the development of deep learning technology and the increase in the amount of EEG data, the performance of deep learning-based automatic detection algorithms for epilepsy EEG has gradually surpassed traditional hand-crafted approaches. However, neural architecture design for epilepsy EEG analysis is time-consuming and laborious, and the designed structure is difficult to adapt to changing EEG collection environments, which limits the application of automatic epilepsy EEG detection systems. In this paper, we explore the possibility of Automated Machine Learning (AutoML) playing a role in the task of epilepsy EEG detection. We apply the neural architecture search (NAS) algorithm in the AutoKeras platform to design the model for epilepsy EEG analysis and utilize feature interpretability methods to ensure the reliability of the searched model. The experimental results show that the model obtained through NAS outperforms the baseline model, improving classification accuracy, F1-score, and Cohen's kappa coefficient by 7.68%, 7.82%, and 9.60%, respectively. Furthermore, the NAS-based model is capable of extracting EEG features related to seizures for classification.
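The three reported metrics can all be derived from predicted and true labels; a minimal sketch of accuracy, binary F1, and Cohen's kappa (with illustrative labels, not the study's data):

```python
# Accuracy, F1 (positive class = 1), and Cohen's kappa for binary labels.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)

def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    po = accuracy(y_true, y_pred)          # observed agreement
    pe = sum(                              # agreement expected by chance
        (y_true.count(c) / n) * (y_pred.count(c) / n)
        for c in set(y_true) | set(y_pred)
    )
    return (po - pe) / (1 - pe)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred), f1_score(y_true, y_pred), cohens_kappa(y_true, y_pred))
```

Cohen's kappa is a useful companion to accuracy on seizure data because it discounts agreement that would occur by chance under class imbalance.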
Current successes in the artificial intelligence domain have revitalized interest in the spacecraft pursuit-evasion game, an interception problem with a non-cooperative maneuvering target. The paper presents an automated machine learning (AutoML) based method to generate optimal trajectories in long-distance scenarios. Compared with conventional deep neural network (DNN) methods, the proposed method dramatically reduces the reliance on manual intervention and machine learning expertise. First, based on differential game theory and a costate normalization technique, the trajectory optimization problem is formulated under the assumption of continuous thrust. Second, an AutoML technique based on the sequential model-based optimization (SMBO) framework is introduced to automate DNN design in the deep learning process. If a recommended DNN architecture exists, the tree-structured Parzen estimator (TPE) is used; otherwise, efficient neural architecture search (NAS) with network morphism is used. Thus, a novel trajectory optimization method with high computational efficiency is achieved. Finally, numerical results demonstrate the feasibility and efficiency of the proposed method.
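The SMBO loop can be sketched in miniature: evaluate a few configurations, split them into good and bad by score, and prefer candidates that look more like the good group. This is a crude stand-in for the TPE density ratio, and the toy quadratic objective below is an assumption, not the trajectory cost from the paper:

```python
import math
import random

# Minimal TPE-flavored sequential model-based optimization on one parameter.
def kernel_density(x, points, bandwidth=0.5):
    return sum(math.exp(-((x - p) / bandwidth) ** 2) for p in points) / len(points)

def smbo_minimize(objective, lo, hi, n_init=10, n_iter=30, gamma=0.25, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_init)]
    history = [(x, objective(x)) for x in xs]
    for _ in range(n_iter):
        history.sort(key=lambda t: t[1])
        n_good = max(1, int(gamma * len(history)))
        good = [x for x, _ in history[:n_good]]   # best-scoring configurations
        bad = [x for x, _ in history[n_good:]]
        # Sample candidates; evaluate the one maximizing the good/bad density ratio.
        cands = [rng.uniform(lo, hi) for _ in range(20)]
        x_next = max(cands, key=lambda x: kernel_density(x, good)
                     / (kernel_density(x, bad) + 1e-12))
        history.append((x_next, objective(x_next)))
    return min(history, key=lambda t: t[1])

best_x, best_y = smbo_minimize(lambda x: (x - 2.0) ** 2, lo=-5.0, hi=5.0)
print(best_x, best_y)  # ideally lands near x = 2
```

The real TPE models the two densities with Parzen estimators over structured architecture spaces; the principle of steering search toward regions that historically scored well is the same.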
Cluster analysis is a crucial technique in unsupervised machine learning, pattern recognition, and data analysis. However, current clustering algorithms suffer from the need to determine parameter values manually, low accuracy, and inconsistent performance with respect to data size and structure. To address these challenges, a novel clustering algorithm called the fully automated density-based clustering method (FADBC) is proposed. The FADBC method consists of two stages: parameter selection and cluster extraction. In the first stage, a proposed method extracts optimal parameters for the dataset, including the epsilon size and the minimum-number-of-points threshold. These parameters are then used in a density-based technique that scans each point in the dataset and evaluates neighborhood densities to find clusters. The proposed method was evaluated on different benchmark datasets and metrics, and the experimental results demonstrate its competitive performance without requiring manual inputs. The results show that the FADBC method outperforms well-known clustering methods such as agglomerative hierarchical clustering, k-means, spectral clustering, DBSCAN, FCDCSD, Gaussian mixtures, and density-based spatial clustering, handling datasets of various kinds and structures well.
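The density-scanning stage resembles DBSCAN; a compact, self-contained version (with hypothetical eps and min_pts values rather than FADBC's automatically selected ones) clarifies the mechanics:

```python
# Minimal DBSCAN: a point with >= min_pts neighbors within eps seeds a cluster,
# which grows by expanding through other dense (core) points. Noise gets label -1.
def dbscan(points, eps, min_pts):
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:     # j is a core point: keep expanding
                queue.extend(j_nbrs)
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 10)]
print(dbscan(pts, eps=0.5, min_pts=3))  # two clusters plus one noise point
```

FADBC's contribution, per the abstract, is choosing eps and min_pts automatically rather than requiring the manual inputs this sketch takes as arguments.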
Farming involves cultivating the soil, producing crops, and keeping livestock. The agricultural sector plays a crucial role in a country's economic growth. This research proposes a two-stage machine learning framework for agriculture to improve efficiency and increase crop yield. In the first stage, machine learning algorithms generate data for extensive and far-flung agricultural areas and forecast crops. The recommended crops are based on various factors such as weather conditions, soil analysis, and the amount of fertilizers and pesticides required. In the second stage, a transfer learning-based model trained on plant seedling, pest, and plant leaf disease datasets is used to detect weeds, pests, and diseases in the crop. The proposed model achieved average accuracies of 95%, 97%, and 98% in plant seedling, pest, and plant leaf disease detection, respectively. The system can help farmers pinpoint the precise measures required at the right time to increase yields.
The aim of this article is to assist farmers in making better crop selection decisions based on soil fertility and weather forecasts through the use of IoT and AI (smart farming). To accomplish this, a prototype was developed that is capable of predicting the most suitable crop for a specific plot of land based on soil fertility and of making recommendations based on the weather forecast. A Random Forest machine learning algorithm was trained with Jupyter in the Anaconda framework to achieve an accuracy of about 99%. The system combines IoT with the Message Queuing Telemetry Transport (MQTT) protocol, the Random Forest model, and a weather forecast API for crop prediction and recommendations. The prototype accepts nitrogen, phosphorus, potassium, humidity, temperature, and pH as input parameters from the IoT sensors, together with forecast data from the weather API. The approach was tested in a suburban area of Yaounde (Cameroon). Taking future meteorological parameters (rainfall, wind, and temperature) into account produced better recommendations and therefore better crop selection. All results can be accessed from anywhere and at any time via a web browser connected to the IoT system.
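A Random Forest reaches its final decision by majority vote over its individual trees; a toy illustration of that voting step (with hypothetical per-tree crop votes, not outputs of the trained model):

```python
from collections import Counter

# A trained random forest predicts by majority vote over its trees' outputs.
def majority_vote(tree_predictions):
    return Counter(tree_predictions).most_common(1)[0][0]

# Hypothetical votes from five trees for one soil/weather sample
votes = ["maize", "cassava", "maize", "maize", "beans"]
print(majority_vote(votes))  # "maize"
```

Averaging many decorrelated trees is what lets the ensemble reach the high accuracy reported while remaining robust to noisy sensor readings.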
The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their excellent performance, electric power UAV inspection image processing models based on deep learning face several problems, such as a small application scope, the need for constant retraining and optimization, and high R&D monetary and time costs, due to the black-box and scene-data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to these problems. The system design is based on three critical design principles: generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed, and experiments were conducted on two electric power UAV inspection image analysis tasks: insulator self-detonation and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP for insulator self-detonation and bird nest recognition, respectively. This demonstrates that the system design concept is reasonable and the system architecture is feasible.
Funding (Luhe County landslide hazard mapping study): supported by the State Administration of Science, Technology and Industry for National Defence, PRC (KJSP2020020303), and the National Institute of Natural Hazards, Ministry of Emergency Management of China (ZDJ2021-12).
Funding (epilepsy EEG detection study): supported by the Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-006A3).
Funding: Supported by the National Defense Science and Technology Innovation Program (18-163-15-LZ-001-004-13).
Abstract: Current successes in the artificial intelligence domain have revitalized interest in the spacecraft pursuit-evasion game, an interception problem with a non-cooperative maneuvering target. The paper presents an automated machine learning (AutoML) based method to generate optimal trajectories in long-distance scenarios. Compared with conventional deep neural network (DNN) methods, the proposed method dramatically reduces reliance on manual intervention and machine learning expertise. Firstly, based on differential game theory and a costate normalization technique, the trajectory optimization problem is formulated under the assumption of continuous thrust. Secondly, an AutoML technique based on the sequential model-based optimization (SMBO) framework is introduced to automate DNN design in the deep learning process: if a recommended DNN architecture exists, the tree-structured Parzen estimator (TPE) is used; otherwise, efficient neural architecture search (NAS) with network morphism is used. Thus, a novel trajectory optimization method with high computational efficiency is achieved. Finally, numerical results demonstrate the feasibility and efficiency of the proposed method.
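The SMBO loop named above (fit a cheap surrogate to past trials, propose the next configuration from it, evaluate, repeat) can be sketched in a few lines. This is a deliberately tiny TPE-flavoured illustration over a 1-D search space with a stand-in objective, not the paper's DNN-design pipeline; the good/bad split heuristic and all names are illustrative assumptions:

```python
import random

def objective(x):
    # Stand-in for an expensive DNN training run; loss is minimal at x = 0.3.
    return (x - 0.3) ** 2

def tpe_like_search(n_iter=40, n_candidates=30, seed=0):
    """Tiny TPE-flavoured SMBO loop over the 1-D search space [0, 1].

    Observations are split into 'good' and 'bad' halves by loss, and
    candidates are scored by how much closer they lie to good points
    than to bad ones, a crude stand-in for TPE's l(x)/g(x) density ratio.
    """
    rng = random.Random(seed)
    history = [(x, objective(x)) for x in (rng.random() for _ in range(5))]
    for _ in range(n_iter):
        history.sort(key=lambda t: t[1])
        split = max(1, len(history) // 4)
        good = [x for x, _ in history[:split]]
        bad = [x for x, _ in history[split:]]
        def score(c):
            # Prefer candidates near good observations and far from bad ones.
            return min(abs(c - b) for b in bad) - min(abs(c - g) for g in good)
        cands = [rng.random() for _ in range(n_candidates)]
        best_c = max(cands, key=score)  # acquisition: most "good-like" candidate
        history.append((best_c, objective(best_c)))
    return min(history, key=lambda t: t[1])

best_x, best_loss = tpe_like_search()
```

The point of the sketch is the division of labour: the expensive objective is only called once per iteration, while the cheap surrogate scores many candidates.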
Funding: Funded by the Deanship of Scientific Research at Umm Al-Qura University, Grant Code: 23UQU4361009DSR001.
Abstract: Cluster analysis is a crucial technique in unsupervised machine learning, pattern recognition, and data analysis. However, current clustering algorithms suffer from the need to set parameter values manually, low accuracy, and inconsistent performance across data sizes and structures. To address these challenges, a novel clustering algorithm called the fully automated density-based clustering method (FADBC) is proposed. The FADBC method consists of two stages: parameter selection and cluster extraction. In the first stage, a proposed method extracts optimal parameters for the dataset, namely the epsilon size and the minimum-number-of-points threshold. These parameters are then used in a density-based technique that scans each point in the dataset and evaluates neighborhood densities to find clusters. The proposed method was evaluated on different benchmark datasets and metrics, and the experimental results demonstrate its competitive performance without requiring manual inputs. The results show that the FADBC method outperforms well-known clustering methods such as agglomerative hierarchical clustering, k-means, spectral clustering, DBSCAN, FCDCSD, Gaussian mixtures, and density-based spatial clustering methods, and it handles datasets of varying size and structure well.
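FADBC's two-stage workflow (derive the density parameters from the data itself, then run a density-based scan) can be illustrated with a generic stdlib-only sketch: a k-distance heuristic for epsilon followed by a minimal DBSCAN-style scan. This is not the authors' FADBC algorithm, only a compact stand-in for the same idea:

```python
import math

def kdist_eps(points, k=3):
    """Heuristic epsilon: median distance from each point to its k-th neighbour."""
    kds = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        kds.append(d[k - 1])
    kds.sort()
    return kds[len(kds) // 2]

def dbscan(points, eps, min_pts=3):
    """Minimal DBSCAN scan: returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbours(i):
        return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                seeds.extend(nb_j)   # j is a core point: expand the cluster
    return labels
```

Feeding the epsilon estimated from the data into the scan removes the manual parameter choice, which is the property the abstract emphasizes.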
Funding: Funded by the National Natural Science Foundation of China (Nos. 71762010, 62262019, 62162025, 61966013, 12162012), the Hainan Provincial Natural Science Foundation of China (Nos. 823RC488, 623RC481, 620RC603, 621QN241, 620RC602, 121RC536), the Haikou Science and Technology Plan Project of China (No. 2022-016), and a project supported by the Education Department of Hainan Province (No. Hnky2021-23).
Abstract: Farming involves cultivating the soil, producing crops, and keeping livestock. The agricultural sector plays a crucial role in a country's economic growth. This research proposes a two-stage machine learning framework for agriculture to improve efficiency and increase crop yield. In the first stage, machine learning algorithms generate data for extensive and far-flung agricultural areas and forecast crops. The recommended crops are based on various factors such as weather conditions, soil analysis, and the amount of fertilizers and pesticides required. In the second stage, a transfer learning-based model for plant seedling, pest, and plant leaf disease datasets is used to detect weeds, pests, and diseases in the crop. The proposed model achieved average accuracies of 95%, 97%, and 98% in plant seedling, pest, and plant leaf disease detection, respectively. The system can help farmers pinpoint the precise measures required at the right time to increase yields.
Abstract: The aim of this article is to assist farmers in making better crop-selection decisions based on soil fertility and weather forecasts through the use of IoT and AI (smart farming). To accomplish this, a prototype was developed that is capable of predicting the most suitable crop for a specific plot of land based on soil fertility and of making recommendations based on the weather forecast. The Random Forest machine learning algorithm was used and trained with Jupyter in the Anaconda framework, achieving an accuracy of about 99%. The system combines IoT with the Message Queuing Telemetry Transport (MQTT) protocol, a Random Forest based machine learning model, and a weather forecast API for crop prediction and recommendations. The prototype accepts nitrogen, phosphorus, potassium, humidity, temperature, and pH as input parameters from the IoT sensors, along with forecast data from the weather API. The approach was tested in a suburban area of Yaounde (Cameroon). Taking future meteorological parameters (rainfall, wind, and temperature) into account produced better recommendations and therefore better crop selection. All results can be accessed from anywhere at any time through the IoT system via a web browser.
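As a rough illustration of the kind of model the prototype trains (a Random Forest voting over the six soil and weather inputs), here is a stdlib-only sketch using bootstrapped single-split trees on synthetic data. The crop names, value ranges, and the humidity rule are invented for the example and do not come from the article:

```python
import random

def make_data(rng, n=60):
    """Synthetic (N, P, K, humidity, temperature, pH) rows; hypothetical rule:
    rice when humidity > 70, otherwise maize."""
    rows = []
    for _ in range(n):
        humid = rng.uniform(30, 95)
        row = [rng.uniform(0, 140), rng.uniform(5, 145), rng.uniform(5, 205),
               humid, rng.uniform(15, 35), rng.uniform(4.5, 8.5)]
        rows.append((row, "rice" if humid > 70 else "maize"))
    return rows

def fit_stump(sample):
    """Best single-feature threshold split by training accuracy."""
    best = None
    for f in range(6):
        for row, _ in sample:
            t = row[f]
            for lo, hi in (("maize", "rice"), ("rice", "maize")):
                acc = sum((lo if r[f] <= t else hi) == y for r, y in sample) / len(sample)
                if best is None or acc > best[0]:
                    best = (acc, f, t, lo, hi)
    return best[1:]

def fit_forest(data, n_trees=15, seed=1):
    rng = random.Random(seed)
    # Each tree sees a bootstrap resample of the data, as in a Random Forest.
    return [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, row):
    votes = [lo if row[f] <= t else hi for f, t, lo, hi in forest]
    return max(set(votes), key=votes.count)  # majority vote across trees
```

A production system would use a full library implementation with deeper trees and per-split feature subsampling; the sketch only shows the bootstrap-and-vote structure.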
Funding: This work was supported by the Science and Technology Project of State Grid Corporation "Research on Key Technologies of Power Artificial Intelligence Open Platform" (5700-202155260A-0-0-00).
Abstract: The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their excellent performance, electric power UAV inspection image processing models based on deep learning face several problems, such as a small application scope, the need for constant retraining and optimization, and high R&D monetary and time costs, owing to the black-box and scene-data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to these problems. The system design is based on three critical design principles: generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed, and experiments were conducted on two tasks: insulator self-detonation detection and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP on insulator self-detonation and bird nest recognition, respectively, demonstrating that the system design concept is reasonable and the system architecture is feasible.
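The prototype's results above are reported as mAP. As a reference, here is a simplified sketch of the per-class average precision that mAP averages over classes; real detection pipelines first match predicted boxes to ground truth by an IoU threshold, which is omitted here, and the example detections are hypothetical:

```python
def average_precision(scored_detections, num_gt):
    """AP for one class from (confidence, is_true_positive) pairs.

    Simplified: detections are assumed pre-matched to ground truth.
    AP is the sum of precision values at each true-positive rank,
    divided by the number of ground-truth objects.
    """
    ranked = sorted(scored_detections, key=lambda d: -d[0])  # by confidence, descending
    tp = 0
    precision_at_tp = []
    for rank, (_, is_tp) in enumerate(ranked, start=1):
        if is_tp:
            tp += 1
            precision_at_tp.append(tp / rank)
    return sum(precision_at_tp) / num_gt if num_gt else 0.0
```

mAP is then the mean of this quantity over all classes (here, insulator self-detonation and bird nest would be two such classes).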