Magnesium (Mg) alloys have shown great prospects as both structural and biomedical materials, but poor corrosion resistance limits their further application. In this work, to avoid time-consuming and laborious experimental trials, a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloys containing intermetallics, from both thermodynamic and kinetic perspectives. The stable binary Mg intermetallics with a low equilibrium potential difference with respect to the Mg matrix are first identified. Then, the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated, and the corrosion exchange current density is further computed with a hydrogen evolution reaction (HER) kinetic model. Several intermetallics, e.g. Y_(3)Mg, Y_(2)Mg and La_(5)Mg, are identified as promising compounds that might effectively hinder the cathodic HER. Furthermore, machine learning (ML) models are developed to predict Mg intermetallics with suitable hydrogen adsorption energy, employing the work function (W_(f)) and the weighted first ionization energy (WFIE) as descriptors. The generalization of the ML models is tested on five new binary Mg intermetallics, with an average root mean square error (RMSE) of 0.11 eV. This study not only predicts promising binary Mg intermetallics which may suppress galvanic corrosion, but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloys, which can be extended to ternary Mg alloys or other alloy systems.
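As an illustration of the descriptor-based ML step, a minimal sketch of a regression from W_(f) and WFIE to hydrogen adsorption energy is given below. The synthetic data, the random-forest model choice, and the feature ranges are assumptions for illustration, not the paper's dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Placeholder descriptors for hypothetical Mg intermetallic surfaces:
# work function W_f (eV) and weighted first ionization energy WFIE (eV).
rng = np.random.default_rng(0)
X = rng.uniform([2.5, 5.0], [4.5, 9.0], size=(60, 2))      # columns: [W_f, WFIE]
# Synthetic target standing in for DFT hydrogen adsorption energies (eV).
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.05, 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE on held-out intermetallics: {rmse:.3f} eV")    # compare with the ~0.11 eV reported
```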
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the predictions of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
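A minimal sketch of the kind of late-fusion architecture such a multimodal model could use is shown below. The branch dimensions, class count, and simple concatenation fusion are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class FastFusionNet(nn.Module):
    """Toy two-branch model: video features and audio features are encoded
    separately, concatenated, and classified (late fusion)."""
    def __init__(self, video_dim=512, audio_dim=128, n_classes=2):
        super().__init__()
        self.video_branch = nn.Sequential(nn.Linear(video_dim, 256), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.classifier = nn.Sequential(nn.Linear(256 + 64, 64), nn.ReLU(),
                                        nn.Linear(64, n_classes))

    def forward(self, video_feat, audio_feat):
        fused = torch.cat([self.video_branch(video_feat),
                           self.audio_branch(audio_feat)], dim=1)
        return self.classifier(fused)

# One forward pass with random stand-in features (batch of 4 patients).
model = FastFusionNet()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```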
Data sharing and privacy protection are made possible by federated learning, which allows continuous model parameter sharing between several clients and a central server. Multiple reliable and high-quality clients must participate for the federated learning global model to be accurate in practical applications, but because the clients are independent, the central server cannot fully control their behavior. The central server has no way of knowing the correctness of the model parameters provided by each client in a given round, so clients may purposefully or unwittingly submit anomalous data, leading to abnormal behavior such as acting as malicious attackers or defective clients. To reduce the negative consequences, it is crucial to quickly detect these abnormalities and to manage clients through incentives. In this paper, we propose a Federated Learning framework for Detecting and Incentivizing Abnormal Clients (FL-DIAC) to accomplish efficient and secure federated learning. For the anomalous client detection problem, we build a detector that introduces an auto-encoder for anomaly detection and use it to identify anomalies and prevent the involvement of abnormal clients. Before the model parameters are input to the detector, we apply a Fourier transform-based anomalous data detection method for dimensionality reduction in order to lower the computational complexity. Additionally, we create a credit score-based incentive structure to encourage clients to participate actively in training. Three training models (CNN, MLP, and ResNet-18) and three datasets (MNIST, Fashion-MNIST, and CIFAR-10) have been used in experiments. According to theoretical analysis and experimental findings, FL-DIAC is superior to other federated learning schemes of the same type in terms of effectiveness.
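One plausible reading of the detection pipeline is sketched below: flattened client updates are compressed by keeping the leading Fourier coefficients, and an auto-encoder's reconstruction error flags anomalous submissions. The parameter-vector size, FFT truncation length, training split, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def fft_reduce(update, k=64):
    """Compress a flattened client update by keeping the magnitudes of its
    first k Fourier coefficients."""
    return np.abs(np.fft.rfft(update)[:k]).astype(np.float32)

# Toy round: 9 well-behaved client updates plus 1 anomalous (scaled) update.
rng = np.random.default_rng(1)
updates = [rng.normal(0, 0.01, 10_000) for _ in range(9)]
updates.append(rng.normal(0, 0.5, 10_000))
features = torch.tensor(np.stack([fft_reduce(u) for u in updates]))

# Auto-encoder trained on the submissions assumed honest; a large
# reconstruction error then flags a suspicious client.
autoencoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64))
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
honest = features[:9]
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(honest), honest)
    loss.backward()
    opt.step()

with torch.no_grad():
    errors = ((autoencoder(features) - features) ** 2).mean(dim=1)
threshold = errors[:9].mean() + 3 * errors[:9].std()   # crude illustrative cutoff
print("flagged clients:", (errors > threshold).nonzero().flatten().tolist())
```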
Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries, especially in light of the growing number of cybersecurity threats. A major and ever-present threat is Ransomware-as-a-Service (RaaS) attacks, which enable even individuals with minimal technical knowledge to conduct ransomware operations. This study provides a new approach for RaaS attack detection which uses an ensemble of deep learning models. For this purpose, the network intrusion detection dataset "UNSW-NB15" from the Intelligent Security Group of the University of New South Wales, Australia is analyzed. In the initial phase, three separate Multi-Layer Perceptron (MLP) models are developed, based on the rectified linear unit, the scaled exponential linear unit, and the exponential linear unit, respectively. Later, using the combined predictive power of these three MLPs, the RansoDetect Fusion ensemble model is introduced in the suggested methodology. The proposed ensemble technique outperforms previous studies with impressive performance metrics, including 98.79% accuracy and recall, 98.85% precision, and a 98.80% F1-score. The empirical results of this study validate the ensemble model's ability to improve cybersecurity defenses by showing that it outperforms the individual MLP models. In expanding the field of cybersecurity strategy, this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
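A minimal sketch of an activation-diverse MLP ensemble in this spirit follows. The layer sizes, the 42-dimensional stand-in feature vector, and the simple probability-averaging fusion are assumptions for illustration, not the RansoDetect Fusion design itself.

```python
import torch
import torch.nn as nn

def make_mlp(activation, in_dim=42, n_classes=2):
    """Small MLP; only the activation function differs between ensemble members."""
    return nn.Sequential(nn.Linear(in_dim, 64), activation,
                         nn.Linear(64, 32), activation,
                         nn.Linear(32, n_classes))

members = [make_mlp(nn.ReLU()), make_mlp(nn.SELU()), make_mlp(nn.ELU())]

def ensemble_predict(x):
    # Soft-voting fusion: average the class probabilities of the three MLPs.
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in members])
    return probs.mean(dim=0).argmax(dim=1)

x = torch.randn(8, 42)              # 8 flow records with stand-in features
print(ensemble_predict(x))
```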
Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) dataset is an extensively used benchmark for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when evaluated on new attacks. This has led to the utilization of deep learning techniques, which have showcased significant potential for processing large datasets and therefore improving detection accuracy. For that reason, this paper focuses on the role of stacking deep learning models, including a convolutional neural network (CNN) and a deep neural network (DNN), in improving the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated by the proposed base models. Combining the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate. Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection. The performance of the ensemble of base models, combined with the meta-model, exceeds the performance of the individual models. Our stacking model has attained an accuracy of 99% and an average F1-score of 93% for the multi-classification scenario. Besides, the training time of the proposed ensemble model is lower than that of benchmark techniques, demonstrating its efficiency and robustness.
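The stacking idea can be sketched as follows. Scikit-learn MLPs stand in for the paper's CNN/DNN base learners, and the synthetic data and logistic-regression meta-model are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic multi-class "traffic" data standing in for NSL-KDD features.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=15,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two neural base learners (stand-ins for the CNN and DNN) feed a simple
# meta-model that learns from their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("net_a", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)),
                ("net_b", MLPClassifier(hidden_layer_sizes=(128,), max_iter=300))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.3f}")
```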
Brain tumors are a pressing public health concern, characterized by their high mortality and morbidity rates. Nevertheless, the manual segmentation of brain tumors remains a laborious and error-prone task, necessitating the development of more precise and efficient methodologies. To address this formidable challenge, we propose an advanced approach for segmenting brain tumor Magnetic Resonance Imaging (MRI) images that harnesses the capabilities of deep learning and convolutional neural networks (CNNs). While CNN-based methods have displayed promise in the realm of brain tumor segmentation, the intricate nature of these tumors, marked by irregular shapes, varying sizes, uneven distribution, and limited available data, poses substantial obstacles to achieving accurate semantic segmentation. In our study, we introduce a Hybrid U-Net framework that seamlessly integrates the U-Net and CNN architectures to surmount these challenges. Our proposed approach encompasses preprocessing steps that enhance image visualization, a customized layered U-Net model tailored for precise segmentation, and the inclusion of dropout layers to mitigate overfitting during the training process. Additionally, we leverage the CNN mechanism to exploit contextual information within brain tumor MRI images, resulting in a substantial enhancement in segmentation accuracy. Our experimental results attest to the exceptional performance of our framework, with accuracy rates surpassing 97% across diverse datasets, showcasing the robustness and effectiveness of our approach. Furthermore, we conduct a comprehensive assessment of our method's capabilities by evaluating various performance measures, including sensitivity, the Jaccard index, and specificity; the proposed model achieved 99% accuracy. The implications of our findings are profound. The proposed Hybrid U-Net model emerges as a highly promising diagnostic tool, poised to revolutionize brain tumor image segmentation for radiologists and clinicians.
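For orientation, a heavily simplified U-Net-style block with dropout might look like the sketch below. The channel counts, single skip connection, and input resolution are illustrative choices, not the customized layered architecture of the paper.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level U-Net sketch: encoder -> bottleneck -> decoder with a skip
    connection, plus dropout to curb overfitting."""
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                        nn.Dropout2d(0.3))
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Dropout2d(0.3),
                                 nn.Conv2d(16, n_classes, 1))

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)      # skip connection
        return self.dec(x)                   # per-pixel logits

mask_logits = TinyUNet()(torch.randn(2, 1, 128, 128))
print(mask_logits.shape)  # torch.Size([2, 1, 128, 128])
```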
The bioinspired nacre or bone structure represents a remarkable example of tough, strong, lightweight, and multifunctional structures in biological materials that can inspire the design of bioinspired high-performance materials. The bioinspired structure consists of hard grains and soft material interfaces. While the material interface occupies a very low volume percentage, its properties can determine the bulk material response. Machine learning technology is nowadays widely used in materials science. Here, a machine learning model was utilized to predict the material response based on the material interface properties in a bioinspired nanocomposite. This model was trained on a comprehensive dataset of material responses and interface properties, allowing it to make accurate predictions. The results of this study demonstrate the efficiency and high accuracy of the machine learning model. The successful application of machine learning to material property prediction has the potential to greatly enhance both the efficiency and accuracy of the material design process.
Understanding the correlation between fundamental descriptors and catalytic performance is meaningful for guiding the design of high-performance electrochemical catalysts. However, exploring the key factors that affect catalytic performance in the vast catalyst space remains challenging. Herein, to accurately identify the factors that affect the performance of N2 reduction, we apply interpretable machine learning (ML) to analyze high-throughput screening results, an approach that is also suited to other surface reactions in catalysis. To illustrate the paradigm, 33 promising catalysts are screened from 168 carbon-supported candidates, specifically single-atom catalysts (SACs) supported by a BC_(3) monolayer (TM@V_(B/C)-N_(n=0-3)-BC_(3)), via high-throughput screening. Subsequently, a hybrid sampling method and an XGBoost model are selected to classify eligible and non-eligible catalysts. Through feature interpretation using Shapley Additive Explanations (SHAP) analysis, two crucial features, namely the number of valence electrons (N_(v)) and the nitrogen substitution (N_(n)), are screened out. Combining SHAP analysis and electronic structure calculations shows that the synergistic effect between an active center with a low valence electron number and reasonable C-N coordination (a medium fraction of nitrogen substitution) can yield high catalytic performance. Finally, six superior catalysts with a limiting potential lower than -0.4 V are predicted. Our workflow offers a rational approach to extracting key information on catalytic performance from high-throughput screening results in order to design efficient catalysts, and it can be applied to other materials and reactions.
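The classify-then-interpret step can be sketched roughly as follows. The synthetic descriptor table, the toy eligibility label, and the XGBoost settings are placeholders standing in for the screened SAC features, not the study's data.

```python
import numpy as np
import shap
import xgboost as xgb

# Synthetic descriptors standing in for the SAC feature table:
# valence electron count N_v, nitrogen substitution N_n, and two filler features.
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(3, 12, 400),      # N_v
                     rng.integers(0, 4, 400),       # N_n
                     rng.normal(size=400), rng.normal(size=400)])
# Toy eligibility label correlated with low N_v and intermediate N_n.
y = ((X[:, 0] < 7) & (X[:, 1] >= 1) & (X[:, 1] <= 2)).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# SHAP values rank which descriptors drive the eligible/non-eligible decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in zip(["N_v", "N_n", "f3", "f4"], mean_abs):
    print(f"{name}: mean |SHAP| = {score:.3f}")
```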
The image description task lies at the intersection of computer vision and natural language processing, and it has important prospects, including helping computers understand images and obtaining information for the visually impaired. This study presents an innovative approach employing deep reinforcement learning to enhance the accuracy of natural language descriptions of images. Our method focuses on refining the reward function in deep reinforcement learning, facilitating the generation of precise descriptions by aligning visual and textual features more closely. Our approach comprises three key architectures. Firstly, it utilizes Residual Network 101 (ResNet-101) and the Faster Region-based Convolutional Neural Network (Faster R-CNN) to extract average and local image features, respectively, followed by a dual attention mechanism for intricate feature fusion. Secondly, the Transformer model is engaged to derive contextual semantic features from textual data. Finally, the descriptive text is generated by a two-layer long short-term memory network (LSTM), directed by the value and reward functions. Compared with an image description method that relies on deep learning alone, the Bilingual Evaluation Understudy (BLEU-1) score is 0.762, which is 1.6% higher, and the BLEU-4 score is 0.299; the Consensus-based Image Description Evaluation (CIDEr) score is 0.998, and the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score is 0.552, the latter improved by 0.36%. These results not only attest to the viability of our approach but also highlight its superiority in the realm of image description. Future research can explore the integration of our method with other artificial intelligence (AI) domains, such as emotional AI, to create more nuanced and context-aware systems.
Due to various technical issues, existing numerical weather prediction (NWP) models often perform poorly at forecasting rainfall in the first several hours. To correct the bias of an NWP model and improve the accuracy of short-range precipitation forecasting, we propose a deep learning-based approach called UNet Mask, which combines NWP forecasts with the output of a convolutional neural network called UNet. UNet Mask involves training the UNet on historical data from the NWP model and gridded rainfall observations for 6-hour precipitation forecasting. The overlap of the UNet output and the NWP forecast at the same rainfall threshold yields a mask. UNet Mask then blends the UNet output and the NWP forecast by taking the maximum of the two and passing it through the mask, which provides the corrected 6-hour rainfall forecast. We evaluated UNet Mask on a test set and in real-time verification. The results showed that UNet Mask outperforms the NWP model in 6-hour precipitation prediction by reducing the false alarm ratio (FAR) and improving critical success index (CSI) scores. Sensitivity tests also showed that different small rainfall thresholds applied to the UNet and the NWP model have different effects on UNet Mask's forecast performance. This study shows that UNet Mask is a promising approach for improving the rainfall forecasts of NWP models.
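The masking-and-blending step admits a compact array formulation. The sketch below is one plausible reading of it: the threshold value, the random stand-in fields, and the choice to zero the forecast outside the mask are all assumptions, and the paper's exact handling may differ.

```python
import numpy as np

def unet_mask_blend(unet_rain, nwp_rain, threshold=0.1):
    """Blend a UNet rainfall field with an NWP forecast: keep the pixel-wise
    maximum, but only where both fields exceed the rainfall threshold (the
    'mask'); outside the mask, predict no rain (an assumed convention)."""
    mask = (unet_rain >= threshold) & (nwp_rain >= threshold)
    blended = np.maximum(unet_rain, nwp_rain)
    return np.where(mask, blended, 0.0)

rng = np.random.default_rng(0)
unet_field = rng.gamma(0.4, 2.0, size=(64, 64))   # stand-in 6-h rainfall grids (mm)
nwp_field = rng.gamma(0.4, 2.0, size=(64, 64))
corrected = unet_mask_blend(unet_field, nwp_field)
print(f"wet-pixel fraction: NWP={np.mean(nwp_field >= 0.1):.2f}, "
      f"corrected={np.mean(corrected >= 0.1):.2f}")
```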
In response to the United Nations Sustainable Development Goals and China's "Dual Carbon" Goals (DCGs, i.e., the goals of carbon peaking and carbon neutrality), this paper, from the perspective of the construction of China's Innovation Demonstration Zones for the Sustainable Development Agenda (IDZSDAs), combines carbon emission-related metrics to construct a comprehensive assessment system for Urban Sustainable Development Capacity (USDC). After obtaining USDC assessment results through this system, an approach combining Least Absolute Shrinkage and Selection Operator (LASSO) regression and Random Forest (RF), based on machine learning, is proposed for identifying influencing factors and characterizing key issues. Combining Coupling Coordination Degree (CCD) analysis, the study further summarizes the systemic patterns and future directions of urban sustainable development. A case study on the IDZSDAs from 2015 to 2022 reveals that: (1) the combined identification method based on machine learning and CCD models effectively quantifies influencing factors and key issues in the urban sustainable development process; (2) the correspondence between influencing factors and key subsystems identified by the LASSO-RF combination model is generally consistent with the development situations in various cities; and (3) the machine learning-based combined recognition method is scalable and dynamic, enabling decision-makers to accurately identify influencing factors and characterize key issues based on actual urban development needs.
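A minimal sketch of the LASSO-then-RF idea follows. The indicator names, the synthetic city-year matrix, and the USDC target are placeholders, not the study's indicator system.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indicators = [f"indicator_{i}" for i in range(20)]        # placeholder names
X = rng.normal(size=(120, 20))                            # city-year indicator matrix
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.2, 120)  # USDC score

# Step 1: LASSO shrinks irrelevant indicators to zero.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)
selected = [i for i, c in enumerate(lasso.coef_) if abs(c) > 1e-6]

# Step 2: a Random Forest on the surviving indicators ranks their importance.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xs[:, selected], y)
ranking = sorted(zip(np.array(indicators)[selected], rf.feature_importances_),
                 key=lambda t: -t[1])
for name, imp in ranking[:5]:
    print(f"{name}: importance {imp:.3f}")
```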
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and neural networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision trees and random forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities for cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
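As a concrete illustration of the tuning step, the sketch below grid-searches an SVM's hyperparameters. The parameter grid, feature count, and synthetic dataset are placeholders, not the study's IoST data or search space.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the cardiovascular dataset.
X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
    scoring="f1",
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print(f"held-out F1: {grid.score(X_te, y_te):.3f}")
```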
This editorial discusses an article recently published in the World Journal of Clinical Cases, focusing on risk factors associated with intensive care unit-acquired weakness (ICU-AW). ICU-AW is a serious neuromuscular complication seen in critically ill patients, characterized by muscle dysfunction, weakness, and sensory impairments. Post-discharge, patients may encounter various obstacles impacting their quality of life. The pathogenesis involves intricate changes in muscle and nerve function, potentially leading to significant disabilities. Given its global significance, ICU-AW has become a key research area. The study identified critical risk factors using a multilayer perceptron neural network model, highlighting the impact of intensive care unit stay duration and mechanical ventilation duration on ICU-AW. Recommendations were provided for preventing ICU-AW, emphasizing comprehensive interventions and risk factor mitigation. This editorial stresses the importance of external validation, cross-validation, and model transparency to enhance model reliability. Moreover, the application of machine learning in clinical medicine has demonstrated clear benefits in improving disease understanding and treatment decisions. While machine learning presents opportunities, challenges such as model reliability and data management necessitate thorough validation and ethical considerations. In conclusion, integrating machine learning into healthcare offers significant potential and challenges. Enhancing data management, validating models, and upholding ethical standards are crucial for maximizing the benefits of machine learning in clinical practice.
This article presents an exhaustive comparative investigation into the accuracy of gender identification across diverse geographical regions, employing a deep learning classification algorithm for speech signal analysis. In this study, speech samples are categorized for both training and testing purposes based on their geographical origin. Category 1 comprises speech samples from speakers outside of India, whereas Category 2 comprises live-recorded speech samples from Indian speakers. Testing speech samples are likewise classified into four distinct sets, taking into consideration both the geographical origin and the language spoken by the speakers. Significantly, the results indicate a noticeable difference in gender identification accuracy among speakers from different geographical areas. Indian speakers, utilizing 52 Hindi and 26 English phonemes in their speech, demonstrate a notably higher gender identification accuracy of 85.75% compared to speakers who predominantly use 26 English phonemes in their conversations, when the system is trained using speech samples from Indian speakers. The gender identification accuracy of the proposed model reaches 83.20% when the system is trained using speech samples from speakers outside of India. In the analysis of speech signals, Mel Frequency Cepstral Coefficients (MFCCs) serve as the relevant features for the speech data. The deep learning classification algorithm utilized in this research is based on a Bidirectional Long Short-Term Memory (BiLSTM) architecture within a Recurrent Neural Network (RNN) model.
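A skeletal version of an MFCC-plus-BiLSTM classifier in this spirit is sketched below. The audio file name, the 13-coefficient MFCC setting, the sampling rate, and the hidden size are illustrative assumptions, not the paper's configuration.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_features(path, n_mfcc=13):
    """Load an audio file and return its MFCC sequence as (time, n_mfcc)."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return torch.tensor(mfcc.T, dtype=torch.float32)

class GenderBiLSTM(nn.Module):
    def __init__(self, n_mfcc=13, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(n_mfcc, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # two gender classes

    def forward(self, x):                      # x: (batch, time, n_mfcc)
        out, _ = self.bilstm(x)
        return self.head(out[:, -1, :])        # classify from the last time step

model = GenderBiLSTM()
# feats = mfcc_features("sample.wav").unsqueeze(0)   # hypothetical input file
feats = torch.randn(1, 200, 13)                       # stand-in MFCC sequence
print(model(feats).shape)                             # torch.Size([1, 2])
```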
Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical diseases. They can impact either one or both lungs, which leads to a severe impairment of a person's ability to breathe normally. Some notable examples of such diseases encompass pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, the primary detection methods involve the use of X-ray imaging or computed tomography (CT) scans. Nevertheless, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, the accuracy of detection can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, utilizing single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. Consequently, this research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
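A minimal hard-parameter-sharing sketch of the kind described follows. The backbone size, input resolution, binary task heads, and summed joint loss are illustrative assumptions, not the study's exact model.

```python
import torch
import torch.nn as nn

class MultiTaskChestCNN(nn.Module):
    """Hard parameter sharing: one shared convolutional backbone feeds two
    task-specific heads (pneumonia and COVID-19)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.pneumonia_head = nn.Linear(32, 2)
        self.covid_head = nn.Linear(32, 2)

    def forward(self, x):
        shared = self.backbone(x)
        return self.pneumonia_head(shared), self.covid_head(shared)

model = MultiTaskChestCNN()
x = torch.randn(4, 1, 224, 224)                  # stand-in chest X-ray batch
pneu_logits, covid_logits = model(x)
# Joint training objective: the two task losses are simply summed here.
loss = (nn.functional.cross_entropy(pneu_logits, torch.randint(0, 2, (4,))) +
        nn.functional.cross_entropy(covid_logits, torch.randint(0, 2, (4,))))
print(loss.item())
```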
This paper addresses the challenge of integrating priority passage for emergency vehicles with optimal intersection control in modern urban traffic. It proposes an innovative strategy based on deep learning to enable emergency vehicles to pass through intersections efficiently and safely. The research aims to develop a deep learning model that uses intersection violation monitoring cameras to identify emergency vehicles in real time. This system adjusts traffic signals to ensure the rapid passage of emergency vehicles while simultaneously optimizing the overall efficiency of the traffic system. In this study, OpenCV is used in combination with Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to jointly perform the complex image processing and analysis tasks that allow emergency vehicles to travel quickly. Finally, the principle of the You Only Look Once (YOLO) algorithm can be used to design a website and a mobile phone application (app) that enable private vehicles with emergency needs to request emergency passage, which is also of great significance for improving the overall level of urban traffic management, reducing traffic congestion, and promoting the development of related technologies.
The performance of metal halide perovskite solar cells (PSCs) relies strongly on the experimental parameters, including the fabrication processes and the compositions of the perovskites, and tremendous experimental work has been done to optimize these factors. However, predicting the device performance of PSCs from the fabrication parameters before experiments is still challenging. Herein, we bridge this gap by machine learning (ML) based on a dataset including 1072 devices from peer-reviewed publications. The optimized ML model accurately predicts the power conversion efficiency (PCE) from the experimental parameters, with a root mean square error of 1.28% and a Pearson coefficient r of 0.768. Moreover, the factors governing the device performance are ranked by Shapley additive explanations (SHAP), among which the A-site cation is crucial to obtaining highly efficient PSCs. Experiments and density functional theory calculations are employed to validate and help explain the predictions of the ML model. Our work reveals the feasibility of ML in predicting the device performance from the experimental parameters before experiments, enabling reverse experimental design toward highly efficient PSCs.
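A schematic version of such a parameters-to-PCE regression is sketched below. The feature encoding, the synthetic data, and the gradient-boosting model choice are assumptions for illustration; the real dataset is the 1072 literature devices.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in: each row encodes fabrication parameters (e.g., annealing
# temperature, precursor concentration, encoded A-site cation); target is PCE (%).
rng = np.random.default_rng(0)
X = rng.normal(size=(1072, 12))
y = 18 + 2.0 * X[:, 0] - 1.2 * X[:, 3] + rng.normal(0, 1.0, 1072)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
r, _ = pearsonr(y_te, pred)
print(f"RMSE = {rmse:.2f}%  Pearson r = {r:.3f}")   # compare with the reported 1.28% / 0.768
```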
Detecting a pipeline's abnormal status, which is typically a blockage or leakage accident, is important for the continuity and safety of mine backfill. The pipeline system for gravity-transport high-density backfill (GHB) is complex, and specifically designed, efficient, and accurate abnormal pipeline detection methods for GHB are rare. This work presents a long short-term memory-based deep learning (LSTM-DL) model for GHB pipeline blockage and leakage diagnosis. First, an industrial pipeline monitoring system was introduced using pressure and flow sensors. Second, blockage and leakage field experiments were designed to solve the problem of negative sample deficiency. The pipeline's statistical characteristics under different working statuses were analyzed to show their complexity. Third, the architecture of the LSTM-DL model was elaborated and evaluated. Finally, the LSTM-DL model was compared with state-of-the-art (SOTA) learning algorithms. The results show that the backfilling cycle comprises multiple working phases and is intermittent. Although pressure and flow signals fluctuate stably within a normal cycle, their values differ across cycles. Plugging causes a sudden change in interval signal features; leakage results in a long variation duration and a wide fluctuation range. Among the SOTA models, the LSTM-DL model has the highest detection accuracy of 98.31% for all states and the lowest misjudgment or false positive rate of 3.21% for blockage and leakage states. The proposed model can accurately recognize the various pipeline statuses of complex GHB systems.
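A bare-bones sketch of an LSTM classifier over pressure/flow windows follows. The window length, layer sizes, and the three-way normal/blockage/leakage coding are assumptions for illustration, not the LSTM-DL architecture itself.

```python
import torch
import torch.nn as nn

class PipelineLSTM(nn.Module):
    """Classify a sensor window (pressure + flow) as normal, blockage, or leakage."""
    def __init__(self, n_channels=2, hidden=32, n_status=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_status)

    def forward(self, x):              # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # use the final hidden state

model = PipelineLSTM()
window = torch.randn(16, 120, 2)       # 16 windows of 120 time steps (pressure, flow)
logits = model(window)
status = logits.argmax(dim=1)          # 0=normal, 1=blockage, 2=leakage (assumed coding)
print(status.shape)                    # torch.Size([16])
```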
Single-atom catalysts (SACs), especially metal-nitrogen doped carbon (M-NC) catalysts, have been extensively explored for the electrochemical oxygen reduction reaction (ORR), owing to their high activity and atomic utilization efficiency. However, there is still a lack of systematic screening and optimization of the local structures surrounding the active centers of SACs for the ORR, even though the local coordination has an essential impact on their electronic structures and catalytic performance. Herein, we systematically study the ORR catalytic performance of M-NC SACs with different central metals and environmental atoms in the first and second coordination spheres by using density functional theory (DFT) calculations and machine learning (ML). The geometric- and electronic-informed overpotential model (GEIOM) based on the random forest algorithm showed the highest accuracy, with an R^(2) of 0.96 and a root mean square error (RMSE) of 0.21. Thirty potential high-performance catalysts were screened out by GEIOM, and the RMSE of the predicted results was only 0.12 V. This work not only helps us rapidly screen high-performance catalysts, but also provides a low-cost way to improve the accuracy of ML models.
基金financially supported by the National Key Research and Development Program of China(No.2016YFB0701202,No.2017YFB0701500 and No.2020YFB1505901)National Natural Science Foundation of China(General Program No.51474149,52072240)+3 种基金Shanghai Science and Technology Committee(No.18511109300)Science and Technology Commission of the CMC(2019JCJQZD27300)financial support from the University of Michigan and Shanghai Jiao Tong University joint funding,China(AE604401)Science and Technology Commission of Shanghai Municipality(No.18511109302).
文摘Magnesium(Mg)alloys have shown great prospects as both structural and biomedical materials,while poor corrosion resistance limits their further application.In this work,to avoid the time-consuming and laborious experiment trial,a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloy with intermetallics,from both the thermodynamic and kinetic perspectives.The stable binary Mg intermetallics with low equilibrium potential difference with respect to the Mg matrix are firstly identified.Then,the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated,and the corrosion exchange current density is further calculated by a hydrogen evolution reaction(HER)kinetic model.Several intermetallics,e.g.Y_(3)Mg,Y_(2)Mg and La_(5)Mg,are identified to be promising intermetallics which might effectively hinder the cathodic HER.Furthermore,machine learning(ML)models are developed to predict Mg intermetallics with proper hydrogen adsorption energy employing work function(W_(f))and weighted first ionization energy(WFIE).The generalization of the ML models is tested on five new binary Mg intermetallics with the average root mean square error(RMSE)of 0.11 eV.This study not only predicts some promising binary Mg intermetallics which may suppress the galvanic corrosion,but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloy,which can be extended to ternary Mg alloys or other alloy systems.
基金supported by the Ministry of Science and Technology of China,No.2020AAA0109605(to XL)Meizhou Major Scientific and Technological Innovation PlatformsProjects of Guangdong Provincial Science & Technology Plan Projects,No.2019A0102005(to HW).
文摘Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are commonly used for stroke screening,accurate administration is dependent on specialized training.In this study,we proposed a novel multimodal deep learning approach,based on the FAST,for assessing suspected stroke patients exhibiting symptoms such as limb weakness,facial paresis,and speech disorders in acute settings.We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements,facial expressions,and speech tests based on the FAST.We compared the constructed deep learning model,which was designed to process multi-modal datasets,with six prior models that achieved good action classification performance,including the I3D,SlowFast,X3D,TPN,TimeSformer,and MViT.We found that the findings of our deep learning model had a higher clinical value compared with the other approaches.Moreover,the multi-modal model outperformed its single-module variants,highlighting the benefit of utilizing multiple types of patient data,such as action videos and speech audio.These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification of stroke,thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
基金supported by Key Research and Development Program of China (No.2022YFC3005401)Key Research and Development Program of Yunnan Province,China (Nos.202203AA080009,202202AF080003)+1 种基金Science and Technology Achievement Transformation Program of Jiangsu Province,China (BA2021002)Fundamental Research Funds for the Central Universities (Nos.B220203006,B210203024).
文摘Data sharing and privacy protection are made possible by federated learning,which allows for continuous model parameter sharing between several clients and a central server.Multiple reliable and high-quality clients must participate in practical applications for the federated learning global model to be accurate,but because the clients are independent,the central server cannot fully control their behavior.The central server has no way of knowing the correctness of the model parameters provided by each client in this round,so clients may purposefully or unwittingly submit anomalous data,leading to abnormal behavior,such as becoming malicious attackers or defective clients.To reduce their negative consequences,it is crucial to quickly detect these abnormalities and incentivize them.In this paper,we propose a Federated Learning framework for Detecting and Incentivizing Abnormal Clients(FL-DIAC)to accomplish efficient and security federated learning.We build a detector that introduces an auto-encoder for anomaly detection and use it to perform anomaly identification and prevent the involvement of abnormal clients,in particular for the anomaly client detection problem.Among them,before the model parameters are input to the detector,we propose a Fourier transform-based anomaly data detectionmethod for dimensionality reduction in order to reduce the computational complexity.Additionally,we create a credit scorebased incentive structure to encourage clients to participate in training in order tomake clients actively participate.Three training models(CNN,MLP,and ResNet-18)and three datasets(MNIST,Fashion MNIST,and CIFAR-10)have been used in experiments.According to theoretical analysis and experimental findings,the FL-DIAC is superior to other federated learning schemes of the same type in terms of effectiveness.
基金the Deanship of Scientific Research,Najran University,Kingdom of Saudi Arabia,for funding this work under the Research Groups Funding Program Grant Code Number(NU/RG/SERC/12/43).
文摘Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries,especially in light of the growing number of cybersecurity threats.A major and everpresent threat is Ransomware-as-a-Service(RaaS)assaults,which enable even individuals with minimal technical knowledge to conduct ransomware operations.This study provides a new approach for RaaS attack detection which uses an ensemble of deep learning models.For this purpose,the network intrusion detection dataset“UNSWNB15”from the Intelligent Security Group of the University of New South Wales,Australia is analyzed.In the initial phase,the rectified linear unit-,scaled exponential linear unit-,and exponential linear unit-based three separate Multi-Layer Perceptron(MLP)models are developed.Later,using the combined predictive power of these three MLPs,the RansoDetect Fusion ensemble model is introduced in the suggested methodology.The proposed ensemble technique outperforms previous studieswith impressive performance metrics results,including 98.79%accuracy and recall,98.85%precision,and 98.80%F1-score.The empirical results of this study validate the ensemble model’s ability to improve cybersecurity defenses by showing that it outperforms individual MLPmodels.In expanding the field of cybersecurity strategy,this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
文摘Intrusion detection is a predominant task that monitors and protects the network infrastructure.Therefore,many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection.In particular,the Network Security Laboratory-Knowledge Discovery in Databases(NSL-KDD)is an extensively used benchmark dataset for evaluating intrusion detection systems(IDSs)as it incorporates various network traffic attacks.It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models,but the performance of these models often decreases when evaluated on new attacks.This has led to the utilization of deep learning techniques,which have showcased significant potential for processing large datasets and therefore improving detection accuracy.For that reason,this paper focuses on the role of stacking deep learning models,including convolution neural network(CNN)and deep neural network(DNN)for improving the intrusion detection rate of the NSL-KDD dataset.Each base model is trained on the NSL-KDD dataset to extract significant features.Once the base models have been trained,the stacking process proceeds to the second stage,where a simple meta-model has been trained on the predictions generated from the proposed base models.The combination of the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate.Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection.The performance of the ensemble of base models,combined with the meta-model,exceeds the performance of individual models.Our stacking model has attained an accuracy of 99%and an average F1-score of 93%for the multi-classification scenario.Besides,the training time of the proposed ensemble model is lower than the training time of benchmark techniques,demonstrating its efficiency and robustness.
基金Institutional Fund Projects under Grant No.(IFPIP:801-830-1443)The author gratefully acknowledges technical and financial support provided by the Ministry of Education and King Abdulaziz University,DSR,Jeddah,Saudi Arabia.
文摘Brain tumors are a pressing public health concern, characterized by their high mortality and morbidity rates.Nevertheless, the manual segmentation of brain tumors remains a laborious and error-prone task, necessitatingthe development of more precise and efficient methodologies. To address this formidable challenge, we proposean advanced approach for segmenting brain tumorMagnetic Resonance Imaging (MRI) images that harnesses theformidable capabilities of deep learning and convolutional neural networks (CNNs). While CNN-based methodshave displayed promise in the realm of brain tumor segmentation, the intricate nature of these tumors, markedby irregular shapes, varying sizes, uneven distribution, and limited available data, poses substantial obstacles toachieving accurate semantic segmentation. In our study, we introduce a pioneering Hybrid U-Net framework thatseamlessly integrates the U-Net and CNN architectures to surmount these challenges. Our proposed approachencompasses preprocessing steps that enhance image visualization, a customized layered U-Net model tailoredfor precise segmentation, and the inclusion of dropout layers to mitigate overfitting during the training process.Additionally, we leverage the CNN mechanism to exploit contextual information within brain tumorMRI images,resulting in a substantial enhancement in segmentation accuracy.Our experimental results attest to the exceptionalperformance of our framework, with accuracy rates surpassing 97% across diverse datasets, showcasing therobustness and effectiveness of our approach. Furthermore, we conduct a comprehensive assessment of ourmethod’s capabilities by evaluating various performance measures, including the sensitivity, Jaccard-index, andspecificity. Our proposed model achieved 99% accuracy. The implications of our findings are profound. Theproposed Hybrid U-Net model emerges as a highly promising diagnostic tool, poised to revolutionize brain tumorimage segmentation for radiologists and clinicians.
文摘The bioinspired nacre or bone structure represents a remarkable example of tough,strong,lightweight,and multifunctional structures in biological materials that can be an inspiration to design bioinspired high-performance materials.The bioinspired structure consists of hard grains and soft material interfaces.While the material interface has a very low volume percentage,its property has the ability to determine the bulk material response.Machine learning technology nowadays is widely used in material science.A machine learning model was utilized to predict the material response based on the material interface properties in a bioinspired nanocomposite.This model was trained on a comprehensive dataset of material response and interface properties,allowing it to make accurate predictions.The results of this study demonstrate the efficiency and high accuracy of the machine learning model.The successful application of machine learning into the material property prediction process has the potential to greatly enhance both the efficiency and accuracy of the material design process.
基金supported by the National Key R&D Program of China(2022YFA1503103)the National Natural Science Foundation of China(22033002,92261112,22203046)+2 种基金the Natural Science Research Start-up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications(Grant No.NY221128)the Six Talent Peaks Project in Jiangsu Province(XCL-104)the open research fund of Key Laboratory of Quantum Materials and Devices(Southeast University)
文摘Understanding the correlation between the fundamental descriptors and catalytic performance is meaningful to guide the design of high-performance electrochemical catalysts.However,exploring key factors that affect catalytic performance in the vast catalyst space remains challenging for people.Herein,to accurately identify the factors that affect the performance of N2 reduction,we apply interpretable machine learning(ML)to analyze high-throughput screening results,which is also suited to other surface reactions in catalysis.To expound on the paradigm,33 promising catalysts are screened from 168 carbon-supported candidates,specifically single-atom catalysts(SACs)supported by a BC_(3)monolayer(TM@V_(B/C)-N_(n)=_(0-3)-BC_(3))via high-throughput screening.Subsequently,the hybrid sampling method and XGBoost model are selected to classify eligible and non-eligible catalysts.Through feature interpretation using Shapley Additive Explanations(SHAP)analysis,two crucial features,that is,the number of valence electrons(N_(v))and nitrogen substitution(N_(n)),are screened out.Combining SHAP analysis and electronic structure calculations,the synergistic effect between an active center with low valence electron numbers and reasonable C-N coordination(a medium fraction of nitrogen substitution)can exhibit high catalytic performance.Finally,six superior catalysts with a limiting potential lower than-0.4 V are predicted.Our workflow offers a rational approach to obtaining key information on catalytic performance from high-throughput screening results to design efficient catalysts that can be applied to other materials and reactions.
基金This research was funded by the Natural Science Foundation of Gansu Province with Approval Numbers 20JR10RA334 and 21JR7RA570Funding is provided for the 2021 Longyuan Youth Innovation and Entrepreneurship Talent Project with Approval Number 2021LQGR20+1 种基金the University Level Innovation Project with Approval NumbersGZF2020XZD18jbzxyb2018-01 of Gansu University of Political Science and Law.
文摘Image description task is the intersection of computer vision and natural language processing,and it has important prospects,including helping computers understand images and obtaining information for the visually impaired.This study presents an innovative approach employing deep reinforcement learning to enhance the accuracy of natural language descriptions of images.Our method focuses on refining the reward function in deep reinforcement learning,facilitating the generation of precise descriptions by aligning visual and textual features more closely.Our approach comprises three key architectures.Firstly,it utilizes Residual Network 101(ResNet-101)and Faster Region-based Convolutional Neural Network(Faster R-CNN)to extract average and local image features,respectively,followed by the implementation of a dual attention mechanism for intricate feature fusion.Secondly,the Transformer model is engaged to derive contextual semantic features from textual data.Finally,the generation of descriptive text is executed through a two-layer long short-term memory network(LSTM),directed by the value and reward functions.Compared with the image description method that relies on deep learning,the score of Bilingual Evaluation Understudy(BLEU-1)is 0.762,which is 1.6%higher,and the score of BLEU-4 is 0.299.Consensus-based Image Description Evaluation(CIDEr)scored 0.998,Recall-Oriented Understudy for Gisting Evaluation(ROUGE)scored 0.552,the latter improved by 0.36%.These results not only attest to the viability of our approach but also highlight its superiority in the realm of image description.Future research can explore the integration of our method with other artificial intelligence(AI)domains,such as emotional AI,to create more nuanced and context-aware systems.
基金jointly supported by the National Natural Science Foundation of China(Grant No.U1811464)the Hydraulic Innovation Project of Science and Technology of Guangdong Province of China(Grant No.2022-01)the Guangzhou Basic and Applied Basic Research Foundation(Grant No.202201011472)。
文摘Due to various technical issues,existing numerical weather prediction(NWP)models often perform poorly at forecasting rainfall in the first several hours.To correct the bias of an NWP model and improve the accuracy of short-range precipitation forecasting,we propose a deep learning-based approach called UNet Mask,which combines NWP forecasts with the output of a convolutional neural network called UNet.The UNet Mask involves training the UNet on historical data from the NWP model and gridded rainfall observations for 6-hour precipitation forecasting.The overlap of the UNet output and the NWP forecasts at the same rainfall threshold yields a mask.The UNet Mask blends the UNet output and the NWP forecasts by taking the maximum between them and passing through the mask,which provides the corrected 6-hour rainfall forecasts.We evaluated UNet Mask on a test set and in real-time verification.The results showed that UNet Mask outperforms the NWP model in 6-hour precipitation prediction by reducing the FAR and improving CSI scores.Sensitivity tests also showed that different small rainfall thresholds applied to the UNet and the NWP model have different effects on UNet Mask's forecast performance.This study shows that UNet Mask is a promising approach for improving rainfall forecasting of NWP models.
基金supported by the National Key Research and Development Program of China under the sub-theme“Research on the Path of Enhancing the Sustainable Development Capacity of Cities and Towns under the Carbon Neutral Goal”[Grant No.2022YFC3802902-04].
文摘In response to the United Nations Sustainable Development Goals and China’s“Dual Carbon”Goals(DCGs means the goals of“Carbon Peak and carbon neutrality”),this paper from the perspective of the construction of China’s Innovation Demonstration Zones for Sustainable Development Agenda(IDZSDAs),combines carbon emission-related metrics to construct a comprehensive assessment system for Urban Sustainable Development Capacity(USDC).After obtaining USDC assessment results through the assessment system,an approach combining Least Absolute Shrinkage and Selection Operator(LASSO)regression and Random Forest(RF)based on machine learning is proposed for identifying influencing factors and characterizing key issues.Combining Coupling Coordination Degree(CCD)analysis,the study further summarizes the systemic patterns and future directions of urban sustainable development.A case study on the IDZSDAs from 2015 to 2022 reveals that:(1)the combined identification method based on machine learning and CCD models effectively quantifies influencing factors and key issues in the urban sustainable development process;(2)the correspondence between influencing factors and key subsystems identified by the LASSO-RF combination model is generally consistent with the development situations in various cities;and(3)the machine learning-based combined recognition method is scalable and dynamic.It enables decision-makers to accurately identify influencing factors and characterize key issues based on actual urban development needs.
基金supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU),Grant Number IMSIU-RG23151.
文摘This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST(Internet of Sensing Things)device.Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning.Significant improvements were observed across various models,with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-Score,recall,and precision.The study underscores the critical role of tailored hyperparameter tuning in optimizing these models,revealing diverse outcomes among algorithms.Decision Trees and Random Forests exhibited stable performance throughout the evaluation.While enhancing accuracy,hyperparameter optimization also led to increased execution time.Visual representations and comprehensive results support the findings,confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities in cardiovascular disease.This research contributes to advancing the understanding and application of machine learning in healthcare,particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
文摘This editorial discusses an article recently published in the World Journal of Clinical Cases,focusing on risk factors associated with intensive care unit-acquired weak-ness(ICU-AW).ICU-AW is a serious neuromuscular complication seen in criti-cally ill patients,characterized by muscle dysfunction,weakness,and sensory impairments.Post-discharge,patients may encounter various obstacles impacting their quality of life.The pathogenesis involves intricate changes in muscle and nerve function,potentially leading to significant disabilities.Given its global significance,ICU-AW has become a key research area.The study identified critical risk factors using a multilayer perceptron neural network model,highlighting the impact of intensive care unit stay duration and mechanical ventilation duration on ICU-AW.Recommendations were provided for preventing ICU-AW,empha-sizing comprehensive interventions and risk factor mitigation.This editorial stresses the importance of external validation,cross-validation,and model tran-sparency to enhance model reliability.Moreover,the application of machine learning in clinical medicine has demonstrated clear benefits in improving disease understanding and treatment decisions.While machine learning presents oppor-tunities,challenges such as model reliability and data management necessitate thorough validation and ethical considerations.In conclusion,integrating ma-chine learning into healthcare offers significant potential and challenges.Enhan-cing data management,validating models,and upholding ethical standards are crucial for maximizing the benefits of machine learning in clinical practice.
Abstract: This article presents an exhaustive comparative investigation into the accuracy of gender identification across diverse geographical regions, employing a deep learning classification algorithm for speech signal analysis. In this study, speech samples are categorized for both training and testing purposes based on their geographical origin. Category 1 comprises speech samples from speakers outside of India, whereas Category 2 comprises live-recorded speech samples from Indian speakers. Testing speech samples are likewise classified into four distinct sets, taking into consideration both geographical origin and the language spoken by the speakers. Significantly, the results indicate a noticeable difference in gender identification accuracy among speakers from different geographical areas. Indian speakers, who use 52 Hindi and 26 English phonemes in their speech, demonstrate a notably higher gender identification accuracy of 85.75% compared with speakers who predominantly use 26 English phonemes, when the system is trained using speech samples from Indian speakers. The gender identification accuracy of the proposed model reaches 83.20% when the system is trained using speech samples from speakers outside of India. In the analysis of speech signals, Mel Frequency Cepstral Coefficients (MFCCs) serve as the relevant features for the speech data. The deep learning classification algorithm used in this research is based on a Bidirectional Long Short-Term Memory (BiLSTM) architecture within a Recurrent Neural Network (RNN) model.
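A minimal sketch of the MFCC-plus-BiLSTM pipeline described above, assuming librosa for feature extraction and PyTorch for the network; the synthetic audio, layer sizes, and pooling choice are illustrative, not the paper's configuration:

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

# Illustrative 2-second signal standing in for a recorded utterance
sr = 16000
signal = np.random.default_rng(0).normal(size=2 * sr).astype(np.float32)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T  # shape (frames, 13)

class GenderBiLSTM(nn.Module):
    """BiLSTM over MFCC frames, mean-pooled, followed by a binary gender head."""
    def __init__(self, n_mfcc=13, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # male / female logits

    def forward(self, x):                     # x: (batch, frames, n_mfcc)
        out, _ = self.lstm(x)                 # (batch, frames, 2 * hidden)
        return self.head(out.mean(dim=1))     # pool over time, then classify

model = GenderBiLSTM()
logits = model(torch.tensor(mfcc).unsqueeze(0))
print(logits.shape)  # (1, 2)
```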
Abstract: Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical conditions. They can affect one or both lungs, severely impairing a person's ability to breathe normally. Notable examples include pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, detection relies on X-ray imaging or computed tomography (CT) scans. Nevertheless, owing to the scarcity of proficient radiologists and the inherent similarities between these diseases, detection accuracy can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, using single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. This research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated on three different real-world datasets for training and testing.
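A minimal sketch of hard parameter sharing for the two tasks named above: one shared convolutional trunk feeds two task-specific heads, and the losses are summed. The architecture sizes and the joint loss weighting are illustrative assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class MultiTaskChestCNN(nn.Module):
    """Hard parameter sharing: a shared CNN trunk with separate heads for
    pneumonia and COVID-19 classification."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(                     # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.pneumonia_head = nn.Linear(32, 2)           # task 1 logits
        self.covid_head = nn.Linear(32, 2)               # task 2 logits

    def forward(self, x):
        feats = self.shared(x)
        return self.pneumonia_head(feats), self.covid_head(feats)

model = MultiTaskChestCNN()
x = torch.randn(4, 1, 224, 224)                          # toy batch of grayscale chest images
logits_pneu, logits_covid = model(x)
# Joint objective: unweighted sum of the per-task cross-entropies (toy labels here)
loss = nn.functional.cross_entropy(logits_pneu, torch.randint(0, 2, (4,))) \
     + nn.functional.cross_entropy(logits_covid, torch.randint(0, 2, (4,)))
print(loss.item())
```

In soft parameter sharing, by contrast, each task keeps its own trunk and the trunks are only regularized toward each other, which costs more parameters than the shared-trunk design sketched here.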
Abstract: This paper addresses the challenge of integrating priority passage for emergency vehicles with optimal intersection control in modern urban traffic. It proposes an innovative deep learning-based strategy to enable emergency vehicles to pass through intersections efficiently and safely. The research aims to develop a deep learning model that uses intersection violation monitoring cameras to identify emergency vehicles in real time. The system adjusts traffic signals to ensure the rapid passage of emergency vehicles while simultaneously optimizing the overall efficiency of the traffic system. In this study, OpenCV is combined with Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to jointly handle the complex image processing and analysis tasks required for rapid emergency-vehicle passage. Finally, the principle of the You Only Look Once (YOLO) algorithm can be used to design a website and a mobile phone application (app) that allow private vehicles with emergency needs to request priority passage, which is also of significance for improving the overall level of urban traffic management, reducing traffic congestion, and promoting the development of related technologies.
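A hedged sketch of the real-time camera loop such a system might run, assuming an off-the-shelf COCO-pretrained detector from the ultralytics package as a stand-in; the video path, the "emergency vehicle" trigger classes, and the signal-request logic are all illustrative, not the authors' implementation:

```python
import cv2
from ultralytics import YOLO  # assumed off-the-shelf detector, not the authors' model

# Read frames from an intersection camera, detect vehicles, and flag frames where an
# emergency-vehicle-like class appears so a signal controller could be notified.
model = YOLO("yolov8n.pt")                           # generic pretrained weights
cap = cv2.VideoCapture("intersection_camera.mp4")    # placeholder video source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    labels = {model.names[int(c)] for c in results.boxes.cls}
    if "truck" in labels or "bus" in labels:          # stand-in for an ambulance class
        print("candidate emergency vehicle detected -> request green phase")
cap.release()
```

A production system would replace the stand-in classes with a detector fine-tuned on emergency-vehicle imagery and send the trigger to the signal controller over a dedicated interface.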
Funding: the National Natural Science Foundation of China (Grant No. 62075006); the National Key Research and Development Program of China (Grant No. 2021YFB3600403); the Natural Science Talents Foundation (Grant No. KSRC22001532).
Abstract: The performance of metal halide perovskite solar cells (PSCs) relies heavily on the experimental parameters, including the fabrication processes and the compositions of the perovskites, and tremendous experimental work has been done to optimize these factors. However, predicting the device performance of PSCs from the fabrication parameters before experiments remains challenging. Herein, we bridge this gap with machine learning (ML) based on a dataset of 1072 devices from peer-reviewed publications. The optimized ML model accurately predicts the PCE from the experimental parameters with a root mean square error of 1.28% and a Pearson coefficient r of 0.768. Moreover, the factors governing the device performance are ranked by Shapley additive explanations (SHAP), among which the A-site cation is crucial to obtaining highly efficient PSCs. Experiments and density functional theory calculations are employed to validate and help explain the predictions of the ML model. Our work demonstrates the feasibility of ML in predicting device performance from experimental parameters before experiments, which enables reverse experimental design toward highly efficient PSCs.
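A minimal sketch of the SHAP-based factor ranking described above, assuming a tree-ensemble regressor on a small synthetic table; the feature names, data, and toy target relation are illustrative, not the curated 1072-device dataset:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative fabrication parameters -> power conversion efficiency (PCE, %)
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "annealing_temp_C": rng.uniform(80, 160, 400),
    "precursor_conc_M": rng.uniform(0.8, 1.6, 400),
    "a_site_cation_fraction": rng.uniform(0, 1, 400),
})
y = 15 + 5 * X["a_site_cation_fraction"] \
    - 0.02 * np.abs(X["annealing_temp_C"] - 120) + rng.normal(0, 0.5, 400)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Mean |SHAP value| per feature ranks which parameters drive the predicted efficiency
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(3))))
```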
Funding: Financially supported by the China Postdoctoral Science Foundation (No. 2021M690362) and the National Natural Science Foundation of China (Nos. 51974014 and U2034206).
Abstract: Detecting a pipeline's abnormal status, typically a blockage or leakage accident, is important for the continuity and safety of mine backfill. The pipeline system for gravity-transport high-density backfill (GHB) is complex, and efficient, accurate abnormal-pipeline detection methods designed specifically for GHB are rare. This work presents a long short-term memory-based deep learning (LSTM-DL) model for GHB pipeline blockage and leakage diagnosis. First, an industrial pipeline monitoring system was introduced using pressure and flow sensors. Second, blockage and leakage field experiments were designed to solve the problem of negative-sample deficiency, and the pipeline's statistical characteristics under different working statuses were analyzed to show their complexity. Third, the architecture of the LSTM-DL model was elaborated and evaluated. Finally, the LSTM-DL model was compared with state-of-the-art (SOTA) learning algorithms. The results show that the backfilling cycle comprises multiple working phases and is intermittent. Although pressure and flow signals fluctuate stably within a normal cycle, their values differ between cycles. Blockage causes a sudden change in interval signal features, whereas leakage results in a long variation duration and a wide fluctuation range. Among the SOTA models, the LSTM-DL model has the highest detection accuracy of 98.31% across all states and the lowest misjudgment (false positive) rate of 3.21% for blockage and leakage states. The proposed model can accurately recognize the various pipeline statuses of complex GHB systems.
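A minimal sketch of an LSTM classifier over windows of pressure and flow readings with a three-way status output (normal, blockage, leakage); the layer sizes, window length, and toy inputs are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PipelineLSTM(nn.Module):
    """LSTM over windows of (pressure, flow) readings with a 3-class status head."""
    def __init__(self, n_features=2, hidden=32, n_status=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_status)

    def forward(self, x):              # x: (batch, window_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last hidden state

model = PipelineLSTM()
windows = torch.randn(8, 120, 2)       # 8 toy windows of 120 sensor readings each
logits = model(windows)
status = logits.argmax(dim=1)          # 0 = normal, 1 = blockage, 2 = leakage (toy labels)
print(status)
```

Because the backfilling cycle is intermittent, windowing the sensor streams per working phase before feeding the LSTM is the natural way to keep each input sequence within a single operating regime.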
Funding: Financially supported by the National Key Research and Development Program of China (2018YFA0702002), the Beijing Natural Science Foundation (Z210016), and the National Natural Science Foundation of China (21935001).
Abstract: Single-atom catalysts (SACs), especially metal-nitrogen doped carbon (M-NC) catalysts, have been extensively explored for the electrochemical oxygen reduction reaction (ORR), owing to their high activity and atomic utilization efficiency. However, there is still a lack of systematic screening and optimization of the local structures surrounding the active centers of SACs for ORR, as the local coordination has an essential impact on their electronic structures and catalytic performance. Herein, we systematically study the ORR catalytic performance of M-NC SACs with different central metals and environmental atoms in the first and second coordination spheres by using density functional theory (DFT) calculations and machine learning (ML). The geometric and electronic informed overpotential model (GEIOM), based on the random forest algorithm, showed the highest accuracy, with an R^(2) of 0.96 and a root mean square error (RMSE) of 0.21. Thirty potential high-performance catalysts were screened out by GEIOM, and the RMSE of the predicted results was only 0.12 V. This work not only enables fast screening of high-performance catalysts but also provides a low-cost way to improve the accuracy of ML models.
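A minimal sketch of a random forest regressor of the kind GEIOM is based on, mapping geometric and electronic descriptors of M-NC sites to an ORR overpotential; the descriptor names, synthetic data, and toy target relation are illustrative assumptions, not the paper's feature set or DFT results:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Illustrative descriptors for M-NC active sites
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "d_electron_count": rng.integers(3, 10, 300),
    "electronegativity": rng.uniform(1.5, 2.3, 300),
    "first_shell_N_count": rng.integers(2, 5, 300),
    "second_shell_heteroatoms": rng.integers(0, 3, 300),
})
# Toy target standing in for the DFT-computed ORR overpotential (V)
y = 0.9 - 0.05 * X["d_electron_count"] + 0.2 * X["electronegativity"] \
    + rng.normal(0, 0.05, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 3),
      "RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
```

Once trained on the DFT-labeled sites, such a surrogate can score large candidate pools of coordination environments far faster than running DFT on each one, which is the screening step the abstract describes.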