The issue of opacity within data-driven artificial intelligence (AI) algorithms has become an impediment to these algorithms' extensive utilization, especially within sensitive domains concerning health, safety, and high profitability, such as chemical engineering (CE). In order to promote reliable AI utilization in CE, this review discusses the concept of transparency within AI utilizations, which is defined based on both explainable AI (XAI) concepts and key features from within the CE field. This review also highlights the requirements of reliable AI from the aspects of causality (i.e., the correlations between the predictions and inputs of an AI), explainability (i.e., the operational rationales of the workflows), and informativeness (i.e., the mechanistic insights of the investigated systems). Related techniques are evaluated together with state-of-the-art applications to highlight the significance of establishing reliable AI applications in CE. Furthermore, a comprehensive transparency analysis case study is provided as an example to enhance understanding. Overall, this work provides a thorough discussion of this subject matter in a way that, for the first time, is particularly geared toward chemical engineers in order to raise awareness of responsible AI utilization. With this vital missing link, AI is anticipated to serve as a novel and powerful tool that can tremendously aid chemical engineers in solving bottleneck challenges in CE.
Fault detection and diagnosis (FDD) plays a significant role in ensuring the safety and stability of chemical processes. With the development of artificial intelligence (AI) and big data technologies, data-driven approaches with excellent performance are widely used for FDD in chemical processes. However, improved predictive accuracy has often been achieved through increased model complexity, which turns models into black-box methods and causes uncertainty regarding their decisions. In this study, a causal temporal graph attention network (CTGAN) is proposed for fault diagnosis of chemical processes. A chemical causal graph is built by causal inference to represent the propagation path of faults, and the attention mechanism is combined with this causal graph to highlight the key variables related to fault fluctuations. Experiments on the Tennessee Eastman (TE) process and the green ammonia (GA) process showed that CTGAN achieves high performance and good explainability.
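As a rough illustration of the core idea, not the authors' CTGAN implementation, attention scores between process variables can be masked by the causal graph's adjacency matrix so that each variable attends only to its causal parents. The variable count, feature dimension, and adjacency below are made-up toy values:

```python
import numpy as np

def causal_masked_attention(x, adj):
    """Attention over process variables, restricted to causal parents.

    x   : (n_vars, d) feature vectors, one per process variable
    adj : (n_vars, n_vars) 0/1 causal adjacency; adj[i, j] = 1 means
          variable j may influence variable i
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)              # raw pairwise attention scores
    scores = np.where(adj > 0, scores, -1e9)   # mask out non-causal edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ x, weights                # aggregated features, attention map

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                    # 4 toy process variables
adj = np.array([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 1, 1],
                [1, 0, 0, 1]])
out, w = causal_masked_attention(x, adj)
print(np.round(w, 3))                          # weights are 0 wherever adj == 0
```

The attention map `w` is directly inspectable, which is the sense in which a causal mask aids explainability: non-zero weight can only appear along edges the causal graph permits.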
The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments: it helps ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions these algorithms make. Such models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The paper also discusses potential applications of the proposed XAI models in the smart healthcare environment, which will help ensure trust and accountability in AI-based decisions, essential for achieving a safe and reliable smart healthcare environment.
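For reference, the less common of the reported metrics (FDR, FOR, DOR) follow directly from the binary confusion matrix, alongside the familiar ones. A sketch with hypothetical counts, not the paper's data:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Derive the reported metric family from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                       # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    fdr = fp / (fp + tp)                          # false discovery rate = 1 - precision
    fomr = fn / (fn + tn)                         # false omission rate (FOR)
    dor = (tp * tn) / (fp * fn)                   # diagnostic odds ratio
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "FDR": fdr, "FOR": fomr, "DOR": dor}

# hypothetical counts for illustration only
m = confusion_metrics(tp=90, fp=10, fn=5, tn=95)
print(m)
```

Note that FDR is the complement of precision, so reporting both is redundant but common; DOR is unbounded above, which is why it is quoted on a different scale than the percentage metrics.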
Battery production is crucial for determining the quality of the electrode, which in turn affects the performance of the manufactured battery. As battery production is complicated, with strongly coupled intermediate and control parameters, an efficient solution that can perform a reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required. This paper performs a detailed sensitivity analysis of key production terms in determining the properties of the manufactured battery electrode via advanced data-driven modelling. Specifically, an explainable neural network named generalized additive model with structured interaction (GAM-SI) is designed to predict two key battery properties, electrode mass loading and porosity, while the effects of four early production terms on the manufactured batteries are explained and analysed. The experimental results reveal that the proposed method is able to accurately predict battery electrode properties in the mixing and coating stages. In addition, the importance ratio ranking, global interpretation, and local interpretation of both the main effects and the pairwise interactions can be effectively visualized by the designed neural network. Owing to its interpretability, the proposed GAM-SI can help engineers gain important insights into complicated production behavior, further benefitting smart battery production.
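The additive structure that makes a GAM with structured interaction interpretable can be sketched as a sum of per-feature shape functions plus explicit pairwise interaction terms, each contributing an inspectable amount to the prediction. The feature names and toy shape functions below are illustrative assumptions, not the trained GAM-SI (which learns each term with a subnetwork):

```python
# Toy shape functions standing in for learned subnetworks (hypothetical
# production-term names, not the paper's actual inputs).
shape_fns = {
    "solid_ratio": lambda x: 2.0 * x,           # main effect of one production term
    "viscosity":   lambda x: -0.5 * x ** 2,     # main effect of another
}
interaction_fns = {
    ("solid_ratio", "viscosity"): lambda a, b: 0.3 * a * b,  # structured interaction
}

def gam_si_predict(sample, bias=1.0):
    """Prediction = bias + sum of main effects + sum of pairwise interactions;
    returning the per-term contributions is what makes the model explainable."""
    contribs = {name: f(sample[name]) for name, f in shape_fns.items()}
    for (a, b), f in interaction_fns.items():
        contribs[f"{a}*{b}"] = f(sample[a], sample[b])
    return bias + sum(contribs.values()), contribs

y, contribs = gam_si_predict({"solid_ratio": 0.6, "viscosity": 2.0})
print(y, contribs)
```

Because the output decomposes exactly into these named terms, global interpretation (plotting each shape function) and local interpretation (reading off `contribs` for one sample) come for free, which is the design choice the abstract highlights.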
In the area of intelligent medical diagnosis, the trustworthiness, reliability, and interpretability of Artificial Intelligence (AI) are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also a key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet framework. The MSBE enhances the network's capability for feature extraction by allowing hyperparameters to configure the number of branches and modules. The CrossLinkNet framework, serving as a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by replacing different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, a vital aspect for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning's "black box" nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network (CNN) models through transfer learning and data augmentation. Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization. The pivotal element of our methodology is the use of LIME, which demystifies the AI decision-making process, providing clinicians with clear, interpretable insights into the AI's reasoning. This combination of an optimized Deep Neural Network (DNN) with LIME not only elevates the precision of detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process. Our method, validated on the SARS-CoV-2 CT-Scan dataset, demonstrates exceptional diagnostic accuracy, with performance metrics that reinforce its potential for seamless integration into modern healthcare systems. This approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decision-making in the ongoing battle against COVID-19.
A network intrusion detection system is critical for cyber security against illegitimate attacks. From a feature perspective, network traffic may include a variety of elements such as attack reference, attack type, subcategory of attack, host information, malicious scripts, etc. From a network perspective, traffic may contain an imbalanced number of harmful attacks compared to normal traffic. It is challenging to identify a specific attack due to these complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, a transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is applied to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach on three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustworthy model.
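SMOTE itself is a standard oversampling technique: new minority samples are synthesized by interpolating between a minority point and one of its nearest minority neighbours. A minimal sketch of that interpolation step (illustrative only, not the IDS-INT implementation; production work would typically use a library such as imbalanced-learn):

```python
import numpy as np

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic sample lies on the segment between
    a random minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

# toy 2-D minority class (e.g., a rare attack type in feature space)
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote(minority, n_new=5)
print(new.shape)  # (5, 2)
```

Because every synthetic point is a convex combination of two real minority points, the oversampled class stays inside the minority region rather than duplicating samples verbatim.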
Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using magnetic resonance imaging (MRI), focusing on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN, which stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020 dataset for its experiments. Integrated with a gradient-boosting algorithm, GliomaCNN achieves an impressive accuracy of 99.1569%. The model's interpretability is ensured through SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM++), which provide insights into the critical decision-making regions behind classification outcomes. Despite the challenge of identifying tumors in images without visible signs, the model demonstrates remarkable performance in this critical medical application, offering a promising tool for accurate brain tumor diagnosis that paves the way for enhanced early detection and treatment of brain tumors.
Accurate prediction of the rate of penetration (ROP) is significant for drilling optimization. While an intelligent ROP prediction model based on fully connected neural networks (FNN) outperforms traditional ROP equations and machine learning algorithms, its lack of interpretability undermines its credibility. This study proposes a novel interpretation and characterization method for FNN ROP prediction models that use the Rectified Linear Unit (ReLU) activation function. By leveraging the derivative of the ReLU function, the FNN forward computation is transformed into vector operations. Through further simplification, the FNN model is characterized linearly, enabling its interpretation and analysis. The proposed method is applied to ROP prediction using drilling data from three vertical wells in the Tarim Oilfield. The results demonstrate that the FNN ROP prediction model with ReLU as the activation function performs exceptionally well. The relative activation frequency curve of hidden-layer neurons aids in analyzing the overfitting of the FNN ROP model and in determining drilling-data similarity. In well sections with similar drilling data, averaging the weight parameters enables a linear characterization of the FNN ROP prediction model, leading to the establishment of a corresponding linear representation equation. Furthermore, quantitative analysis of each feature's influence on ROP facilitates drilling-parameter optimization schemes for the current well section. The established linear characterization equation exhibits high precision, strong stability, and adaptability through application and validation across multiple well sections.
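The linear-characterization idea rests on the fact that a ReLU network is piecewise linear: once the on/off pattern of the ReLUs at a given input is fixed, the prediction is exactly an affine function of that input. A minimal sketch of extracting that local affine form (random toy weights, not a trained ROP model):

```python
import numpy as np

def local_linear_form(x, weights, biases):
    """At input x, a ReLU network computes exactly w_eff @ x + b_eff, where the
    ReLU activation pattern at x fixes the effective weights. Sketch of the
    linear-characterization idea; the final layer is assumed linear."""
    A = np.eye(len(x))                    # running affine map: hidden = A @ x + c
    c = np.zeros(len(x))
    for W, b in zip(weights[:-1], biases[:-1]):
        z = W @ (A @ x + c) + b
        m = (z > 0).astype(float)         # ReLU on/off pattern at x
        A = (W * m[:, None]) @ A          # fold the masked layer into A
        c = m * (W @ c + b)
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ A, W_out @ c + b_out   # w_eff, b_eff

rng = np.random.default_rng(1)
weights = [rng.normal(size=(5, 3)), rng.normal(size=(1, 5))]  # 3 inputs, 5 hidden, 1 output
biases = [rng.normal(size=5), rng.normal(size=1)]
x = rng.normal(size=3)

w_eff, b_eff = local_linear_form(x, weights, biases)

# the extracted affine form reproduces the network's actual output at x
h = np.maximum(weights[0] @ x + biases[0], 0.0)
y = weights[1] @ h + biases[1]
print(np.allclose(w_eff @ x + b_eff, y))  # True
```

The coefficients of `w_eff` then read directly as local feature influences on ROP, which is the kind of quantitative per-feature analysis the study describes.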
Medical Internet of Things (IoT) devices are becoming increasingly common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to identify potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. The Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI (MAIPFE) is a framework for real-time health monitoring and disease detection that combines the Firefly Optimizer, a Recurrent Neural Network (RNN), Fuzzy C-Means (FCM) clustering, and explainable AI, improving disease-detection precision over existing methods. Comprehensive metrics show the model's superiority in real-time health analysis: the proposed framework outperformed existing models by 8.3% in disease-detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease-prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, and AUC by 1.9%, while delay levels decreased by 9.4%. With pre-emptive analysis, personalized health insights, and actionable recommendations, MAIPFE can transform healthcare; the research shows that this approach improves patient outcomes and healthcare efficiency in the real world.
The bigeye tuna Thunnus obesus is an important migratory species that forages at depth, and El Niño events strongly influence its distribution in the eastern Pacific Ocean. While sea surface temperature is widely recognized as the main factor affecting bigeye tuna (BET) distribution during El Niño events, the roles of different types of El Niño and of subsurface oceanic signals, such as ocean heat content and mixed layer depth, remain unclear. To address this knowledge gap, we conducted a spatial-temporal analysis to investigate the relationships among BET distribution, El Niño events, and the underlying oceanic signals. We used monthly purse-seine fisheries data on BET in the eastern tropical Pacific Ocean (ETPO) from 1994 to 2012 and extracted central-Pacific El Niño (CPEN) indices based on the Niño 3 and Niño 4 indices. Furthermore, we employed Explainable Artificial Intelligence (XAI) models to identify the main patterns and the feature importance of six environmental variables, and used information flow analysis to determine the causality between the selected factors and BET distribution. Finally, we analyzed Argo datasets to calculate the vertical, horizontal, and zonal mean temperature differences between CPEN and normal years to clarify the differences in oceanic thermodynamic structure between the two types of years. Our findings reveal that BET distribution during CPEN years is mainly driven by advection feedback of warmer subsurface thermal signals and by vertically warmer habitats in the CPEN domain, especially in high-yield fishing areas. A high frequency of CPEN events will likely lead to a westward shift of fisheries centers.
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule-based feature-integrated machine learning model that shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule-based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used to train the machine learning models and to generate SHAP values from them. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule-based analysis, and this new integrated dataset is used to retrain the machine learning models. The new SHAP values generated from these models help validate the contributions of the feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule-based feature integration as a comprehensive framework for understanding the contributions of feature sets in modelling the predictions. The study discusses the importance of reliable predictive models for the early diagnosis of thyroid cancer, along with a validation framework for explainability. The proposed model shows an accuracy of 93.48%, and performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than those of the baseline models. The results of the proposed model help identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule-based interestingness-metric values and SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP for bridging the gap between machine learning predictions and the interpretability of those predictions, which is required for real-world medical applications.
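The association-rule feature-integration step can be sketched as mining frequently co-occurring attribute combinations above a support threshold. The attribute names below are hypothetical stand-ins, not the study's dataset:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count single attributes and attribute pairs across transactions and keep
    those whose support (fraction of transactions containing them) clears the
    threshold; a minimal sketch of the association-rule mining step."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for size in (1, 2):
            for combo in combinations(sorted(t), size):
                counts[combo] = counts.get(combo, 0) + 1
    return {c: k / n for c, k in counts.items() if k / n >= min_support}

# hypothetical nodule-attribute records, for illustration only
transactions = [
    {"calcification", "irregular_shape", "hypoechoic"},
    {"calcification", "irregular_shape"},
    {"calcification", "smooth_shape"},
    {"irregular_shape", "hypoechoic"},
]
freq = frequent_itemsets(transactions, min_support=0.5)
print(freq)
```

Frequent itemsets like `("calcification", "irregular_shape")` would then be the candidate dominant feature sets folded back into the training data before SHAP values are recomputed.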
Crohn's disease (CD) is a chronic inflammatory bowel disease of unknown origin that can cause significant disability and morbidity as it progresses. Due to the unique nature of CD, surgery is often necessary for many patients during their lifetime, and the incidence of postoperative complications is high, which can affect patient prognosis. Therefore, it is essential to identify and manage postoperative complications. Machine learning (ML) has become increasingly important in the medical field, and ML-based models can be used to predict postoperative complications of intestinal resection for CD. Recently, a valuable article titled "Predicting short-term major postoperative complications in intestinal resection for Crohn's disease: A machine learning-based study" was published by Wang et al. We appreciate the authors' creative work, and we are willing to share our views and discuss them with the authors.
Despite low-income countries producing only a quarter of the per capita plastic waste of high-income countries, the related environmental, health, and economic costs of plastic there can be up to 10 times higher than in wealthier countries, according to Who Pays for Plastic Pollution? Enabling Global Equity in the Plastic Value Chain, a report released by the World Wide Fund for Nature (WWF). The report highlighted significant inequalities within the global plastic value chain and explained how cost disparities exert a substantial impact on low- and middle-income countries.
Read a conversation between a biology student and his friend. "So, Simon, you're studying biology. Can you explain a little bit about it?" "Biology is about all the things in our world that are alive: plants, animals, as well as very small living things that we cannot see. Biology tries to explain why life is like it is." "It sounds complicated. There are so many different kinds of plants and animals."
MHC class I and II molecules bind peptides that are recognized as either self or foreign to the immune system via interaction with T-cell receptors. The T-cell receptor makes molecular contacts with the peptide and the MHC molecule in the region of the MHC peptide-binding cleft. The MHC class I and II molecules are highly polymorphic, which presumably allows for great diversity of antigen-binding sites across the population, yielding a species that is relatively fit to withstand foreign pathogens. In MHC class I molecules, this allelic variation predicts extensive variation in the sequences of peptides able to bind MHC class I molecules, and this is indeed the case.
There are two airports in Beijing. Every time Guo Shuangchao goes on a business trip, he chooses to fly via the Beijing Daxing International Airport, nicknamed the Starfish of Beijing. That's because he is among its builders.
Deep learning (DL) is increasingly popular as a viable alternative to traditional signal processing (SP) based methods for fault diagnosis. However, the lack of explainability makes DL-based fault diagnosis methods difficult for industrial users to trust and understand. In addition, the extraction of weak fault features from signals with heavy noise is imperative in industrial applications. To address these limitations, inspired by the Filterbank-Feature-Decision methodology, we propose a new Signal Processing Informed Neural Network (SPINN) framework that embeds SP knowledge into the DL model. As a practical implementation of SPINN, a denoising fault-aware wavelet network (DFAWNet) is developed, which consists of fused wavelet convolution (FWConv), dynamic hard thresholding (DHT), index-based soft filtering (ISF), and a classifier. Taking advantage of the wavelet transform, FWConv extracts multiscale features while learning wavelet scales and automatically selecting important wavelet bases; DHT dynamically eliminates noise-related components via point-wise hard thresholding; and, inspired by index-based filtering, ISF optimizes and selects optimal filters for diagnostic feature extraction. It is worth noting that SPINN may be readily applied to different deep learning networks simply by adding filterbank and feature modules in front. Experimental results demonstrate a significant diagnostic performance improvement over other explainable or denoising deep learning networks. The corresponding code is available at https://github.com/albertszg/DFAWnet.
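The point-wise hard thresholding behind the DHT module can be sketched as follows; here the threshold is a fixed constant for illustration, whereas DFAWNet learns it dynamically from the data:

```python
import numpy as np

def hard_threshold(coeffs, tau):
    """Point-wise hard thresholding: zero out coefficients whose magnitude is
    below tau, and keep the rest unchanged (unlike soft thresholding, which
    also shrinks the survivors toward zero)."""
    return np.where(np.abs(coeffs) >= tau, coeffs, 0.0)

# a sparse fault signature buried among small noise-like wavelet coefficients
coeffs = np.array([0.05, -3.0, 0.02, 1.5, -0.08, 0.9])
denoised = hard_threshold(coeffs, tau=0.5)
print(denoised)  # only the three large coefficients survive
```

Applied to wavelet coefficients, this suppresses the dense low-magnitude noise floor while preserving the few large coefficients that carry the fault signature, which is why hard rather than soft thresholding is attractive when fault features are weak but localized.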
Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model; especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations reflect only a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained on and applied across multiple domains, for example, a classification task based on images acquired with different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the central learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
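The federated learning process described in point 3) can be sketched as one aggregation round in the spirit of FedAvg, a standard federated algorithm: each site shares only its parameter vector, never its patient data, and the server takes a sample-size-weighted mean. The sites and sizes below are illustrative:

```python
import numpy as np

def fedavg(site_params, site_sizes):
    """One aggregation round: weight each site's locally trained parameters by
    its share of the total training samples. Only parameters cross site
    boundaries, not raw health records."""
    total = sum(site_sizes)
    return sum(n / total * p for p, n in zip(site_params, site_sizes))

# three hospitals with locally trained parameters and different data volumes
params = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
sizes = [100, 300, 100]
global_params = fedavg(params, sizes)
print(global_params)  # pulled toward the largest site's parameters
```

In a full system, the server would broadcast `global_params` back to the sites for further local training, iterating until convergence; the privacy benefit comes from the fact that raw records never leave the site.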
Funding (CTGAN fault-diagnosis study): supported by the National Key Research and Development Program of China (2021YFB4000505).
Funding: Supported by CONAHCYT (Consejo Nacional de Humanidades, Ciencias y Tecnologías).
Abstract: The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, to ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions these algorithms make. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges of deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
Funding: Supported by the National Natural Science Foundation of China (62373224, 62333013, U23A20327).
Abstract: Battery production is crucial for determining electrode quality, which in turn affects the performance of the manufactured battery. As battery production is complicated, with strongly coupled intermediate and control parameters, an efficient solution that can perform a reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required. This paper performs a detailed sensitivity analysis of key production terms in determining the properties of the manufactured battery electrode via advanced data-driven modelling. Specifically, an explainable neural network named generalized additive model with structured interaction (GAM-SI) is designed to predict two key battery properties, electrode mass loading and porosity, while the effects of four early production terms on the manufactured batteries are explained and analysed. The experimental results reveal that the proposed method is able to accurately predict battery electrode properties in the mixing and coating stages. In addition, the importance ratio ranking, global interpretation, and local interpretation of both the main effects and pairwise interactions can be effectively visualized by the designed neural network. Owing to its interpretability, the proposed GAM-SI can help engineers gain important insights into complicated production behaviour, further benefiting smart battery production.
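To illustrate why an additive structure with explicit interaction terms is interpretable, here is a toy additive model with one pairwise interaction. The class name, interface, and shape functions are invented for illustration and are not the paper's GAM-SI network; the point is only that each term's contribution to a prediction can be read off separately.

```python
import numpy as np

class ToyAdditiveModel:
    """Toy GAM-style model: prediction = sum of per-feature shape
    functions plus named pairwise interaction terms.

    Because the output is a plain sum, every term's contribution is
    directly inspectable -- the property that makes GAM-type models
    explainable by construction.
    """

    def __init__(self, f_main, f_pair):
        self.f_main = f_main  # list of callables, one per feature
        self.f_pair = f_pair  # dict {(i, j): callable} of interactions

    def contributions(self, x):
        """Return each term's contribution for a single input vector x."""
        parts = {f"x{i}": f(x[i]) for i, f in enumerate(self.f_main)}
        for (i, j), f in self.f_pair.items():
            parts[f"x{i}*x{j}"] = f(x[i], x[j])
        return parts

    def predict(self, x):
        return sum(self.contributions(x).values())
```

In the paper's setting the shape functions would be learned networks; the additive bookkeeping is what enables the reported importance ranking and local interpretation.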
Funding: Supported by the National Natural Science Foundation of China (Grant Numbers: 62372083, 62072074, 62076054, 62027827, 62002047), the Sichuan Provincial Science and Technology Innovation Platform and Talent Program (Grant Number: 2022JDJQ0039), the Sichuan Provincial Science and Technology Support Program (Grant Numbers: 2022YFQ0045, 2022YFS0220, 2021YFG0131, 2023YFS0020, 2023YFS0197, 2023YFG0148), and the CCF-Baidu Open Fund (Grant Number: 202312).
Abstract: In intelligent medical diagnosis, the trustworthiness, reliability, and interpretability of Artificial Intelligence (AI) are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet framework. The MSBE enhances the network's capability for feature extraction by allowing hyperparameters to configure the number of branches and modules. The CrossLinkNet framework, serving as a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by swapping in different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, a crucial aspect for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
Funding: Supported by the Deanship for Research Innovation, Ministry of Education in Saudi Arabia, through project number IFKSUDR-H122.
Abstract: In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning's "black box" nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network (CNN) models through transfer learning and data augmentation. Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization. The pivotal element of our methodology is the use of LIME, which demystifies the AI decision-making process, providing clinicians with clear, interpretable insights into the AI's reasoning. This unique combination of an optimized Deep Neural Network (DNN) with LIME not only elevates the precision in detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process. Our method, validated on the SARS-COV-2 CT-Scan dataset, demonstrates exceptional diagnostic accuracy, with performance metrics that reinforce its potential for seamless integration into modern healthcare systems. This approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decision-making in the ongoing battle against COVID-19.
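LIME's central idea, perturb around an instance, weight samples by proximity, and fit an interpretable linear surrogate, can be sketched from scratch. This is a simplified tabular version under assumed Gaussian perturbations and an exponential kernel; the actual LIME package adds superpixel-based perturbation for images, which is not reproduced here.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=500, width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x and
    return its per-feature coefficients as a local explanation."""
    rng = np.random.default_rng(seed)
    # 1) Perturb the instance (assumed Gaussian noise for this sketch).
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = np.array([predict_fn(p) for p in perturbed])
    # 2) Weight samples by proximity to x with an exponential kernel.
    d2 = ((perturbed - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)
    # 3) Weighted least squares: scale rows by sqrt(weight), then solve.
    A = np.hstack([np.ones((n_samples, 1)), perturbed])  # intercept column
    coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)
    return coef[1:]  # per-feature local weights (intercept dropped)
```

On a model that is exactly linear near `x`, the surrogate recovers the true local slopes, which is the sanity check used below.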
Abstract: A network intrusion detection system is critical for cyber security against illegitimate attacks. In terms of features, network traffic may include a variety of elements such as attack reference, attack type, a subcategory of attack, host information, malicious scripts, etc. In terms of the network itself, traffic may contain an imbalanced number of harmful attacks when compared to normal traffic. It is challenging to identify a specific attack due to complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, the transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is implemented to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach using three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustworthy model.
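The SMOTE step mentioned above synthesizes minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal sketch of that interpolation, written without the imbalanced-learn dependency the authors may have used:

```python
import numpy as np

def smote_like_samples(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority-class samples by interpolating from a
    random minority point towards one of its k nearest minority
    neighbours -- the core step of SMOTE, sketched from scratch."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself (d=0)
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside the region the minority data already occupies.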
Funding: This research is funded by the Researchers Supporting Project Number (RSPD2024R1027), King Saud University, Riyadh, Saudi Arabia.
Abstract: Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using magnetic resonance imaging (MRI). It focuses on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN, which stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020 dataset for its experiments. Integrated with a gradient-boosting algorithm, GliomaCNN has achieved an impressive accuracy of 99.1569%. The model's interpretability is ensured through SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM++), which provide insights into the critical decision-making regions for classification outcomes. Despite challenges in identifying tumors in images without visible signs, the model demonstrates remarkable performance in this critical medical application, offering a promising tool for accurate brain tumor diagnosis that paves the way for enhanced early detection and treatment of brain tumors.
Funding: The authors gratefully acknowledge financial support from the National Key Research and Development Program of China (No. 2019YFA0708300), the Strategic Cooperation Technology Projects of CNPC and CUPB (No. ZLZX2020-03), the National Science Fund for Distinguished Young Scholars (No. 52125401), and the Science Foundation of China University of Petroleum, Beijing (No. 2462022SZBH002).
Abstract: Accurate prediction of the rate of penetration (ROP) is significant for drilling optimization. While the intelligent ROP prediction model based on fully connected neural networks (FNN) outperforms traditional ROP equations and machine learning algorithms, its lack of interpretability undermines its credibility. This study proposes a novel interpretation and characterization method for the FNN ROP prediction model using the Rectified Linear Unit (ReLU) activation function. By leveraging the derivative of the ReLU function, the FNN forward calculation is transformed into vector operations. The FNN model is linearly characterized through further simplification, enabling its interpretation and analysis. The proposed method is applied in ROP prediction scenarios using drilling data from three vertical wells in the Tarim Oilfield. The results demonstrate that the FNN ROP prediction model with ReLU as the activation function performs exceptionally well. The relative activation frequency curve of hidden-layer neurons aids in analyzing the overfitting of the FNN ROP model and determining drilling data similarity. In well sections with similar drilling data, averaging the weight parameters enables linear characterization of the FNN ROP prediction model, leading to the establishment of a corresponding linear representation equation. Furthermore, the quantitative analysis of each feature's influence on ROP facilitates the proposal of drilling parameter optimization schemes for the current well section. The established linear characterization equation exhibits high precision, strong stability, and adaptability through application and validation across multiple well sections.
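The linear-characterization idea rests on the fact that, for a fixed pattern of active ReLU units, a fully connected network is exactly linear in its input. A one-hidden-layer sketch of extracting that local linear form (the paper works with deeper networks and averages weights across similar well sections, which is not reproduced here):

```python
import numpy as np

def local_linear_form(x, W1, b1, W2, b2):
    """For a one-hidden-layer ReLU network y = W2 . relu(W1 x + b1) + b2,
    the active/inactive pattern at input x fixes a local linear model
    y = w . x + b; return (w, b).

    The ReLU derivative is 1 for active units and 0 otherwise, which is
    what lets the forward pass collapse into a single linear map."""
    pre = W1 @ x + b1
    mask = (pre > 0).astype(float)      # ReLU derivative per hidden unit
    w = W2 @ (mask[:, None] * W1)       # effective input weights
    b = W2 @ (mask * b1) + b2           # effective intercept
    return w, b
```

The extracted `w` quantifies each input feature's local influence on the prediction, which is the kind of per-feature analysis the abstract describes for ROP drivers.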
Abstract: Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. The Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI (MAIPFE) is a framework for real-time health monitoring and disease detection that combines the Firefly Optimizer, a Recurrent Neural Network (RNN), Fuzzy C-Means (FCM), and explainable AI, and improves disease detection precision over existing methods. Comprehensive metrics show the model's superiority in real-time health analysis. The proposed framework outperformed existing models by 8.3% in disease detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, and AUC by 1.9%, while delay levels decreased by 9.4%. MAIPFE can transform healthcare with pre-emptive analysis, personalized health insights, and actionable recommendations; the research shows that this approach improves patient outcomes and healthcare efficiency in the real world.
Funding: Supported by the Marine S&T Fund of Laoshan Laboratory (Qingdao) (No. LSKJ202204302) and the National Natural Science Foundation of China (Nos. 42090044, 42376175, U2006211).
Abstract: Bigeye tuna Thunnus obesus is an important migratory species that forages deeply, and El Niño events highly influence its distribution in the eastern Pacific Ocean. While sea surface temperature is widely recognized as the main factor affecting bigeye tuna (BET) distribution during El Niño events, the roles of different types of El Niño and of subsurface oceanic signals, such as ocean heat content and mixed layer depth, remain unclear. We conducted a spatial-temporal analysis to investigate the relationships among BET distribution, El Niño events, and the underlying oceanic signals to address this knowledge gap. We used monthly purse seine fisheries data for BET in the eastern tropical Pacific Ocean (ETPO) from 1994 to 2012 and extracted central-Pacific El Niño (CPEN) indices based on the Niño 3 and Niño 4 indexes. Furthermore, we employed Explainable Artificial Intelligence (XAI) models to identify the main patterns and feature importance of the six environmental variables and used information flow analysis to determine the causality between the selected factors and BET distribution. Finally, we analyzed Argo datasets to calculate the vertical, horizontal, and zonal mean temperature differences between CPEN and normal years to clarify the differences in oceanic thermodynamic structure between the two types of years. Our findings reveal that BET distribution during CPEN years is mainly driven by advection feedback of warmer subsurface thermal signals and vertically warmer habitats in the CPEN domain area, especially in high-yield fishing areas. A high frequency of CPEN events will likely lead to a westward shift of fisheries centers.
Abstract: In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule-based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule-based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used in training machine learning models, and further used in generating SHAP values from these models. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule-based analysis. This new integrated dataset is used in re-training the machine learning models. The new SHAP values generated from these models help in validating the contributions of feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, the SHAP values are introduced along with association-rule-based feature integration as a comprehensive framework for understanding the contributions of feature sets in modelling the predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, and a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than those of the baseline models. The results of the proposed model help us identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule-based interestingness metric values and SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP in bridging the gap between machine learning predictions and the interpretability of those predictions, which is required for real-world medical applications.
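SHAP values approximate the Shapley values of cooperative game theory, which can be computed exactly for a handful of features by enumerating all coalitions. A brute-force sketch (illustrative only; the SHAP library uses far more efficient estimators for real models):

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n_features):
    """Exact Shapley values for a coalition-value function over
    n_features players, by direct enumeration of all coalitions.

    value_fn takes a set of feature indices and returns the model's
    value when exactly those features participate."""
    phi = np.zeros(n_features)
    players = range(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n_features):
            for S in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi
```

For a purely additive value function, each feature's Shapley value is exactly its own contribution, a useful correctness check; the efficiency property (attributions summing to the total) also holds by construction.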
Funding: Supported by the Natural Science Foundation of Sichuan Province, No. 2022NSFSC0819.
Abstract: Crohn's disease (CD) is a chronic inflammatory bowel disease of unknown origin that can cause significant disability and morbidity as it progresses. Due to the unique nature of CD, surgery is often necessary for many patients during their lifetime, and the incidence of postoperative complications is high, which can affect patient prognosis. Therefore, it is essential to identify and manage postoperative complications. Machine learning (ML) has become increasingly important in the medical field, and ML-based models can be used to predict postoperative complications of intestinal resection for CD. Recently, a valuable article titled "Predicting short-term major postoperative complications in intestinal resection for Crohn's disease: A machine learning-based study" was published by Wang et al. We appreciate the authors' creative work, and we are willing to share our views and discuss them with the authors.
Abstract: Despite low-income countries producing only a quarter of the per capita plastic waste of high-income countries, the related environmental, health, and economic costs of plastic could be up to 10 times higher than in wealthier countries, according to Who Pays for Plastic Pollution? Enabling Global Equity in the Plastic Value Chain, a report released by the World Wide Fund for Nature (WWF). The report highlighted significant inequalities within the global plastic value chain and explained how cost disparities exert a substantial impact on low- and middle-income countries.
Abstract: Read a conversation between a biology student and his friend. So, Simon, you're studying biology. Can you explain a little bit about it? Biology is about all the things in our world that are alive: plants, animals, as well as very small living things that we cannot see. Biology tries to explain why life is like it is. It sounds complicated. There are so many different kinds of plants and animals.
Abstract: MHC class I and II molecules bind peptides that are recognized as either self or foreign to the immune system via interaction with T-cell receptors. The T-cell receptor makes molecular contacts with the peptide and the MHC molecule in the region of the MHC peptide-binding cleft. The MHC class I and II molecules are highly polymorphic, which presumably allows for great diversity of antigen-binding sites across the population, leading to a species that is relatively fit to withstand foreign pathogens. In MHC class I molecules, this allelic variation predicts extensive variation in the sequences of peptides able to bind MHC class I molecules, and this is indeed the case.
Abstract: There are two airports in Beijing. Every time Guo Shuangchao goes on a business trip, he chooses to fly via the Beijing Daxing International Airport, nicknamed the Starfish of Beijing. That's because he is among its builders.
Funding: National Natural Science Foundation of China (Grant Nos. 51835009, 52105116) and China Postdoctoral Science Foundation (Grant Nos. 2021M692557, 2021TQ0263).
Abstract: Deep learning (DL) is progressively popular as a viable alternative to traditional signal processing (SP) based methods for fault diagnosis. However, the lack of explainability makes DL-based fault diagnosis methods difficult for industrial users to trust and understand. In addition, the extraction of weak fault features from signals with heavy noise is imperative in industrial applications. To address these limitations, inspired by the Filterbank-Feature-Decision methodology, we propose a new Signal Processing Informed Neural Network (SPINN) framework that embeds SP knowledge into the DL model. As one practical implementation of SPINN, a denoising fault-aware wavelet network (DFAWNet) is developed, which consists of fused wavelet convolution (FWConv), dynamic hard thresholding (DHT), index-based soft filtering (ISF), and a classifier. Taking advantage of the wavelet transform, FWConv extracts multiscale features while learning wavelet scales and selecting important wavelet bases automatically; DHT dynamically eliminates noise-related components via point-wise hard thresholding; inspired by index-based filtering, ISF optimizes and selects optimal filters for diagnostic feature extraction. It is worth noting that SPINN may be readily applied to different deep learning networks by simply adding filterbank and feature modules in front. Experimental results demonstrate a significant diagnostic performance improvement over other explainable or denoising deep learning networks. The corresponding code is available at https://github.com/albertszg/DFAWnet.
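The point-wise hard thresholding used by the DHT module, together with the soft variant common in wavelet denoising, amounts to a pair of one-line operations. A sketch with a fixed threshold, whereas DFAWNet learns its thresholds dynamically during training:

```python
import numpy as np

def hard_threshold(coeffs, tau):
    """Hard thresholding: keep coefficients whose magnitude exceeds tau,
    zero the rest -- the noise-suppression step DHT applies point-wise."""
    return np.where(np.abs(coeffs) > tau, coeffs, 0.0)

def soft_threshold(coeffs, tau):
    """Soft thresholding additionally shrinks surviving coefficients
    towards zero by tau, trading bias for smoother denoising."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)
```

Hard thresholding preserves the amplitude of strong (fault-related) components exactly, which is why it suits weak-fault extraction better than uniform soft shrinkage.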
Funding: This work was supported in part by the National Natural Science Foundation of China (82260360) and the Foreign Young Talent Program (QN2021033002L).
Abstract: Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision-making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model; especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained on and applied across multiple domains, for example, a classification task based on images acquired on different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the central learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
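The federated learning scheme described in point 3) hinges on a server-side aggregation step such as federated averaging (FedAvg), in which only model parameters, never raw patient records, are exchanged. A minimal sketch of one aggregation round (function name and interface are illustrative, not from the review):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """One round of federated averaging: the server combines client
    parameter vectors weighted by each client's local dataset size.
    Raw training data never leaves the clients; only these parameter
    arrays are communicated."""
    total = sum(client_sizes)
    return sum((n / total) * p for p, n in zip(client_params, client_sizes))
```

In a full system the server would broadcast the averaged parameters back to the clients and repeat; secure aggregation or differential privacy is often layered on top, which this sketch omits.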