The increasing use of cloud-based devices has reached the critical point of cybersecurity and unwanted network traffic.Cloud environments pose significant challenges in maintaining privacy and security.Global approach...The increasing use of cloud-based devices has reached the critical point of cybersecurity and unwanted network traffic.Cloud environments pose significant challenges in maintaining privacy and security.Global approaches,such as IDS,have been developed to tackle these issues.However,most conventional Intrusion Detection System(IDS)models struggle with unseen cyberattacks and complex high-dimensional data.In fact,this paper introduces the idea of a novel distributed explainable and heterogeneous transformer-based intrusion detection system,named INTRUMER,which offers balanced accuracy,reliability,and security in cloud settings bymultiplemodulesworking together within it.The traffic captured from cloud devices is first passed to the TC&TM module in which the Falcon Optimization Algorithm optimizes the feature selection process,and Naie Bayes algorithm performs the classification of features.The selected features are classified further and are forwarded to the Heterogeneous Attention Transformer(HAT)module.In this module,the contextual interactions of the network traffic are taken into account to classify them as normal or malicious traffic.The classified results are further analyzed by the Explainable Prevention Module(XPM)to ensure trustworthiness by providing interpretable decisions.With the explanations fromthe classifier,emergency alarms are transmitted to nearby IDSmodules,servers,and underlying cloud devices for the enhancement of preventive measures.Extensive experiments on benchmark IDS datasets CICIDS 2017,Honeypots,and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting network trafficwith high accuracy for different types.Theproposedmodel outperforms state-of-the-art approaches,obtaining better performance metrics:98.7%accuracy,97.5%precision,96.3%recall,and 97.8%F1-score.Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.展开更多
Machine fault diagnostics are essential for industrial operations,and advancements in machine learning have significantly advanced these systems by providing accurate predictions and expedited solutions.Machine learni...Machine fault diagnostics are essential for industrial operations,and advancements in machine learning have significantly advanced these systems by providing accurate predictions and expedited solutions.Machine learning models,especially those utilizing complex algorithms like deep learning,have demonstrated major potential in extracting important information fromlarge operational datasets.Despite their efficiency,machine learningmodels face challenges,making Explainable AI(XAI)crucial for improving their understandability and fine-tuning.The importance of feature contribution and selection using XAI in the diagnosis of machine faults is examined in this study.The technique is applied to evaluate different machine-learning algorithms.Extreme Gradient Boosting,Support Vector Machine,Gaussian Naive Bayes,and Random Forest classifiers are used alongside Logistic Regression(LR)as a baseline model because their efficacy and simplicity are evaluated thoroughly with empirical analysis.The XAI is used as a targeted feature selection technique to select among 29 features of the time and frequency domain.The XAI approach is lightweight,trained with only targeted features,and achieved similar results as the traditional approach.The accuracy without XAI on baseline LR is 79.57%,whereas the approach with XAI on LR is 80.28%.展开更多
The use of Explainable Artificial Intelligence(XAI)models becomes increasingly important for making decisions in smart healthcare environments.It is to make sure that decisions are based on trustworthy algorithms and ...The use of Explainable Artificial Intelligence(XAI)models becomes increasingly important for making decisions in smart healthcare environments.It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms.These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence.Nevertheless,the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images.This research presents an advanced investigation of XAI models to classify cancer images.It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications.In addition,this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques.The proposed model integrates several techniques,including end-to-end explainable evaluation,rule-based explanation,and useradaptive explanation.The proposed XAI reaches 97.72%accuracy,90.72%precision,93.72%recall,96.72%F1-score,9.55%FDR,9.66%FOR,and 91.18%DOR.It will discuss the potential applications of the proposed XAI models in the smart healthcare environment.It will help ensure trust and accountability in AI-based decisions,which is essential for achieving a safe and reliable smart healthcare environment.展开更多
Breast cancer is a type of cancer responsible for higher mortality rates among women.The cruelty of breast cancer always requires a promising approach for its earlier detection.In light of this,the proposed research l...Breast cancer is a type of cancer responsible for higher mortality rates among women.The cruelty of breast cancer always requires a promising approach for its earlier detection.In light of this,the proposed research leverages the representation ability of pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors.In addition,the above transfer learning model is modified in such a way that it will focus more on tumor cells in the input mammogram.Accordingly,the work proposed an EfficientNet-B0 having a Spatial Attention Layer with XGBoost(ESA-XGBNet)for binary classification of mammograms.For this,the work is trained,tested,and validated using original and augmented mammogram images of three public datasets namely CBIS-DDSM,INbreast,and MIAS databases.Maximumclassification accuracy of 97.585%(CBISDDSM),98.255%(INbreast),and 98.91%(MIAS)is obtained using the proposed ESA-XGBNet architecture as compared with the existing models.Furthermore,the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention Guided GradCAM-based Explainable AI technique.展开更多
Battery production is crucial for determining the quality of electrode,which in turn affects the manufactured battery performance.As battery production is complicated with strongly coupled intermediate and control par...Battery production is crucial for determining the quality of electrode,which in turn affects the manufactured battery performance.As battery production is complicated with strongly coupled intermediate and control parameters,an efficient solution that can perform a reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required.This paper performs detailed sensitivity analysis of key production terms on determining the properties of manufactured battery electrode via advanced data-driven modelling.To be specific,an explainable neural network named generalized additive model with structured interaction(GAM-SI)is designed to predict two key battery properties,including electrode mass loading and porosity,while the effects of four early production terms on manufactured batteries are explained and analysed.The experimental results reveal that the proposed method is able to accurately predict battery electrode properties in the mixing and coating stages.In addition,the importance ratio ranking,global interpretation and local interpretation of both the main effects and pairwise interactions can be effectively visualized by the designed neural network.Due to the merits of interpretability,the proposed GAM-SI can help engineers gain important insights for understanding complicated production behavior,further benefitting smart battery production.展开更多
In the intelligent medical diagnosis area,Artificial Intelligence(AI)’s trustworthiness,reliability,and interpretability are critical,especially in cancer diagnosis.Traditional neural networks,while excellent at proc...In the intelligent medical diagnosis area,Artificial Intelligence(AI)’s trustworthiness,reliability,and interpretability are critical,especially in cancer diagnosis.Traditional neural networks,while excellent at processing natural images,often lack interpretability and adaptability when processing high-resolution digital pathological images.This limitation is particularly evident in pathological diagnosis,which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease.Therefore,the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also a key to improving diagnostic accuracy and reliability.In this paper,we introduce an innovative Multi-Scale Multi-Branch Feature Encoder(MSBE)and present the design of the CrossLinkNet Framework.The MSBE enhances the network’s capability for feature extraction by allowing the adjustment of hyperparameters to configure the number of branches and modules.The CrossLinkNet Framework,serving as a versatile image segmentation network architecture,employs cross-layer encoder-decoder connections for multi-level feature fusion,thereby enhancing feature integration and segmentation accuracy.Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet,equipped with the MSBE encoder,not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by replacing different feature encoders.Crucially,CrossLinkNet emphasizes the interpretability of the AI model,a crucial aspect for medical professionals,providing an in-depth understanding of the model’s decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.展开更多
Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality.This study addresses the pressing issue of brain tumor classification using Mag...Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality.This study addresses the pressing issue of brain tumor classification using Magnetic resonance imaging(MRI).It focuses on distinguishing between Low-Grade Gliomas(LGG)and High-Grade Gliomas(HGG).LGGs are benign and typically manageable with surgical resection,while HGGs are malignant and more aggressive.The research introduces an innovative custom convolutional neural network(CNN)model,Glioma-CNN.GliomaCNN stands out as a lightweight CNN model compared to its predecessors.The research utilized the BraTS 2020 dataset for its experiments.Integrated with the gradient-boosting algorithm,GliomaCNN has achieved an impressive accuracy of 99.1569%.The model’s interpretability is ensured through SHapley Additive exPlanations(SHAP)and Gradient-weighted Class Activation Mapping(Grad-CAM++).They provide insights into critical decision-making regions for classification outcomes.Despite challenges in identifying tumors in images without visible signs,the model demonstrates remarkable performance in this critical medical application,offering a promising tool for accurate brain tumor diagnosis which paves the way for enhanced early detection and treatment of brain tumors.展开更多
In the current landscape of the COVID-19 pandemic,the utilization of deep learning in medical imaging,especially in chest computed tomography(CT)scan analysis for virus detection,has become increasingly significant.De...In the current landscape of the COVID-19 pandemic,the utilization of deep learning in medical imaging,especially in chest computed tomography(CT)scan analysis for virus detection,has become increasingly significant.Despite its potential,deep learning’s“black box”nature has been a major impediment to its broader acceptance in clinical environments,where transparency in decision-making is imperative.To bridge this gap,our research integrates Explainable AI(XAI)techniques,specifically the Local Interpretable Model-Agnostic Explanations(LIME)method,with advanced deep learning models.This integration forms a sophisticated and transparent framework for COVID-19 identification,enhancing the capability of standard Convolutional Neural Network(CNN)models through transfer learning and data augmentation.Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization.The pivotal element of our methodology is the use of LIME,which demystifies the AI decision-making process,providing clinicians with clear,interpretable insights into the AI’s reasoning.This unique combination of an optimized Deep Neural Network(DNN)with LIME not only elevates the precision in detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process.Our method,validated on the SARS-COV-2 CT-Scan dataset,demonstrates exceptional diagnostic accuracy,with performance metrics that reinforce its potential for seamless integration into modern healthcare systems.This innovative approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decisionmaking in the ongoing battle against COVID-19.展开更多
In the era of the Internet of Things(IoT),the proliferation of connected devices has raised security concerns,increasing the risk of intrusions into diverse systems.Despite the convenience and efficiency offered by Io...In the era of the Internet of Things(IoT),the proliferation of connected devices has raised security concerns,increasing the risk of intrusions into diverse systems.Despite the convenience and efficiency offered by IoT technology,the growing number of IoT devices escalates the likelihood of attacks,emphasizing the need for robust security tools to automatically detect and explain threats.This paper introduces a deep learning methodology for detecting and classifying distributed denial of service(DDoS)attacks,addressing a significant security concern within IoT environments.An effective procedure of deep transfer learning is applied to utilize deep learning backbones,which is then evaluated on two benchmarking datasets of DDoS attacks in terms of accuracy and time complexity.By leveraging several deep architectures,the study conducts thorough binary and multiclass experiments,each varying in the complexity of classifying attack types and demonstrating real-world scenarios.Additionally,this study employs an explainable artificial intelligence(XAI)AI technique to elucidate the contribution of extracted features in the process of attack detection.The experimental results demonstrate the effectiveness of the proposed method,achieving a recall of 99.39%by the XAI bidirectional long short-term memory(XAI-BiLSTM)model.展开更多
Medical Internet of Things(IoT)devices are becoming more and more common in healthcare.This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of mu...Medical Internet of Things(IoT)devices are becoming more and more common in healthcare.This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way.Existing methods,while useful,have limitations in predictive accuracy,delay,personalization,and user interpretability,requiring a more comprehensive and efficient approach to harness modern medical IoT devices.MAIPFE is a multimodal approach integrating pre-emptive analysis,personalized feature selection,and explainable AI for real-time health monitoring and disease detection.By using AI for early disease detection,personalized health recommendations,and transparency,healthcare will be transformed.The Multimodal Approach Integrating Pre-emptive Analysis,Personalized Feature Selection,and Explainable AI(MAIPFE)framework,which combines Firefly Optimizer,Recurrent Neural Network(RNN),Fuzzy C Means(FCM),and Explainable AI,improves disease detection precision over existing methods.Comprehensive metrics show the model’s superiority in real-time health analysis.The proposed framework outperformed existing models by 8.3%in disease detection classification precision,8.5%in accuracy,5.5%in recall,2.9%in specificity,4.5%in AUC(Area Under the Curve),and 4.9%in delay reduction.Disease prediction precision increased by 4.5%,accuracy by 3.9%,recall by 2.5%,specificity by 3.5%,AUC by 1.9%,and delay levels decreased by 9.4%.MAIPFE can revolutionize healthcare with preemptive analysis,personalized health insights,and actionable recommendations.The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.展开更多
Breast cancer stands as one of the world’s most perilous and formidable diseases,having recently surpassed lung cancer as the most prevalent cancer type.This disease arises when cells in the breast undergo unregulate...Breast cancer stands as one of the world’s most perilous and formidable diseases,having recently surpassed lung cancer as the most prevalent cancer type.This disease arises when cells in the breast undergo unregulated proliferation,resulting in the formation of a tumor that has the capacity to invade surrounding tissues.It is not confined to a specific gender;both men and women can be diagnosed with breast cancer,although it is more frequently observed in women.Early detection is pivotal in mitigating its mortality rate.The key to curbing its mortality lies in early detection.However,it is crucial to explain the black-box machine learning algorithms in this field to gain the trust of medical professionals and patients.In this study,we experimented with various machine learning models to predict breast cancer using the Wisconsin Breast Cancer Dataset(WBCD)dataset.We applied Random Forest,XGBoost,Support Vector Machine(SVM),Multi-Layer Perceptron(MLP),and Gradient Boost classifiers,with the Random Forest model outperforming the others.A comparison analysis between the two methods was done after performing hyperparameter tuning on each method.The analysis showed that the random forest performs better and yields the highest result with 99.46%accuracy.After performance evaluation,two Explainable Artificial Intelligence(XAI)methods,SHapley Additive exPlanations(SHAP)and Local Interpretable Model-Agnostic Explanations(LIME),have been utilized to explain the random forest machine learning model.展开更多
Aim: To diagnose COVID-19 more efficiently and more correctly, this study proposed a novel attention network forCOVID-19 (ANC). Methods: Two datasets were used in this study. An 18-way data augmentation was proposed t...Aim: To diagnose COVID-19 more efficiently and more correctly, this study proposed a novel attention network forCOVID-19 (ANC). Methods: Two datasets were used in this study. An 18-way data augmentation was proposed toavoid overfitting. Then, convolutional block attention module (CBAM) was integrated to our model, the structureof which is fine-tuned. Finally, Grad-CAM was used to provide an explainable diagnosis. Results: The accuracyof our ANC methods on two datasets are 96.32% ± 1.06%, and 96.00% ± 1.03%, respectively. Conclusions: Thisproposed ANC method is superior to 9 state-of-the-art approaches.展开更多
Artificial intelligence(AI)continues to transform data analysis in many domains.Progress in each domain is driven by a growing body of annotated data,increased computational resources,and technological innovations.In ...Artificial intelligence(AI)continues to transform data analysis in many domains.Progress in each domain is driven by a growing body of annotated data,increased computational resources,and technological innovations.In medicine,the sensitivity of the data,the complexity of the tasks,the potentially high stakes,and a requirement of accountability give rise to a particular set of challenges.In this review,we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.1)Explainable AI aims to produce a human-interpretable justification for each output.Such models increase confidence if the results appear plausible and match the clinicians expectations.However,the absence of a plausible explanation does not imply an inaccurate model.Especially in highly non-linear,complex models that are tuned to maximize accuracy,such interpretable representations only reflect a small portion of the justification.2)Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.For example,a classification task based on images acquired on different acquisition hardware.3)Federated learning enables learning large-scale models without exposing sensitive personal health information.Unlike centralized AI learning,where the centralized learning machine has access to the entire training data,the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates,not personal health data.This narrative review covers the basic concepts,highlights relevant corner-stone and stateof-the-art research in the field,and discusses perspectives.展开更多
Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission.Recent research has focused on using semi-supervised learning mechani...Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission.Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry.However,real-time training and classifying network traffic pose challenges,as they can lead to the degradation of the overall dataset and difficulties preventing attacks.Additionally,existing semi-supervised learning research might need to analyze the experimental results comprehensively.This paper proposes XA-GANomaly,a novel technique for explainable adaptive semi-supervised learning using GANomaly,an image anomalous detection model that dynamically trains small subsets to these issues.First,this research introduces a deep neural network(DNN)-based GANomaly for semi-supervised learning.Second,this paper presents the proposed adaptive algorithm for the DNN-based GANomaly,which is validated with four subsets of the adaptive dataset.Finally,this study demonstrates a monitoring system that incorporates three explainable techniques—Shapley additive explanations,reconstruction error visualization,and t-distributed stochastic neighbor embedding—to respond effectively to attacks on traffic data at each feature engineering stage,semi-supervised learning,and adaptive learning.Compared to other single-class classification techniques,the proposed DNN-based GANomaly achieves higher scores for Network Security Laboratory-Knowledge Discovery in Databases and UNSW-NB15 datasets at 13%and 8%of F1 scores and 4.17%and 11.51%for accuracy,respectively.Furthermore,experiments of the proposed adaptive learning reveal mostly improved results over the initial values.An analysis and monitoring system based on the combination of the three explainable methodologies is also described.Thus,the proposed method has the potential advantages to be applied in practical industry,and future research will explore handling unbalanced real-time datasets in various scenarios.展开更多
Autism Spectrum Disorder (ASD) is a developmental disorderwhose symptoms become noticeable in early years of the age though it canbe present in any age group. ASD is a mental disorder which affects the communicational...Autism Spectrum Disorder (ASD) is a developmental disorderwhose symptoms become noticeable in early years of the age though it canbe present in any age group. ASD is a mental disorder which affects the communicational, social and non-verbal behaviors. It cannot be cured completelybut can be reduced if detected early. An early diagnosis is hampered by thevariation and severity of ASD symptoms as well as having symptoms commonly seen in other mental disorders as well. Nowadays, with the emergenceof deep learning approaches in various fields, medical experts can be assistedin early diagnosis of ASD. It is very difficult for a practitioner to identifyand concentrate on the major feature’s leading to the accurate prediction ofthe ASD and this arises the need for having an automated approach. Also,presence of different symptoms of ASD traits amongst toddlers directs tothe creation of a large feature dataset. In this study, we propose a hybridapproach comprising of both, deep learning and Explainable Artificial Intelligence (XAI) to find the most contributing features for the early and preciseprediction of ASD. The proposed framework gives more accurate predictionalong with the recommendations of predicted results which will be a vital aidclinically for better and early prediction of ASD traits amongst toddlers.展开更多
Neonatal sepsis is the third most common cause of neonatal mortality and a serious public health problem,especially in developing countries.There have been researches on human sepsis,vaccine response,and immunity.Also...Neonatal sepsis is the third most common cause of neonatal mortality and a serious public health problem,especially in developing countries.There have been researches on human sepsis,vaccine response,and immunity.Also,machine learning methodologies were used for predicting infant mortality based on certain features like age,birth weight,gestational weeks,and Appearance,Pulse,Grimace,Activity and Respiration(APGAR)score.Sepsis,which is considered the most determining condition towards infant mortality,has never been considered for mortality prediction.So,we have deployed a deep neural model which is the state of art and performed a comparative analysis of machine learning models to predict the mortality among infants based on the most important features including sepsis.Also,for assessing the prediction reliability of deep neural model which is a black box,Explainable AI models like Dalex and Lime have been deployed.This would help any non-technical personnel like doctors and practitioners to understand and accordingly make decisions.展开更多
Teaching students the concepts behind computational thinking is a difficult task,often gated by the inherent difficulty of programming languages.In the classroom,teaching assistants may be required to interact with st...Teaching students the concepts behind computational thinking is a difficult task,often gated by the inherent difficulty of programming languages.In the classroom,teaching assistants may be required to interact with students to help them learn the material.Time spent in grading and offering feedback on assignments removes from this time to help students directly.As such,we offer a framework for developing an explainable artificial intelligence that performs automated analysis of student code while offering feedback and partial credit.The creation of this system is dependent on three core components.Those components are a knowledge base,a set of conditions to be analyzed,and a formal set of inference rules.In this paper,we develop such a system for our own language by employing π-calculus and Hoare logic.Our detailed system can also perform self-learning of rules.Given solution files,the system is able to extract the important aspects of the program and develop feedback that explicitly details the errors students make when they veer away from these aspects.The level of detail and expected precision can be easily modified through parameter tuning and variety in sample solutions.展开更多
Explainable AI extracts a variety of patterns of data in the learning process and draws hidden information through the discovery of semantic relationships.It is possible to offer the explainable basis of decision-maki...Explainable AI extracts a variety of patterns of data in the learning process and draws hidden information through the discovery of semantic relationships.It is possible to offer the explainable basis of decision-making for inference results.Through the causality of risk factors that have an ambiguous association in big medical data,it is possible to increase transparency and reliability of explainable decision-making that helps to diagnose disease status.In addition,the technique makes it possible to accurately predict disease risk for anomaly detection.Vision transformer for anomaly detection from image data makes classification through MLP.Unfortunately,in MLP,a vector value depends on patch sequence information,and thus a weight changes.This should solve the problem that there is a difference in the result value according to the change in the weight.In addition,since the deep learning model is a black box model,there is a problem that it is difficult to interpret the results determined by the model.Therefore,there is a need for an explainablemethod for the part where the disease exists.To solve the problem,this study proposes explainable anomaly detection using vision transformerbasedDeep Support Vector Data Description(SVDD).The proposed method applies the SVDD to solve the problem of MLP in which a result value is different depending on a weight change that is influenced by patch sequence information used in the vision transformer.In order to draw the explainability of model results,it visualizes normal parts through Grad-CAM.In health data,both medical staff and patients are able to identify abnormal parts easily.In addition,it is possible to improve the reliability of models and medical staff.For performance evaluation normal/abnormal classification accuracy and f-measure are evaluated,according to whether to apply SVDD.Evaluation Results The results of classification by applying the proposed SVDD are evaluated excellently.Therefore,through the proposed method,it is possible to improve the reliability of decision-making by identifying the location of the disease and deriving consistent results.展开更多
Urgent care clinics and emergency departments around the world periodically suffer from extended wait times beyond patient expectations due to surges in patient flows.The delays arising from inadequate staffing levels...Urgent care clinics and emergency departments around the world periodically suffer from extended wait times beyond patient expectations due to surges in patient flows.The delays arising from inadequate staffing levels during these periods have been linked with adverse clinical outcomes.Previous research into forecasting patient flows has mostly used statistical techniques.These studies have also predominately focussed on short‐term forecasts,which have limited practicality for the resourcing of medical personnel.This study joins an emerging body of work which seeks to explore the potential of machine learning algorithms to generate accurate forecasts of patient presentations.Our research uses datasets covering 10 years from two large urgent care clinics to develop long‐term patient flow forecasts up to one quarter ahead using a range of state‐of‐the‐art algo-rithms.A distinctive feature of this study is the use of eXplainable Artificial Intelligence(XAI)tools like Shapely and LIME that enable an in‐depth analysis of the behaviour of the models,which would otherwise be uninterpretable.These analysis tools enabled us to explore the ability of the models to adapt to the volatility in patient demand during the COVID‐19 pandemic lockdowns and to identify the most impactful variables,resulting in valuable insights into their performance.The results showed that a novel combination of advanced univariate models like Prophet as well as gradient boosting,into an ensemble,delivered the most accurate and consistent solutions on average.This approach generated improvements in the range of 16%-30%over the existing in‐house methods for esti-mating the daily patient flows 90 days ahead.展开更多
The abundant existence of both structured and unstructured data and rapid advancement of statistical models stressed the importance of introducing Explainable Artificial Intelligence(XAI),a process that explains how p...The abundant existence of both structured and unstructured data and rapid advancement of statistical models stressed the importance of introducing Explainable Artificial Intelligence(XAI),a process that explains how prediction is done in AI models.Biomedical mental disorder,i.e.,Autism Spectral Disorder(ASD)needs to be identified and classified at early stage itself in order to reduce health crisis.With this background,the current paper presents XAI-based ASD diagnosis(XAI-ASD)model to detect and classify ASD precisely.The proposed XAI-ASD technique involves the design of Bacterial Foraging Optimization(BFO)-based Feature Selection(FS)technique.In addition,Whale Optimization Algorithm(WOA)with Deep Belief Network(DBN)model is also applied for ASD classification process in which the hyperparameters of DBN model are optimally tuned with the help of WOA.In order to ensure a better ASD diagnostic outcome,a series of simulation process was conducted on ASD dataset.展开更多
文摘The increasing use of cloud-based devices has reached the critical point of cybersecurity and unwanted network traffic.Cloud environments pose significant challenges in maintaining privacy and security.Global approaches,such as IDS,have been developed to tackle these issues.However,most conventional Intrusion Detection System(IDS)models struggle with unseen cyberattacks and complex high-dimensional data.In fact,this paper introduces the idea of a novel distributed explainable and heterogeneous transformer-based intrusion detection system,named INTRUMER,which offers balanced accuracy,reliability,and security in cloud settings bymultiplemodulesworking together within it.The traffic captured from cloud devices is first passed to the TC&TM module in which the Falcon Optimization Algorithm optimizes the feature selection process,and Naie Bayes algorithm performs the classification of features.The selected features are classified further and are forwarded to the Heterogeneous Attention Transformer(HAT)module.In this module,the contextual interactions of the network traffic are taken into account to classify them as normal or malicious traffic.The classified results are further analyzed by the Explainable Prevention Module(XPM)to ensure trustworthiness by providing interpretable decisions.With the explanations fromthe classifier,emergency alarms are transmitted to nearby IDSmodules,servers,and underlying cloud devices for the enhancement of preventive measures.Extensive experiments on benchmark IDS datasets CICIDS 2017,Honeypots,and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting network trafficwith high accuracy for different types.Theproposedmodel outperforms state-of-the-art approaches,obtaining better performance metrics:98.7%accuracy,97.5%precision,96.3%recall,and 97.8%F1-score.Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
基金funded by Woosong University Academic Research 2024.
文摘Machine fault diagnostics are essential for industrial operations,and advancements in machine learning have significantly advanced these systems by providing accurate predictions and expedited solutions.Machine learning models,especially those utilizing complex algorithms like deep learning,have demonstrated major potential in extracting important information fromlarge operational datasets.Despite their efficiency,machine learningmodels face challenges,making Explainable AI(XAI)crucial for improving their understandability and fine-tuning.The importance of feature contribution and selection using XAI in the diagnosis of machine faults is examined in this study.The technique is applied to evaluate different machine-learning algorithms.Extreme Gradient Boosting,Support Vector Machine,Gaussian Naive Bayes,and Random Forest classifiers are used alongside Logistic Regression(LR)as a baseline model because their efficacy and simplicity are evaluated thoroughly with empirical analysis.The XAI is used as a targeted feature selection technique to select among 29 features of the time and frequency domain.The XAI approach is lightweight,trained with only targeted features,and achieved similar results as the traditional approach.The accuracy without XAI on baseline LR is 79.57%,whereas the approach with XAI on LR is 80.28%.
基金supported by theCONAHCYT(Consejo Nacional deHumanidades,Ciencias y Tecnologias).
文摘The use of Explainable Artificial Intelligence(XAI)models becomes increasingly important for making decisions in smart healthcare environments.It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms.These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence.Nevertheless,the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images.This research presents an advanced investigation of XAI models to classify cancer images.It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications.In addition,this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques.The proposed model integrates several techniques,including end-to-end explainable evaluation,rule-based explanation,and useradaptive explanation.The proposed XAI reaches 97.72%accuracy,90.72%precision,93.72%recall,96.72%F1-score,9.55%FDR,9.66%FOR,and 91.18%DOR.It will discuss the potential applications of the proposed XAI models in the smart healthcare environment.It will help ensure trust and accountability in AI-based decisions,which is essential for achieving a safe and reliable smart healthcare environment.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2024R432),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Breast cancer is a type of cancer responsible for higher mortality rates among women.The cruelty of breast cancer always requires a promising approach for its earlier detection.In light of this,the proposed research leverages the representation ability of pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors.In addition,the above transfer learning model is modified in such a way that it will focus more on tumor cells in the input mammogram.Accordingly,the work proposed an EfficientNet-B0 having a Spatial Attention Layer with XGBoost(ESA-XGBNet)for binary classification of mammograms.For this,the work is trained,tested,and validated using original and augmented mammogram images of three public datasets namely CBIS-DDSM,INbreast,and MIAS databases.Maximumclassification accuracy of 97.585%(CBISDDSM),98.255%(INbreast),and 98.91%(MIAS)is obtained using the proposed ESA-XGBNet architecture as compared with the existing models.Furthermore,the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention Guided GradCAM-based Explainable AI technique.
基金supported by the National Natural Science Foundation of China (62373224,62333013,U23A20327)。
文摘Battery production is crucial for determining the quality of electrode,which in turn affects the manufactured battery performance.As battery production is complicated with strongly coupled intermediate and control parameters,an efficient solution that can perform a reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required.This paper performs detailed sensitivity analysis of key production terms on determining the properties of manufactured battery electrode via advanced data-driven modelling.To be specific,an explainable neural network named generalized additive model with structured interaction(GAM-SI)is designed to predict two key battery properties,including electrode mass loading and porosity,while the effects of four early production terms on manufactured batteries are explained and analysed.The experimental results reveal that the proposed method is able to accurately predict battery electrode properties in the mixing and coating stages.In addition,the importance ratio ranking,global interpretation and local interpretation of both the main effects and pairwise interactions can be effectively visualized by the designed neural network.Due to the merits of interpretability,the proposed GAM-SI can help engineers gain important insights for understanding complicated production behavior,further benefitting smart battery production.
基金supported by the National Natural Science Foundation of China(Grant Numbers:62372083,62072074,62076054,62027827,62002047)the Sichuan Provincial Science and Technology Innovation Platform and Talent Program(Grant Number:2022JDJQ0039)+1 种基金the Sichuan Provincial Science and Technology Support Program(Grant Numbers:2022YFQ0045,2022YFS0220,2021YFG0131,2023YFS0020,2023YFS0197,2023YFG0148)the CCF-Baidu Open Fund(Grant Number:202312).
文摘In the intelligent medical diagnosis area,Artificial Intelligence(AI)’s trustworthiness,reliability,and interpretability are critical,especially in cancer diagnosis.Traditional neural networks,while excellent at processing natural images,often lack interpretability and adaptability when processing high-resolution digital pathological images.This limitation is particularly evident in pathological diagnosis,which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease.Therefore,the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also a key to improving diagnostic accuracy and reliability.In this paper,we introduce an innovative Multi-Scale Multi-Branch Feature Encoder(MSBE)and present the design of the CrossLinkNet Framework.The MSBE enhances the network’s capability for feature extraction by allowing the adjustment of hyperparameters to configure the number of branches and modules.The CrossLinkNet Framework,serving as a versatile image segmentation network architecture,employs cross-layer encoder-decoder connections for multi-level feature fusion,thereby enhancing feature integration and segmentation accuracy.Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet,equipped with the MSBE encoder,not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by replacing different feature encoders.Crucially,CrossLinkNet emphasizes the interpretability of the AI model,a crucial aspect for medical professionals,providing an in-depth understanding of the model’s decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
基金This research is funded by the Researchers Supporting Project Number(RSPD2024R1027),King Saud University,Riyadh,Saudi Arabia.
文摘Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality.This study addresses the pressing issue of brain tumor classification using Magnetic resonance imaging(MRI).It focuses on distinguishing between Low-Grade Gliomas(LGG)and High-Grade Gliomas(HGG).LGGs are benign and typically manageable with surgical resection,while HGGs are malignant and more aggressive.The research introduces an innovative custom convolutional neural network(CNN)model,Glioma-CNN.GliomaCNN stands out as a lightweight CNN model compared to its predecessors.The research utilized the BraTS 2020 dataset for its experiments.Integrated with the gradient-boosting algorithm,GliomaCNN has achieved an impressive accuracy of 99.1569%.The model’s interpretability is ensured through SHapley Additive exPlanations(SHAP)and Gradient-weighted Class Activation Mapping(Grad-CAM++).They provide insights into critical decision-making regions for classification outcomes.Despite challenges in identifying tumors in images without visible signs,the model demonstrates remarkable performance in this critical medical application,offering a promising tool for accurate brain tumor diagnosis which paves the way for enhanced early detection and treatment of brain tumors.
基金the Deanship for Research Innovation,Ministry of Education in Saudi Arabia,for funding this research work through project number IFKSUDR-H122.
文摘In the current landscape of the COVID-19 pandemic,the utilization of deep learning in medical imaging,especially in chest computed tomography(CT)scan analysis for virus detection,has become increasingly significant.Despite its potential,deep learning’s“black box”nature has been a major impediment to its broader acceptance in clinical environments,where transparency in decision-making is imperative.To bridge this gap,our research integrates Explainable AI(XAI)techniques,specifically the Local Interpretable Model-Agnostic Explanations(LIME)method,with advanced deep learning models.This integration forms a sophisticated and transparent framework for COVID-19 identification,enhancing the capability of standard Convolutional Neural Network(CNN)models through transfer learning and data augmentation.Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization.The pivotal element of our methodology is the use of LIME,which demystifies the AI decision-making process,providing clinicians with clear,interpretable insights into the AI’s reasoning.This unique combination of an optimized Deep Neural Network(DNN)with LIME not only elevates the precision in detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process.Our method,validated on the SARS-COV-2 CT-Scan dataset,demonstrates exceptional diagnostic accuracy,with performance metrics that reinforce its potential for seamless integration into modern healthcare systems.This innovative approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decisionmaking in the ongoing battle against COVID-19.
文摘In the era of the Internet of Things(IoT),the proliferation of connected devices has raised security concerns,increasing the risk of intrusions into diverse systems.Despite the convenience and efficiency offered by IoT technology,the growing number of IoT devices escalates the likelihood of attacks,emphasizing the need for robust security tools to automatically detect and explain threats.This paper introduces a deep learning methodology for detecting and classifying distributed denial of service(DDoS)attacks,addressing a significant security concern within IoT environments.An effective procedure of deep transfer learning is applied to utilize deep learning backbones,which is then evaluated on two benchmarking datasets of DDoS attacks in terms of accuracy and time complexity.By leveraging several deep architectures,the study conducts thorough binary and multiclass experiments,each varying in the complexity of classifying attack types and demonstrating real-world scenarios.Additionally,this study employs an explainable artificial intelligence(XAI)AI technique to elucidate the contribution of extracted features in the process of attack detection.The experimental results demonstrate the effectiveness of the proposed method,achieving a recall of 99.39%by the XAI bidirectional long short-term memory(XAI-BiLSTM)model.
文摘Medical Internet of Things(IoT)devices are becoming more and more common in healthcare.This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way.Existing methods,while useful,have limitations in predictive accuracy,delay,personalization,and user interpretability,requiring a more comprehensive and efficient approach to harness modern medical IoT devices.MAIPFE is a multimodal approach integrating pre-emptive analysis,personalized feature selection,and explainable AI for real-time health monitoring and disease detection.By using AI for early disease detection,personalized health recommendations,and transparency,healthcare will be transformed.The Multimodal Approach Integrating Pre-emptive Analysis,Personalized Feature Selection,and Explainable AI(MAIPFE)framework,which combines Firefly Optimizer,Recurrent Neural Network(RNN),Fuzzy C Means(FCM),and Explainable AI,improves disease detection precision over existing methods.Comprehensive metrics show the model’s superiority in real-time health analysis.The proposed framework outperformed existing models by 8.3%in disease detection classification precision,8.5%in accuracy,5.5%in recall,2.9%in specificity,4.5%in AUC(Area Under the Curve),and 4.9%in delay reduction.Disease prediction precision increased by 4.5%,accuracy by 3.9%,recall by 2.5%,specificity by 3.5%,AUC by 1.9%,and delay levels decreased by 9.4%.MAIPFE can revolutionize healthcare with preemptive analysis,personalized health insights,and actionable recommendations.The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.
基金supported by the Researchers Supporting Project(RSPD2024R846),King Saud University,Riyadh,Saudi Arabia.
文摘Breast cancer stands as one of the world’s most perilous and formidable diseases,having recently surpassed lung cancer as the most prevalent cancer type.This disease arises when cells in the breast undergo unregulated proliferation,resulting in the formation of a tumor that has the capacity to invade surrounding tissues.It is not confined to a specific gender;both men and women can be diagnosed with breast cancer,although it is more frequently observed in women.Early detection is pivotal in mitigating its mortality rate.The key to curbing its mortality lies in early detection.However,it is crucial to explain the black-box machine learning algorithms in this field to gain the trust of medical professionals and patients.In this study,we experimented with various machine learning models to predict breast cancer using the Wisconsin Breast Cancer Dataset(WBCD)dataset.We applied Random Forest,XGBoost,Support Vector Machine(SVM),Multi-Layer Perceptron(MLP),and Gradient Boost classifiers,with the Random Forest model outperforming the others.A comparison analysis between the two methods was done after performing hyperparameter tuning on each method.The analysis showed that the random forest performs better and yields the highest result with 99.46%accuracy.After performance evaluation,two Explainable Artificial Intelligence(XAI)methods,SHapley Additive exPlanations(SHAP)and Local Interpretable Model-Agnostic Explanations(LIME),have been utilized to explain the random forest machine learning model.
基金This paper is partially supported by Open Fund for Jiangsu Key Laboratory of Advanced Manufacturing Technology(HGAMTL-1703)Guangxi Key Laboratory of Trusted Software(kx201901)+5 种基金Fundamental Research Funds for the Central Universities(CDLS-2020-03)Key Laboratory of Child Development and Learning Science(Southeast University),Ministry of EducationRoyal Society International Exchanges Cost Share Award,UK(RP202G0230)Medical Research Council Confidence in Concept Award,UK(MC_PC_17171)Hope Foundation for Cancer Research,UK(RM60G0680)British Heart Foundation Accelerator Award,UK.
文摘Aim: To diagnose COVID-19 more efficiently and more correctly, this study proposed a novel attention network forCOVID-19 (ANC). Methods: Two datasets were used in this study. An 18-way data augmentation was proposed toavoid overfitting. Then, convolutional block attention module (CBAM) was integrated to our model, the structureof which is fine-tuned. Finally, Grad-CAM was used to provide an explainable diagnosis. Results: The accuracyof our ANC methods on two datasets are 96.32% ± 1.06%, and 96.00% ± 1.03%, respectively. Conclusions: Thisproposed ANC method is superior to 9 state-of-the-art approaches.
基金This work was supported in part by the National Natural Science Foundation of China(82260360)the Foreign Young Talent Program(QN2021033002L).
文摘Artificial intelligence(AI)continues to transform data analysis in many domains.Progress in each domain is driven by a growing body of annotated data,increased computational resources,and technological innovations.In medicine,the sensitivity of the data,the complexity of the tasks,the potentially high stakes,and a requirement of accountability give rise to a particular set of challenges.In this review,we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.1)Explainable AI aims to produce a human-interpretable justification for each output.Such models increase confidence if the results appear plausible and match the clinicians expectations.However,the absence of a plausible explanation does not imply an inaccurate model.Especially in highly non-linear,complex models that are tuned to maximize accuracy,such interpretable representations only reflect a small portion of the justification.2)Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.For example,a classification task based on images acquired on different acquisition hardware.3)Federated learning enables learning large-scale models without exposing sensitive personal health information.Unlike centralized AI learning,where the centralized learning machine has access to the entire training data,the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates,not personal health data.This narrative review covers the basic concepts,highlights relevant corner-stone and stateof-the-art research in the field,and discusses perspectives.
基金supported by Korea Institute for Advancement of Technology(KIAT)grant funded by theKoreaGovernment(MOTIE)(P0008703,The CompetencyDevelopment Program for Industry Specialist).
文摘Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission.Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry.However,real-time training and classifying network traffic pose challenges,as they can lead to the degradation of the overall dataset and difficulties preventing attacks.Additionally,existing semi-supervised learning research might need to analyze the experimental results comprehensively.This paper proposes XA-GANomaly,a novel technique for explainable adaptive semi-supervised learning using GANomaly,an image anomalous detection model that dynamically trains small subsets to these issues.First,this research introduces a deep neural network(DNN)-based GANomaly for semi-supervised learning.Second,this paper presents the proposed adaptive algorithm for the DNN-based GANomaly,which is validated with four subsets of the adaptive dataset.Finally,this study demonstrates a monitoring system that incorporates three explainable techniques—Shapley additive explanations,reconstruction error visualization,and t-distributed stochastic neighbor embedding—to respond effectively to attacks on traffic data at each feature engineering stage,semi-supervised learning,and adaptive learning.Compared to other single-class classification techniques,the proposed DNN-based GANomaly achieves higher scores for Network Security Laboratory-Knowledge Discovery in Databases and UNSW-NB15 datasets at 13%and 8%of F1 scores and 4.17%and 11.51%for accuracy,respectively.Furthermore,experiments of the proposed adaptive learning reveal mostly improved results over the initial values.An analysis and monitoring system based on the combination of the three explainable methodologies is also described.Thus,the proposed method has the potential advantages to be applied in practical industry,and future research will explore handling unbalanced real-time datasets in various scenarios.
基金Authors would like to thank for the support of Taif University Researchers Supporting Project Number(TURSP−2020/10),Taif University,Taif,Saudi Arabia.
文摘Autism Spectrum Disorder (ASD) is a developmental disorderwhose symptoms become noticeable in early years of the age though it canbe present in any age group. ASD is a mental disorder which affects the communicational, social and non-verbal behaviors. It cannot be cured completelybut can be reduced if detected early. An early diagnosis is hampered by thevariation and severity of ASD symptoms as well as having symptoms commonly seen in other mental disorders as well. Nowadays, with the emergenceof deep learning approaches in various fields, medical experts can be assistedin early diagnosis of ASD. It is very difficult for a practitioner to identifyand concentrate on the major feature’s leading to the accurate prediction ofthe ASD and this arises the need for having an automated approach. Also,presence of different symptoms of ASD traits amongst toddlers directs tothe creation of a large feature dataset. In this study, we propose a hybridapproach comprising of both, deep learning and Explainable Artificial Intelligence (XAI) to find the most contributing features for the early and preciseprediction of ASD. The proposed framework gives more accurate predictionalong with the recommendations of predicted results which will be a vital aidclinically for better and early prediction of ASD traits amongst toddlers.
文摘Neonatal sepsis is the third most common cause of neonatal mortality and a serious public health problem,especially in developing countries.There have been researches on human sepsis,vaccine response,and immunity.Also,machine learning methodologies were used for predicting infant mortality based on certain features like age,birth weight,gestational weeks,and Appearance,Pulse,Grimace,Activity and Respiration(APGAR)score.Sepsis,which is considered the most determining condition towards infant mortality,has never been considered for mortality prediction.So,we have deployed a deep neural model which is the state of art and performed a comparative analysis of machine learning models to predict the mortality among infants based on the most important features including sepsis.Also,for assessing the prediction reliability of deep neural model which is a black box,Explainable AI models like Dalex and Lime have been deployed.This would help any non-technical personnel like doctors and practitioners to understand and accordingly make decisions.
基金supported by general funding at IoT and Robotics Education Lab and FURI program at Arizona State University.
文摘Teaching students the concepts behind computational thinking is a difficult task,often gated by the inherent difficulty of programming languages.In the classroom,teaching assistants may be required to interact with students to help them learn the material.Time spent in grading and offering feedback on assignments removes from this time to help students directly.As such,we offer a framework for developing an explainable artificial intelligence that performs automated analysis of student code while offering feedback and partial credit.The creation of this system is dependent on three core components.Those components are a knowledge base,a set of conditions to be analyzed,and a formal set of inference rules.In this paper,we develop such a system for our own language by employing π-calculus and Hoare logic.Our detailed system can also perform self-learning of rules.Given solution files,the system is able to extract the important aspects of the program and develop feedback that explicitly details the errors students make when they veer away from these aspects.The level of detail and expected precision can be easily modified through parameter tuning and variety in sample solutions.
基金This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(2020R1A6A1A03040583).
文摘Explainable AI extracts a variety of patterns of data in the learning process and draws hidden information through the discovery of semantic relationships.It is possible to offer the explainable basis of decision-making for inference results.Through the causality of risk factors that have an ambiguous association in big medical data,it is possible to increase transparency and reliability of explainable decision-making that helps to diagnose disease status.In addition,the technique makes it possible to accurately predict disease risk for anomaly detection.Vision transformer for anomaly detection from image data makes classification through MLP.Unfortunately,in MLP,a vector value depends on patch sequence information,and thus a weight changes.This should solve the problem that there is a difference in the result value according to the change in the weight.In addition,since the deep learning model is a black box model,there is a problem that it is difficult to interpret the results determined by the model.Therefore,there is a need for an explainablemethod for the part where the disease exists.To solve the problem,this study proposes explainable anomaly detection using vision transformerbasedDeep Support Vector Data Description(SVDD).The proposed method applies the SVDD to solve the problem of MLP in which a result value is different depending on a weight change that is influenced by patch sequence information used in the vision transformer.In order to draw the explainability of model results,it visualizes normal parts through Grad-CAM.In health data,both medical staff and patients are able to identify abnormal parts easily.In addition,it is possible to improve the reliability of models and medical staff.For performance evaluation normal/abnormal classification accuracy and f-measure are evaluated,according to whether to apply SVDD.Evaluation Results The results of classification by applying the proposed SVDD are evaluated excellently.Therefore,through the proposed method,it is possible to improve the reliability of decision-making by identifying the location of the disease and deriving consistent results.
文摘Urgent care clinics and emergency departments around the world periodically suffer from extended wait times beyond patient expectations due to surges in patient flows.The delays arising from inadequate staffing levels during these periods have been linked with adverse clinical outcomes.Previous research into forecasting patient flows has mostly used statistical techniques.These studies have also predominately focussed on short‐term forecasts,which have limited practicality for the resourcing of medical personnel.This study joins an emerging body of work which seeks to explore the potential of machine learning algorithms to generate accurate forecasts of patient presentations.Our research uses datasets covering 10 years from two large urgent care clinics to develop long‐term patient flow forecasts up to one quarter ahead using a range of state‐of‐the‐art algo-rithms.A distinctive feature of this study is the use of eXplainable Artificial Intelligence(XAI)tools like Shapely and LIME that enable an in‐depth analysis of the behaviour of the models,which would otherwise be uninterpretable.These analysis tools enabled us to explore the ability of the models to adapt to the volatility in patient demand during the COVID‐19 pandemic lockdowns and to identify the most impactful variables,resulting in valuable insights into their performance.The results showed that a novel combination of advanced univariate models like Prophet as well as gradient boosting,into an ensemble,delivered the most accurate and consistent solutions on average.This approach generated improvements in the range of 16%-30%over the existing in‐house methods for esti-mating the daily patient flows 90 days ahead.
文摘The abundant existence of both structured and unstructured data and rapid advancement of statistical models stressed the importance of introducing Explainable Artificial Intelligence(XAI),a process that explains how prediction is done in AI models.Biomedical mental disorder,i.e.,Autism Spectral Disorder(ASD)needs to be identified and classified at early stage itself in order to reduce health crisis.With this background,the current paper presents XAI-based ASD diagnosis(XAI-ASD)model to detect and classify ASD precisely.The proposed XAI-ASD technique involves the design of Bacterial Foraging Optimization(BFO)-based Feature Selection(FS)technique.In addition,Whale Optimization Algorithm(WOA)with Deep Belief Network(DBN)model is also applied for ASD classification process in which the hyperparameters of DBN model are optimally tuned with the help of WOA.In order to ensure a better ASD diagnostic outcome,a series of simulation process was conducted on ASD dataset.