Funding: Supported by CONAHCYT (Consejo Nacional de Humanidades, Ciencias y Tecnologías).
Abstract: The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, both to ensure that decisions are based on trustworthy algorithms and to ensure that healthcare workers understand the decisions these algorithms make. Such models can enhance the interpretability and explainability of AI-driven decision-making processes. Nevertheless, the intricate nature of the healthcare field requires sophisticated models for classifying cancer images. This research presents an in-depth investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges of deploying them in healthcare applications. In addition, the study proposes a novel framework for cancer image classification that combines XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI approach reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The paper also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure the trust and accountability in AI-based decisions that are essential for a safe and reliable smart healthcare environment.
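As a quick reference for the metrics reported above, the sketch below derives accuracy, precision, recall, F1, false discovery rate (FDR), false omission rate (FOR), and diagnostic odds ratio (DOR) from a binary confusion matrix; the counts are hypothetical placeholders, not values from the paper.

```python
# Standard classification metrics computed from hypothetical confusion-matrix counts.
def classification_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)                       # positive predictive value
    recall = tp / (tp + fn)                          # sensitivity
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    fdr = fp / (fp + tp)                             # false discovery rate = 1 - precision
    for_rate = fn / (fn + tn)                        # false omission rate
    dor = (tp / fn) / (fp / tn)                      # diagnostic odds ratio (a ratio, not a percentage)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, FDR=fdr, FOR=for_rate, DOR=dor)

if __name__ == "__main__":
    print(classification_metrics(tp=930, fp=95, fn=62, tn=913))  # placeholder counts
```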
Funding: This work was supported in part by the National Natural Science Foundation of China (82260360) and the Foreign Young Talent Program (QN2021033002L).
Abstract: Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges of AI-driven medical decision-making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model; especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired with different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the learning machine has access to the entire training dataset, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
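Point 3 describes federated learning as exchanging only parameter updates between sites. The sketch below is a minimal federated-averaging (FedAvg) loop for a linear model; the three sites, the synthetic data, and the plain gradient-descent local update are assumptions for illustration, not a production federated system.

```python
# Minimal FedAvg sketch: only weight vectors leave each site, never the raw data.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
sites = []
for _ in range(3):                                   # three hypothetical hospitals
    X = rng.normal(size=(200, 5))
    y = X @ true_w + 0.1 * rng.normal(size=200)      # each site holds private (X, y)
    sites.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """Plain gradient descent on the site's private data; returns updated weights only."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(5)
for _ in range(30):                                  # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    w_global = np.average(local_weights, axis=0, weights=sizes)  # server-side aggregation

print("aggregated weights:", np.round(w_global, 3))  # approaches true_w without sharing data
```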
Funding: This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 624-611-1443).
Abstract: In Internet of Things (IoT)-based systems, multi-level client requirements can be fulfilled by incorporating communication technologies into distributed homogeneous networks called ubiquitous computing systems (UCS). A UCS must support heterogeneity, management, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS, and energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to design effective intrusion detection systems (IDS) for securing UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is carried out, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, an energy consumption of 0.0891 mJ, a lifetime of 3,529 rounds, and an accuracy of 99.38%.
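The abstract names a DNN for intrusion classification but does not give its architecture here; the sketch below is a generic Keras DNN over tabular traffic features, with the layer sizes, feature count, and synthetic data chosen only for illustration.

```python
# Generic DNN intrusion classifier on tabular traffic features (architecture assumed).
import numpy as np
import tensorflow as tf

num_features, num_classes = 41, 5                    # KDD-style feature count; assumed
X = np.random.rand(1000, num_features).astype("float32")
y = np.random.randint(0, num_classes, size=1000)     # synthetic intrusion labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_features,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),   # one unit per intrusion class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```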
Abstract: Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that transform human physical monitoring through machine monitoring. HIoT systems find use in several applications such as smart cities, healthcare, and transportation. Moreover, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at an early stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied to generate useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized, where the QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. To report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches.
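The KELM component has a standard closed-form training rule, beta = (I/C + K)^-1 T. The sketch below implements that rule with an RBF kernel; the regularization C and kernel width gamma are fixed by hand here, whereas the paper selects such parameters with quantum-inspired differential evolution, and the random feature vectors merely stand in for RetinaNet outputs.

```python
# Minimal kernel extreme learning machine (KELM) sketch with an RBF kernel.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma                # fixed here; tuned by QIDE in the paper

    def fit(self, X, y):
        self.X = X
        T = np.eye(y.max() + 1)[y]                   # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

# Toy usage on random vectors standing in for RetinaNet image features.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 32)), rng.integers(0, 2, size=200)
clf = KELM().fit(X, y)
pred = clf.predict(X).argmax(axis=1)                 # class with the largest output
```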
Funding: Supported by the National Natural Science Foundation of China (62373224, 62333013, U23A20327).
Abstract: Battery production is crucial for determining electrode quality, which in turn affects the performance of the manufactured battery. Because battery production is a complicated process with strongly coupled intermediate and control parameters, an efficient solution that can perform a reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required. This paper performs a detailed sensitivity analysis of key production terms in determining the properties of the manufactured battery electrode via advanced data-driven modelling. Specifically, an explainable neural network named generalized additive model with structured interaction (GAM-SI) is designed to predict two key battery properties, electrode mass loading and porosity, while the effects of four early production terms on the manufactured batteries are explained and analysed. The experimental results reveal that the proposed method is able to accurately predict battery electrode properties in the mixing and coating stages. In addition, the importance ratio ranking, global interpretation, and local interpretation of both the main effects and the pairwise interactions can be effectively visualized by the designed neural network. Owing to its interpretability, the proposed GAM-SI can help engineers gain important insights into complicated production behaviour, further benefitting smart battery production.
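The paper's GAM-SI is a custom explainable network; as a rough stand-in, the sketch below fits an additive model with per-feature spline main effects plus one explicit pairwise interaction using the pygam library. The four features and the synthetic response are assumptions, not the battery production data.

```python
# Additive model with main effects and one structured pairwise interaction (pygam stand-in).
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))              # e.g., four early production terms (assumed)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + X[:, 2] * X[:, 3] + 0.05 * rng.normal(size=500)

# Spline main effects for every feature plus an explicit interaction between features 2 and 3.
gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + te(2, 3)).fit(X, y)
gam.summary()                               # per-term significance and effective degrees of freedom

# Partial dependence of the first main effect gives a global, visualizable explanation.
XX = gam.generate_X_grid(term=0)
pd0 = gam.partial_dependence(term=0, X=XX)
print("feature-0 effect range:", pd0.min().round(2), "to", pd0.max().round(2))
```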
Abstract: Medical Internet of Things (IoT) devices are becoming increasingly common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. The Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI (MAIPFE) is a framework for real-time health monitoring and disease detection that combines the Firefly Optimizer, a Recurrent Neural Network (RNN), Fuzzy C-Means (FCM) clustering, and explainable AI, improving disease detection precision over existing methods. By using AI for early disease detection, personalized health recommendations, and transparency, the framework aims to transform healthcare. Comprehensive metrics show the model's superiority in real-time health analysis: the proposed framework outperformed existing models by 8.3% in disease detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, and AUC by 1.9%, while delay levels decreased by 9.4%. MAIPFE has the potential to transform healthcare with preemptive analysis, personalized health insights, and actionable recommendations, and the research shows that this approach improves patient outcomes and healthcare efficiency in real-world settings.
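One of the named components, Fuzzy C-Means, has a compact standard formulation. The sketch below implements the usual alternating center/membership updates on synthetic features; it is not the MAIPFE pipeline itself, in which FCM is combined with an RNN and the Firefly Optimizer.

```python
# Standard Fuzzy C-Means: alternate between center and fuzzy-membership updates.
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return centers, U

X = np.random.default_rng(1).normal(size=(300, 4))    # stand-in multimodal features
centers, U = fuzzy_c_means(X)
print("hard labels of first 10 samples:", U.argmax(axis=1)[:10])
```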
Funding: Funded by the Centre for Advanced Modeling and Geospatial Information Systems (CAMGIS), Faculty of Engineering & IT, University of Technology Sydney.
Abstract: Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today's machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested nonlinear structure, these powerful models are termed "black boxes," as they provide no information about how they arrive at their conclusions. Such a lack of transparency may be unacceptable in many applications, such as the medical domain. A lot of emphasis has recently been placed on developing methods for visualizing, explaining, and interpreting deep learning models. The situation is substantially different in safety-critical applications, where the lack of transparency of machine learning techniques may be a limiting or even disqualifying issue. Significantly, when single bad decisions can endanger human life and health (e.g., autonomous driving, the medical domain) or result in significant monetary losses (e.g., algorithmic trading), depending on an unintelligible data-driven system may not be an option. This lack of transparency is one reason why machine learning adoption in sectors like health is more cautious than in the consumer, e-commerce, or entertainment industries. Explainability is the term introduced in recent years for frameworks that open up the black-box nature of AI models. In the medical domain especially, diagnosing a particular disease through opaque AI techniques is poorly suited to commercial use, and the explainable nature of these models will help them gain commercial acceptance for diagnostic decisions. This paper explores the different frameworks for the explainability of AI models in the medical field. The available frameworks are compared across several parameters, and their suitability for the medical field is also discussed.
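As an example of the model-agnostic frameworks such a comparison covers, the sketch below applies LIME to one prediction of a black-box classifier on tabular clinical-style data; scikit-learn's bundled breast-cancer dataset is used purely as a convenient stand-in.

```python
# LIME explaining a single prediction of a black-box classifier on tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Explain the model's prediction for one patient record using its top 5 features.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature_rule, weight in exp.as_list():
    print(f"{feature_rule:>40s}  {weight:+.3f}")
```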
Abstract: The abundant existence of both structured and unstructured data and the rapid advancement of statistical models have stressed the importance of introducing Explainable Artificial Intelligence (XAI), a process that explains how predictions are made in AI models. Biomedical mental disorders such as Autism Spectrum Disorder (ASD) need to be identified and classified at an early stage in order to reduce health crises. With this background, the current paper presents an XAI-based ASD diagnosis (XAI-ASD) model to detect and classify ASD precisely. The proposed XAI-ASD technique involves the design of a Bacterial Foraging Optimization (BFO)-based Feature Selection (FS) technique. In addition, a Whale Optimization Algorithm (WOA) with a Deep Belief Network (DBN) model is applied for the ASD classification process, in which the hyperparameters of the DBN model are optimally tuned with the help of WOA. In order to ensure a better ASD diagnostic outcome, a series of simulations was conducted on an ASD dataset.
Abstract: Artificial intelligence (AI) and machine learning (ML) help in making predictions and enable businesses to make key decisions that are beneficial to them. In the case of the online shopping business, it is very important to find trends in the data and gain knowledge of the features that help drive the success of the business. In this research, a dataset of 12,330 records of customers who visited an online shopping website over a period of one year has been analyzed. The main objective of this research is to find features that are relevant for correctly predicting the purchasing decisions made by visiting customers and to build ML models that can make correct predictions on unseen data in the future. The permutation feature importance approach has been used to obtain the importance of features with respect to the output variable (Revenue). Five ML models, i.e., decision tree (DT), random forest (RF), extra trees (ET) classifier, neural network (NN), and logistic regression (LR), have been used to make predictions on unseen data. The performance of each model is discussed in detail using performance measures such as accuracy score, precision, recall, F1 score, and the ROC-AUC curve. The RF model is the best among the five, with an accuracy score of 90% and an F1 score of 79%, followed by the extra trees classifier. Hence, our study indicates that the RF model can be used by online retailing businesses for predicting consumer buying behaviour. Our research also reveals the importance of page value as a key feature for capturing online purchasing trends. This may give a clue to future businesses, which can focus on this specific feature and find the key factors behind page value success, which in turn will help the online shopping business.
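The permutation-importance step can be reproduced with scikit-learn as sketched below; synthetic data stands in for the 12,330-session shopping dataset, and the feature names are placeholders rather than the real columns such as page value.

```python
# Permutation feature importance for a random forest on held-out data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the shopping-session dataset.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Shuffle each feature on the test split and measure the drop in F1 score.
result = permutation_importance(rf, X_test, y_test, scoring="f1", n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking[:5]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.4f} +/- {result.importances_std[idx]:.4f}")
```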
Abstract: With advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods on breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most used model-agnostic XAI technique in breast cancer research, applied to explaining model prediction results, the diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, SHAP primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and useful for explaining any model's predictions; it is also relatively easy to implement effectively and suits performant models, such as tree-based models, particularly well. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
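The pattern the review found most common, SHAP explaining a tree-based ensemble, looks roughly like the sketch below; scikit-learn's bundled breast-cancer dataset is used as a stand-in for the cohorts in the reviewed studies.

```python
# SHAP TreeExplainer on a tree-based ensemble trained on breast-cancer features.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)       # one value per feature per sample

# Global view: mean absolute SHAP value per feature (the usual summary ranking).
mean_abs = np.abs(shap_values).mean(axis=0)
top = np.argsort(mean_abs)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25s} {mean_abs[i]:.4f}")
```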
Abstract: Earthquakes pose significant risks globally, necessitating effective seismic risk mitigation strategies like earthquake early warning (EEW) systems. However, developing and optimizing such systems requires a thorough understanding of their internal procedures and coverage limitations. This study examines a deep-learning-based on-site EEW framework proposed by the authors, known as ROSERS (Real-time On-Site Estimation of Response Spectra), which constructs response spectra from early recorded ground motion waveforms at a target site. This study has three primary goals: (1) evaluating the effectiveness and applicability of ROSERS for subduction seismic sources; (2) providing a detailed interpretation of the trained deep neural network (DNN) and surrogate latent variables (LVs) implemented in ROSERS; and (3) analyzing the spatial efficacy of the framework to assess the coverage area of on-site EEW stations. ROSERS is retrained and tested on a dataset of around 11,000 unprocessed Japanese subduction ground motions. Goodness-of-fit testing shows that the ROSERS framework achieves good performance on this database, especially given the peculiarities of the subduction seismic environment. The trained DNN and LVs are then interpreted using game-theory-based Shapley additive explanations to establish cause-effect relationships. Finally, the study explores the coverage area of ROSERS by training a novel spatial regression model that estimates the LVs using a geographically weighted random forest and determining the radius of similarity. The results indicate that on-site predictions can be considered reliable within a 2–9 km radius, varying based on the magnitude and distance from the earthquake source. This information can assist end-users in strategically placing sensors, minimizing blind spots, and reducing errors from regional extrapolation.
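The geographically weighted random forest itself is not specified here; the sketch below is only a toy illustration of the geographic-weighting idea, fitting a random forest with Gaussian distance-based sample weights around a target station. The coordinates, features, and bandwidth are invented, and this is not the authors' implementation.

```python
# Toy geographic weighting: a locally weighted random forest around a target station.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(400, 2))           # station coordinates in km (synthetic)
X = rng.normal(size=(400, 6))                         # waveform-derived features (synthetic)
y = 2 * X[:, 0] + np.sin(coords[:, 0] / 10) + 0.1 * rng.normal(size=400)  # stand-in latent variable

def local_prediction(target_xy, target_features, bandwidth_km=10.0):
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth_km) ** 2)        # Gaussian spatial kernel
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X, y, sample_weight=w)                     # nearby stations dominate the fit
    return rf.predict(target_features.reshape(1, -1))[0]

print(local_prediction(np.array([50.0, 50.0]), rng.normal(size=6)))
```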
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0008703, The Competency Development Program for Industry Specialist).
Abstract: Intrusion detection involves identifying unauthorized network activity and recognizing whether data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic in order to deal with labeled and unlabeled data in industry. However, real-time training and classification of network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties in preventing attacks. Additionally, existing semi-supervised learning research often does not analyze experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning based on GANomaly, an image anomaly detection model, which dynamically trains on small subsets to address these issues. First, this research introduces a deep neural network (DNN)-based GANomaly for semi-supervised learning. Second, it presents the proposed adaptive algorithm for the DNN-based GANomaly, which is validated with four subsets of the adaptive dataset. Finally, this study demonstrates a monitoring system that incorporates three explainable techniques, Shapley additive explanations, reconstruction error visualization, and t-distributed stochastic neighbor embedding (t-SNE), to respond effectively to attacks on traffic data at each stage: feature engineering, semi-supervised learning, and adaptive learning. Compared to other single-class classification techniques, the proposed DNN-based GANomaly achieves higher scores on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) and UNSW-NB15 datasets, by 13% and 8% in F1 score and by 4.17% and 11.51% in accuracy, respectively. Furthermore, experiments with the proposed adaptive learning show mostly improved results over the initial values. An analysis and monitoring system based on the combination of the three explainable methodologies is also described. Thus, the proposed method has potential advantages for application in practical industry, and future research will explore handling unbalanced real-time datasets in various scenarios.
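The t-SNE part of the monitoring system can be sketched as below: embed per-sample latent codes in 2-D and color them by reconstruction error to make suspicious traffic visible. The latent codes and errors here are synthetic, not GANomaly outputs.

```python
# t-SNE of latent codes colored by reconstruction error (synthetic stand-in data).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
latent_normal = rng.normal(0, 1, size=(500, 32))
latent_attack = rng.normal(3, 1, size=(50, 32))          # shifted cluster standing in for attacks
latent = np.vstack([latent_normal, latent_attack])
recon_error = np.r_[rng.gamma(2, 0.1, 500), rng.gamma(6, 0.3, 50)]  # stand-in reconstruction errors

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latent)

plt.scatter(embedding[:, 0], embedding[:, 1], c=recon_error, cmap="viridis", s=10)
plt.colorbar(label="reconstruction error")
plt.title("t-SNE of latent codes colored by reconstruction error")
plt.show()
```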
Funding: This work was supported in part by Central Queensland University Research Grant RSH5345 and the Open Access Journal Scheme.
Abstract: With the increasing world population, the demand for food production has increased exponentially. An Internet of Things (IoT)-based smart agriculture system can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is its interpretability, which makes it easy for farmers to understand, trust, and use. The use of fuzzy logic makes the system customisable in terms of the types and number of sensors and the type of crop, and adaptable to any soil type and weather conditions. The proposed system can identify anomalous data due to security breaches or hardware malfunction using machine learning algorithms. To ensure the viability of the system, we conducted thorough research on agricultural factors such as soil type, soil moisture, soil temperature, plant life cycle, irrigation requirements, and water application timing for maize as our target crop. The experimental results show that our proposed system is interpretable, can detect anomalous data, and triggers actions accurately based on crop requirements.
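The kind of interpretable fuzzy rule such a system relies on can be written out directly, as in the sketch below: triangular memberships for soil moisture and temperature, a small rule base, and centroid defuzzification. All membership breakpoints and the maize-irrigation rule set are invented for illustration.

```python
# Hand-rolled fuzzy inference: "IF moisture is dry AND temperature is hot THEN irrigate long".
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def irrigation_minutes(moisture_pct, temp_c):
    dry = tri(moisture_pct, 0, 10, 30)
    wet = tri(moisture_pct, 20, 60, 100)
    hot = tri(temp_c, 25, 35, 45)
    mild = tri(temp_c, 5, 18, 30)

    # Rule strengths via min (AND); each rule points at an output fuzzy set.
    r_long = min(dry, hot)        # dry AND hot  -> long irrigation
    r_short = min(dry, mild)      # dry AND mild -> short irrigation
    r_none = wet                  # wet          -> no irrigation

    minutes = np.linspace(0, 60, 121)
    none_set = tri(minutes, -10, 0, 10)
    short_set = tri(minutes, 5, 20, 35)
    long_set = tri(minutes, 30, 50, 60)
    aggregated = np.maximum.reduce([
        np.minimum(r_none, none_set),
        np.minimum(r_short, short_set),
        np.minimum(r_long, long_set),
    ])
    return float((aggregated * minutes).sum() / (aggregated.sum() + 1e-9))  # centroid defuzzification

print(f"suggested irrigation: {irrigation_minutes(moisture_pct=12, temp_c=38):.1f} min")
```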
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R193), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; Taif University Researchers Supporting Project (TURSP-2020/26), Taif University, Taif, Saudi Arabia.
Abstract: Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another person's in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information, and these manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technology, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Therefore, automated DL-based solutions can be an efficient approach for detecting deepfakes. Although the "black-box" nature of DL systems allows for robust predictions, these systems cannot be completely trusted. Explainability is the first step toward achieving transparency, but the existing inability of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can solve this problem by interpreting the predictions of these systems. This work provides a comprehensive study of deepfake detection using DL methods and analyzes the results of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to ensure its validity and reliability. This study identifies real and deepfake images using different Convolutional Neural Network (CNN) models to obtain the best accuracy; it also explains which part of the image caused the model to make a specific classification using the LIME algorithm. The dataset was taken from Kaggle and includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces of 256 px generated by StyleGAN. For the experiments, Jupyter Notebook, TensorFlow, NumPy, and Pandas were used as software, and InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 were used as CNN models. All these models performed well: InceptionV3 achieved 99.68% accuracy, ResNet152V2 99.19%, and DenseNet201 99.81%. However, InceptionResNetV2 achieved the highest accuracy of 99.87%, which was later verified with the LIME algorithm for XAI, where the proposed method performed best. The obtained results and their dependability demonstrate its suitability for detecting deepfake images effectively.
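The LIME step can be sketched as below, highlighting which superpixels drive a CNN's decision on one face image. An ImageNet-pretrained InceptionResNetV2 stands in for the fine-tuned deepfake detector, and "face.jpg" is a hypothetical input path.

```python
# LIME image explanation for one prediction of an InceptionResNetV2 stand-in model.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = tf.keras.applications.InceptionResNetV2(weights="imagenet")
preprocess = tf.keras.applications.inception_resnet_v2.preprocess_input

img = tf.keras.utils.load_img("face.jpg", target_size=(299, 299))   # hypothetical path
img = np.array(img).astype("double")

def predict_fn(batch):
    # LIME passes perturbed images with raw 0-255 values; preprocess a copy for the model.
    return model.predict(preprocess(batch.copy()), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, predict_fn, top_labels=2,
                                         hide_color=0, num_samples=1000)
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=True, num_features=5,
                                            hide_rest=False)
plt.imshow(mark_boundaries(temp / 255.0, mask))   # superpixels supporting the decision
plt.axis("off")
plt.show()
```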
Funding: Funded by the Spanish Government Ministry of Economy and Competitiveness through the DEFINES Project, Grant No. (TIN2016-80172-R), and the Ministry of Science and Innovation through the AVisSA Project, Grant No. (PID2020-118345RBI00), and supported by the Spanish Ministry of Education and Vocational Training under an FPU Fellowship (FPU17/03276).
Abstract: The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating some challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with great responsibility: an incomplete or unbalanced set of training data or an improper interpretation of the models' outcomes could result in misleading conclusions that ultimately could become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature that analyzes studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The reduced number of retrieved papers suggests a lack of reporting of explicit rules and heuristics when testing the suitability and performance of AI algorithms.
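For intuition, the sketch below shows what an explainable, traceable algorithm-selection heuristic can look like in code; the rules are invented for illustration only and are not drawn from the nine collected papers.

```python
# Toy rule-based algorithm recommender: every suggestion carries a human-readable reason.
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    n_samples: int
    n_features: int
    mostly_categorical: bool
    needs_interpretability: bool

def recommend_algorithm(p: DatasetProfile) -> tuple[str, str]:
    """Return (algorithm, reason) so the recommendation stays tangible and traceable."""
    if p.needs_interpretability and p.n_samples < 10_000:
        return ("decision tree", "small data and interpretability required")
    if p.mostly_categorical:
        return ("gradient-boosted trees", "tree ensembles handle categorical-heavy tabular data well")
    if p.n_samples > 100_000 and p.n_features > 100:
        return ("neural network", "large sample size supports high-capacity models")
    return ("random forest", "robust default for medium-sized numeric tabular data")

algo, reason = recommend_algorithm(DatasetProfile(5_000, 20, False, True))
print(f"suggested: {algo} (because {reason})")
```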
Funding: Funded by the SAUDI ARAMCO Cybersecurity Chair at Imam Abdulrahman Bin Faisal University, Saudi Arabia.
Abstract: Android, one of the most widely used smartphone operating systems, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks can lead to the execution of unauthorized acts on victims' devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection, so up-to-date datasets must be utilized to implement proactive models for detecting malware events on Android mobile devices. Accordingly, this study employed ML algorithms to classify Android applications as malware or goodware using permission- and application programming interface (API)-based features from a recent dataset. To overcome the dataset imbalance issue, RandomOverSampler, synthetic minority oversampling with Tomek links (SMOTETomek), and RandomUnderSampler were applied to the dataset in different experiments. The results indicated that the extra trees (ET) classifier achieved the highest accuracy of 99.53% within an elapsed time of 0.0198 s in the experiment that utilized the RandomOverSampler technique. Furthermore, an explainable Artificial Intelligence (EAI) technique was applied to add transparency to the high-performance ET classifier. The global explanation using Shapley values indicated that the top three features contributing to the goodware class are Ljava/net/URL;->openConnection, Landroid/location/LocationManager;->getLastKnownLocation, and Vibrate, while the top three features contributing to the malware class are Receive_Boot_Completed, Get_Tasks, and Kill_Background_Processes. It is believed that the proposed model can contribute to proactively detecting malware events on Android devices, reducing the number of victims and increasing users' trust.
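The resampling-plus-classifier step can be sketched as below, balancing an imbalanced feature matrix with RandomOverSampler or SMOTETomek before training an extra-trees classifier; synthetic data stands in for the real permission/API feature matrix.

```python
# Imbalanced-learning resampling followed by an extra-trees classifier.
from imblearn.over_sampling import RandomOverSampler
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for permission/API features (10% "malware").
X, y = make_classification(n_samples=5000, n_features=50, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("RandomOverSampler", RandomOverSampler(random_state=0)),
                      ("SMOTETomek", SMOTETomek(random_state=0))]:
    X_bal, y_bal = sampler.fit_resample(X_train, y_train)      # balance only the training split
    clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
    print(name)
    print(classification_report(y_test, clf.predict(X_test), digits=4))
```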