Journal Articles
363 articles found
Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification
1
Authors: Amit Singhal, Krishna Kant Agrawal, Angeles Quezada, Adrian Rodriguez Aguiñaga, Samantha Jiménez, Satya Prakash Yadav. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 10, pp. 401-441 (41 pages)
The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments: it ensures that decisions rest on trustworthy algorithms and that healthcare workers understand the decisions those algorithms make. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The paper also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for a safe and reliable smart healthcare environment.
Keywords: explainable artificial intelligence; artificial intelligence; XAI; healthcare; cancer; image classification
Download PDF
CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation
2
Authors: Peng Xiao, Qi Zhong, Jingxue Chen, Dongyuan Wu, Zhen Qin, Erqiang Zhou. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 6, pp. 4703-4724 (22 pages)
In intelligent medical diagnosis, Artificial Intelligence (AI)'s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, the gold standard of cancer diagnosis, which relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, integrating interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet framework. The MSBE enhances the network's capability for feature extraction by allowing the adjustment of hyperparameters that configure the number of branches and modules. The CrossLinkNet framework, a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby improving feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by swapping in different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, a crucial aspect for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
Keywords: explainable AI; security; trustworthy; CrossLinkNet; whole-slide images
Download PDF
Explainable Neural Network for Sensitivity Analysis of Lithium-ion Battery Smart Production
3
Authors: Kailong Liu, Qiao Peng, Yuhang Liu, Naxin Cui, Chenghui Zhang. 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2024, Issue 9, pp. 1944-1953 (10 pages)
Battery production is crucial for determining electrode quality, which in turn affects the performance of the manufactured battery. Because battery production is complicated, with strongly coupled intermediate and control parameters, an efficient solution that can perform reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required. This paper performs a detailed sensitivity analysis of key production terms in determining the properties of the manufactured battery electrode via advanced data-driven modelling. Specifically, an explainable neural network named generalized additive model with structured interaction (GAM-SI) is designed to predict two key battery properties, electrode mass loading and porosity, while the effects of four early production terms on the manufactured batteries are explained and analysed. The experimental results reveal that the proposed method accurately predicts battery electrode properties in the mixing and coating stages. In addition, the importance-ratio ranking, global interpretation, and local interpretation of both the main effects and pairwise interactions can be effectively visualized by the designed neural network. Owing to its interpretability, the proposed GAM-SI can help engineers gain important insights into complicated production behaviour, further benefitting smart battery production.
Keywords: battery management; battery manufacturing; data science; explainable artificial intelligence; sensitivity analysis
Download PDF
Transparent and Accurate COVID-19 Diagnosis: Integrating Explainable AI with Advanced Deep Learning in CT Imaging
4
Authors: Mohammad Mehedi Hassan, Salman A. AlQahtani, Mabrook S. AlRakhami, Ahmed Zohier Elhendi. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 6, pp. 3101-3123 (23 pages)
In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning's "black box" nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network (CNN) models through transfer learning and data augmentation. Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization. The pivotal element of our methodology is the use of LIME, which demystifies the AI decision-making process, providing clinicians with clear, interpretable insights into the AI's reasoning. This unique combination of an optimized Deep Neural Network (DNN) with LIME not only elevates the precision of COVID-19 detection but also equips healthcare professionals with a deeper understanding of the diagnostic process. Our method, validated on the SARS-COV-2 CT-Scan dataset, demonstrates exceptional diagnostic accuracy, with performance metrics that reinforce its potential for seamless integration into modern healthcare systems. This innovative approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decision-making in the ongoing battle against COVID-19.
Keywords: explainable AI; COVID-19; CT images; deep learning
Download PDF
GliomaCNN: An Effective Lightweight CNN Model in Assessment of Classifying Brain Tumor from Magnetic Resonance Images Using Explainable AI
5
Authors: Md. Atiqur Rahman, Mustavi Ibne Masum, Khan Md Hasib, M. F. Mridha, Sultan Alfarhood, Mejdl Safran, Dunren Che. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 9, pp. 2425-2448 (24 pages)
Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using Magnetic Resonance Imaging (MRI), focusing on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN, which stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020 dataset for its experiments. Integrated with a gradient-boosting algorithm, GliomaCNN has achieved an impressive accuracy of 99.1569%. The model's interpretability is ensured through SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM++), which provide insights into the regions critical to classification outcomes. Despite challenges in identifying tumors in images without visible signs, the model demonstrates remarkable performance in this critical medical application, offering a promising tool for accurate brain tumor diagnosis and paving the way for enhanced early detection and treatment of brain tumors.
Keywords: deep learning; magnetic resonance imaging; convolutional neural networks; explainable AI; boosting algorithm; ablation
Download PDF
MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI
6
Authors: Moshe Dayan Sirapangi, S. Gopikrishnan. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 5, pp. 2229-2251 (23 pages)
Medical Internet of Things (IoT) devices are becoming increasingly common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, are limited in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE, the Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI, is a framework for real-time health monitoring and disease detection. Combining the Firefly Optimizer, a Recurrent Neural Network (RNN), Fuzzy C-Means (FCM) clustering, and explainable AI, it improves disease-detection precision over existing methods. Comprehensive metrics show the model's superiority in real-time health analysis: the proposed framework outperformed existing models by 8.3% in disease-detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease-prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, and AUC by 1.9%, while delay decreased by 9.4%. By enabling early disease detection, personalized health insights, and actionable recommendations with transparency, MAIPFE can revolutionize healthcare. The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.
Keywords: predictive health modeling; Medical Internet of Things; explainable artificial intelligence; personalized feature selection; pre-emptive analysis
Download PDF
A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework
7
Authors: Sujithra Sankar, S. Sathyalakshmi. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 5, pp. 3111-3138 (28 pages)
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions such as thyroid cancer has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule-based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule-based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used to train the machine learning models, which then generate SHAP values. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule-based analysis, and this integrated dataset is used to re-train the models. The new SHAP values generated from these models help validate the contributions of the feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule-based feature integration as a comprehensive framework for understanding the contributions of feature sets to the model's predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, and a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than the baseline models. The results help identify the dominant feature sets that impact thyroid cancer classification and prediction: the features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule-based interestingness-metric values and SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP in bridging the gap between machine learning predictions and the interpretability required for real-world medical applications.
Keywords: explainable AI; machine learning; clinical decision support systems; thyroid cancer; association-rule based framework; SHAP values; classification and prediction
Download PDF
Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine (Cited: 2)
8
Authors: Ahmad Chaddad, Qizong Lu, Jiali Li, Yousef Katib, Reem Kateb, Camel Tanougast, Ahmed Bouridane, Ahmed Abdulkadir. 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2023, Issue 4, pp. 859-876 (18 pages)
Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges of AI-driven medical decision-making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model; especially in highly non-linear, complex models tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained on and applied across multiple domains, for example, a classification task based on images acquired with different hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the central learning machine has access to the entire training dataset, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
Keywords: domain adaptation; explainable artificial intelligence; federated learning
Download PDF
XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly (Cited: 2)
9
Authors: Yuna Han, Hangbae Chang. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 7, pp. 221-237 (17 pages)
Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic in order to deal with labeled and unlabeled data in industry. However, training and classifying network traffic in real time pose challenges, as they can lead to degradation of the overall dataset and difficulty preventing attacks. Additionally, existing semi-supervised learning research may not analyze its experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning based on GANomaly, an image anomaly detection model, that dynamically trains small subsets to address these issues. First, this research introduces a deep neural network (DNN)-based GANomaly for semi-supervised learning. Second, it presents an adaptive algorithm for the DNN-based GANomaly, validated with four subsets of the adaptive dataset. Finally, the study demonstrates a monitoring system that incorporates three explainable techniques, Shapley additive explanations, reconstruction-error visualization, and t-distributed stochastic neighbor embedding, to respond effectively to attacks on traffic data at each stage: feature engineering, semi-supervised learning, and adaptive learning. Compared to other single-class classification techniques, the proposed DNN-based GANomaly achieves F1 scores 13% and 8% higher, and accuracy 4.17% and 11.51% higher, on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) and UNSW-NB15 datasets, respectively. Furthermore, experiments on the proposed adaptive learning reveal mostly improved results over the initial values. An analysis and monitoring system based on the combination of the three explainable methodologies is also described. The proposed method thus has potential advantages for practical industrial application, and future research will explore handling unbalanced real-time datasets in various scenarios.
Keywords: intrusion detection system (IDS); adaptive learning; semi-supervised learning; explainable artificial intelligence (XAI); monitoring system
Download PDF
Explainable AI Enabled Infant Mortality Prediction Based on Neonatal Sepsis (Cited: 1)
10
Authors: Priti Shaw, Kaustubh Pachpor, Suresh Sankaranarayanan. 《Computer Systems Science & Engineering》 SCIE EI, 2023, Issue 1, pp. 311-325 (15 pages)
Neonatal sepsis is the third most common cause of neonatal mortality and a serious public health problem, especially in developing countries. There has been research on human sepsis, vaccine response, and immunity, and machine learning methodologies have been used to predict infant mortality based on features such as age, birth weight, gestational weeks, and the Appearance, Pulse, Grimace, Activity and Respiration (APGAR) score. Yet sepsis, which is considered the most decisive condition for infant mortality, has never been considered in mortality prediction. We have therefore deployed a state-of-the-art deep neural model and performed a comparative analysis against machine learning models to predict mortality among infants based on the most important features, including sepsis. Also, to assess the prediction reliability of the deep neural model, which is a black box, explainable AI tools such as Dalex and LIME have been deployed. This helps non-technical personnel, such as doctors and practitioners, to understand the model and make decisions accordingly.
Keywords: APGAR; sepsis; explainable AI; machine learning
Download PDF
Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches: A Systematic Literature Review and Mapping Study
11
Authors: Francisco José García-Peñalvo, Andrea Vázquez-Ingelmo, Alicia García-Holgado. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2023, Issue 8, pp. 1023-1051 (29 pages)
The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating some challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with great responsibility: an incomplete or unbalanced set of training data, or an improper interpretation of the models' outcomes, could lead to misleading conclusions that ultimately become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature analyzing studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The small number of retrieved papers suggests a lack of explicit reporting of rules and heuristics when testing the suitability and performance of AI algorithms.
Keywords: SLR; systematic literature review; artificial intelligence; machine learning; algorithm recommendation; heuristics; explainability
Download PDF
Forecasting patient demand at urgent care clinics using explainable machine learning
12
Authors: Teo Susnjak, Paula Maddigan. 《CAAI Transactions on Intelligence Technology》 SCIE EI, 2023, Issue 3, pp. 712-733 (22 pages)
Urgent care clinics and emergency departments around the world periodically suffer from extended wait times beyond patient expectations due to surges in patient flows. The delays arising from inadequate staffing levels during these periods have been linked with adverse clinical outcomes. Previous research into forecasting patient flows has mostly used statistical techniques. These studies have also predominantly focused on short-term forecasts, which have limited practicality for the resourcing of medical personnel. This study joins an emerging body of work exploring the potential of machine learning algorithms to generate accurate forecasts of patient presentations. Our research uses datasets covering 10 years from two large urgent care clinics to develop long-term patient flow forecasts up to one quarter ahead using a range of state-of-the-art algorithms. A distinctive feature of this study is the use of eXplainable Artificial Intelligence (XAI) tools like SHAP and LIME that enable an in-depth analysis of the behaviour of the models, which would otherwise be uninterpretable. These analysis tools enabled us to explore the models' ability to adapt to the volatility in patient demand during the COVID-19 pandemic lockdowns and to identify the most impactful variables, yielding valuable insights into their performance. The results showed that a novel ensemble combining advanced univariate models like Prophet with gradient boosting delivered the most accurate and consistent solutions on average. This approach generated improvements in the range of 16%-30% over the existing in-house methods for estimating daily patient flows 90 days ahead.
Keywords: data mining; explainable AI; forecasting; machine learning; patient flow; urgent care clinics
Download PDF
Explainable Anomaly Detection Using Vision Transformer Based SVDD
13
Authors: Ji-Won Baek, Kyungyong Chung. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 3, pp. 6573-6586 (14 pages)
Explainable AI extracts a variety of patterns from data during learning and draws out hidden information through the discovery of semantic relationships, making it possible to offer an explainable basis for the decision-making behind inference results. By tracing the causality of risk factors that have an ambiguous association in big medical data, it is possible to increase the transparency and reliability of explainable decision-making that helps diagnose disease status, and to accurately predict disease risk for anomaly detection. A vision transformer for anomaly detection from image data performs classification through an MLP. Unfortunately, in the MLP a vector value depends on patch-sequence information, so the weights change; this causes results to differ as the weights change, a problem that must be solved. In addition, since a deep learning model is a black box, it is difficult to interpret the results the model produces, so an explainable method is needed for the regions where disease is present. To solve these problems, this study proposes explainable anomaly detection using a vision transformer-based Deep Support Vector Data Description (SVDD). The proposed method applies SVDD to resolve the MLP's dependence on patch-sequence information, in which results vary with weight changes. To make the model's results explainable, it visualizes normal regions through Grad-CAM. With health data, both medical staff and patients can easily identify abnormal regions, improving the reliability of the model and the trust of medical staff. For performance evaluation, normal/abnormal classification accuracy and F-measure are evaluated with and without SVDD applied; the classification results obtained by applying the proposed SVDD are excellent. The proposed method therefore improves the reliability of decision-making by identifying the location of disease and deriving consistent results.
Keywords: explainable AI; anomaly detection; vision transformer; SVDD; health care; deep learning; classification
Download PDF
Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features
14
Authors: Nida Aslam, Irfan Ullah Khan, Salma Abdulrahman Bader, Aisha Alansari, Lama Abdullah Alaqeel, Razan Mohammed Khormy, Zahra Abdultawab AlKubaish, Tariq Hussain. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 9, pp. 3167-3188 (22 pages)
One of the most widely used smartphone operating systems, Android, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks could lead to the execution of unauthorized acts on victims' devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection, so up-to-date datasets must be utilized to implement proactive models for detecting malware events on Android mobile devices. Accordingly, this study employed ML algorithms to classify Android applications as malware or goodware using permission- and application programming interface (API)-based features from a recent dataset. To overcome the dataset imbalance issue, RandomOverSampler, synthetic minority oversampling with Tomek links (SMOTETomek), and RandomUnderSampler were applied to the dataset in different experiments. The results indicated that the extra tree (ET) classifier achieved the highest accuracy of 99.53% within an elapsed time of 0.0198 s in the experiment that utilized the RandomOverSampler technique. Furthermore, an explainable artificial intelligence (EAI) technique has been applied to add transparency to the high-performance ET classifier. A global explanation using Shapley values indicated that the top three features contributing to the goodware class are Ljava/net/URL;->openConnection, Landroid/location/LocationManager;->getLastKnownLocation, and Vibrate, while the top three features contributing to the malware class are Receive_Boot_Completed, Get_Tasks, and Kill_Background_Processes. It is believed that the proposed model can contribute to proactively detecting malware events on Android devices to reduce the number of victims and increase users' trust.
Keywords: Android malware; machine learning; malware detection; explainable artificial intelligence; cyber security
Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System
15
Authors: Reda Salama, Mahmoud Ragab 《Computer Systems Science & Engineering》 SCIE EI 2023, Issue 9, pp. 2917-2932 (16 pages)
In an Internet of Things (IoT) based system, the multi-level client's requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management levels, and data transmission for distributed users. Simultaneously, security remains a major issue in the IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficacy and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimentation study is applied, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and accuracy of 99.38%.
Keywords: blockchain; internet of things; ubiquitous computing; explainable artificial intelligence; clustering; deep learning
Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments
16
Authors: Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee 《Computers, Materials & Continua》 SCIE EI 2023, Issue 8, pp. 1701-1719 (19 pages)
Cybersecurity increasingly relies on machine learning (ML) models to respond to and detect attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental for effectively managing deployed models, yet detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a drift suspicion metric that considers the explanatory aspects absent in current approaches. To validate the effectiveness of the proposed approach in a real-world scenario, we applied it to an environment designed to detect domain generation algorithms (DGAs). The dataset was obtained from various types of DGAs provided by NetLab. Based on this dataset composition, we sought to validate the proposed SHAP-based approach through drift scenarios that occur when a previously deployed model encounters new data types in an environment that detects real-world DGAs. The results revealed that more than 90% of the drift data exceeded the threshold, demonstrating the high reliability of the approach for detecting drift in an unsupervised environment. The proposed method distinguishes itself from existing approaches by employing explainable artificial intelligence (XAI)-based detection, which is not limited by model or system environment constraints, and can therefore be applied in critical domains that require adaptation to continuous change, such as cybersecurity. It combines SHAP-based XAI with a drift suspicion metric to improve drift detection reliability and is versatile enough for a wide range of real-time data analysis contexts beyond DGA detection environments. This study contributes to the ML community by addressing the critical issue of managing ML models in real-world cybersecurity settings, and the proposed method is anticipated to emerge as a new approach to protect essential systems and infrastructures from attacks.
Keywords: cybersecurity; machine learning (ML); model life-cycle management; drift detection; unsupervised environments; Shapley additive explanations (SHAP); explainability
Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection
17
Authors: Abdullah M. Basahel, Mohammad Yamin 《Computer Systems Science & Engineering》 SCIE EI 2023, Issue 7, pp. 209-224 (16 pages)
Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of human-centric IoT (HIoT) applications that transform human physical monitoring based on machine monitoring. HIoT systems find use in several applications such as smart cities, healthcare, and transportation. Besides, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at the initial stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The QIDEXAI-CDC model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied for the generation of useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized. The QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. To report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches.
Keywords: human-centric IoT; quantum computing; explainable artificial intelligence; healthcare; COVID-19 diagnosis
Explainable Heart Disease Prediction Using Ensemble-Quantum Machine Learning Approach
18
Authors: Ghada Abdulsalam, Souham Meshoul, Hadil Shaiba 《Intelligent Automation & Soft Computing》 SCIE 2023, Issue 4, pp. 761-779 (19 pages)
Nowadays, quantum machine learning is attracting great interest in a wide range of fields due to its potential superior performance and capabilities. The massive increase in computational capacity and speed of quantum computers can lead to a quantum leap in the healthcare field. Heart disease seriously threatens human health since it is the leading cause of death worldwide. Quantum machine learning methods can propose effective solutions to predict heart disease and aid in early diagnosis. In this study, an ensemble machine learning model based on quantum machine learning classifiers is proposed to predict the risk of heart disease. The proposed model is a bagging ensemble learning model in which a quantum support vector classifier is used as the base classifier. Furthermore, to make the model's outcomes more explainable, the importance of every single feature in the prediction is computed and visualized using the SHapley Additive exPlanations (SHAP) framework. In the experimental study, other stand-alone quantum classifiers, namely the Quantum Support Vector Classifier (QSVC), Quantum Neural Network (QNN), and Variational Quantum Classifier (VQC), are applied and compared with classical machine learning classifiers such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN). The experimental results on the Cleveland dataset reveal the superiority of QSVC compared to the others, which explains its use in the proposed bagging model. The Bagging-QSVC model outperforms all the aforementioned classifiers with an accuracy of 90.16% while showing great competitiveness compared to some state-of-the-art models using the same dataset. The results of the study indicate that quantum machine learning classifiers perform better than classical machine learning classifiers in predicting heart disease. In addition, the study reveals that the bagging ensemble learning technique is effective in improving the prediction accuracy of quantum classifiers.
Keywords: machine learning; ensemble learning; quantum machine learning; explainable machine learning; heart disease prediction
Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods
19
Authors: Wahidul Hasan Abir, Faria Rahman Khanam, Kazi Nabiul Alam, Myriam Hadjouni, Hela Elmannai, Sami Bourouis, Rajesh Dey, Mohammad Monirujjaman Khan 《Intelligent Automation & Soft Computing》 SCIE 2023, Issue 2, pp. 2151-2169 (19 pages)
Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another person in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Therefore, automated solutions employing DL can be an efficient approach for detecting deepfakes. Though the "black-box" nature of DL systems allows for robust predictions, they cannot be completely trusted. Explainability is the first step toward achieving transparency, but the existing incapacity of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can solve this problem by interpreting the predictions of these systems. This work provides a comprehensive study of deepfake detection using the DL method and analyzes the result of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to assure its validity and reliability. This study identifies real and deepfake images using different Convolutional Neural Network (CNN) models to get the best accuracy. It also explains which part of the image caused the model to make a specific classification using the LIME algorithm. To apply the CNN models, the dataset is taken from Kaggle, which includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces generated by StyleGAN at 256 px in size. For the experiments, Jupyter Notebook, TensorFlow, NumPy, and Pandas were used as software; InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 were used as CNN models. All these models performed well: InceptionV3 gained 99.68% accuracy, ResNet152V2 got an accuracy of 99.19%, and DenseNet201 performed with 99.81% accuracy. However, InceptionResNetV2 achieved the highest accuracy of 99.87%, which was verified later with the LIME algorithm for XAI, where the proposed method performed the best. The obtained results and dependability demonstrate its preference for detecting deepfake images effectively.
Keywords: deepfake; deep learning; explainable artificial intelligence (XAI); convolutional neural network (CNN); local interpretable model-agnostic explanations (LIME)
Explainable AI and Interpretable Model for Insurance Premium Prediction
20
Authors: Umar Abdulkadir Isa, Anil Fernando 《Journal on Artificial Intelligence》 2023, Issue 1, pp. 31-42 (12 pages)
Traditional machine learning metrics such as precision, recall, accuracy, MSE, and RMSE are quite useful for the current research work, but they are not enough for a practitioner to be confident about the performance and dependability of an innovative interpretable model in the 85%-92% range. We included in the prediction process machine learning models (MLMs) with greater than 99% accuracy and a sensitivity of 95%-98% on the database. The model must be explained to domain specialists: human-understandable explanations, in addition to those aimed at ML professionals, are needed to establish trust in our model's predictions. This is achieved by creating a model-independent, locally accurate explanation set, which makes the approach better than relying on the primary model alone. Human interaction with machine learning systems makes interpretability all the more crucial, for instance when supporting validation during model selection for insurance premium prediction. In this study, we proposed the use of the LIME and SHAP approaches to properly understand and explain a model developed using random forest regression to predict insurance premiums. The SHAP algorithm's drawback, as seen in our experiments, is its lengthy computing time: to produce the findings, it must compute every possible feature combination. In addition, the experiments conducted were intended to focus on the model's interpretability and explainability using LIME and SHAP, not on the insurance premium charge prediction itself. Three experiments were conducted: in experiment one, the random forest regression model was interpreted using LIME techniques; in experiment two, the SHAP technique was used to interpret the model for insurance premium prediction (IPP).
Keywords: LIME; SHAP; innovative; explainable AI; random forest; machine learning; insurance premium