Journal Articles
24 articles found
1. Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System
Authors: Reda Salama, Mahmoud Ragab. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 2917-2932 (16 pages).
In the Internet of Things (IoT) based system, the multi-level client's requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management level, and data transmission for distributed users. Simultaneously, security remains a major issue in the IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. The recent developments of explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficacy and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is applied, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
Keywords: blockchain, internet of things, ubiquitous computing, explainable artificial intelligence, clustering, deep learning
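As a rough illustration of the intrusion-classification stage described in this abstract, the sketch below trains a small dense network on synthetic traffic features. The EADSO clustering and blockchain layers are omitted, and the data, feature count, and layer sizes are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of a DNN intrusion classifier; all data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))           # stand-in for IoT traffic features
y = (X[:, 0] + X[:, 3] > 1).astype(int)   # stand-in intrusion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
dnn.fit(scaler.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, dnn.predict(scaler.transform(X_te))))
```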
2. Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection
Authors: Abdullah M. Basahel, Mohammad Yamin. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 7, pp. 209-224 (16 pages).
Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that transform human physical monitoring based on machine monitoring. HIoT systems find use in several applications such as smart cities, healthcare, and transportation. Besides, the HIoT system and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at the initial stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The QIDEXAI-CDC model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied for the generation of useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized. The QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. In order to report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches.
Keywords: human-centric IoT, quantum computing, explainable artificial intelligence, healthcare, COVID-19 diagnosis
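The KELM classifier at the core of QIDEXAI-CDC admits a compact closed form, beta = (I/C + K)^-1 T, where K is the kernel matrix, C the regularization constant, and T the one-hot targets. Below is a minimal sketch of that standard formulation with an RBF kernel; the QIDE tuning loop is omitted, and the data, C, and gamma values are assumptions.

```python
# Sketch of a kernel extreme learning machine (KELM); synthetic data only.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=10.0, gamma=0.1):
    K = rbf_kernel(X, X, gamma)
    # Closed-form output weights: beta = (I/C + K)^-1 T
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, beta, X_new, gamma=0.1):
    return rbf_kernel(X_new, X_train, gamma) @ beta

X = np.random.rand(100, 5)                    # stand-in image feature vectors
T = np.eye(2)[np.random.randint(0, 2, 100)]   # one-hot COVID / non-COVID labels
beta = kelm_fit(X, T)
print(kelm_predict(X, beta, X[:3]).argmax(1))  # predicted classes
```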
3. Modeling of Explainable Artificial Intelligence for Biomedical Mental Disorder Diagnosis
Authors: Anwer Mustafa Hilal, Imène ISSAOUI, Marwa Obayya, Fahd N. Al-Wesabi, Nadhem NEMRI, Manar Ahmed Hamza, Mesfer Al Duhayyim, Abu Sarwar Zamani. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 3853-3867 (15 pages).
The abundant existence of both structured and unstructured data and the rapid advancement of statistical models have stressed the importance of introducing Explainable Artificial Intelligence (XAI), a process that explains how prediction is done in AI models. A biomedical mental disorder, i.e., Autism Spectrum Disorder (ASD), needs to be identified and classified at an early stage in order to reduce health crises. With this background, the current paper presents an XAI-based ASD diagnosis (XAI-ASD) model to detect and classify ASD precisely. The proposed XAI-ASD technique involves the design of a Bacterial Foraging Optimization (BFO)-based Feature Selection (FS) technique. In addition, a Whale Optimization Algorithm (WOA) with Deep Belief Network (DBN) model is applied for the ASD classification process, in which the hyperparameters of the DBN model are optimally tuned with the help of WOA. In order to ensure a better ASD diagnostic outcome, a series of simulations was conducted on an ASD dataset.
Keywords: explainable artificial intelligence, autism spectrum disorder, feature selection, data classification, machine learning, metaheuristics
4. Explainable Artificial Intelligence Solution for Online Retail
Authors: Kumail Javaid, Ayesha Siddiqa, Syed Abbas Zilqurnain Naqvi, Allah Ditta, Muhammad Ahsan, M. A. Khan, Tariq Mahmood, Muhammad Adnan Khan. Computers, Materials & Continua (SCIE, EI), 2022, No. 6, pp. 4425-4442 (18 pages).
Artificial intelligence (AI) and machine learning (ML) help in making predictions and enable businesses to make key decisions that are beneficial for them. In the case of the online shopping business, it is very important to find trends in the data and gain knowledge of features that help drive the success of the business. In this research, a dataset of 12,330 records of customers who visited an online shopping website over a period of one year has been analyzed. The main objective of this research is to find features that are relevant for correctly predicting the purchasing decisions made by visiting customers and to build ML models that could make correct predictions on unseen data in the future. The permutation feature importance approach has been used to get the importance of features with respect to the output variable (Revenue). Five ML models, i.e., decision tree (DT), random forest (RF), extra tree (ET) classifier, neural networks (NN), and logistic regression (LR), have been used to make predictions on unseen data. The performance of each model is discussed in detail using performance measurement techniques such as accuracy score, precision, recall, F1 score, and the ROC-AUC curve. The RF model is the best model among all five, chosen based on an accuracy score of 90% and an F1 score of 79%, followed by the extra tree classifier. Hence, our study indicates that the RF model can be used by online retailing businesses for predicting consumer buying behaviour. Our research also reveals the importance of page value as a key feature for capturing online purchasing trends. This may give a clue to future businesses, which can focus on this specific feature and find the key factors behind page value success, which in turn will help the online shopping business.
Keywords: explainable artificial intelligence, online retail, neural network, random forest, regression
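The abstract's two core steps, permutation feature importance against the Revenue target followed by a random-forest classifier, can be sketched with scikit-learn. The synthetic data below merely stands in for the 12,330-record online-shoppers dataset.

```python
# Sketch of permutation importance + random forest on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
# Rank features by mean accuracy drop when shuffled (top 3 shown).
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```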
5. Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Authors: Ahmad Chaddad, Qizong Lu, Jiali Li, Yousef Katib, Reem Kateb, Camel Tanougast, Ahmed Bouridane, Ahmed Abdulkadir. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 4, pp. 859-876 (18 pages).
Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and a requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired on different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the centralized learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
Keywords: domain adaptation, explainable artificial intelligence, federated learning
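The federated principle summarized above, sites exchanging parameter updates rather than patient records, reduces in its simplest form to a FedAvg-style weighted mean of local parameters. A toy sketch, with invented per-site parameter vectors and sample counts:

```python
# One aggregation round of FedAvg-style federated learning; no raw data moves.
import numpy as np

def fedavg(site_params, site_sizes):
    """Weighted average of per-site model parameters (one global update)."""
    weights = np.asarray(site_sizes) / np.sum(site_sizes)
    return sum(w * p for w, p in zip(weights, site_params))

# Three hospitals with locally trained parameters of a shared model (assumed).
params = [np.array([0.9, -0.2]), np.array([1.1, -0.1]), np.array([1.0, -0.3])]
sizes = [500, 1500, 1000]   # local training-set sizes (assumed)
print("global update:", fedavg(params, sizes))
```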
6. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities (Cited by 3)
Authors: R. Machlev, L. Heistrene, M. Perl, K. Y. Levy, J. Belikov, S. Mannor, Y. Levron. Energy and AI, 2022, No. 3, pp. 193-205 (13 pages).
Despite widespread adoption and outstanding performance, machine learning models are considered as "black boxes", since it is very difficult to understand how such models operate in practice. Therefore, in the power systems field, which requires a high level of accountability, it is hard for experts to trust and justify decisions and recommendations made by these models. Meanwhile, in the last couple of years, Explainable Artificial Intelligence (XAI) techniques have been developed to improve the explainability of machine learning models, such that their output can be better understood. In this light, it is the purpose of this paper to highlight the potential of using XAI for power system applications. We first present the common challenges of using XAI in such applications, then review and analyze the recent works on this topic and the on-going trends in the research community. We hope that this paper will trigger fruitful discussions and encourage further research on this important emerging topic.
Keywords: power, energy, neural network, deep learning, explainable artificial intelligence, XAI
7. Explainable artificial intelligence and interpretable machine learning for agricultural data analysis (Cited by 1)
Authors: Masahiro Ryo. Artificial Intelligence in Agriculture, 2022, No. 1, pp. 257-265 (9 pages).
Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data or the reasons behind predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and associated toolkits, interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, along with soil, climate, and management variables. Data analysis discovered that no-tillage management can increase maize crop yield where yield in conventional tillage is <5000 kg/ha and the maximum temperature is higher than 32°. These methods are useful to answer (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what the reasons are underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that the goodness of model fit is over-evaluated with model performance measures in current practice, while these questions remain unanswered. XAI and interpretable machine learning can enhance trust and explainability in AI.
Keywords: interpretable machine learning, explainable artificial intelligence, agriculture, crop yield, no-tillage, XAI
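Question (iii) above, how an important variable is associated with the response, is commonly answered with partial dependence. Below is a minimal sketch on a synthetic stand-in for the tillage dataset; the feature ranges and effect thresholds are assumptions loosely mirroring the abstract's reported finding.

```python
# Partial dependence of a yield-effect model on maximum temperature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)
X = rng.uniform([0, 20], [8000, 40], size=(1000, 2))  # [conv. yield, max temp]
y = 0.1 * (X[:, 0] < 5000) + 0.1 * (X[:, 1] > 32) + rng.normal(0, 0.02, 1000)

model = RandomForestRegressor(random_state=0).fit(X, y)
pd_result = partial_dependence(model, X, features=[1])  # temperature effect
print(pd_result["average"][0][:5])  # curve rises past ~32, matching the finding
```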
8. MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI
Authors: Moshe Dayan Sirapangi, S. Gopikrishnan. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2229-2251 (23 pages).
Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health monitoring and disease detection. By using AI for early disease detection, personalized health recommendations, and transparency, healthcare will be transformed. The Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI (MAIPFE) framework, which combines the Firefly Optimizer, Recurrent Neural Network (RNN), Fuzzy C-Means (FCM), and Explainable AI, improves disease detection precision over existing methods. Comprehensive metrics show the model's superiority in real-time health analysis. The proposed framework outperformed existing models by 8.3% in disease detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, and AUC by 1.9%, while delay levels decreased by 9.4%. MAIPFE can revolutionize healthcare with pre-emptive analysis, personalized health insights, and actionable recommendations. The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.
Keywords: predictive health modeling, Medical Internet of Things, explainable artificial intelligence, personalized feature selection, pre-emptive analysis
9. Causality in structural engineering: discovering new knowledge by tying induction and deduction via mapping functions and explainable artificial intelligence
Authors: M. Z. Naser. AI in Civil Engineering, 2022, No. 1, pp. 82-97 (16 pages).
Causality is the science of cause and effect. It is through causality that explanations can be derived, theories can be formed, and new knowledge can be discovered. This paper presents a modern look into establishing causality within structural engineering systems. In this pursuit, this paper starts with a gentle introduction to causality. Then, this paper pivots to contrast commonly adopted methods for inferring causes and effects, i.e., induction (empiricism) and deduction (rationalism), and outlines how these methods continue to shape our structural engineering philosophy and, by extension, our domain. The bulk of this paper is dedicated to establishing an approach and criteria to tie principles of induction and deduction to derive causal laws (i.e., mapping functions) through explainable artificial intelligence (XAI) capable of describing new knowledge pertaining to structural engineering phenomena. The proposed approach and criteria are then examined via a case study.
Keywords: causality, explainable artificial intelligence, mapping functions, knowledge discovery, structural engineering
10. XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly (Cited by 1)
Authors: Yuna Han, Hangbae Chang. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 221-237 (17 pages).
Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic in order to deal with labeled and unlabeled data in industry. However, real-time training and classification of network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties preventing attacks. Additionally, existing semi-supervised learning research may not analyze experimental results comprehensively. To address these issues, this paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomaly detection model that dynamically trains small subsets. First, this research introduces a deep neural network (DNN)-based GANomaly for semi-supervised learning. Second, this paper presents the proposed adaptive algorithm for the DNN-based GANomaly, which is validated with four subsets of the adaptive dataset. Finally, this study demonstrates a monitoring system that incorporates three explainable techniques (Shapley additive explanations, reconstruction error visualization, and t-distributed stochastic neighbor embedding) to respond effectively to attacks on traffic data at each stage: feature engineering, semi-supervised learning, and adaptive learning. Compared to other single-class classification techniques, the proposed DNN-based GANomaly achieves scores higher by 13% and 8% in F1 and by 4.17% and 11.51% in accuracy on the Network Security Laboratory-Knowledge Discovery in Databases and UNSW-NB15 datasets, respectively. Furthermore, experiments on the proposed adaptive learning reveal mostly improved results over the initial values. An analysis and monitoring system based on the combination of the three explainable methodologies is also described. Thus, the proposed method has potential advantages for application in practical industry, and future research will explore handling unbalanced real-time datasets in various scenarios.
Keywords: intrusion detection system (IDS), adaptive learning, semi-supervised learning, explainable artificial intelligence (XAI), monitoring system
11. An Interpretable Artificial Intelligence Based Smart Agriculture System
Authors: Fariza Sabrina, Shaleeza Sohail, Farnaz Farid, Sayka Jahan, Farhad Ahamed, Steven Gordon. Computers, Materials & Continua (SCIE, EI), 2022, No. 8, pp. 3777-3797 (21 pages).
With the increasing world population, the demand for food production has increased exponentially. An Internet of Things (IoT) based smart agriculture system can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is in its interpretability, which makes it easy for farmers to understand, trust, and use it. The use of fuzzy logic makes the system customisable in terms of the types/number of sensors and the type of crop, and adaptable to any soil type and weather conditions. The proposed system can identify anomalous data due to security breaches or hardware malfunction using machine learning algorithms. To ensure the viability of the system, we have conducted thorough research related to agricultural factors such as soil type, soil moisture, soil temperature, plant life cycle, irrigation requirements, and water application timing for maize as our target crop. The experimental results show that our proposed system is interpretable, can detect anomalous data, and triggers actions accurately based on crop requirements.
Keywords: explainable artificial intelligence, fuzzy logic, internet of things, machine learning, sensors, smart agriculture
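The interpretability claim here rests on fuzzy rules a farmer can read. Below is a minimal sketch of one such rule; the membership breakpoints for maize are illustrative assumptions, not the paper's calibrated values.

```python
# One human-readable fuzzy rule over IoT readings (breakpoints assumed).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def irrigation_need(soil_moisture_pct, soil_temp_c):
    dry = tri(soil_moisture_pct, 0, 10, 25)
    hot = tri(soil_temp_c, 25, 35, 45)
    # Rule: IF soil is dry AND soil is hot THEN irrigation need is high
    return min(dry, hot)

print(irrigation_need(soil_moisture_pct=12, soil_temp_c=33))  # degree of need
```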
12. Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features
Authors: Nida Aslam, Irfan Ullah Khan, Salma Abdulrahman Bader, Aisha Alansari, Lama Abdullah Alaqeel, Razan Mohammed Khormy, Zahra Abdultawab AlKubaish, Tariq Hussain. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 3167-3188 (22 pages).
One of the most widely used smartphone operating systems, Android, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks could lead to the execution of unauthorized acts on victims' devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection. Therefore, up-to-date datasets must be utilized to implement proactive models for detecting malware events on Android mobile devices. Accordingly, this study employed ML algorithms to classify Android applications into malware or goodware using permission and application programming interface (API)-based features from a recent dataset. To overcome the dataset imbalance issue, RandomOverSampler, synthetic minority oversampling with Tomek links (SMOTETomek), and RandomUnderSampler were applied to the dataset in different experiments. The results indicated that the extra tree (ET) classifier achieved the highest accuracy of 99.53% within an elapsed time of 0.0198 s in the experiment that utilized the RandomOverSampler technique. Furthermore, an explainable Artificial Intelligence (EAI) technique was applied to add transparency to the high-performance ET classifier. The global explanation using the Shapley values indicated that the top three features contributing to the goodware class are Ljava/net/URL;->openConnection, Landroid/location/LocationManager;->getLastKnownLocation, and Vibrate. On the other hand, the top three features contributing to the malware class are Receive_Boot_Completed, Get_Tasks, and Kill_Background_Processes. It is believed that the proposed model can contribute to proactively detecting malware events on Android devices to reduce the number of victims and increase users' trust.
Keywords: Android malware, machine learning, malware detection, explainable artificial intelligence, cyber security
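The global Shapley analysis described above can be sketched with the shap package on a tree ensemble; the synthetic features below merely stand in for permission/API features such as Receive_Boot_Completed.

```python
# Mean |SHAP| per feature for an extra-trees classifier; synthetic data.
import numpy as np
import shap                                   # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
et = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(et)
sv = explainer.shap_values(X)
# Older shap returns a list (one array per class); newer returns one 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
global_importance = np.abs(sv_pos).mean(axis=0)   # mean |SHAP| per feature
print("top features:", np.argsort(global_importance)[::-1][:3])
```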
13. Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods
Authors: Wahidul Hasan Abir, Faria Rahman Khanam, Kazi Nabiul Alam, Myriam Hadjouni, Hela Elmannai, Sami Bourouis, Rajesh Dey, Mohammad Monirujjaman Khan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 2151-2169 (19 pages).
Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another person's in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technology, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Therefore, automated solutions employing DL can be an efficient approach for detecting deepfakes. Though the "black-box" nature of DL systems allows for robust predictions, they cannot be completely trusted. Explainability is the first step toward achieving transparency, but the existing incapacity of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can solve this problem by interpreting the predictions of these systems. This work provides a comprehensive study of deepfake detection using DL methods and analyzes the result of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to assure its validity and reliability. This study identifies real and deepfake images using different Convolutional Neural Network (CNN) models to get the best accuracy. It also explains which part of the image caused the model to make a specific classification using the LIME algorithm. To apply the CNN models, the dataset was taken from Kaggle; it includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces of size 256 px generated by StyleGAN. For experimental results, Jupyter Notebook, TensorFlow, NumPy, and Pandas were used as software, and InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 were used as CNN models. All these models performed well: InceptionV3 gained 99.68% accuracy, ResNet152V2 got an accuracy of 99.19%, and DenseNet201 performed with 99.81% accuracy. However, InceptionResNetV2 achieved the highest accuracy of 99.87%, which was verified later with the LIME algorithm for XAI, where the proposed method performed best. The obtained results and dependability demonstrate its preference for detecting deepfake images effectively.
Keywords: deepfake, deep learning, explainable artificial intelligence (XAI), convolutional neural network (CNN), local interpretable model-agnostic explanations (LIME)
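The LIME step, highlighting which superpixels drove a real-versus-deepfake call, can be sketched as follows. The dummy classifier_fn stands in for the trained CNN (e.g., InceptionResNetV2), which is not reproduced here, and the random image stands in for a 256-px face.

```python
# LIME image explanation around a stand-in classifier; synthetic input.
import numpy as np
from lime import lime_image                    # pip install lime

def classifier_fn(images):
    # Stand-in for cnn.predict(images); returns [P(real), P(fake)] per image.
    score = images.mean(axis=(1, 2, 3))
    return np.column_stack([score, 1.0 - score])

image = np.random.rand(256, 256, 3)            # stand-in for a face image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, num_samples=1000)
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
print("pixels in highlighted superpixels:", int(mask.sum()))
```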
14. Autism Spectrum Disorder Prediction by an Explainable Deep Learning Approach (Cited by 1)
Authors: Anupam Garg, Anshu Parashar, Dipto Barman, Sahil Jain, Divya Singhal, Mehedi Masud, Mohamed Abouhawwash. Computers, Materials & Continua (SCIE, EI), 2022, No. 4, pp. 1459-1471 (13 pages).
Autism Spectrum Disorder (ASD) is a developmental disorder whose symptoms become noticeable in the early years of age, though it can be present in any age group. ASD is a mental disorder that affects communicational, social, and non-verbal behaviors. It cannot be cured completely but can be reduced if detected early. An early diagnosis is hampered by the variation and severity of ASD symptoms, as well as by symptoms commonly seen in other mental disorders. Nowadays, with the emergence of deep learning approaches in various fields, medical experts can be assisted in the early diagnosis of ASD. It is very difficult for a practitioner to identify and concentrate on the major features leading to an accurate prediction of ASD, and this creates the need for an automated approach. Also, the presence of different symptoms of ASD traits among toddlers leads to the creation of a large feature dataset. In this study, we propose a hybrid approach comprising both deep learning and Explainable Artificial Intelligence (XAI) to find the most contributing features for the early and precise prediction of ASD. The proposed framework gives more accurate predictions along with recommendations on the predicted results, which will be a vital clinical aid for better and earlier prediction of ASD traits among toddlers.
Keywords: deep learning, explainable artificial intelligence, autism spectrum disorder, machine learning
15. Explainable Software Fault Localization Model: From Blackbox to Whitebox
Authors: Abdulaziz Alhumam. Computers, Materials & Continua (SCIE, EI), 2022, No. 10, pp. 1463-1482 (20 pages).
The most resource-intensive and laborious part of debugging is finding the exact location of a fault among a large number of code snippets. Plenty of machine intelligence models have offered effective localization of defects. Some models can precisely locate faults with more than 95% accuracy, resulting in a demand for trustworthy models in fault localization. Confidence and trustworthiness within machine intelligence-based software models can only be achieved via explainable artificial intelligence in Fault Localization (XFL). The current study presents a model for generating counterfactual interpretations for the fault localization model's decisions. Neural system approximation and distributed representation of input information may be achieved by building a nonlinear neural network model, which demonstrates a high level of proficiency in transfer learning even with minimal training data. The proposed XFL makes decision-making transparent without impacting the model's performance. The proposed XFL ranks the software program statements based on a possible vulnerability score approximated from the training data. The model's performance is further evaluated using various metrics such as the number of assessed statements, the confidence level of fault localization, and Top-N evaluation strategies.
Keywords: software fault localization, explainable artificial intelligence, statement ranking, vulnerability detection
16. Efficient Explanation and Evaluation Methodology Based on Hybrid Feature Dropout
Authors: Jingang Kim, Suengbum Lim, Taejin Lee. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 10, pp. 471-490 (20 pages).
AI-related research is conducted in various ways, but the reliability of AI prediction results is currently insufficient, so expert decisions remain indispensable for tasks that require critical decision-making. XAI (eXplainable AI) is studied to improve the reliability of AI. However, each XAI methodology shows different results on the same dataset and the same model. This means that XAI results must be given meaning, and a lot of noise emerges. This paper proposes an HFD (Hybrid Feature Dropout)-based XAI and evaluation methodology. The proposed XAI methodology can mitigate shortcomings such as incorrect feature weights and impractical feature selection. There are few XAI evaluation methods, so this paper proposes four evaluation criteria that can give practical meaning. As a result of verification with a malware dataset (Data Challenge 2019), we confirmed better results than other XAI methodologies on all four evaluation criteria. Since the efficiency of interpretation is verified against a reasonable XAI evaluation standard, the practicality of the XAI methodology is improved. In addition, the usefulness of the XAI methodology is demonstrated for enhancing the reliability of AI, which helps apply AI results to critical tasks that require expert decision-making.
Keywords: explainable artificial intelligence, evaluation, hybrid feature dropout, deep learning, error detection
17. Deep learning-based subseasonal to seasonal precipitation prediction in southwest China: Algorithm comparison and sensitivity to input features (Cited by 1)
Authors: GuoLu Gao, Yang Li, XueYun Zhou, XiaoMing Xiang, JiaQi Li, ShuCheng Yin. Earth and Planetary Physics (CAS, CSCD), 2023, No. 4, pp. 471-486 (16 pages).
The prediction of precipitation at subseasonal to seasonal (S2S) timescales remains an enormous challenge because of the gap between weather and climate predictions. This study compares three deep learning algorithms, namely the long short-term memory (LSTM), gated recurrent unit (GRU), and recurrent neural network (RNN), and selects the optimal algorithm to establish an S2S precipitation prediction model. The models were evaluated in four subregions of Sichuan Province: the Plateau, the Valley, the eastern Basin, and the western Basin. The results showed that the RNN model had better performance than the LSTM and GRU models. This could be because the RNN model had an advantage over the LSTM model in the transformation of climate indices with positive and negative variations. In the validation on test datasets, the RNN model successfully predicted the precipitation trend in most years during the wet season (May-October). The RNN model had a lower prediction bias (within ±10%), higher sign accuracy of the precipitation trend (~88.95%), and greater accuracy for the maximum precipitation month (>0.85). For prediction at different lead times, the RNN model was able to provide a stable trend prediction for summer precipitation, and its time correlation coefficient score was higher than that of the National Climate Center of China. Furthermore, this study proposes a method to measure the sensitivity of the RNN model to different input features, which may provide unprecedented insights into the nonlinear relationships and complicated feedback processes among climate systems. The results of the sensitivity analysis are as follows. First, the Niño 4 and Niño 3.4 indices were equally important for the prediction of wet-season precipitation. Second, the sensitivity to snow cover on the Tibetan Plateau was higher than that to snow cover in the Northern Hemisphere. Third, opposite sensitivities appeared in two different patterns of the Indian Ocean and in sea ice concentrations in the Arctic and the Barents Sea.
Keywords: recurrent neural network, long short-term memory, sensitivity analysis, artificial intelligence, explainability, complex terrain, southwest China
18. Optimal Machine Learning Enabled Intrusion Detection in Cyber-Physical System Environment
Authors: Bassam A. Y. Alqaralleh, Fahad Aldhaban, Esam A. AlQarallehs, Ahmad H. Al-Omari. Computers, Materials & Continua (SCIE, EI), 2022, No. 9, pp. 4691-4707 (17 pages).
Cyber-attacks on cyber-physical systems (CPSs) result in sensing and actuation misbehavior, severe damage to physical objects, and safety risks. Machine learning (ML) models have been presented to hinder cyberattacks in the CPS environment; however, the non-existence of labelled data from new attacks makes their detection quite challenging. An Intrusion Detection System (IDS) is commonly utilized to detect and classify intrusions in the CPS environment and acts as an important part of a secure CPS environment. The latest developments in deep learning (DL) and explainable artificial intelligence (XAI) stimulate new IDSs that manage cyberattacks with minimum complexity and high sophistication. In this aspect, this paper presents an XAI-based IDS using feature selection with a Dirichlet Variational Autoencoder (XAIIDS-FSDVAE) model for CPS. The proposed model encompasses a coyote optimization algorithm (COA) based feature selection (FS) model derived to select an optimal subset of features. Next, an intelligent Dirichlet Variational Autoencoder (DVAE) technique is employed for the anomaly detection process in the CPS environment. Finally, the parameter optimization of the DVAE takes place using a manta ray foraging optimization (MRFO) model to tune the parameters of the DVAE. In order to determine the enhanced intrusion detection efficiency of the XAIIDS-FSDVAE technique, a wide range of simulations was performed using benchmark datasets. The experimental results reported the better performance of the XAIIDS-FSDVAE technique over recent methods in terms of several evaluation parameters.
Keywords: cyber-physical systems, explainable artificial intelligence, deep learning, security, intrusion detection, metaheuristics
19. Interpretable and Adaptable Early Warning Learning Analytics Model
Authors: Shaleeza Sohail, Atif Alvi, Aasia Khanum. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 3211-3225 (15 pages).
Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with consideration of interpretability, but their interpretability is not quantified; adaptability, meanwhile, is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper also discusses the system evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
Keywords: learning analytics, interpretable machine learning, fuzzy systems, early warning, interpretability, explainable artificial intelligence
20. Understanding electricity prices beyond the merit order principle using explainable AI
Authors: Julius Trebbien, Leonardo Rydin Gorjao, Aaron Praktiknjo, Benjamin Schafer, Dirk Witthaut. Energy and AI, 2023, No. 3, pp. 149-159 (11 pages).
Electricity prices in liberalized markets are determined by the supply and demand for electric power, which are in turn driven by various external influences that vary strongly in time. In perfect competition, the merit order principle describes how dispatchable power plants enter the market in the order of their marginal costs to meet the residual load, i.e., the difference between load and renewable generation. Various market models are based on this principle when attempting to predict electricity prices, yet the principle is fraught with assumptions and simplifications and is thus limited in accurately predicting prices. In this article, we present an explainable machine learning model for electricity prices on the German day-ahead market that foregoes the aforementioned assumptions of the merit order principle. Our model is designed for an ex-post analysis of prices and builds on various external features. Using SHapley Additive exPlanation (SHAP) values, we disentangle the role of the different features and quantify their importance from empirical data, thereby circumventing the limitations inherent to the merit order principle. We show that load, wind, and solar generation are the central external features driving prices, as expected, wherein wind generation affects prices more than solar generation. Fuel prices also strongly affect prices, and do so in a nontrivial manner. Moreover, large generation ramps are correlated with high prices due to the limited flexibility of nuclear and lignite plants. Overall, we offer a model that describes the influence of the main drivers of electricity prices in Germany, taking us a step beyond the limited merit order principle in explaining the drivers of electricity prices and their relation to each other.
Keywords: electricity prices, merit order principle, explainable artificial intelligence, machine learning, fuel prices, energy market
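The ex-post recipe summarized above, fit a model on external price drivers and attribute each prediction with SHAP, can be sketched on synthetic data. The drivers, coefficients, and units below are illustrative assumptions, not the German day-ahead market data.

```python
# Fit a boosted-tree price model, then attribute predictions with SHAP.
import numpy as np
import shap                                    # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(30, 80, n),    # load (GW)
    rng.uniform(0, 40, n),     # wind generation (GW)
    rng.uniform(0, 30, n),     # solar generation (GW)
    rng.uniform(20, 60, n),    # gas price (EUR/MWh)
])
# Toy price: load raises prices, renewables lower them, fuel raises them.
price = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 0.6 * X[:, 2] + 0.8 * X[:, 3]

model = GradientBoostingRegressor().fit(X, price)
sv = shap.TreeExplainer(model).shap_values(X)   # (n_samples, n_features)
print("mean |SHAP| per driver:", np.abs(sv).mean(axis=0).round(2))
```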