Journal Articles
1,745 articles found
CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation
1
Authors: Peng Xiao, Qi Zhong +3 more: Jingxue Chen, Dongyuan Wu, Zhen Qin, Erqiang Zhou. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4703-4724, 22 pages
In the intelligent medical diagnosis area, Artificial Intelligence (AI)'s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also a key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet Framework. The MSBE enhances the network's capability for feature extraction by allowing the adjustment of hyperparameters to configure the number of branches and modules. The CrossLinkNet Framework, serving as a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by replacing different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, a crucial aspect for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
Keywords: explainable AI; security; trustworthy; CrossLinkNet; whole slide images
Spatial Attention Integrated EfficientNet Architecture for Breast Cancer Classification with Explainable AI
2
Authors: Sannasi Chakravarthy, Bharanidharan Nagarajan +4 more: Surbhi Bhatia Khan, Vinoth Kumar Venkatesan, Mahesh Thyluru Ramakrishna, Ahlam AlMusharraf, Khursheed Aurungzeb. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 5029-5045, 17 pages
Breast cancer is a type of cancer responsible for higher mortality rates among women. The cruelty of breast cancer always requires a promising approach for its earlier detection. In light of this, the proposed research leverages the representation ability of the pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the above transfer learning model is modified in such a way that it will focus more on tumor cells in the input mammogram. Accordingly, the work proposed an EfficientNet-B0 having a Spatial Attention Layer with XGBoost (ESA-XGBNet) for binary classification of mammograms. For this, the work is trained, tested, and validated using original and augmented mammogram images of three public datasets, namely the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracy of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) is obtained using the proposed ESA-XGBNet architecture as compared with the existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention-Guided Grad-CAM-based Explainable AI technique.
Keywords: EfficientNet; mammograms; breast cancer; explainable AI; deep learning; transfer learning
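For readers unfamiliar with this kind of hybrid pipeline, the sketch below illustrates the general pattern the abstract describes: a pretrained EfficientNet-B0 used as a frozen feature extractor whose pooled features are classified by XGBoost. It is only an illustration under assumed input shapes and hyperparameters; the paper's spatial attention layer, preprocessing, and tuned settings are not reproduced.

```python
# Minimal sketch: EfficientNet-B0 features feeding an XGBoost binary classifier.
# The spatial-attention layer described in the paper is omitted; array shapes and
# hyperparameters are illustrative assumptions, not the authors' settings.
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

# Frozen ImageNet-pretrained backbone used purely as a feature extractor.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3)
)
backbone.trainable = False

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) in the 0-255 range."""
    x = tf.keras.applications.efficientnet.preprocess_input(images)
    return backbone.predict(x, verbose=0)  # (n, 1280) pooled feature vectors

# Hypothetical mammogram tensors and binary labels (benign = 0 / malignant = 1).
X_train = (np.random.rand(32, 224, 224, 3) * 255).astype("float32")
y_train = np.random.randint(0, 2, size=32)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(extract_features(X_train), y_train)
```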
Machine Fault Diagnosis Using Audio Sensors Data and Explainable AI Techniques-LIME and SHAP
3
Authors: Aniqua Nusrat Zereen, Abir Das, Jia Uddin. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3463-3484, 22 pages
Machine fault diagnostics are essential for industrial operations, and advancements in machine learning have significantly advanced these systems by providing accurate predictions and expedited solutions. Machine learning models, especially those utilizing complex algorithms like deep learning, have demonstrated major potential in extracting important information from large operational datasets. Despite their efficiency, machine learning models face challenges, making Explainable AI (XAI) crucial for improving their understandability and fine-tuning. The importance of feature contribution and selection using XAI in the diagnosis of machine faults is examined in this study. The technique is applied to evaluate different machine-learning algorithms. Extreme Gradient Boosting, Support Vector Machine, Gaussian Naive Bayes, and Random Forest classifiers are used alongside Logistic Regression (LR) as a baseline model, and their efficacy and simplicity are evaluated thoroughly with empirical analysis. The XAI is used as a targeted feature selection technique to select among 29 features of the time and frequency domain. The XAI approach is lightweight, trained with only targeted features, and achieved similar results as the traditional approach. The accuracy without XAI on baseline LR is 79.57%, whereas the approach with XAI on LR is 80.28%.
Keywords: explainable AI; feature selection; machine learning; machine fault diagnosis
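As a rough illustration of the XAI-as-feature-selection idea in the abstract, the sketch below ranks tabular features by mean absolute SHAP value from a tree model, keeps the top k, and retrains the logistic-regression baseline on the reduced set. The synthetic data, the value of k, and the choice of ranking model are assumptions, not the paper's 29 engineered time- and frequency-domain features or its settings.

```python
# Sketch of XAI-guided feature selection on a hypothetical tabular fault dataset.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression

def select_top_k_features(X, y, k=10):
    """Rank features by mean |SHAP| from a tree model and return the top-k indices."""
    ranker = XGBClassifier(n_estimators=100).fit(X, y)
    shap_values = shap.TreeExplainer(ranker).shap_values(X)
    importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
    return np.argsort(importance)[::-1][:k]

# Placeholder 29-feature matrix standing in for the engineered fault features.
X = np.random.rand(200, 29)
y = np.random.randint(0, 2, 200)

top = select_top_k_features(X, y, k=10)
baseline = LogisticRegression(max_iter=1000).fit(X, y)           # all 29 features
xai_model = LogisticRegression(max_iter=1000).fit(X[:, top], y)  # targeted features only
```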
Transparent and Accurate COVID-19 Diagnosis: Integrating Explainable AI with Advanced Deep Learning in CT Imaging
4
Authors: Mohammad Mehedi Hassan, Salman A. AlQahtani +1 more: Mabrook S. AlRakhami, Ahmed Zohier Elhendi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3101-3123, 23 pages
In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning's "black box" nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network (CNN) models through transfer learning and data augmentation. Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization. The pivotal element of our methodology is the use of LIME, which demystifies the AI decision-making process, providing clinicians with clear, interpretable insights into the AI's reasoning. This unique combination of an optimized Deep Neural Network (DNN) with LIME not only elevates the precision in detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process. Our method, validated on the SARS-COV-2 CT-Scan dataset, demonstrates exceptional diagnostic accuracy, with performance metrics that reinforce its potential for seamless integration into modern healthcare systems. This innovative approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decision-making in the ongoing battle against COVID-19.
Keywords: explainable AI; COVID-19; CT images; deep learning
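The LIME step the abstract describes can be sketched as follows: a DenseNet201-based classifier is wrapped in a prediction function and passed to lime's image explainer, which highlights the superpixels that drive a prediction. The two-class head, the preprocessing, and the stand-in CT slice below are assumptions; the authors' fine-tuned model and augmentation pipeline are not reproduced.

```python
# Sketch of explaining a DenseNet201-based classifier with LIME on one image.
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                          pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])

def predict_fn(images):
    # LIME passes batches of perturbed images; return class probabilities.
    x = tf.keras.applications.densenet.preprocess_input(np.array(images, dtype="float32"))
    return model.predict(x, verbose=0)

ct_image = np.random.rand(224, 224, 3)   # stand-in for a preprocessed CT slice

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(ct_image, predict_fn,
                                         top_labels=1, num_samples=1000)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True, num_features=5)
overlay = mark_boundaries(img, mask)     # superpixels driving the prediction
```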
Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification
5
Authors: Amit Singhal, Krishna Kant Agrawal +3 more: Angeles Quezada, Adrian Rodriguez Aguiñaga, Samantha Jiménez, Satya Prakash Yadav. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 401-441, 41 pages
The use of Explainable Artificial Intelligence (XAI) models becomes increasingly important for making decisions in smart healthcare environments. It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models to classify cancer images. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. It also discusses the potential applications of the proposed XAI models in the smart healthcare environment, helping ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
Keywords: explainable artificial intelligence; artificial intelligence; XAI; healthcare; cancer; image classification
Explainable AI-Based DDoS Attacks Classification Using Deep Transfer Learning
6
Authors: Ahmad Alzu'bi, Amjad Albashayreh +1 more: Abdelrahman Abuarqoub, Mai A. M. Alfawair. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3785-3802, 18 pages
In the era of the Internet of Things (IoT), the proliferation of connected devices has raised security concerns, increasing the risk of intrusions into diverse systems. Despite the convenience and efficiency offered by IoT technology, the growing number of IoT devices escalates the likelihood of attacks, emphasizing the need for robust security tools to automatically detect and explain threats. This paper introduces a deep learning methodology for detecting and classifying distributed denial of service (DDoS) attacks, addressing a significant security concern within IoT environments. An effective procedure of deep transfer learning is applied to utilize deep learning backbones, which is then evaluated on two benchmarking datasets of DDoS attacks in terms of accuracy and time complexity. By leveraging several deep architectures, the study conducts thorough binary and multiclass experiments, each varying in the complexity of classifying attack types and demonstrating real-world scenarios. Additionally, this study employs an explainable artificial intelligence (XAI) technique to elucidate the contribution of extracted features in the process of attack detection. The experimental results demonstrate the effectiveness of the proposed method, achieving a recall of 99.39% by the XAI bidirectional long short-term memory (XAI-BiLSTM) model.
Keywords: DDoS attack classification; deep learning; explainable AI; cybersecurity
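For context, a bidirectional LSTM classifier of the kind evaluated here can be set up in a few lines of Keras; the sequence length, feature count, and layer sizes below are placeholders rather than the architecture reported in the paper, and the SHAP-style attribution of flow features is omitted.

```python
# Minimal Keras sketch of a BiLSTM traffic classifier; shapes and sizes are
# illustrative placeholders, not the reported XAI-BiLSTM configuration.
import tensorflow as tf
from tensorflow.keras import layers

def build_bilstm(seq_len=10, n_features=20, n_classes=2):
    inputs = tf.keras.Input(shape=(seq_len, n_features))   # one window of flow features
    x = layers.Bidirectional(layers.LSTM(64))(inputs)       # forward + backward pass
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm()
model.summary()
```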
Explainable Neural Network for Sensitivity Analysis of Lithium-ion Battery Smart Production
7
Authors: Kailong Liu, Qiao Peng +2 more: Yuhang Liu, Naxin Cui, Chenghui Zhang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 9, pp. 1944-1953, 10 pages
Battery production is crucial for determining the quality of the electrode, which in turn affects the manufactured battery performance. As battery production is complicated with strongly coupled intermediate and control parameters, an efficient solution that can perform a reliable sensitivity analysis of the production terms of interest and forecast key battery properties in the early production phase is urgently required. This paper performs detailed sensitivity analysis of key production terms on determining the properties of manufactured battery electrode via advanced data-driven modelling. To be specific, an explainable neural network named generalized additive model with structured interaction (GAM-SI) is designed to predict two key battery properties, including electrode mass loading and porosity, while the effects of four early production terms on manufactured batteries are explained and analysed. The experimental results reveal that the proposed method is able to accurately predict battery electrode properties in the mixing and coating stages. In addition, the importance ratio ranking, global interpretation and local interpretation of both the main effects and pairwise interactions can be effectively visualized by the designed neural network. Due to the merits of interpretability, the proposed GAM-SI can help engineers gain important insights for understanding complicated production behavior, further benefitting smart battery production.
Keywords: battery management; battery manufacturing; data science; explainable artificial intelligence; sensitivity analysis
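GAM-SI is the authors' own network, but the underlying idea, an additive model whose main effects and selected pairwise interactions can each be inspected, can be approximated with off-the-shelf tools. The sketch below uses InterpretML's ExplainableBoostingRegressor as a rough stand-in; the four production parameters, the synthetic target, and the number of interaction terms are invented for illustration and are not the study's data or model.

```python
# Rough stand-in for a GAM with structured interactions using InterpretML's EBM
# (an additive model with learnable pairwise interaction terms). NOT the authors'
# GAM-SI network; feature names and data are invented placeholders.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "solid_content": rng.uniform(0.4, 0.7, 500),
    "viscosity":     rng.uniform(1.0, 6.0, 500),
    "comma_gap":     rng.uniform(50, 200, 500),
    "coating_speed": rng.uniform(0.2, 2.0, 500),
})
# Hypothetical mass-loading target with one main effect and one interaction.
y = (10 * X["solid_content"] + 0.05 * X["comma_gap"]
     + 2 * X["solid_content"] * X["coating_speed"] + rng.normal(0, 0.2, 500))

ebm = ExplainableBoostingRegressor(interactions=5)   # learn up to 5 pairwise terms
ebm.fit(X, y)

# Global importance of main effects and learned interaction terms.
global_exp = ebm.explain_global()
for name, score in zip(global_exp.data()["names"], global_exp.data()["scores"]):
    print(f"{name:35s} {score:.3f}")
```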
GliomaCNN: An Effective Lightweight CNN Model in Assessment of Classifying Brain Tumor from Magnetic Resonance Images Using Explainable AI
8
Authors: Md. Atiqur Rahman, Mustavi Ibne Masum +4 more: Khan Md Hasib, M. F. Mridha, Sultan Alfarhood, Mejdl Safran, Dunren Che. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 9, pp. 2425-2448, 24 pages
Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using Magnetic Resonance Imaging (MRI). It focuses on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN. GliomaCNN stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020 dataset for its experiments. Integrated with the gradient-boosting algorithm, GliomaCNN has achieved an impressive accuracy of 99.1569%. The model's interpretability is ensured through SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM++). They provide insights into critical decision-making regions for classification outcomes. Despite challenges in identifying tumors in images without visible signs, the model demonstrates remarkable performance in this critical medical application, offering a promising tool for accurate brain tumor diagnosis which paves the way for enhanced early detection and treatment of brain tumors.
Keywords: deep learning; magnetic resonance imaging; convolutional neural networks; explainable AI; boosting algorithm; ablation
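The Grad-CAM family of explanations mentioned in the abstract works by weighting the last convolutional feature maps with the gradients of the class score. A minimal plain Grad-CAM (not Grad-CAM++) sketch for a Keras CNN is shown below; the layer name and the input tensor are assumptions, and the SHAP part of the paper's pipeline is not shown.

```python
# Minimal Grad-CAM sketch for any Keras CNN classifier; "last_conv_name" is an
# assumed layer name, and the MRI slice passed in is a placeholder tensor.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_name, class_index=None):
    """Return a [0, 1] heatmap of the regions that drive the class score."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))   # explain the top class
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)            # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))[0]
    cam = cam / (tf.reduce_max(cam) + 1e-8)           # normalise to [0, 1]
    return cam.numpy()

# Usage with any trained conv model, e.g.: heat = grad_cam(model, mri_slice, "conv2d_3")
```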
MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI
9
Authors: Moshe Dayan Sirapangi, S. Gopikrishnan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2229-2251, 23 pages
Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health monitoring and disease detection. By using AI for early disease detection, personalized health recommendations, and transparency, healthcare will be transformed. The Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI (MAIPFE) framework, which combines Firefly Optimizer, Recurrent Neural Network (RNN), Fuzzy C Means (FCM), and Explainable AI, improves disease detection precision over existing methods. Comprehensive metrics show the model's superiority in real-time health analysis. The proposed framework outperformed existing models by 8.3% in disease detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, AUC by 1.9%, and delay levels decreased by 9.4%. MAIPFE can revolutionize healthcare with preemptive analysis, personalized health insights, and actionable recommendations. The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.
Keywords: predictive health modeling; Medical Internet of Things; explainable artificial intelligence; personalized feature selection; preemptive analysis
Modeling and Predictive Analytics of Breast Cancer Using Ensemble Learning Techniques:An Explainable Artificial Intelligence Approach
10
Authors: Avi Deb Raha, Fatema Jannat Dihan +8 more: Mrityunjoy Gain, Saydul Akbar Murad, Apurba Adhikary, Md. Bipul Hossain, Md. Mehedi Hassan, Taher Al-Shehari, Nasser A. Alsadhan, Mohammed Kadrie, Anupam Kumar Bairagi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 12, pp. 4033-4048, 16 pages
Breast cancer stands as one of the world's most perilous and formidable diseases, having recently surpassed lung cancer as the most prevalent cancer type. This disease arises when cells in the breast undergo unregulated proliferation, resulting in the formation of a tumor that has the capacity to invade surrounding tissues. It is not confined to a specific gender; both men and women can be diagnosed with breast cancer, although it is more frequently observed in women. Early detection is pivotal in mitigating its mortality rate; the key to curbing its mortality lies in early detection. However, it is crucial to explain the black-box machine learning algorithms in this field to gain the trust of medical professionals and patients. In this study, we experimented with various machine learning models to predict breast cancer using the Wisconsin Breast Cancer Dataset (WBCD). We applied Random Forest, XGBoost, Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Gradient Boost classifiers, with the Random Forest model outperforming the others. A comparison analysis between the methods was done after performing hyperparameter tuning on each method. The analysis showed that the random forest performs better and yields the highest result with 99.46% accuracy. After performance evaluation, two Explainable Artificial Intelligence (XAI) methods, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), have been utilized to explain the random forest machine learning model.
Keywords: breast cancer prediction; machine learning models; explainable artificial intelligence; random forest; hyperparameter tuning
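A condensed version of the final pipeline stage, a random forest explained with SHAP, is sketched below using scikit-learn's bundled diagnostic breast-cancer data as a stand-in for WBCD; the hyperparameters are illustrative rather than the tuned values, and the LIME step is omitted.

```python
# Sketch: train a random forest and rank features by mean |SHAP| value.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)   # stand-in for WBCD
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))

sv = np.asarray(shap.TreeExplainer(rf).shap_values(X_te))
# Depending on the shap release, sv is (classes, n, features) or (n, features, classes);
# averaging |SHAP| over every axis except the feature axis gives a global ranking.
feature_axis = list(sv.shape).index(X.shape[1])
other_axes = tuple(i for i in range(sv.ndim) if i != feature_axis)
importance = np.abs(sv).mean(axis=other_axes)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name:25s} {score:.4f}")
```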
A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework
11
Authors: Sujithra Sankar, S. Sathyalakshmi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3111-3138, 28 pages
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used in training machine learning models, and further used in generating SHAP values from these models. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule based analysis. This new integrated dataset is used in re-training the machine learning models. The new SHAP values generated from these models help in validating the contributions of feature sets in predicting malignancy. The conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, the SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets in modelling the predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, and a validation framework of explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic (AUROC) are also higher than the baseline models. The results of the proposed model help us identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule based interestingness metric values and SHAP methods. The paper highlights the potential of the rule-based integrated models with SHAP in bridging the gap between the machine learning predictions and the interpretability of this prediction which is required for real-world medical applications.
Keywords: explainable AI; machine learning; clinical decision support systems; thyroid cancer; association-rule based framework; SHAP values; classification and prediction
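The association-rule step can be illustrated with a toy, discretised nodule table: mine frequent attribute combinations and keep rules whose consequent is malignancy, which the framework then feeds back as integrated features before recomputing SHAP values. The column names, thresholds, and one-hot records below are invented for illustration; only the apriori step uses a library, and the rule confidence is derived by hand.

```python
# Toy sketch of mining "attributes => malignant" rules from discretised data.
import pandas as pd
from mlxtend.frequent_patterns import apriori

# Hypothetical one-hot nodule attributes plus the malignancy label.
records = pd.DataFrame({
    "calcification=micro": [1, 1, 0, 1, 0, 1, 1, 0],
    "shape=irregular":     [1, 0, 0, 1, 0, 1, 1, 0],
    "margin=ill_defined":  [0, 1, 0, 1, 0, 0, 1, 0],
    "malignant":           [1, 1, 0, 1, 0, 1, 1, 0],
}).astype(bool)

frequent = apriori(records, min_support=0.3, use_colnames=True)
support = {frozenset(s): v for s, v in zip(frequent["itemsets"], frequent["support"])}

# Derive rules "antecedent => malignant" by hand:
# confidence = support(antecedent ∪ {malignant}) / support(antecedent).
rules = []
for itemset, sup in support.items():
    if "malignant" in itemset and len(itemset) > 1:
        antecedent = itemset - {"malignant"}
        if antecedent in support:
            rules.append((set(antecedent), sup / support[antecedent], sup))

for antecedent, confidence, sup in sorted(rules, key=lambda r: -r[1]):
    print(f"{antecedent} => malignant  conf={confidence:.2f}  sup={sup:.2f}")
```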
A Knowledge Graph-Enhanced Malware Classification Method
12
Authors: Xia Bing, He Qudong +2 more: Liu Wenbo, Chu Shihao, Pang Jianmin. Journal of Zhengzhou University (Natural Science Edition) (CAS, Peking University Core), 2025, Issue 2, pp. 61-68, 8 pages
To address the weak feature-description capability and missing call relations of malware classification methods based on application programming interface (API) sequence recognition, a knowledge-graph-enhanced malware classification method is proposed. First, the API entities contained in malware and their call relations are extracted from the function call graph, and a malware API knowledge graph is constructed on this basis. Second, Word2Vec is used to compute API sequence vectors that carry contextual call semantics, and TransE is used to capture the API entity vectors in the API knowledge graph; the fusion of these two vectors serves as the API feature. Finally, the APIs contained in a malware sample are represented as a feature matrix and fed into TextCNN to train the classification model. In the malware family classification task, the accuracy of the proposed method improves considerably over the baseline models, reaching 93.8%, which shows that the knowledge graph can effectively enhance malware family classification. Interpretability experiments further confirm the practical value of the proposed method.
Keywords: malware; API sequence; semantic extraction; knowledge graph; interpretability
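The API-sequence embedding step of this pipeline can be sketched with gensim: train Word2Vec on API call sequences so that each API gets a context-aware vector. The call sequences below are invented examples, and the TransE graph embeddings and the TextCNN classifier are not shown.

```python
# Toy sketch of learning context-aware API embeddings from call sequences.
from gensim.models import Word2Vec

api_sequences = [
    ["CreateFileW", "WriteFile", "CloseHandle"],
    ["RegOpenKeyExW", "RegSetValueExW", "RegCloseKey"],
    ["CreateFileW", "ReadFile", "VirtualAlloc", "WriteProcessMemory"],
]

w2v = Word2Vec(sentences=api_sequences, vector_size=64, window=3,
               min_count=1, sg=1, epochs=50)
print(w2v.wv["CreateFileW"].shape)   # (64,) vector carrying call-context semantics
```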
A Review of Explainable AI and Prospects for Its Application in Earthquake Science
13
Authors: Huang Lihong, Li Jian +9 more: Liu Zhehan, Wang Xiaoming, Shang Jie, Gai Lei, Qiu Hongmao, Li Ming, Gong Ni, Han Shoucheng, Xu Yanyan, Liu Zeyu. Progress in Earthquake Sciences, 2025, Issue 1, pp. 1-11, 11 pages
Over the past decade, artificial intelligence (AI), as an important branch of computer science and technology, has achieved leapfrog breakthroughs in research fields such as computer vision, natural language processing, and machine translation. As a disruptive technology of the early 21st century, AI was quickly applied across industries, including earthquake science. However, although current AI techniques, represented by machine learning and deep learning, far outperform traditional methods, their models are typically more complex, lack transparency, and are black boxes in nature, which restricts their decision-level application in most fields. Against this background, explainable AI has emerged, aiming to help human users build a new generation of AI systems that they can understand, trust, and manage effectively. This paper reviews the definitions and methods of explainable AI, briefly introduces research progress on AI techniques in seismology, summarizes the current applications of explainable AI in earthquake science, discusses future development trends of explainable AI, and offers an outlook on its applications in earthquake science.
Keywords: artificial intelligence; black-box model; interpretability; earthquake science
Co-selection may explain the unexpectedly high prevalence of plasmid-mediated colistin resistance gene mcr-1 in a Chinese broiler farm (Cited by 5)
14
Authors: Yu-Ping Cao, Qing-Qing Lin +6 more: Wan-Yun He, Jing Wang, Meng-Ying Yi, Lu-Chao Lv, Jun Yang, Jian-Hua Liu, Jian-Ying Guo. Zoological Research (SCIE, CAS, CSCD), 2020, Issue 5, pp. 569-575, 7 pages
DEAR EDITOR, The rise of the plasmid-encoded colistin resistance gene mcr-1 is a major concern globally. Here, during a routine surveillance, an unexpectedly high prevalence of Escherichia coli with reduced susceptibility to colistin (69.9%) was observed in a Chinese broiler farm. Fifty-three (63.9%) E. coli isolates were positive for mcr-1. All identified mcr-1-positive E. ...
Keywords: globally; routine; explain
ANC: Attention Network for COVID-19 Explainable Diagnosis Based on Convolutional Block Attention Module (Cited by 9)
15
Authors: Yudong Zhang, Xin Zhang, Weiguo Zhu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2021, Issue 6, pp. 1037-1058, 22 pages
Aim: To diagnose COVID-19 more efficiently and more correctly, this study proposed a novel attention network for COVID-19 (ANC). Methods: Two datasets were used in this study. An 18-way data augmentation was proposed to avoid overfitting. Then, the convolutional block attention module (CBAM) was integrated into our model, the structure of which is fine-tuned. Finally, Grad-CAM was used to provide an explainable diagnosis. Results: The accuracy of our ANC methods on the two datasets is 96.32% ± 1.06% and 96.00% ± 1.03%, respectively. Conclusions: This proposed ANC method is superior to 9 state-of-the-art approaches.
Keywords: deep learning; convolutional block attention module; attention mechanism; COVID-19; explainable diagnosis
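For reference, a plain CBAM block (channel attention followed by spatial attention) can be written in Keras as below; the reduction ratio and the 7x7 kernel are the common defaults from the original CBAM paper, not necessarily the fine-tuned settings used in ANC.

```python
# Minimal sketch of a convolutional block attention module (CBAM) in Keras.
import tensorflow as tf
from tensorflow.keras import layers

def cbam_block(x, ratio=8, kernel_size=7):
    """Channel attention followed by spatial attention (standard CBAM layout)."""
    channels = x.shape[-1]

    # Channel attention: shared two-layer MLP over avg- and max-pooled descriptors.
    dense1 = layers.Dense(channels // ratio, activation="relu")
    dense2 = layers.Dense(channels)
    avg = dense2(dense1(layers.GlobalAveragePooling2D()(x)))
    mx = dense2(dense1(layers.GlobalMaxPooling2D()(x)))
    channel_att = layers.Activation("sigmoid")(avg + mx)
    x = x * layers.Reshape((1, 1, channels))(channel_att)

    # Spatial attention: conv over channel-wise average and max maps.
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)
    concat = layers.Concatenate(axis=-1)([avg_map, max_map])
    spatial_att = layers.Conv2D(1, kernel_size, padding="same",
                                activation="sigmoid")(concat)
    return x * spatial_att

# Example: attach the block after a convolutional stage.
inputs = tf.keras.Input(shape=(64, 64, 3))
features = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
model = tf.keras.Model(inputs, cbam_block(features))
```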
Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine (Cited by 4)
16
Authors: Ahmad Chaddad, Qizong Lu +5 more: Jiali Li, Yousef Katib, Reem Kateb, Camel Tanougast, Ahmed Bouridane, Ahmed Abdulkadir. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, Issue 4, pp. 859-876, 18 pages
Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and a requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired on different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the centralized learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
Keywords: domain adaptation; explainable artificial intelligence; federated learning
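The federated-learning point in this review, that only parameter updates leave each site, can be made concrete with a toy federated-averaging loop. The logistic-regression model, the three synthetic sites, and the sample-size weighting below are invented for illustration and are not tied to any particular FL framework.

```python
# Toy federated averaging (FedAvg) sketch: each site trains locally, the server
# aggregates only model parameters, and no raw records ever leave a site.
import numpy as np

def local_update(global_weights, X_site, y_site, lr=0.1, epochs=5):
    """One site's logistic-regression gradient steps starting from the global weights."""
    w = global_weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X_site @ w))
        w -= lr * X_site.T @ (p - y_site) / len(y_site)
    return w

def federated_averaging(sites, rounds=10, n_features=10):
    w_global = np.zeros(n_features)
    for _ in range(rounds):
        updates, sizes = [], []
        for X_site, y_site in sites:                       # each hospital trains locally
            updates.append(local_update(w_global, X_site, y_site))
            sizes.append(len(y_site))
        w_global = np.average(updates, axis=0, weights=sizes)   # server-side aggregation
    return w_global

# Three hypothetical sites holding private tabular data of different sizes.
rng = np.random.default_rng(1)
sites = [(rng.normal(size=(n, 10)), rng.integers(0, 2, n)) for n in (120, 80, 200)]
w = federated_averaging(sites)
print(w)
```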
The Chinese skeleton: insights into microstructure that help to explain the epidemiology of fracture (Cited by 13)
17
Authors: Elaine Cong, Marcella D Walker. Bone Research (SCIE, CAS), 2014, Issue 2, pp. 80-92, 13 pages
Osteoporotic fractures are a major public health problem worldwide, but incidence varies greatly across racial groups and geographic regions. Recent work suggests that the incidence of osteoporotic fracture is rising among Asian populations. Studies comparing areal bone mineral density and fracture across races generally indicate lower bone mineral density in Asian individuals including the Chinese, but this does not reflect their relatively low risk of non-vertebral fractures. In contrast, the Chinese have relatively high vertebral fracture rates similar to that of Caucasians. The paradoxically low risk for some types of fractures among the Chinese despite their low areal bone mineral density has been elucidated in part by recent advances in skeletal imaging. New techniques for assessing bone quality non-invasively demonstrate that the Chinese compensate for smaller bone size by differences in hip geometry and microstructural skeletal organization. Studies evaluating factors influencing racial differences in bone remodeling, as well as bone acquisition and loss, may further elucidate racial variation in bone microstructure. Advances in understanding the microstructure of the Chinese skeleton have not only helped to explain the epidemiology of fracture in the Chinese, but may also provide insight into the epidemiology of fracture in other races as well.
Keywords: bone; insights into microstructure that help to explain the epidemiology of fracture; The Chinese skeleton
Explainable Artificial Intelligence: A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review (Cited by 1)
18
Authors: Nilkanth Mukund Deshpande, Shilpa Gite +1 more: Biswajeet Pradhan, Mazen Ebraheem Assiri. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 12, pp. 843-872, 30 pages
Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today's machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested nonlinear structure, these strong models are termed "black boxes," as they provide no information about how they arrive at their conclusions. Such a lack of transparency may be unacceptable in many applications, such as the medical domain. A lot of emphasis has recently been paid to the development of methods for visualizing, explaining, and interpreting deep learning models. The situation is substantially different in safety-critical applications. The lack of transparency of machine learning techniques may be a limiting or even disqualifying issue in this case. Significantly, when single bad decisions can endanger human life and health (e.g., autonomous driving, medical domain) or result in significant monetary losses (e.g., algorithmic trading), depending on an unintelligible data-driven system may not be an option. This lack of transparency is one reason why machine learning in sectors like health is more cautious than in the consumer, e-commerce, or entertainment industries. Explainability is the term introduced in the preceding years. The AI model's black box nature will become explainable with these frameworks. Especially in the medical domain, diagnosing a particular disease through AI techniques would otherwise be less adapted for commercial use; these models' explainable nature will help them commercially in diagnosis decisions in the medical field. This paper explores the different frameworks for the explainability of AI models in the medical field. The available frameworks are compared on several parameters, and their suitability for medical fields is also discussed.
Keywords: medical imaging; explainability; artificial intelligence; XAI
Explainable AI Enabled Infant Mortality Prediction Based on Neonatal Sepsis (Cited by 1)
19
Authors: Priti Shaw, Kaustubh Pachpor, Suresh Sankaranarayanan. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 1, pp. 311-325, 15 pages
Neonatal sepsis is the third most common cause of neonatal mortality and a serious public health problem, especially in developing countries. There has been research on human sepsis, vaccine response, and immunity. Also, machine learning methodologies were used for predicting infant mortality based on certain features like age, birth weight, gestational weeks, and Appearance, Pulse, Grimace, Activity and Respiration (APGAR) score. Sepsis, which is considered the most determining condition towards infant mortality, has never been considered for mortality prediction. So, we have deployed a deep neural model, which is the state of the art, and performed a comparative analysis of machine learning models to predict the mortality among infants based on the most important features, including sepsis. Also, for assessing the prediction reliability of the deep neural model, which is a black box, Explainable AI models like Dalex and Lime have been deployed. This would help any non-technical personnel, like doctors and practitioners, to understand and accordingly make decisions.
Keywords: APGAR; sepsis; explainable AI; machine learning
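The Dalex step the abstract mentions amounts to wrapping a fitted black-box model in an explainer and decomposing individual predictions into feature contributions. The sketch below does this for an invented random-forest mortality model with placeholder features (birth weight, gestational weeks, APGAR, sepsis); it is not the study's model or data.

```python
# Sketch: explain one prediction of a black-box classifier with Dalex.
import numpy as np
import pandas as pd
import dalex as dx
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "birth_weight_g":    rng.normal(3100, 600, 500).round(),
    "gestational_weeks": rng.normal(38, 2.5, 500).round(1),
    "apgar_5min":        rng.integers(0, 11, 500),
    "sepsis":            rng.integers(0, 2, 500),
})
# Toy mortality label so the example is self-contained.
y = ((X["sepsis"] == 1) & (X["apgar_5min"] < 6)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = dx.Explainer(model, X, y, label="neonatal mortality RF")

# Local explanation for one newborn: which features push the risk up or down.
newborn = X.iloc[[0]]
breakdown = explainer.predict_parts(newborn, type="break_down")
print(breakdown.result[["variable", "contribution"]])
```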
XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly (Cited by 2)
20
Authors: Yuna Han, Hangbae Chang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 221-237, 17 pages
Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry. However, real-time training and classifying network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties preventing attacks. Additionally, existing semi-supervised learning research might need to analyze the experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomalous detection model that dynamically trains small subsets to these issues. First, this research introduces a deep neural network (DNN)-based GANomaly for semi-supervised learning. Second, this paper presents the proposed adaptive algorithm for the DNN-based GANomaly, which is validated with four subsets of the adaptive dataset. Finally, this study demonstrates a monitoring system that incorporates three explainable techniques (Shapley additive explanations, reconstruction error visualization, and t-distributed stochastic neighbor embedding) to respond effectively to attacks on traffic data at each stage: feature engineering, semi-supervised learning, and adaptive learning. Compared to other single-class classification techniques, the proposed DNN-based GANomaly achieves higher scores on the Network Security Laboratory-Knowledge Discovery in Databases and UNSW-NB15 datasets, by 13% and 8% in F1 scores and 4.17% and 11.51% in accuracy, respectively. Furthermore, experiments of the proposed adaptive learning reveal mostly improved results over the initial values. An analysis and monitoring system based on the combination of the three explainable methodologies is also described. Thus, the proposed method has potential advantages for application in practical industry, and future research will explore handling unbalanced real-time datasets in various scenarios.
Keywords: intrusion detection system (IDS); adaptive learning; semi-supervised learning; explainable artificial intelligence (XAI); monitoring system