Journal Articles
6 articles found
1. Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods
Authors: Wahidul Hasan Abir, Faria Rahman Khanam, Kazi Nabiul Alam, Myriam Hadjouni, Hela Elmannai, Sami Bourouis, Rajesh Dey, Mohammad Monirujjaman Khan. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 2, pp. 2151-2169 (19 pages).
Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another person in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Therefore, automated solutions employed by DL can be an efficient approach for detecting deepfakes. Though the "black-box" nature of DL systems allows for robust predictions, they cannot be completely trustworthy. Explainability is the first step toward achieving transparency, but the existing incapacity of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can solve this problem by interpreting the predictions of these systems. This work provides a comprehensive study of deepfake detection using DL methods and analyzes the result of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to assure its validity and reliability. This study identifies real and deepfake images using different Convolutional Neural Network (CNN) models to get the best accuracy. It also explains which part of the image caused the model to make a specific classification using the LIME algorithm. To apply the CNN models, the dataset is taken from Kaggle, which includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces generated by StyleGAN at a size of 256 px. For the experiments, Jupyter Notebook, TensorFlow, NumPy, and Pandas were used as software, and InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 were used as CNN models. All of these models performed well: InceptionV3 gained 99.68% accuracy, ResNet152V2 got an accuracy of 99.19%, and DenseNet201 performed with 99.81% accuracy. However, InceptionResNetV2 achieved the highest accuracy of 99.87%, which was verified later with the LIME algorithm for XAI, where the proposed method performed the best. The obtained results and dependability demonstrate its preference for detecting deepfake images effectively.
Keywords: deepfake, deep learning, explainable artificial intelligence (XAI), convolutional neural network (CNN), local interpretable model-agnostic explanations (LIME)
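The LIME step described in this abstract follows the standard image-explanation workflow of the `lime` Python package applied to a Keras CNN. The sketch below is a minimal illustration of that workflow, not the authors' code: the InceptionResNetV2 model is built untrained for self-containment (in practice a fine-tuned real/fake classifier would be loaded), and the input image is a random placeholder.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Untrained stand-in for the fine-tuned real/fake classifier described in the abstract.
model = tf.keras.applications.InceptionResNetV2(weights=None, input_shape=(256, 256, 3), classes=2)

def predict_fn(images):
    # LIME passes a batch of perturbed images; scale them to the range the model expects.
    return model.predict(np.asarray(images, dtype=np.float32) / 255.0, verbose=0)

# Placeholder face image; replace with a real 256x256 crop from the Kaggle dataset.
image = np.random.randint(0, 256, size=(256, 256, 3)).astype(np.double)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn,
    top_labels=2,        # real and fake
    hide_color=0,
    num_samples=1000,    # number of perturbed copies of the image
)

# Highlight the superpixels that pushed the model toward its top prediction.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)
```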
2. IsomapVSG-LIME: A New Model-Agnostic Explanation Method
Authors: 向许, 于洪, 张晓霞, 王国胤. 《智能系统学报》 (CSCD, Peking University Core Journals), 2023, Issue 4, pp. 841-848 (8 pages).
To address the problem that the random perturbation sampling used by local interpretable model-agnostic explanations (LIME) produces explanations lacking local fidelity and stability, this paper proposes a new model-agnostic explanation method, IsomapVSG-LIME. The method replaces LIME's random perturbation sampling with isometric mapping virtual sample generation (IsomapVSG), a manifold-learning-based technique, to generate samples, and uses agglomerative hierarchical clustering to select representative samples from the virtual samples for training the explanation model. The paper also proposes a new metric for evaluating explanation stability, the features sequence stability index (FSSI), which resolves the problems of earlier metrics ignoring the ordering of features and ignoring explanation flips. Experimental results show that the proposed method outperforms the latest existing models in both stability and local fidelity.
Keywords: local interpretable model-agnostic explanations (LIME), machine learning, isometric mapping virtual sample generation, agglomerative hierarchical clustering, stability, local fidelity, random perturbation sampling, features sequence stability index (FSSI)
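The paper's IsomapVSG generator and FSSI metric are its own contributions and are not reproduced here. As a rough illustration of the surrounding idea, namely replacing raw random perturbations with a smaller set of representative samples chosen by agglomerative hierarchical clustering before fitting the local surrogate, the sketch below uses plain Gaussian perturbation as a stand-in for IsomapVSG; every name and parameter in it is an assumption.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import euclidean_distances

def cluster_based_surrogate(x, black_box_predict, n_candidates=500, n_clusters=50, scale=0.3):
    """Fit a local linear surrogate around instance x using cluster representatives.

    Gaussian perturbation is used here as a simple stand-in for the paper's
    IsomapVSG virtual-sample generator.
    """
    rng = np.random.default_rng(0)
    candidates = x + rng.normal(scale=scale, size=(n_candidates, x.shape[0]))

    # Agglomerative clustering, then keep the sample closest to each cluster centre.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(candidates)
    reps = np.vstack([
        candidates[labels == k][
            euclidean_distances(
                candidates[labels == k],
                candidates[labels == k].mean(axis=0, keepdims=True),
            ).argmin()
        ]
        for k in range(n_clusters)
    ])

    # Weight representatives by proximity to x and fit the linear explanation model.
    dist = np.linalg.norm(reps - x, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(reps, black_box_predict(reps), sample_weight=weights)
    return surrogate.coef_  # per-feature local importance

# Usage (hypothetical): coef = cluster_based_surrogate(x_row, lambda X: clf.predict_proba(X)[:, 1])
```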
3. Interpretability Analysis of a Sepsis Prediction Model Using LIME (Cited by: 6)
Authors: 黄艺龙, 秦小林, 陈芋文, 张力戈, 易斌. 《计算机应用》 (CSCD, Peking University Core Journals), 2021, Issue S01, pp. 332-335 (4 pages).
To address the low prediction accuracy and insufficient interpretability of machine learning applied to sepsis prediction, LIME is used to perform an interpretability analysis of a machine-learning-based sepsis prediction model. The model consists of a prediction part and an explanation part. The prediction part uses XGBoost and linear regression (LR): XGBoost first performs feature extraction, and LR then classifies the extracted features. The explanation part uses LIME to extract the key predictive indicators and explain the model. Experimental results show that the XGBoost+LR model predicts sepsis with an accuracy of 99% and an area under the receiver operating characteristic curve (AUROC) of 0.984, outperforming XGBoost alone (accuracy 95%, AUROC 0.953), LR alone (accuracy 53%, AUROC 0.556), and LightGBM (accuracy 90%, AUROC 0.974). At the same time, LIME effectively extracts the ten most important indicators, providing an interpretability analysis of the sepsis prediction model and improving its credibility.
Keywords: sepsis, machine learning, XGBoost, model interpretability, LIME
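"XGBoost for feature extraction, then LR for classification" is most commonly implemented as the leaf-encoding pipeline sketched below: each tree's leaf index becomes a categorical feature, one-hot encoded and fed to a logistic-regression classifier (logistic regression is assumed here as the reading of "LR" used for classification), with LIME explaining the end-to-end pipeline on the original tabular features. Data, feature names, and hyperparameters are synthetic placeholders, not the paper's clinical variables.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = [f"vital_{i}" for i in range(10)]           # illustrative names only
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Step 1: XGBoost as a feature extractor -- each tree maps a sample to a leaf index.
xgb = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss").fit(X, y)
leaves = xgb.apply(X)                                        # shape: (n_samples, n_trees)

# Step 2: one-hot encode leaf indices and classify with logistic regression.
enc = OneHotEncoder(handle_unknown="ignore").fit(leaves)
lr = LogisticRegression(max_iter=1000).fit(enc.transform(leaves), y)

def predict_proba(X_raw):
    # End-to-end pipeline on raw features, as LIME requires.
    return lr.predict_proba(enc.transform(xgb.apply(X_raw)))

# Step 3: explain one prediction on the original tabular features.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no sepsis", "sepsis"], mode="classification")
exp = explainer.explain_instance(X[0], predict_proba, num_features=10)
print(exp.as_list())                                         # top indicators with local weights
```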
4. An Interpretable OFDM Radar Signal Recognition Method (Cited by: 2)
Authors: 葛鹏, 张文强, 金炜东, 郭建, 何贤坤. 《太赫兹科学与电子信息学报》 (Peking University Core Journals), 2020, Issue 2, pp. 228-234 (7 pages).
To address the problems of existing orthogonal frequency-division multiplexing (OFDM) radar signal recognition methods, an interpretable OFDM radar signal recognition method is proposed. The method combines tree-based pipeline optimization (TPOT) with local interpretable model-agnostic explanations (LIME) to recognize OFDM radar signals. Based on the characteristics of OFDM radar signals, complexity features and the singular value entropy of the time-frequency matrix are extracted and combined into a feature vector; TPOT is then used to obtain the best-performing machine-learning pipeline; and an "explainer" interprets the predictions, providing a risk assessment of whether each recognition result is correct, while the explanations also reveal which OFDM radar signals are hard to distinguish. Experiments show that the method achieves a recognition rate of 91% for OFDM radar signals at a signal-to-noise ratio of 0 dB, and the explanations produced by LIME can be used to identify the radar signal types in the dataset that are difficult to distinguish.
Keywords: OFDM radar signal, machine learning, singular value entropy, pipeline optimization, local interpretable explanations
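Of the two feature families named in this abstract, the singular value entropy of the time-frequency matrix is a concrete quantity that can be sketched directly. The formulation below (STFT magnitude matrix, singular values normalised into a distribution, Shannon entropy of that distribution) is one common version; the paper's exact time-frequency transform and normalisation are not specified here, and the test signal is purely illustrative.

```python
import numpy as np
from scipy.signal import stft

def singular_value_entropy(signal, fs=1.0, nperseg=64):
    """Singular value entropy of a signal's time-frequency matrix.

    One common formulation: take the STFT magnitude matrix, compute its
    singular values, normalise them into a probability distribution, and
    return the Shannon entropy of that distribution.
    """
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    s = np.linalg.svd(np.abs(Z), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                         # avoid log(0)
    return float(-(p * np.log(p)).sum())

# Example: an OFDM-like sum of subcarriers plus noise (illustrative only).
t = np.arange(4096) / 4096
x = sum(np.cos(2 * np.pi * f * t) for f in (200, 240, 280, 320))
x += np.random.normal(scale=1.0, size=t.size)
print(singular_value_entropy(x))
```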
5. LIME-Based Adversarial Example Generation for Malware
Authors: 黄天波, 李成扬, 刘永志, 李燈辉, 文伟平. 《北京航空航天大学学报》 (EI, CAS, CSCD, Peking University Core Journals), 2022, Issue 2, pp. 331-338 (8 pages).
Building on research into and analysis of machine-learning-based malware detection, a black-box adversarial example generation method based on local interpretable model-agnostic explanations (LIME) is proposed for generating adversarial examples against machine-learning models. The method can generate adversarial examples for any black-box malware classifier and evade machine-learning detection. A simple model is used to mimic the local behaviour of the target classifier and obtain feature weights; a perturbation algorithm then generates perturbations, and the original malware is modified according to these perturbations to produce adversarial examples. The method was evaluated on the common malware dataset released by Microsoft in 2015 together with benign samples collected from more than 50 vendors. With reference to common malware classifiers, 18 target classifiers based on different algorithms or features were implemented, and attacking them with the proposed method reduced the true positive rate of every classifier to nearly 0. In addition, the proposed method was compared with two state-of-the-art black-box adversarial example generation methods, MalGAN and ZOO. The results show that the proposed method generates adversarial examples effectively and offers the advantages of broad applicability, flexible control of perturbations, and soundness.
Keywords: adversarial examples, malware, machine learning, local interpretable model-agnostic explanations (LIME), target classifier
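The paper's own perturbation algorithm is not reproduced here. The sketch below only illustrates the general LIME-style evasion loop the abstract describes: fit a simple local surrogate around a malware feature vector, read off its feature weights, and greedily flip the features that most reduce the predicted malicious probability. It assumes a binary feature representation (e.g. imported-API flags) and a queryable black-box scoring function; all names, thresholds, and parameters are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_evasion(x, black_box_malicious_prob, max_flips=20, n_samples=500, seed=0):
    """Greedy evasion sketch for a binary malware feature vector.

    Only 0 -> 1 flips are applied, mirroring the common constraint that adding
    features (e.g. extra imports) is less likely to break functionality.
    """
    rng = np.random.default_rng(seed)
    x = x.astype(float).copy()
    for _ in range(max_flips):
        # Randomly toggle a few features around x and query the black box.
        masks = rng.random((n_samples, x.size)) < 0.05
        samples = np.abs(x - masks.astype(float))
        surrogate = Ridge(alpha=1.0).fit(samples, black_box_malicious_prob(samples))
        # Candidate flips: features currently 0 whose local weight is most negative.
        candidates = np.where((x == 0) & (surrogate.coef_ < 0))[0]
        if candidates.size == 0:
            break
        x[candidates[np.argmin(surrogate.coef_[candidates])]] = 1.0
        if black_box_malicious_prob(x[None, :])[0] < 0.5:
            break                        # classifier now reports benign
    return x
```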
6. Examining the characteristics between time and distance gaps of secondary crashes
Authors: Xinyuan Liu, Jinjun Tang, Chen Yuan, Fan Gao, Xizhi Ding. Transportation Safety and Environment (EI), 2024, Issue 1, pp. 116-131 (16 pages).
Understanding the characteristics of time and distance gaps between the primary crash (PC) and secondary crashes (SC) is crucial for preventing SC occurrences and improving road safety. Although previous studies have tried to analyse the variation of gaps, there is limited evidence quantifying the relationships between different gaps and various influential factors. This study proposed a two-layer stacking framework to examine the time and distance gaps. Specifically, the framework took random forests (RF), gradient boosting decision tree (GBDT) and eXtreme gradient boosting as the base classifiers in the first layer and applied logistic regression (LR) as a combiner in the second layer. On this basis, the local interpretable model-agnostic explanations (LIME) technique was used to interpret the output of the stacking model from both local and global perspectives. Through SC identification and feature selection, 346 SCs and 22 crash-related factors were collected from California interstate freeways. The results showed that the stacking model outperformed the base models as evaluated by accuracy, precision and recall indicators. The explanations based on LIME suggest that collision type, distance, speed and volume are the critical features that affect the time and distance gaps. Higher volume can prolong queue length and increase the distance gap from the SCs to PCs, and collision type, peak periods, workdays, truck involvement and tow-away crashes are likely to induce a long distance gap. Conversely, there is a shorter distance gap when secondary roads run in the same direction and are close to the primary roads. Lower speed is a significant factor resulting in a long time gap, while higher speed is correlated with a short time gap. These results are expected to provide insights into how contributory features affect the time and distance gaps and help decision-makers develop accurate decisions to prevent SCs.
Keywords: secondary crash (SC), time and distance gaps, stacking framework, local interpretable model-agnostic explanations (LIME)
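The two-layer stacking framework described above (RF, GBDT and XGBoost as base classifiers, logistic regression as the combiner, LIME for local explanation) maps naturally onto scikit-learn's `StackingClassifier`. The sketch below is an illustrative reconstruction on synthetic data; the feature names, class labels and hyperparameters are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["collision_type", "distance", "speed", "volume"]   # illustrative subset of the 22 factors
X = rng.normal(size=(500, 4))
y = (X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # e.g. long vs short gap

# First layer: RF, GBDT, XGBoost; second layer: logistic regression as the combiner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbdt", GradientBoostingClassifier(random_state=0)),
        ("xgb", XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X, y)

# Local explanation of a single crash record with LIME.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["short gap", "long gap"], mode="classification")
print(explainer.explain_instance(X[0], stack.predict_proba, num_features=4).as_list())
```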