
Insights into Manipulation: Unveiling Tampered Images Using Modified ELA, Deep Learning, and Explainable AI

Abstract: Digital image forgery (DIF) is a prevalent issue in the modern age, where malicious actors manipulate images for various purposes, including deception and misinformation. Detecting such forgeries is critical for maintaining the integrity of digital content. This thesis explores the use of Modified Error Level Analysis (ELA) in combination with a Convolutional Neural Network (CNN) and a Feedforward Neural Network (FNN) to detect digital image forgeries. Additionally, Explainable Artificial Intelligence (XAI) is incorporated to provide insight into the models' decision-making process. The study trains and tests the models on the CASIA2 dataset, which contains both authentic and forged images. Both the CNN and FNN models are trained and evaluated, and SHapley Additive exPlanations (SHAP) is used to explain each model's predictions. The results reveal that the proposed approach using the CNN model is the more effective of the two at detecting image forgeries and provides valuable explanations for decision interpretability.
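Error Level Analysis, the preprocessing step named in the abstract, works by recompressing a JPEG at a known quality and measuring the per-pixel difference against the original; regions with a different compression history tend to stand out. The paper's modified variant is not detailed on this page, so the following is only a minimal sketch of standard ELA using Pillow:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(image: Image.Image, quality: int = 90) -> Image.Image:
    """Recompress `image` as JPEG at a fixed quality and return the
    per-pixel absolute difference. Tampered regions often exhibit
    distinct error levels after recompression."""
    original = image.convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    # Absolute difference per channel; brighter pixels = higher error level.
    return ImageChops.difference(original, recompressed)
```

The resulting difference image (often contrast-enhanced first) is what would be fed to the CNN or FNN as input features rather than the raw photograph.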
Authors: Md. Mehedi Hasan; Md. Masud Rana; Abu Sayed Md. Mostafizur Rahaman (Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka, Bangladesh; Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh)
Source: Journal of Computer and Communications, 2024, No. 6, pp. 135-151 (17 pages)
Keywords: IFD; DIF; ELA; CNN; FNN; XAI; SHAP; CASIA2.0