Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R193), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; Taif University Researchers Supporting Project (TURSP-2020/26), Taif University, Taif, Saudi Arabia.
Abstract: Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another in pictures or recorded videos. Although visual media manipulation is not new, the introduction of deepfakes has marked a breakthrough in the creation of fake media and information, and these manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technology, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Automated DL-based solutions can therefore be an efficient approach for detecting deepfakes. Although the "black-box" nature of DL systems allows for robust predictions, those predictions cannot be completely trusted. Explainability is the first step toward achieving transparency, and the current inability of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can address this problem by interpreting the predictions of such systems. This work provides a comprehensive study of deepfake detection using DL methods and analyzes the results of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to verify its validity and reliability. The study distinguishes real from deepfake images using several Convolutional Neural Network (CNN) models to find the most accurate one, and it uses the LIME algorithm to explain which parts of an image led a model to a specific classification. The dataset is taken from Kaggle and includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces generated by StyleGAN, all 256 px in size. The experiments used Jupyter Notebook, TensorFlow, NumPy, and Pandas as software, with InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 as the CNN models. All of these models performed well: InceptionV3 reached 99.68% accuracy, ResNet152V2 99.19%, and DenseNet201 99.81%. InceptionResNetV2 achieved the highest accuracy, 99.87%, and was subsequently verified with the LIME algorithm for XAI, where the proposed method performed best. The obtained results and their dependability demonstrate the method's suitability for detecting deepfake images effectively.
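As a rough illustration of the pipeline this abstract describes (a pretrained CNN backbone with a binary real/fake head, probed with LIME), consider the sketch below. The abstract fixes the backbone (InceptionResNetV2), the 256 px input size, and the LIME image explainer; the classification head, training settings, probability wrapper, and placeholder image are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

IMG_SIZE = 256  # the Kaggle dataset images are 256 px

# Pretrained backbone with a small binary head (real vs. deepfake).
base = tf.keras.applications.InceptionResNetV2(
    include_top=False,
    weights="imagenet",
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
)
base.trainable = False  # freeze ImageNet weights for initial training
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),   # head size is an assumption
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output: P(fake)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# LIME expects per-class probabilities, so wrap the single sigmoid output.
def predict_fn(batch):
    p = model.predict(batch, verbose=0)  # shape (n, 1): P(fake)
    return np.hstack([1.0 - p, p])       # columns: [P(real), P(fake)]

image = np.random.rand(IMG_SIZE, IMG_SIZE, 3)  # placeholder for one face image

# Explain a single prediction: which superpixels pushed the model's verdict?
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=1000,
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5,
)
overlay = mark_boundaries(img, mask)  # highlights the regions behind the verdict
```

With real data, one would fine-tune on the 140k Kaggle face images before explaining predictions; the `mark_boundaries` overlay is what makes it possible to see which facial regions drove a "fake" classification.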
Funding: This research was funded in part by the Innovation-Driven Project of Central South University (Grant No. 2020CX041) and the Fundamental Research Funds for the Central Universities of Central South University (Grant No. 2022ZZTS0717).
Abstract: Understanding the characteristics of the time and distance gaps between primary crashes (PCs) and secondary crashes (SCs) is crucial for preventing SC occurrences and improving road safety. Although previous studies have tried to analyse how these gaps vary, there is limited evidence quantifying the relationships between the gaps and various influential factors. This study proposes a two-layer stacking framework to model the time and distance gaps. Specifically, the framework takes random forests (RF), gradient boosting decision trees (GBDT), and eXtreme gradient boosting (XGBoost) as the base classifiers in the first layer and applies logistic regression (LR) as the combiner in the second layer. On this basis, the Local Interpretable Model-Agnostic Explanations (LIME) technique is used to interpret the output of the stacking model from both local and global perspectives. Through SC identification and feature selection, 346 SCs and 22 crash-related factors were collected from California interstate freeways. The results show that the stacking model outperforms the base models as evaluated by the accuracy, precision, and recall indicators. The LIME-based explanations suggest that collision type, distance, speed, and volume are the critical features affecting the time and distance gaps. Higher volume can prolong queue length and increase the distance gap between SCs and PCs, and collision type, peak periods, workdays, truck involvement, and tow-away outcomes are likely to induce a longer distance gap. Conversely, the distance gap is shorter when secondary crashes occur in the same direction as, and close to, the primary crashes. Lower speed is a significant factor producing a longer time gap, while higher speed is correlated with a shorter time gap. These results are expected to provide insight into how contributory features affect the time and distance gaps and to help decision-makers make accurate decisions to prevent SCs.
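A minimal sketch of the two-layer stacking framework described above, using scikit-learn's StackingClassifier, might look as follows. The base learners (RF, GBDT, XGBoost) and the LR combiner come from the abstract; the synthetic placeholder data, the binary long/short gap labelling, and all hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import (
    RandomForestClassifier,
    GradientBoostingClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier  # assumes the xgboost package is installed
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data standing in for the 346 SC records and 22 crash factors.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(346, 22))
y_train = rng.integers(0, 2, size=346)  # assumed binary long/short gap label
feature_names = [f"factor_{i}" for i in range(22)]

# First layer: three base classifiers. Second layer: LR combiner trained on
# out-of-fold predictions from the base layer (cv=5).
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbdt", GradientBoostingClassifier(random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)

# Local explanation of one stacked prediction, as LIME does in the study.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["short gap", "long gap"],
    mode="classification",
)
exp = explainer.explain_instance(
    X_train[0], stack.predict_proba, num_features=10,
)
print(exp.as_list())  # per-feature contributions for this crash record
```

The global picture would then come from aggregating many such local explanations across records, which is roughly how a study like this reads off the critical features (collision type, distance, speed, and volume).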