Funding: This work was supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean Government (MSIT) (No. 2022-0-00089, Development of Clustering and Analysis Technology to Identify Cyber-Attack Groups Based on Life-Cycle), and by MSIP (Ministry of Science, ICT & Future Planning), Korea, under the National Program for Excellence in SW (2019-0-01834), supervised by the IITP.
Abstract: Artificial Intelligence (AI) technology has been extensively researched in various fields, including malware detection. AI models must be trustworthy before AI systems can be introduced into critical decision-making and resource-protection roles. Robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated by traditional indicators such as accuracy and recall; additional indicators are necessary to evaluate robustness against adversarial attacks. In this paper, a Sophisticated Adversarial Robustness Score (SARS) is proposed for evaluating the robustness of AI models. SARS uses various factors, in addition to the ratio of perturbed features and the size of the perturbation, to evaluate robustness accurately; it thus reflects aspects that are difficult to capture with traditional indicators. Moreover, the level of robustness can be evaluated by considering the difficulty of generating adversarial samples through adversarial attacks. This paper proposes using SARS, calculated from adversarial attacks, to identify data groups with robustness vulnerabilities and to improve robustness through adversarial training. SARS makes it possible to evaluate the level of robustness, which can help developers identify areas for improvement. To validate the proposed method, experiments were conducted on a malware dataset. Through adversarial training, SARS increased by 70.59%, and the recall reduction rate improved by 64.96%. SARS can thus be used to evaluate whether an AI model is vulnerable to adversarial attacks and to identify vulnerable data types, and improved models can be achieved by increasing resistance to adversarial attacks via methods such as adversarial training.
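The abstract names the ingredients of the score but not its formula. The following minimal Python sketch is therefore only an assumption of how a per-sample robustness score might combine the stated factors (ratio of perturbed features, perturbation size, and the difficulty of generating the adversarial sample); the function name, the equal weighting, and the query-budget proxy for difficulty are hypothetical, not the published SARS.

```python
# Illustrative sketch only: the exact SARS formula is not given in the
# abstract, so this combines the factors it names in a hypothetical way.
import numpy as np

def robustness_score(x, x_adv, attack_queries, max_queries=1000):
    """Toy robustness score for one adversarial sample.

    Higher scores mean the attack had to change more of the input and
    work harder, i.e. the model was more robust on this sample.
    All weightings below are assumptions, not the published SARS.
    """
    x = np.asarray(x, dtype=float)
    x_adv = np.asarray(x_adv, dtype=float)

    # Factor 1: fraction of features the attack had to perturb.
    perturbed_ratio = np.mean(x != x_adv)

    # Factor 2: size of the perturbation, normalized against the input.
    perturbation_size = np.linalg.norm(x_adv - x) / (np.linalg.norm(x) + 1e-12)

    # Factor 3: difficulty of generating the sample, proxied here by the
    # number of attack queries spent relative to a fixed budget.
    difficulty = min(attack_queries / max_queries, 1.0)

    # Hypothetical aggregation: equal-weight mean of the three factors.
    return (perturbed_ratio + min(perturbation_size, 1.0) + difficulty) / 3.0

# Example: score a batch of (original, adversarial, query-count) triples;
# low-scoring samples flag data groups that are vulnerable to attack.
scores = [robustness_score(x, xa, q) for x, xa, q in [
    (np.array([1.0, 0.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0, 0.0]), 120),
]]
print(scores)
```

Scoring each sample this way supports the workflow the abstract describes: samples (or data groups) with low scores are the ones where adversarial examples are cheap to generate, so they are natural candidates for targeted adversarial training.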
Funding: This work was supported by the National Science Foundation under Grant No. 2019609 and by the National Aeronautics and Space Administration under Grant No. 80NSSC21M0028.
Abstract: Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly argued that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain their fundamental concepts and to illustrate the potential of using provenance as a medium for achieving explainability in AI-based systems. We also discuss the patterns of recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
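The review does not prescribe an API for provenance-backed explainability, so the sketch below is only one hypothetical illustration of the core idea: attaching a lineage record (which model, which input, which training data, when) to each prediction so the outcome can later be traced and explained. All names and fields are assumptions for illustration.

```python
# Illustrative sketch only: a minimal provenance record attached to each
# prediction, so outcomes can later be traced back to their lineage.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal provenance for one prediction: what produced it, from what."""
    model_id: str          # which model (and version) made the prediction
    input_digest: str      # fingerprint of the input the model saw
    prediction: str        # the outcome to be explained
    training_data_id: str  # which dataset/version the model was trained on
    timestamp: str         # when the prediction was made

def predict_with_provenance(model_id, training_data_id, x, predict_fn):
    """Wrap any predict function so each outcome carries its lineage."""
    y = predict_fn(x)
    record = ProvenanceRecord(
        model_id=model_id,
        input_digest=str(hash(tuple(x))),
        prediction=str(y),
        training_data_id=training_data_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return y, record

# Example: a stand-in classifier; the record can be logged and queried
# later to help explain how this particular outcome was produced.
y, rec = predict_with_provenance(
    "dnn-v1.3", "training-corpus-2023-09",
    [0.2, 0.7, 0.1], lambda x: int(max(x) > 0.5),
)
print(json.dumps(asdict(rec), indent=2))
```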