Journal Articles
4 articles found
1. Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments
Authors: Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee — Computers, Materials & Continua (SCIE, EI), 2023, Issue 8, pp. 1701-1719 (19 pages)
Cybersecurity increasingly relies on machine learning (ML) models to respond to and detect attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental for effectively managing deployed models, yet detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a drift suspicion metric that captures explanatory aspects absent from current approaches. To validate the effectiveness of the approach in a real-world scenario, we applied it to an environment designed to detect domain generation algorithms (DGAs). The dataset was obtained from various types of DGAs provided by NetLab. Based on this dataset composition, we validated the proposed SHAP-based approach through drift scenarios that occur when a previously deployed model encounters new data types in an environment that detects real-world DGAs. The results revealed that more than 90% of the drift data exceeded the threshold, demonstrating the high reliability of the approach for detecting drift in an unsupervised environment. The proposed method distinguishes itself from existing approaches by employing explainable artificial intelligence (XAI)-based detection, which is not limited by model or system environment constraints. In conclusion, this paper proposes a novel approach to detecting drift in unsupervised ML settings for cybersecurity. The method employs SHAP-based XAI and a drift suspicion metric to improve drift detection reliability, and it is versatile enough for a wide range of real-time data analysis contexts beyond DGA detection environments. This study contributes to the ML community by addressing the critical issue of managing ML models in real-world cybersecurity settings. Because XAI-based detection is not tied to a particular model or system environment, the method can be applied in critical domains that require adaptation to continuous change, and it is anticipated to serve as a new approach to protecting essential systems and infrastructure from attacks.
Keywords: cybersecurity; machine learning (ML); model life-cycle management; drift detection; unsupervised environments; Shapley additive explanations (SHAP); explainability
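To make the drift-suspicion idea concrete, the following is a minimal Python sketch of one way such a score could be computed, not the paper's exact metric: per-feature SHAP attributions from a reference window define a baseline, and incoming samples are scored by how far their attributions deviate from it. The z-score aggregation and the threshold value are illustrative assumptions.

```python
# Illustrative sketch only: score incoming samples by how far their SHAP
# attributions deviate from a reference window (not the paper's exact metric).
import numpy as np

def fit_reference(ref_attributions: np.ndarray):
    """ref_attributions: (n_samples, n_features) SHAP values computed on data
    seen around deployment time, e.g. via shap.TreeExplainer for tree models."""
    mu = ref_attributions.mean(axis=0)
    sigma = ref_attributions.std(axis=0) + 1e-12  # guard against zero variance
    return mu, sigma

def drift_suspicion(new_attributions: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Per-sample suspicion score: mean absolute z-score of the sample's
    attribution vector relative to the reference window."""
    z = np.abs((new_attributions - mu) / sigma)
    return z.mean(axis=1)

def drifted_fraction(scores: np.ndarray, threshold: float = 3.0) -> float:
    """Fraction of incoming samples whose suspicion score exceeds the threshold,
    analogous to the ">90% of drift data exceeded the threshold" check above."""
    return float((scores > threshold).mean())
```

Because both windows are explained by the same deployed model, no labels are needed to compute the score, which is what makes a scheme of this kind usable in an unsupervised setting.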
2. Adversarial Attack-Based Robustness Evaluation for Trustworthy AI
Authors: Eungyu Lee, Yongsoo Lee, Taejin Lee — Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 1919-1935 (17 pages)
Artificial intelligence (AI) technology has been extensively researched in various fields, including malware detection. AI models must be trustworthy before AI systems can be introduced into critical decision-making and resource protection roles, and robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated with traditional indicators such as accuracy and recall; additional indicators are necessary to evaluate robustness against adversarial attacks. In this paper, a Sophisticated Adversarial Robustness Score (SARS) is proposed for AI model robustness evaluation. SARS uses various factors in addition to the ratio of perturbed features and the size of the perturbation to evaluate robustness accurately, reflecting aspects that are difficult to capture with traditional evaluation indicators. Moreover, the level of robustness can be evaluated by considering the difficulty of generating adversarial samples through adversarial attacks. This paper proposes using SARS, calculated from adversarial attacks, to identify data groups with robustness vulnerabilities and to improve robustness through adversarial training. SARS makes it possible to evaluate the level of robustness, which can help developers identify areas for improvement. To validate the proposed method, experiments were conducted using a malware dataset. Through adversarial training, SARS increased by 70.59% and the recall reduction rate improved by 64.96%. SARS thus makes it possible to evaluate whether an AI model is vulnerable to adversarial attacks and to identify vulnerable data types, and improved models can be obtained by strengthening resistance to adversarial attacks via methods such as adversarial training.
Keywords: AI; robustness; adversarial attack; adversarial robustness; robustness indicator; trustworthy AI
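As an illustration of the kind of per-sample score the abstract describes, here is a hedged sketch that combines two of the factors it names (the ratio of perturbed features and the perturbation size). The weighting and normalization below are assumptions for demonstration and do not reproduce the actual SARS formula.

```python
# Hedged sketch of a SARS-like per-sample robustness score; the weights and
# normalization are illustrative assumptions, not the published formula.
import numpy as np

def robustness_score(x_orig: np.ndarray, x_adv: np.ndarray,
                     attack_succeeded: bool,
                     w_ratio: float = 0.5, w_size: float = 0.5) -> float:
    """Higher = harder to attack. x_orig/x_adv are (n_features,) vectors for one
    sample; attack_succeeded says whether the adversarial example flipped the label."""
    if not attack_succeeded:
        return 1.0  # no successful attack found within the attack budget
    delta = np.abs(x_adv - x_orig)
    perturbed_ratio = float((delta > 1e-9).mean())                       # share of features touched
    rel_size = float(np.linalg.norm(delta) / (np.linalg.norm(x_orig) + 1e-12))
    # Needing many features and large perturbations implies higher robustness.
    return min(1.0, w_ratio * perturbed_ratio + w_size * rel_size)

def group_robustness(scores: list[float]) -> float:
    """Averaging per-sample scores over a data group flags robustness-vulnerable
    groups (low averages) that could be prioritized for adversarial training."""
    return float(np.mean(scores))
```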
3. Efficient Explanation and Evaluation Methodology Based on Hybrid Feature Dropout
Authors: Jingang Kim, Suengbum Lim, Taejin Lee — Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 10, pp. 471-490 (20 pages)
AI-related research is conducted in various ways, but the reliability of AI prediction results is currently insufficient, so expert judgment remains indispensable for tasks that require critical decision-making. Explainable AI (XAI) is studied to improve the reliability of AI; however, different XAI methodologies produce different results on the same dataset and the same model, which means that XAI results must still be interpreted and a considerable amount of noise emerges. This paper proposes a Hybrid Feature Dropout (HFD)-based XAI and evaluation methodology. The proposed XAI methodology mitigates shortcomings such as incorrect feature weights and impractical feature selection. Because few XAI evaluation methods exist, this paper also proposes four evaluation criteria that give practical meaning to XAI results. In verification with a malware dataset (Data Challenge 2019), the proposed approach achieved better results than other XAI methodologies on the four evaluation criteria. Since the efficiency of interpretation is verified against a reasonable XAI evaluation standard, the practicality of the XAI methodology is improved; this in turn demonstrates the usefulness of XAI for enhancing the reliability of AI and helps apply AI results to essential tasks that require expert decision-making.
Keywords: explainable artificial intelligence; evaluation; hybrid feature dropout; deep learning; error detection
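The abstract does not spell out the HFD algorithm, so the sketch below is only a plausible reading of feature-dropout attribution: each feature is dropped singly and in small groups (the pairwise grouping and the baseline replacement value are assumptions, not the paper's definition), and the resulting change in the model's score is shared among the dropped features.

```python
# Illustrative feature-dropout attribution; the grouping scheme and baseline
# value are assumptions, not the paper's HFD definition.
import numpy as np
from itertools import combinations

def dropout_attribution(predict_proba, x, baseline: float = 0.0, group_size: int = 2) -> np.ndarray:
    """predict_proba: callable mapping an array of shape (1, n_features) to the
    probability of the target class; x: one sample as a 1-D array.
    Returns a per-feature importance estimate."""
    x = np.asarray(x, dtype=float)
    base = float(predict_proba(x[None, :])[0])
    n = x.shape[0]
    scores, counts = np.zeros(n), np.zeros(n)
    # single-feature dropouts plus small-group ("hybrid") dropouts
    groups = [(i,) for i in range(n)] + list(combinations(range(n), group_size))
    for g in groups:
        x_drop = x.copy()
        x_drop[list(g)] = baseline                      # drop by replacing with a baseline value
        effect = base - float(predict_proba(x_drop[None, :])[0])
        for i in g:
            scores[i] += effect / len(g)                # share the effect across the group
            counts[i] += 1
    return scores / np.maximum(counts, 1)
```

Note that pairwise groups require O(n^2) model calls, so with a large feature set the groups would in practice be sampled rather than enumerated exhaustively.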
4. Covariant open string field theory on multiple Dp-branes
Author: Taejin Lee — Chinese Physics C (SCIE, CAS, CSCD), 2018, Issue 11, pp. 45-57 (13 pages)
We study covariant open bosonic string field theories on multiple Dp-branes by using the deformed cubic string field theory, which is equivalent to string field theory in the proper-time gauge. Constructing the Fock space representations of the three-string vertex and the four-string vertex on multiple Dp-branes, we obtain the field-theoretical effective action in the zero-slope limit. On multiple D0-branes, the effective action reduces to the Banks-Fischler-Shenker-Susskind (BFSS) matrix model. We also discuss the relation between open string field theory on multiple D-instantons in the zero-slope limit and the Ishibashi-Kawai-Kitazawa-Tsuchiya (IKKT) matrix model. The covariant open string field theory on multiple Dp-branes could be useful for studying the non-perturbative properties of quantum field theories in (p+1) dimensions within the framework of string theory. The non-zero-slope corrections may be evaluated systematically by using covariant string field theory.
Keywords: open string; Dp-brane; covariant string field theory; Yang-Mills gauge theory; matrix model
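For readers unfamiliar with the matrix models mentioned, the bosonic part of the BFSS-type action to which the D0-brane effective action is said to reduce can be written schematically as below; conventions and normalization vary between references, and this is not quoted from the paper.

```latex
S_{\mathrm{BFSS}} \;\sim\; \frac{1}{2g}\int \mathrm{d}t \,
\mathrm{Tr}\!\left( D_t X^i \, D_t X^i
  + \tfrac{1}{2}\,[X^i, X^j][X^i, X^j] \right),
\qquad D_t X^i = \partial_t X^i - i\,[A_0, X^i],
```

where the X^i are N x N Hermitian matrices encoding the transverse positions of N D0-branes; the IKKT model is the analogous zero-dimensional reduction relevant to D-instantons.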