Abstract
Techniques such as code metrics, machine learning, and deep learning are commonly employed in source code vulnerability detection. However, these techniques have shortcomings: they cannot retain the syntactic and semantic information of the source code, and they require extensive expert knowledge to define vulnerability features. To address these problems, this paper proposes a source code vulnerability detection model based on BERT (bidirectional encoder representations from transformers). The model splits the source code under detection into multiple small samples, converts each sample into a form approximating natural language, extracts vulnerability features automatically with the BERT model, and then trains a high-performing vulnerability classifier to detect multiple types of vulnerabilities in Python code. Across various vulnerability types, the model achieves an average accuracy of 99.2%, precision of 97.2%, recall of 96.2%, and F1 score of 96.7%, a performance improvement of 2% to 14% over existing vulnerability detection methods. The experimental results show that the model is a general, lightweight, and scalable vulnerability detection method.
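The preprocessing the abstract describes (splitting source code into small samples, then flattening each sample into a near-natural-language token sequence for BERT) could be sketched as below. The function names and the function-level splitting granularity are illustrative assumptions, not the paper's exact procedure:

```python
import ast
import io
import tokenize

def split_into_samples(source: str) -> list[str]:
    """Split a Python module into small samples, here one per function
    definition (an assumed granularity for illustration)."""
    tree = ast.parse(source)
    return [ast.get_source_segment(source, node)
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

def to_token_sequence(sample: str) -> str:
    """Flatten a code sample into a space-separated token string,
    approximating the 'near natural language' form fed to the model."""
    skip = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER}
    toks = tokenize.generate_tokens(io.StringIO(sample).readline)
    return " ".join(t.string for t in toks if t.type not in skip)

module = '''
def run(cmd):
    import os
    os.system(cmd)

def add(a, b):
    return a + b
'''

# Each sample becomes one candidate sequence for the downstream classifier.
for sample in split_into_samples(module):
    print(to_token_sequence(sample))
```

In a full pipeline, each token sequence would then be encoded by a BERT tokenizer and passed to the fine-tuned classifier; this sketch covers only the sample-splitting and flattening steps.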
Authors
Luo Leqi; Zhang Yanshuo; Wang Zhiqiang; Wen Jin; Xue Peiyang (Beijing Electronic Science and Technology Institute, Beijing 100070)
Source
Journal of Information Security Research (《信息安全研究》)
Indexed in: CSCD; Peking University Core Journals
2024, Issue 4, pp. 294-301 (8 pages)
Funding
China Postdoctoral Science Foundation General Program (2019M650606)
Fundamental Research Funds for the Central Universities (328202203, 20230045Z0114)
First-Class Discipline Construction Project of Beijing Electronic Science and Technology Institute (3201012).