
Code Smell Detection Approach Based on Pre-training Model and Multi-level Information

Cited by: 2
Abstract: Most existing code smell detection approaches rely only on code structure information and heuristic rules, pay little attention to the semantic information embedded at different levels of code, and still leave room for improvement in detection accuracy. To address this problem, this study proposes DeepSmell, a code smell detection approach based on a pre-trained model and multi-level information. First, a static analysis tool is used to extract code smell instances and multi-level code metric information from the source program, and the instances are labeled. Second, the level information related to code smells is parsed and obtained from the source code through the abstract syntax tree, and the textual information it contains is combined with the metric information to generate data samples. Finally, the BERT pre-training model converts the textual information into word vectors, a GRU-LSTM model captures the latent semantic relationships among the levels of information, and a CNN model combined with an attention mechanism detects code smells. The experiments build training and test sets from 24 large real-world applications, including JUnit, Xalan, and SPECjbb2005, and detect four kinds of code smells: feature envy, long method, data class, and god class. The results show that, compared with existing detection methods, DeepSmell improves average recall and F1 by 9.3% and 10.44% respectively while maintaining high precision, demonstrating that DeepSmell can effectively detect code smells.
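The abstract describes combining a CNN with an attention mechanism to weigh code features before classification. As an illustration only (this is not the authors' implementation, and the vector dimensions and query vector are hypothetical), a minimal pure-Python sketch of scaled dot-product attention pooling over a sequence of token vectors might look like:

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(token_vecs, query):
    """Collapse a list of token vectors into one context vector,
    weighting each token by its scaled dot-product score against `query`."""
    d = len(query)
    scores = [sum(q * t for q, t in zip(query, vec)) / math.sqrt(d)
              for vec in token_vecs]
    weights = softmax(scores)
    # Weighted sum of the token vectors, component by component.
    return [sum(w * vec[i] for w, vec in zip(weights, token_vecs))
            for i in range(d)]

# Toy usage: two 2-dimensional token vectors, query aligned with the first.
pooled = attention_pool([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

In the paper's pipeline the token vectors would come from BERT and the pooled vector would feed the CNN classifier; here the vectors are toy values chosen only to show the mechanics.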
Authors: ZHANG Yang; DONG Chun-Hao; LIU Hui; GE Chu-Yan (School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China; School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China)
Source: Journal of Software (《软件学报》), indexed in EI and CSCD, Peking University Core, 2022, No. 5, pp. 1551-1568 (18 pages)
Funding: National Natural Science Foundation of China (62172037); Key Program of the Natural Science Foundation of Hebei Province (18960106D); Key Program of the Science Research Project of Hebei Higher Education Institutions (ZD2019093)
Keywords: code smell; deep learning; pre-trained model; abstract syntax tree; multi-level information