Abstract
With the advancement of artificial intelligence, deep learning has made significant progress in natural language processing (NLP). However, NLP models still contain security vulnerabilities. This article analyzes the current application of deep learning in three core NLP tasks (text representation, sequence modeling, and knowledge representation), surveys the attack techniques facing text generation, text classification, and semantic parsing, discusses the effectiveness and limitations of defense techniques such as adversarial training, regularization, and model distillation in practical use, and verifies the effectiveness of ensemble adversarial training through an empirical study on a text classification task.
Authors
Ma Tian; Zhang Guoliang; Guo Xiaojun (School of Information Engineering, Xizang Minzu University, Xianyang 712082)
Source
China-Arab States Science and Technology Forum (Chinese-English), 2024, Issue 1, pp. 98-102 (5 pages)
Funding
Natural Science Foundation of Xizang Autonomous Region (XZ202001ZR0022G).
Keywords
Natural language processing
Deep learning
Sequence modeling
Attack techniques
Defense techniques