期刊文献+

大语言模型应用的安全与隐私问题综述

A Survey on Security and Privacy Problems in Applications of Large Language Models
下载PDF
导出
摘要 随着人工智能软件和算力基础设施的快速发展,大语言模型(LLM)在自然语言处理方面进步显著,成为构建大型智能应用系统的新一代基础模块。大语言模型需要在海量文本数据集上进行训练,并依赖高性能神经网络处理器(NPUs)的算力,在各种真实场景中展现出模仿人类的语言处理和逻辑推理能力,包括代码生成、知识问答和检索推荐等。本文对大语言模型应用中的各种隐私和安全问题进行对比,归纳为三类安全场景,即训练推理、集成部署和检索增强。大语言模型应用面临的安全风险与传统应用系统存在本质差别,本文结合安全场景讨论了大语言模型应用的安全加固方法,并提供了未来的研究方向。 With the rapid development of artificial intelligence software and computing infrastructure,Large Language Models(LLMs)have made significant progress in natural language processing and become the new generation of basic modules for building large-scale intelligent application systems.LLMs need to be trained on massive text datasets and rely on high-performance Neural Network Processors(NPUs)for computing power,exhibiting human-like language processing and logical reasoning abilities in various real-world scenarios,including code generation for software developer,knowledge based question answering,and information retrieval recommendation.This paper compares various privacy and security issues in LLM applications and summarizes them into three security scenarios:training inference,integrated deployment,and retrieval enhancement.The security risks faced by LLM applications are fundamentally different from those of traditional application systems,and therefore this paper proposes security and privacy protection methods and future research directions in line with the security scenarios.
作者 王乔晨 吴振刚 刘虎 Wang Qiaochen;Wu Zhengang;Liu Hu(LaLink Services&Solutions Co.,Ltd.,Beijing,100071;CNBM Technology Co.,Ltd.,Beijing,100048)
出处 《工业信息安全》 2024年第5期40-45,共6页 Industry Information Security
关键词 大语言模型 大语言模型应用 安全 隐私 Large Language Models LLM Applications Security Privacy
  • 相关文献

参考文献5

二级参考文献10

共引文献4

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部