Abstract: This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies such as role emulation and manipulative assistance to simulate real threats. We then introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models), comprising a system prompt filter, a pre-processing filter that leverages a toxicity classifier and an ethical prompt generator, and a pre-display filter that uses the model itself to screen its output. Extensive testing on Meta’s Llama-2 model demonstrates that the framework blocks 100% of the attack prompts. The approach also auto-suggests safer prompt alternatives, further bolstering language model security. The quantitatively evaluated defense layers and the ethical substitution mechanism are the key innovations for countering sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
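The layered screening described in this abstract can be illustrated with a minimal sketch. The names below (GuardianPipeline, toxicity_score, rewrite_prompt, self_review) are hypothetical placeholders, and the keyword check in the first tier merely stands in for whatever system-prompt filter the paper actually uses; this is a sketch of the general tiered-guardrail pattern, not the authors' GUARDIAN implementation.

```python
# Sketch of a three-tier guardrail pipeline (illustrative names, not the paper's code).
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""
    safer_alternative: Optional[str] = None


class GuardianPipeline:
    """Tier 1: system prompt filter. Tier 2: pre-processing filter. Tier 3: pre-display filter."""

    def __init__(self,
                 toxicity_score: Callable[[str], float],   # e.g. a toxic-text classifier
                 rewrite_prompt: Callable[[str], str],      # stand-in for an ethical prompt generator
                 model_generate: Callable[[str], str],      # the protected LLM
                 self_review: Callable[[str], bool],        # the model screening its own output
                 toxicity_threshold: float = 0.5):
        self.toxicity_score = toxicity_score
        self.rewrite_prompt = rewrite_prompt
        self.model_generate = model_generate
        self.self_review = self_review
        self.toxicity_threshold = toxicity_threshold

    def run(self, system_prompt: str, user_prompt: str) -> GuardrailResult:
        # Tier 1: reject prompts that try to override the system role.
        if any(k in user_prompt.lower()
               for k in ("ignore previous instructions", "you are now")):
            return GuardrailResult(False, "system prompt override attempt")

        # Tier 2: toxicity screening plus a safer-prompt suggestion.
        if self.toxicity_score(user_prompt) > self.toxicity_threshold:
            return GuardrailResult(False, "toxic prompt",
                                   safer_alternative=self.rewrite_prompt(user_prompt))

        # Tier 3: the model screens its own draft before anything is displayed.
        draft = self.model_generate(system_prompt + "\n" + user_prompt)
        if not self.self_review(draft):
            return GuardrailResult(False, "output failed self-review")

        return GuardrailResult(True, "passed all tiers")
```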
Abstract: To address three problems in named entity recognition for the agricultural disease domain, namely underutilization of pre-trained language models, low utilization of injected external knowledge, and low recognition rates for nested named entities, this paper proposes CP-MRC (Continuous Prompts for Machine Reading Comprehension), a named entity recognition model based on continuous prompt injection and a pointer network. The model incorporates the pre-trained BERT (Bidirectional Encoder Representations from Transformers) model and freezes its original parameters to preserve the text representation capability acquired during pre-training; to improve adaptability to domain data, continuous trainable prompt vectors are inserted into each Transformer layer; to improve the accuracy of nested named entity recognition, a pointer network is used to extract entity spans. Comparative experiments were conducted on a self-built agricultural disease dataset containing 2,933 text passages, 8 entity types, and 10,414 entities in total. The results show that CP-MRC achieves precision, recall, and F1 scores of 83.55%, 81.4%, and 82.4%, respectively, outperforming the other models; on the two nested entity types, pathogen and crop, its F1 score is 3 and 13 percentage points higher than that of the other models, a clear improvement in nested entity recognition. The proposed model achieves good recognition performance with only a small number of trainable parameters, offering a practical approach for applying larger pre-trained models to information extraction tasks.
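The two core ideas of this abstract, a frozen BERT encoder with trainable continuous prompt vectors and pointer-style start/end heads for possibly nested entity spans, can be sketched as follows. The class name, sizes, and the simplified prepend-only prompt placement are assumptions for illustration; the paper's CP-MRC injects prompt vectors into every Transformer layer, which is not reproduced here.

```python
# Sketch: frozen BERT + trainable soft prompts + pointer heads for nested NER
# (illustrative simplification, not the paper's exact CP-MRC architecture).
import torch
import torch.nn as nn
from transformers import BertModel


class PromptPointerNER(nn.Module):
    def __init__(self, model_name: str = "bert-base-chinese",
                 num_entity_types: int = 8, prompt_len: int = 16):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        for p in self.bert.parameters():        # freeze all pre-trained weights
            p.requires_grad = False

        hidden = self.bert.config.hidden_size
        # Continuous (soft) prompt vectors: the only encoder-side trainable parameters here.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        # Pointer heads: per-token start/end logits for each entity type,
        # which allows overlapping (nested) spans to be decoded.
        self.start_head = nn.Linear(hidden, num_entity_types)
        self.end_head = nn.Linear(hidden, num_entity_types)

    def forward(self, input_ids, attention_mask):
        batch, prompt_len = input_ids.size(0), self.soft_prompt.size(0)
        tok_emb = self.bert.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, prompt_len, dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)

        hidden = self.bert(inputs_embeds=inputs_embeds,
                           attention_mask=mask).last_hidden_state
        hidden = hidden[:, prompt_len:]          # drop the prompt positions
        return self.start_head(hidden), self.end_head(hidden)
```

In use, only the soft prompt and the two pointer heads are trained; entity spans are decoded by pairing start and end positions of the same type whose logits exceed a threshold, which is what lets nested mentions such as a pathogen name inside a disease name be recovered.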