提出并实现了一个本地轻量化课程教学智能辅助系统.该系统利用IPEX-LLM(Intel PyTorch extention for large language model)加速库,在计算资源受限的设备上高效部署并运行经过QLoRA(quantum-logic optimized resource allocation)框架...提出并实现了一个本地轻量化课程教学智能辅助系统.该系统利用IPEX-LLM(Intel PyTorch extention for large language model)加速库,在计算资源受限的设备上高效部署并运行经过QLoRA(quantum-logic optimized resource allocation)框架微调的大语言模型,并结合增强检索技术,实现了智能问答、智能出题、教学大纲生成、教学演示文档生成等4个主要功能模块的课程灵活定制,在帮助教师提高教学备课和授课的质量与效率、保护数据隐私的同时,支撑学生个性化学习并提供实时反馈.在性能实验中,以集成优化后的Chatglm3-6B模型为例,该系统处理64-token输出任务时仅需4.08 s,验证了其在资源受限环境下快速推理的能力.在实践案例分析中,通过与原生Chatgml-6B和ChatGPT4.0在功能实现上的对比,进一步表明了该系统具备优越的准确性和实用性.展开更多
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, a...Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.展开更多
本研究旨在探讨基于大型语言模型(large language model,LLM)的人工智能技术在科技创新管理中的应用。通过分析LLM的潜力,揭示了其在科技创新领域中的关键作用,包括自然语言处理、知识管理和决策支持。关键词分析表明,LLM、科技创新管...本研究旨在探讨基于大型语言模型(large language model,LLM)的人工智能技术在科技创新管理中的应用。通过分析LLM的潜力,揭示了其在科技创新领域中的关键作用,包括自然语言处理、知识管理和决策支持。关键词分析表明,LLM、科技创新管理、人工智能技术、知识管理、决策支持是本研究的核心概念。本研究强调,LLM可以提供智能化的信息处理和决策支持,有助于加速科技创新过程、优化资源配置、降低风险,并提高创新管理的效率和质量。通过深入研究LLM在科技创新管理中的实际应用,本研究为科技领域的管理者和决策者提供了有力的工具和方法,以应对不断变化的市场需求和竞争环境。展开更多
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assist...This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.展开更多
Large Language Models(LLMs),such as ChatGPT and Bard,have revolutionized natural language understanding and generation.They possess deep language comprehension,human-like text generation capabilities,contextual awaren...Large Language Models(LLMs),such as ChatGPT and Bard,have revolutionized natural language understanding and generation.They possess deep language comprehension,human-like text generation capabilities,contextual awareness,and robust problem-solving skills,making them invaluable in various domains(e.g.,search engines,customer support,translation).In the meantime,LLMs have also gained traction in the security community,revealing security vulnerabilities and showcasing their potential in security-related tasks.This paper explores the intersection of LLMs with security and privacy.Specifically,we investigate how LLMs positively impact security and privacy,potential risks and threats associated with their use,and inherent vulnerabilities within LLMs.Through a comprehensive literature review,the paper categorizes the papers into‘‘The Good’’(beneficial LLM applications),‘‘The Bad’’(offensive applications),and‘‘The Ugly’’(vulnerabilities of LLMs and their defenses).We have some interesting findings.For example,LLMs have proven to enhance code security(code vulnerability detection)and data privacy(data confidentiality protection),outperforming traditional methods.However,they can also be harnessed for various attacks(particularly user-level attacks)due to their human-like reasoning abilities.We have identified areas that require further research efforts.For example,Research on model and parameter extraction attacks is limited and often theoretical,hindered by LLM parameter scale and confidentiality.Safe instruction tuning,a recent development,requires more exploration.We hope that our work can shed light on the LLMs’potential to both bolster and jeopardize cybersecurity.展开更多
文摘提出并实现了一个本地轻量化课程教学智能辅助系统.该系统利用IPEX-LLM(Intel PyTorch extention for large language model)加速库,在计算资源受限的设备上高效部署并运行经过QLoRA(quantum-logic optimized resource allocation)框架微调的大语言模型,并结合增强检索技术,实现了智能问答、智能出题、教学大纲生成、教学演示文档生成等4个主要功能模块的课程灵活定制,在帮助教师提高教学备课和授课的质量与效率、保护数据隐私的同时,支撑学生个性化学习并提供实时反馈.在性能实验中,以集成优化后的Chatglm3-6B模型为例,该系统处理64-token输出任务时仅需4.08 s,验证了其在资源受限环境下快速推理的能力.在实践案例分析中,通过与原生Chatgml-6B和ChatGPT4.0在功能实现上的对比,进一步表明了该系统具备优越的准确性和实用性.
文摘Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
文摘本研究旨在探讨基于大型语言模型(large language model,LLM)的人工智能技术在科技创新管理中的应用。通过分析LLM的潜力,揭示了其在科技创新领域中的关键作用,包括自然语言处理、知识管理和决策支持。关键词分析表明,LLM、科技创新管理、人工智能技术、知识管理、决策支持是本研究的核心概念。本研究强调,LLM可以提供智能化的信息处理和决策支持,有助于加速科技创新过程、优化资源配置、降低风险,并提高创新管理的效率和质量。通过深入研究LLM在科技创新管理中的实际应用,本研究为科技领域的管理者和决策者提供了有力的工具和方法,以应对不断变化的市场需求和竞争环境。
文摘This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
基金supported partly by the National Science Foundation award FMitF-2319242.
文摘Large Language Models(LLMs),such as ChatGPT and Bard,have revolutionized natural language understanding and generation.They possess deep language comprehension,human-like text generation capabilities,contextual awareness,and robust problem-solving skills,making them invaluable in various domains(e.g.,search engines,customer support,translation).In the meantime,LLMs have also gained traction in the security community,revealing security vulnerabilities and showcasing their potential in security-related tasks.This paper explores the intersection of LLMs with security and privacy.Specifically,we investigate how LLMs positively impact security and privacy,potential risks and threats associated with their use,and inherent vulnerabilities within LLMs.Through a comprehensive literature review,the paper categorizes the papers into‘‘The Good’’(beneficial LLM applications),‘‘The Bad’’(offensive applications),and‘‘The Ugly’’(vulnerabilities of LLMs and their defenses).We have some interesting findings.For example,LLMs have proven to enhance code security(code vulnerability detection)and data privacy(data confidentiality protection),outperforming traditional methods.However,they can also be harnessed for various attacks(particularly user-level attacks)due to their human-like reasoning abilities.We have identified areas that require further research efforts.For example,Research on model and parameter extraction attacks is limited and often theoretical,hindered by LLM parameter scale and confidentiality.Safe instruction tuning,a recent development,requires more exploration.We hope that our work can shed light on the LLMs’potential to both bolster and jeopardize cybersecurity.