Journal Articles
9 articles found
1. Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
Authors: Alicia Biju, Vishnupriya Ramesh, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, No. 5, pp. 340-358 (19 pages)
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
Keywords: Common Vulnerability Scoring System (CVSS); Large Language Models (LLMs); DALL-E; Prompt Injections; Training Data Poisoning; CVSS Metrics
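As an editorial illustration of the scoring arithmetic this paper builds on, the sketch below computes a standard CVSS v3.1 base score (scope unchanged) for a hypothetical prompt-injection vector. The metric weights follow the public CVSS v3.1 specification; the chosen vector values are assumptions, not figures from the paper, and the paper's own extended metrics are not reproduced here.

```python
import math

# Standard CVSS v3.1 metric weights (scope unchanged) from the public specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability impact

def roundup(x: float) -> float:
    """Simplified CVSS 'Roundup': smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Hypothetical prompt-injection vector: network attack, low complexity,
# no privileges, no user interaction, high integrity impact only.
print(base_score("N", "L", "N", "N", "N", "H", "N"))   # -> 7.5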
2. GUARDIAN: A Multi-Tiered Defense Architecture for Thwarting Prompt Injection Attacks on LLMs
Authors: Parijat Rai, Saumil Sood, Vijay K. Madisetti, Arshdeep Bahga. Journal of Software Engineering and Applications, 2024, No. 1, pp. 43-68 (26 pages)
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
Keywords: Large Language Models (LLMs); Adversarial Attack; Prompt Injection; Filter Defense; Artificial Intelligence; Machine Learning; Cybersecurity
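A minimal Python sketch of the layered-filter idea the abstract describes, with the toxicity classifier, ethical prompt generator, and output screen injected as placeholder callables; this is an illustration under those assumptions, not the authors' GUARDIAN implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Guardrails:
    system_prompt_filter: Callable[[str], bool]   # rejects prompts that target the system prompt
    toxicity_score: Callable[[str], float]        # pre-processing toxic classifier
    rewrite_ethically: Callable[[str], str]       # ethical prompt generator (safer alternative)
    output_screen: Callable[[str], bool]          # pre-display check, e.g. using the model itself
    toxicity_threshold: float = 0.5               # illustrative threshold, not from the paper

    def answer(self, prompt: str, llm: Callable[[str], str]) -> str:
        if not self.system_prompt_filter(prompt):
            return "Request blocked by system prompt filter."
        if self.toxicity_score(prompt) > self.toxicity_threshold:
            prompt = self.rewrite_ethically(prompt)   # ethical substitution step
        response = llm(prompt)
        if not self.output_screen(response):
            return "Response withheld by pre-display filter."
        return response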
3. Large Language Model-Assisted Augmented Reality Assembly Method
Authors: 鲍劲松, 李建军, 袁轶, 吕超凡, 王森. 《航空制造技术》 (CSCD; PKU Core Journal), 2024, No. 16, pp. 107-116 (10 pages)
Augmented reality (AR)-based assembly guidance systems overlay digital information onto the physical scene and effectively guide complex assembly tasks. However, a large gap remains between the operator and the physical world in the assembly environment: the information to be fused into the physical world must be prepared in advance and manually triggered during assembly. Real-time, ubiquitous prompting has therefore become a research focus for complex assembly in AR environments. This paper proposes a large language model (LLM)-assisted AR assembly method whose core idea is to treat the LLM as a second brain in the assembly process, providing ubiquitous assembly guidance and process-information prompts. First, an LLM-assisted AR assembly framework is established, and its elements and their relationships are analyzed. Second, a matching process-information model is constructed for the LLM environment. Then, an LLM-based assisted-guidance assembly method and workflow are presented. Finally, drawing on domain knowledge of cable assembly, a specialized question-answering system is developed to realize intelligent LLM-assisted guidance, raising the assembly pass rate by 15%; the effectiveness of the method is verified through multiple case studies.
Keywords: Augmented Reality; Large Language Models (LLMs); Assembly; Question-Answering System; Knowledge Graph
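A hedged sketch of how an LLM could serve as the "second brain" the abstract describes, turning a structured process-information record into an in-situ guidance prompt. The record fields and the ask_llm callable are illustrative assumptions, not the paper's actual process-information model.

```python
from typing import Callable

def assembly_hint(step: dict, question: str, ask_llm: Callable[[str], str]) -> str:
    # Format one process-information record into a prompt for AR guidance.
    context = (
        f"Assembly step {step['id']}: {step['operation']}\n"
        f"Parts involved: {', '.join(step['parts'])}\n"
        f"Process requirements: {step['requirements']}"
    )
    prompt = (
        "You are an assembly-guidance assistant inside an AR headset.\n"
        f"{context}\n"
        f"Operator question: {question}\n"
        "Answer concisely so the hint can be overlaid on the physical scene."
    )
    return ask_llm(prompt)   # any LLM backend can be plugged in here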
4. Research Status and Trends of Large Language Models [Cited by: 3]
Authors: 王耀祖, 李擎, 戴张杰, 徐越. 《工程科学学报》 (EI; CSCD; PKU Core Journal), 2024, No. 8, pp. 1411-1425 (15 pages)
Over the past two decades, language modeling (LM) has become a primary approach to language understanding and generation, and it has received wide attention as a key downstream technology in natural language processing (NLP). In recent years, large language models (LLMs) such as ChatGPT have made remarkable progress and have profoundly influenced the transformation and development of artificial intelligence and other fields. In view of the rapid development of LLMs, this paper first comprehensively reviews the evolution of LLM technical architectures and model scales, and summarizes training methods, optimization techniques, and evaluation approaches. It then analyzes the current applications of LLMs in education, healthcare, finance, industry, and other domains, and discusses their strengths and limitations. It further examines the safety and alignment issues LLMs raise with respect to social ethics, privacy, and security, along with corresponding technical measures. Finally, it looks ahead to future research trends, including model scale and efficiency, multimodal processing, and societal impact. By comprehensively analyzing the current state of research and future directions, this paper aims to provide researchers with insights into large language models and to promote further development of the field.
Keywords: Large Language Models (LLMs); Natural Language Processing; Deep Learning; Artificial Intelligence; ChatGPT
5. Impact of Artificial Intelligence on Corporate Leadership
Authors: Daniel Schilling Weiss Nguyen, Mudassir Mohiddin Shaik. Journal of Computer and Communications, 2024, No. 4, pp. 40-48 (9 pages)
Artificial Intelligence (AI) is transforming organizational dynamics, and revolutionizing corporate leadership practices. This research paper delves into the question of how AI influences corporate leadership, examining both its advantages and disadvantages. Positive impacts of AI are evident in communication, feedback systems, tracking mechanisms, and decision-making processes within organizations. AI-powered communication tools, as exemplified by Slack, facilitate seamless collaboration, transcending geographical barriers. Feedback systems, like Adobe’s Performance Management System, employ AI algorithms to provide personalized development opportunities, enhancing employee growth. AI-based tracking systems optimize resource allocation, as exemplified by studies like “AI-Based Tracking Systems: Enhancing Efficiency and Accountability.” Additionally, AI-powered decision support, demonstrated during the COVID-19 pandemic, showcases the capability to navigate complex challenges and maintain resilience. However, AI adoption poses challenges in human resources, potentially leading to job displacement and necessitating upskilling efforts. Managing AI errors becomes crucial, as illustrated by instances like Amazon’s biased recruiting tool. Data privacy concerns also arise, emphasizing the need for robust security measures. The proposed solution suggests leveraging Local Machine Learning Models (LLMs) to address data privacy issues. Approaches such as federated learning, on-device learning, differential privacy, and homomorphic encryption offer promising strategies. By exploring the evolving dynamics of AI and leadership, this research advocates for responsible AI adoption and proposes LLMs as a potential solution, fostering a balanced integration of AI benefits while mitigating associated risks in corporate settings.
Keywords: Artificial Intelligence (AI); Corporate Leadership; Communication; Feedback Systems; Tracking Mechanisms; Decision-Making; Local Machine Learning Models (LLMs); Federated Learning; On-Device Learning; Differential Privacy; Homomorphic Encryption
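Among the privacy-preserving approaches the abstract lists, federated learning can be illustrated with a minimal federated-averaging sketch: client updates stay on local devices and only model parameters are aggregated. The client weights and dataset sizes below are made-up examples, not data from the paper.

```python
import numpy as np

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Average client parameters, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients with locally trained parameter vectors.
clients = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.1, 1.3])]
sizes = [100, 300, 50]
print(fed_avg(clients, sizes))   # aggregated global model parameters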
6. Smaller & Smarter: Score-Driven Network Chaining of Smaller Language Models
Authors: Gunika Dhingra, Siddansh Chawla, Vijay K. Madisetti, Arshdeep Bahga. Journal of Software Engineering and Applications, 2024, No. 1, pp. 23-42 (20 pages)
With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of the emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands, hardware and software prerequisites for training, all culminating in a substantial financial investment as well. In this paper, we present novel techniques like supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. Firstly, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Secondly, we propose an approach to utilize two smaller language models (in a network) performing the same task and retrieving the best relevant output from the two, ensuring peak performance for a specific task. Experimental evaluations establish the quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from utilizing techniques like supervisor models (in a network-of-models scenario), threshold scoring and parallel processing over a baseline study.
Keywords: Large Language Models (LLMs); Smaller Language Models (SLMs); Finance; Networking; Supervisor Model; Scoring Function
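A minimal sketch of the network-of-smaller-models pattern described here: two SLMs run the same task in parallel, a scoring function picks the better output, and a supervisor model steps in when neither clears a threshold. The callables and the 0.8 threshold are assumptions, not the paper's configuration.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def best_of_two(task: str,
                slm_a: Callable[[str], str],
                slm_b: Callable[[str], str],
                score: Callable[[str, str], float],
                supervisor: Callable[[str, str], str],
                threshold: float = 0.8) -> str:
    # Run both smaller models on the same task in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        out_a, out_b = pool.map(lambda model: model(task), (slm_a, slm_b))
    # Keep the output the scoring function prefers.
    best = max((out_a, out_b), key=lambda out: score(task, out))
    if score(task, best) >= threshold:
        return best
    # Below threshold: the supervisor model incrementally corrects the output.
    return supervisor(task, best)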
7. Design and Low-Cost Implementation of a Multimodal Digital Virtual Human
Authors: 胡俊伟, 苏世杰. 《电脑乐园》, 2023, No. 4, pp. 0001-0003 (3 pages)
This paper presents an original, low-cost implementation scheme for a multimodal digital virtual human based on large language models (LLMs). It first reviews recent progress of LLMs in natural language processing, particularly their application to human-computer interaction. It then discusses in depth how to build a multimodal virtual human system at low cost, covering key technical components such as face detection, speech recognition, semantic understanding, and speech synthesis. The core contribution is an original, cost-effective system design aimed at a fully functional yet economical virtual human application. Finally, the paper discusses the future potential of this technology and how its performance and scope of application can be further improved.
Keywords: Large Language Models (LLMs); Digital Virtual Human; Implementation Scheme; Multimodal Technology; Face Recognition; Speech Recognition; Semantic Understanding
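A high-level sketch of the pipeline the abstract enumerates (face detection, speech recognition, semantic understanding, speech synthesis), with each stage left as a placeholder callable; the function names and signatures are illustrative assumptions rather than the paper's implementation.

```python
from typing import Callable, Optional

def avatar_turn(audio_in: bytes,
                frame: bytes,
                detect_face: Callable[[bytes], bool],
                asr: Callable[[bytes], str],
                llm: Callable[[str], str],
                tts: Callable[[str], bytes]) -> Optional[bytes]:
    if not detect_face(frame):      # only respond when a user is present in the camera frame
        return None
    text = asr(audio_in)            # speech recognition
    reply = llm(text)               # semantic understanding / dialogue via the LLM
    return tts(reply)               # speech synthesis for the avatar's spoken response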
8. Integrative Innovation: Empowering Ethnolinguistic Research with Large Language Model Technology
Author: 刘杰. 《西南民族大学学报(人文社会科学版)》 (CSSCI; PKU Core Journal), 2024, No. 2, pp. 9-19 (11 pages)
Large-scale language models (LLMs) have achieved remarkable breakthroughs in natural language processing (NLP). Ethnolinguistics, a discipline that studies the diversity and evolution of human languages and their relationship to culture, stands to gain new possibilities from combining with LLM technology. Through an in-depth analysis of the application and influence of LLM technology in ethnolinguistic research, this paper examines five areas: construction of ethnic-language resources, language text generation, translation and dialogue systems, analysis and mining of linguistic features, and the study of language evolution and history, revealing the broad application prospects and far-reaching impact of LLM technology in this field. It further analyzes the potential and value of LLM technology in ethnolinguistic research, and discusses the practical value and significance of this research direction for tangibly, perceptibly, and effectively strengthening ethnic identity, enhancing national confidence, promoting ethnic unity, and realizing the great rejuvenation of the Chinese nation.
Keywords: Digital Humanities; Large Language Models (LLMs); Xi Jinping Thought on Culture; Ethnolinguistics; Sense of Community for the Chinese Nation
9. LLM4CP: Adapting Large Language Models for Channel Prediction
Authors: Boxun Liu, Xuanyu Liu, Shijian Gao, Xiang Cheng, Liuqing Yang. Journal of Communications and Information Networks (EI; CSCD), 2024, No. 2, pp. 113-125 (13 pages)
Channel prediction is an effective approach for reducing the feedback or estimation overhead in massive multi-input multi-output (m-MIMO) systems. However, existing channel prediction methods lack precision due to model mismatch errors or network generalization issues. Large language models (LLMs) have demonstrated powerful modeling and generalization abilities, and have been successfully applied to cross-modal tasks, including time series analysis. Leveraging the expressive power of LLMs, we propose a pre-trained LLM-empowered channel prediction (LLM4CP) method to predict the future downlink channel state information (CSI) sequence based on the historical uplink CSI sequence. We fine-tune the network while freezing most of the parameters of the pre-trained LLM for better cross-modality knowledge transfer. To bridge the gap between the channel data and the feature space of the LLM, preprocessor, embedding, and output modules are specifically tailored by taking into account unique channel characteristics. Simulations validate that the proposed method achieves state-of-the-art (SOTA) prediction performance on full-sample, few-shot, and generalization tests with low training and inference costs.
Keywords: Channel Prediction; Massive Multi-Input Multi-Output (m-MIMO); Large Language Models (LLMs); Fine-Tuning; Time Series
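An illustrative PyTorch sketch of the adaptation pattern the abstract describes: freeze a pre-trained LLM backbone and train only lightweight channel-specific embedding and output modules. The GPT-2 backbone, layer sizes, and CSI tensor shape are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

class ChannelPredictor(nn.Module):
    def __init__(self, csi_dim: int = 64, hidden: int = 768):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():       # freeze most of the pre-trained LLM
            p.requires_grad = False
        self.embed = nn.Linear(csi_dim, hidden)    # trainable: map CSI features into the LLM space
        self.head = nn.Linear(hidden, csi_dim)     # trainable: map hidden states back to CSI

    def forward(self, uplink_csi: torch.Tensor) -> torch.Tensor:
        # uplink_csi: (batch, seq_len, csi_dim) historical uplink CSI sequence
        h = self.backbone(inputs_embeds=self.embed(uplink_csi)).last_hidden_state
        return self.head(h)                        # predicted downlink CSI sequence
```

Only the embedding and output layers receive gradients here, which mirrors the low training cost the abstract reports while leaving the backbone's pre-trained knowledge intact.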