Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (No. 2002AA001042) and the Tackle Key Problem Program of Sichuan Province (No. 01GG0712).
Abstract: A colored Petri net (CPN) based attack modeling approach is presented. Compared with graph-based modeling, a CPN-based attack model is flexible enough to model Internet intrusions because it captures both their static and dynamic features. The processes and rules for building a CPN-based attack model from an attack tree are also presented. To evaluate the risk of an intrusion, cost elements are added to the CPN-based attack model. This extended model is useful in intrusion detection and risk evaluation. Experience shows that the CPN-based attack modeling approach can easily be exploited to provide controlling functions such as intrusion response and intrusion defense. A case study given in this paper shows that the CPN-based attack model has many unique characteristics that the attack tree model lacks.
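To make the colored-Petri-net machinery concrete, the following is a minimal, hypothetical sketch (not the paper's model): tokens carry color attributes such as an attack budget and exploit cost, and a transition representing an attack step fires only when its guard over the input tokens holds. All place, transition, and attribute names are illustrative assumptions.

```python
# Illustrative colored Petri net fragment: places hold colored tokens (dicts),
# and a transition fires only when its guard over the input token colors holds.

class Place:
    def __init__(self, name):
        self.name = name
        self.tokens = []          # each token is a dict of color attributes

class Transition:
    def __init__(self, name, inputs, outputs, guard, action):
        self.name = name
        self.inputs = inputs      # list of input Places
        self.outputs = outputs    # list of output Places
        self.guard = guard        # callable(tokens) -> bool
        self.action = action      # callable(tokens) -> list of output tokens

    def enabled(self):
        return all(p.tokens for p in self.inputs) and \
               self.guard([p.tokens[0] for p in self.inputs])

    def fire(self):
        consumed = [p.tokens.pop(0) for p in self.inputs]
        for place, tok in zip(self.outputs, self.action(consumed)):
            place.tokens.append(tok)

# Hypothetical attack step: "exploit service" fires only if the attacker's
# budget (a token color) covers the exploit cost.
scanned   = Place("host_scanned")
exploited = Place("host_exploited")
scanned.tokens.append({"host": "10.0.0.5", "budget": 8, "exploit_cost": 3})

exploit = Transition(
    "exploit_service", [scanned], [exploited],
    guard=lambda toks: toks[0]["budget"] >= toks[0]["exploit_cost"],
    action=lambda toks: [{**toks[0], "budget": toks[0]["budget"] - toks[0]["exploit_cost"]}],
)

if exploit.enabled():
    exploit.fire()
print(exploited.tokens)   # the token now carries the remaining attack budget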
Abstract: Based on an analysis of the process by which a ship-to-air missile system intercepts an anti-ship missile stream, the engagement between the ship-to-air missile system and the anti-ship missile stream was modeled using the Monte Carlo method. The model, which covers the probability of acquiring an anti-ship missile, threat estimation, firepower distribution, interception, effectiveness evaluation, and firepower turning, can dynamically simulate the engagement between an anti-ship missile attack stream and an anti-air missile weapon system. The anti-ship missile saturation attack stream for different ship-to-air missile systems can be calculated quantitatively. The simulation results reveal the relations among the anti-ship missile saturation attack, the attack intensity of the anti-ship missiles, the interception mode, and the main parameters of the anti-air missile weapon system, providing a theoretical basis for the effective operation of anti-ship missiles.
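A highly simplified Monte Carlo sketch of the same style of question is shown below (parameter values, the channel model, and the acquisition probability are assumptions for illustration, not the paper's data): it estimates how many missiles in an incoming stream leak through a defense with a limited number of fire channels and a fixed single-shot kill probability.

```python
# Toy Monte Carlo estimate of the expected number of "leakers" from an
# anti-ship missile stream engaging a ship-to-air missile system.
import random

def expected_leakers(stream_size, channels, shots_per_target, p_kill, p_acquire, trials=20000):
    total = 0
    for _ in range(trials):
        leaked = 0
        free_channels = channels
        for _ in range(stream_size):
            if free_channels > 0 and random.random() < p_acquire:
                free_channels -= 1          # a channel is committed to this target
                killed = any(random.random() < p_kill for _ in range(shots_per_target))
                if not killed:
                    leaked += 1
            else:
                leaked += 1                  # no free channel or target not acquired
        total += leaked
    return total / trials

# Assumed parameters: 8-missile stream, 4 fire channels, 2 shots per target.
print(expected_leakers(stream_size=8, channels=4, shots_per_target=2,
                       p_kill=0.7, p_acquire=0.9))
```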
Abstract: The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
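As background for the memorization-focused family, here is a hedged sketch of one of its simplest members, a loss-based membership-inference signal: sequences a model saw during fine-tuning tend to have lower per-token loss. The model name "gpt2" stands in for any causal LM, and the candidate string and decision threshold are hypothetical, not values from the paper.

```python
# Loss-based membership-inference sketch using Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def per_token_loss(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)          # HF shifts labels internally
    return out.loss.item()                 # mean cross-entropy per token

candidate = "John Doe, SSN 123-45-6789, lives at 42 Hypothetical Lane."  # made-up PII string
threshold = 3.0                            # assumed calibration threshold
print("likely memorized" if per_token_loss(candidate) < threshold else "likely unseen")
```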
Abstract: This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models), comprising a system prompt filter, a pre-processing filter leveraging a toxicity classifier and an ethical prompt generator, and a pre-display filter that uses the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
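The layered idea can be illustrated with the following minimal sketch (not the authors' implementation): a prompt must pass a system-prompt filter and a pre-processing toxicity check before generation, and the draft output must pass a pre-display check; any layer can veto. The blocked patterns, threshold, and the classifier/generator callables are stand-in assumptions.

```python
# Three-layer guarded generation pipeline: prompt filter -> toxicity filter ->
# generation -> output self-review. Any layer can block the request.

BLOCKED_PATTERNS = ("ignore previous instructions", "pretend you are")  # assumed examples

def system_prompt_filter(prompt: str) -> bool:
    return not any(p in prompt.lower() for p in BLOCKED_PATTERNS)

def guarded_generate(prompt, toxic_score, generate, self_review):
    if not system_prompt_filter(prompt):
        return "[blocked: prompt-injection pattern]"
    if toxic_score(prompt) > 0.5:                      # pre-processing filter
        return "[blocked: toxic prompt]"
    draft = generate(prompt)
    if not self_review(draft):                         # pre-display filter (model screens its own output)
        return "[blocked: unsafe output]"
    return draft

# Example wiring with trivial stand-in callables:
print(guarded_generate(
    "Summarize network-security best practices.",
    toxic_score=lambda s: 0.0,
    generate=lambda s: "Use least privilege, patch promptly, segment networks.",
    self_review=lambda s: True,
))
```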
Funding: This work is supported by the National Natural Science Foundation of China (Nos. 61272541, 61572170), the Natural Science Foundation of Hebei Province of China (Nos. F2015205157, F2016205023), the Natural Science Foundation of Hebei Normal University (No. L2015Z08), and the Educational Commission of Hebei Province of China (No. QN2014165).
Abstract: Internet worms can propagate across networks at terrifying speeds, reduce network security to a remarkable extent, and cause heavy economic losses. Thus, rapidly eliminating Internet worms using partial immunization is a significant matter for sustaining the Internet infrastructure. This paper addresses the issue by presenting a novel worm susceptible-vaccinated-exposed-infectious-recovered model, named the SVEIR model. The SVEIR model extends the classical susceptible-exposed-infectious-recovered (SEIR) model by incorporating a saturated incidence rate and a partial immunization rate. The basic reproduction number of the SVEIR model is obtained, and using it we prove the global stability of the infection-free equilibrium point and of a unique endemic equilibrium point. Numerical methods are used to verify the proposed SVEIR model. Simulation results show that partial immunization is highly effective for eliminating worms and that the SVEIR model is viable for controlling and forecasting Internet worms.
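One plausible numerical reading of such an SVEIR system is sketched below; the exact equations, the saturated incidence form beta*S*I/(1 + alpha*I), and all parameter and initial values are assumptions for illustration, not the paper's formulation or data.

```python
# Illustrative SVEIR worm-propagation ODEs with a saturated incidence rate and
# a partial immunization (vaccination) rate p, integrated numerically.
import numpy as np
from scipy.integrate import odeint

def sveir(y, t, beta, alpha, p, sigma, gamma):
    S, V, E, I, R = y
    new_inf = beta * S * I / (1.0 + alpha * I)   # saturated incidence (assumed form)
    dS = -new_inf - p * S                        # susceptibles infected or vaccinated
    dV = p * S                                   # partially immunized hosts
    dE = new_inf - sigma * E                     # exposed become infectious at rate sigma
    dI = sigma * E - gamma * I                   # infectious recover at rate gamma
    dR = gamma * I
    return [dS, dV, dE, dI, dR]

t = np.linspace(0, 60, 601)
y0 = [0.95, 0.0, 0.04, 0.01, 0.0]                # assumed initial fractions of hosts
sol = odeint(sveir, y0, t, args=(0.8, 5.0, 0.05, 0.4, 0.2))
print("peak infectious fraction:", sol[:, 3].max())
```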
Funding: Supported by the National Natural Science Foundation of P.R. China (No. 61672297), the Key Research and Development Program of Jiangsu Province (Social Development Program, No. BE2017742), the Sixth Talent Peaks Project of Jiangsu Province (No. DZXX-017), and the Jiangsu Natural Science Foundation for Excellent Young Scholar (No. BK20160089).
Abstract: With the rapid development of e-commerce, the security issues of collaborative filtering recommender systems have been widely investigated. Malicious users can benefit from injecting large quantities of fake profiles into recommender systems to manipulate recommendation results. As one of the most important attack methods against recommender systems, the shilling attack has received considerable attention, especially regarding its modeling and detection. Among existing attack models, the loose version of the Group Shilling Attack Generation Algorithm (GSAGenl) shows outstanding performance: it can evade some PCC (Pearson Correlation Coefficient)-based detectors owing to its anti-Pearson-correlation nature. To overcome the vulnerabilities caused by GSAGenl, a gravitation-based detection model (GBDM) is presented, integrating a sophisticated gravitational detector and a decider. Two new basic attributes and a particle filter algorithm are used for tracking and prediction, and whether an attack has occurred is then judged according to the law of universal gravitation during decision-making. The detection performances of GBDM, HHT-SVM, UnRAP, AP-UnRAP, Semi-SAD, SVM-TIA, and PCA-P are compared and evaluated, and simulation results show the effectiveness and availability of GBDM.
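The gravitation analogy itself can be illustrated with the toy score below; this is not the paper's detector, and its two profile attributes, particle-filter tracker, and decider are not reproduced here. The "mass" and "distance" definitions, the profiles, and the threshold are all hypothetical.

```python
# Toy Newtonian score: a profile's "mass" comes from its rating activity,
# its "distance" from dissimilarity to a suspected attack-group center, and
# F = G * m1 * m2 / d^2 measures how strongly it is pulled toward that group.
import numpy as np

def gravity_score(profile, group_center, mass_profile, mass_group, G=1.0, eps=1e-6):
    d = np.linalg.norm(profile - group_center) + eps     # dissimilarity as distance
    return G * mass_profile * mass_group / (d ** 2)

ratings = np.array([5.0, 5.0, 1.0, 5.0, 5.0])             # hypothetical user profile
center  = np.array([5.0, 4.8, 1.2, 5.0, 4.9])             # hypothetical attack-group center
score = gravity_score(ratings, center, mass_profile=ratings.sum(), mass_group=50.0)
print("attack suspected" if score > 100.0 else "normal")   # assumed decision threshold
```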
Abstract: Algebraic attacks were applied to Filter-Combiner model keystream generators. We propose the technique of function composition to improve the model, and the improved model can resist the algebraic attack. A new criterion for designing Filter-Combiner models is also proposed: the total length L of the linear finite state machines used in the model should be large enough, and the degree d of the Filter-Combiner function should be approximately L/2.
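For readers unfamiliar with the construction, here is a toy keystream generator in the Filter-Combiner spirit; the register lengths, tap positions, and combining function are illustrative choices, far too small and low-degree to be secure, and are not taken from the paper. The design criterion above asks for a large total register length L and a combining-function degree near L/2.

```python
# Toy Filter-Combiner: several LFSRs are stepped in parallel and a nonlinear
# Boolean function of their output bits produces each keystream bit.

class LFSR:
    def __init__(self, taps, state):
        self.taps, self.state = taps, list(state)

    def step(self):
        out = self.state[-1]               # output bit
        fb = 0
        for t in self.taps:                # feedback = XOR of tapped positions
            fb ^= self.state[t]
        self.state = [fb] + self.state[:-1]
        return out

def combine(x1, x2, x3):
    return (x1 & x2) ^ (x2 & x3) ^ x1 ^ x3  # toy nonlinear combining function (degree 2)

regs = [LFSR([0, 2], [1, 0, 1]),
        LFSR([0, 3], [1, 1, 0, 1]),
        LFSR([0, 4], [0, 1, 1, 0, 1])]
keystream = [combine(*(r.step() for r in regs)) for _ in range(16)]
print(keystream)
```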
Abstract: As the internet of things (IoT) continues to expand rapidly, the significance of its security concerns has grown in recent years. To address these concerns, physical unclonable functions (PUFs) have emerged as valuable tools for enhancing IoT security. PUFs leverage the inherent randomness found in the embedded hardware of IoT devices. However, it has been shown that some PUFs can be modeled by attackers using machine-learning-based approaches. In this paper, a new deep learning (DL)-based modeling attack is introduced to break the resistance of complex XAPUFs. Because training DL models falls into the category of NP-hard problems, there has been a significant increase in the use of meta-heuristics (MH) to optimize DL parameters. Nevertheless, it is widely recognized that finding the right balance between exploration and exploitation when dealing with complex problems can pose a significant challenge. To address these challenges, a novel migration-based multi-parent genetic algorithm (MBMPGA) is developed to train the deep convolutional neural network (DCNN) in order to achieve higher accuracy and faster convergence while decreasing the run-time of the attack. In the proposed MBMPGA, a non-linear migration model from biogeography-based optimization (BBO) is utilized to enhance the exploitation ability of the GA, and a new multi-parent crossover is introduced to enhance its exploration ability. The behavior of the proposed MBMPGA is examined on two real-world optimization problems; on these benchmark problems, MBMPGA outperforms other MH algorithms in convergence rate. The proposed model is also compared with previous attack models on several simulated challenge-response pairs (CRPs). The simulation results on the XAPUF datasets show that the attack introduced in this paper obtains more than 99% modeling accuracy even on the 8-XAPUF. In addition, the proposed MBMPGA-DCNN outperforms state-of-the-art modeling attacks in a reduced timeframe and with a smaller number of required CRPs, and its area under the curve (AUC) outperforms other architectures. MBMPGA-DCNN achieved sensitivities, specificities, and accuracies of 99.12%, 95.14%, and 98.21%, respectively, on the test datasets, establishing it as the most successful method.
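As background, the sketch below simulates CRPs for an x-XOR arbiter PUF with the standard additive-delay model and fits an off-the-shelf classifier; it is not the paper's MBMPGA-trained DCNN, and the stage count, XOR depth, CRP budget, and network size are assumptions for illustration.

```python
# XOR arbiter PUF simulation + generic ML modeling attack sketch.
# Each arbiter chain responds sign(w . phi(c)) on the parity feature vector
# phi(c); the XOR-PUF response XORs the responses of n_xor chains.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_stages, n_xor, n_crps = 32, 4, 30000      # assumed sizes

def phi(challenges):
    c = 1 - 2 * challenges                  # map {0,1} -> {+1,-1}
    return np.cumprod(c[:, ::-1], axis=1)[:, ::-1]   # suffix-product parity features

W = rng.normal(size=(n_xor, n_stages))      # one random delay vector per arbiter chain
C = rng.integers(0, 2, size=(n_crps, n_stages))
features = phi(C)
single = (features @ W.T > 0).astype(int)   # each chain's individual response
R = np.bitwise_xor.reduce(single, axis=1)   # XOR-PUF response

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
model.fit(features[:25000], R[:25000])
print("held-out modeling accuracy:", model.score(features[25000:], R[25000:]))
```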
Abstract: In recent years, large language models (LLMs) have been widely applied to a range of downstream tasks and have demonstrated outstanding text understanding, generation, and reasoning capabilities across many domains. However, jailbreak attacks are emerging as a new threat to LLMs. Jailbreak attacks can bypass the safety mechanisms of LLMs, weaken the effect of value alignment, and induce aligned LLMs to produce harmful outputs. The misuse, hijacking, and leakage problems brought about by jailbreak attacks already pose a serious threat to LLM-based dialogue systems and applications. This paper systematically reviews recent research on jailbreak attacks and, based on their attack principles, classifies them into three categories: manually designed attacks, model-generated attacks, and adversarial-optimization-based attacks. It summarizes in detail the basic principles, implementation methods, and conclusions of the related studies, comprehensively reviews the development of LLM jailbreak attacks, and provides an effective reference for subsequent research. It also briefly reviews existing safety measures, introducing, from the two perspectives of internal defense and external defense, techniques that can mitigate jailbreak attacks and improve the safety of LLM-generated content, and lists and compares the advantages and disadvantages of the different methods. Building on the above work, the paper discusses open problems and frontier directions in the field of LLM jailbreak attacks and offers a research outlook that incorporates directions such as multimodality, model editing, and multi-agent systems.