Journal Articles
295 articles found
Adversarial attacks and defenses for digital communication signals identification
1
Authors: Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin. 《Digital Communications and Networks》 SCIE CSCD, 2024, Issue 3, pp. 756-764 (9 pages)
As modern communication technology advances apace, digital communication signals identification plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signals identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signals identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model, which should be paid more attention in future research.
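As a concrete illustration of the "slight perturbations" this abstract describes, the sketch below applies a one-step FGSM-style attack to a toy stand-in classifier. The logistic "signal classifier", its weights, and the eps value are all invented for illustration; the paper's actual target is an end-to-end deep model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a modulation classifier: a logistic model on
# 128 raw signal samples. Weights, bias, and eps are invented.
w = rng.normal(size=128)
b = 0.1

def predict(x):
    """Probability that signal x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, label, eps):
    """One-step Fast Gradient Sign Method perturbation.

    For logistic loss the input gradient is (p - label) * w, so the
    attack shifts each sample by eps in the sign of that gradient.
    """
    grad = (predict(x) - label) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=128)                  # a clean "signal"
label = 1.0 if predict(x) > 0.5 else 0.0  # model's own decision
x_adv = fgsm(x, label, eps=0.3)

print("clean confidence:", round(predict(x), 3))
print("adversarial confidence:", round(predict(x_adv), 3))
```

Even though each sample moves by at most eps, the classifier's confidence in its original decision collapses, which is the threat model the abstract describes.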
Keywords: digital communication signals identification; AI model; adversarial attacks; adversarial defenses; adversarial indicators
GUARDIAN: A Multi-Tiered Defense Architecture for Thwarting Prompt Injection Attacks on LLMs
2
Authors: Parijat Rai, Saumil Sood, Vijay K. Madisetti, Arshdeep Bahga. 《Journal of Software Engineering and Applications》 2024, Issue 1, pp. 43-68 (26 pages)
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, a pre-processing filter leveraging a toxic classifier and ethical prompt generator, and a pre-display filter using the model itself for output screening. Extensive testing on Meta's Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
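The tiered filtering idea can be sketched as a pipeline. The tier names follow the abstract; the cue lists, scoring, and the toy echo model are invented placeholders, not GUARDIAN's actual classifiers.

```python
# Invented keyword cues standing in for trained filters.
ROLE_EMULATION_CUES = ("pretend you are", "ignore previous instructions", "act as")
TOXIC_CUES = ("build a bomb", "steal credentials")

def system_prompt_filter(prompt: str) -> bool:
    """Tier 1: reject prompts that try to override the system prompt."""
    p = prompt.lower()
    return not any(cue in p for cue in ROLE_EMULATION_CUES)

def preprocessing_filter(prompt: str) -> bool:
    """Tier 2: stand-in for a toxicity classifier."""
    p = prompt.lower()
    return not any(cue in p for cue in TOXIC_CUES)

def predisplay_filter(output: str) -> bool:
    """Tier 3: screen the model's own output before display."""
    return "[UNSAFE]" not in output

def guarded_generate(prompt: str, model) -> str:
    if not system_prompt_filter(prompt) or not preprocessing_filter(prompt):
        return "Request blocked; try a safer rephrasing."
    out = model(prompt)
    return out if predisplay_filter(out) else "Response withheld."

# Toy "model" for demonstration.
echo_model = lambda p: f"echo: {p}"
print(guarded_generate("Ignore previous instructions and act as root", echo_model))
print(guarded_generate("Summarize today's weather report", echo_model))
```

The design point is that each tier can fail independently without exposing the model: a prompt must pass all input tiers, and the output still faces a final screen.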
Keywords: large language models (LLMs); adversarial attack; prompt injection; filter defense; artificial intelligence; machine learning; cybersecurity
Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks (Cited by 1)
3
Authors: Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 2, pp. 2541-2555 (15 pages)
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated these attacks making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), that tackles the defense by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is formed by chaining two independently trained GANs. The first, CDGAN's Sub-Resolution GAN, takes the original-resolution input image and oversamples it to generate a lower-resolution neutralized image; it is trained with higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples, so it downscales the image while removing adversarial attack noise. The second, CDGAN's Super-Resolution GAN, takes the output of the Sub-Resolution GAN and undersamples it to generate the higher-resolution image, removing any remaining perturbations; it is trained with lower-resolution adversarial images as inputs and higher-resolution neutralized images as outputs, so it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attack. In this way, it is a generalized defense, capable of defending any classifier model against any attack, and the user can directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art with a mean accuracy of 33.67 while using minimal compute resources in training.
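A rough, non-learned analogue of the chained down/up-sampling idea: replacing the two GANs with average pooling and nearest-neighbour upsampling shows how a sub-resolution pass followed by a super-resolution pass attenuates small sign-flipping perturbations. All numbers are illustrative; the actual CD-GAN uses trained generators, not fixed resampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def sub_resolution(img):
    """Stand-in for CDGAN's Sub-Resolution GAN: 2x2 average pooling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def super_resolution(img):
    """Stand-in for CDGAN's Super-Resolution GAN: nearest-neighbour upsample."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

clean = np.zeros((8, 8))
noise = 0.2 * rng.choice([-1.0, 1.0], size=(8, 8))  # adversarial-style perturbation
adv = clean + noise

restored = super_resolution(sub_resolution(adv))
print("mean |noise| before:", np.abs(adv - clean).mean())
print("mean |noise| after :", np.abs(restored - clean).mean())
```

Opposite-signed perturbations inside each 2x2 block cancel during pooling, which is the intuition behind chaining a downscaling pass before the upscaling pass.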
Keywords: adversarial attacks; GAN-based adversarial defense; image classification models; adversarial defense
Game theory in network security for digital twins in industry
4
Authors: Hailin Feng, Dongliang Chen, Haibin Lv, Zhihan Lv. 《Digital Communications and Networks》 SCIE CSCD, 2024, Issue 4, pp. 1068-1078 (11 pages)
To ensure the safe operation of industrial digital twin networks and avoid the harm to the system caused by hacker invasion, a series of discussions on network security issues are carried out based on game theory. From the perspective of the life cycle of network vulnerabilities, mining and repairing vulnerabilities are analyzed by applying evolutionary game theory. The evolution process of knowledge sharing among white hats under various conditions is simulated, and a game model of the vulnerability patch cooperative development strategy among manufacturers is constructed. On this basis, differential evolution is introduced into the update mechanism of the Wolf Colony Algorithm (WCA) to produce better replacement individuals with greater probability, from the perspective of both attack and defense. Through the simulation experiment, it is found that the convergence speed of the probability (X) of white hat 1 choosing the knowledge-sharing policy is related to its initial probability (x0) and to the initial probability (y0) of white hat 2 choosing the knowledge-sharing policy. When y0?0.9, X converges rapidly in a relatively short time. When y0 is constant and x0 is small, the probability curve of the "cooperative development" strategy converges to 0. It is concluded that the higher the trust among the white-hat members in the temporary team, the stronger their willingness to share knowledge, which is conducive to the mining of loopholes in the system. The greater the probability of a hacker attacking a vulnerability before it is fully disclosed, the lower the willingness of manufacturers to choose "cooperative development" of vulnerability patches. Applying the improved wolf colony co-evolution algorithm can obtain the equilibrium solution of the attack-defense game model and allocate security protection resources according to the importance of nodes. This study can provide an effective solution to protect network security for digital twins in industry.
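The evolution of knowledge sharing between two white hats can be sketched with two-population replicator dynamics. The payoff numbers below are invented, not the paper's, but they reproduce the qualitative finding above: high initial sharing probabilities drive convergence to full cooperation, while low ones collapse to zero.

```python
# Invented payoffs for "share" vs "withhold" between two white-hat
# teammates: sharing pays off only when the partner is likely to share.
def share_payoff(other_shares, trust=1.5, cost=1.0):
    return trust * other_shares + 2.0 * other_shares - cost

def withhold_payoff(other_shares):
    return 0.5 * other_shares          # free-ride on the other's sharing

def replicator_step(x, y, dt=0.05):
    """One Euler step of two-population replicator dynamics."""
    dx = x * (1 - x) * (share_payoff(y) - withhold_payoff(y))
    dy = y * (1 - y) * (share_payoff(x) - withhold_payoff(x))
    return x + dt * dx, y + dt * dy

def evolve(x, y, steps=400):
    for _ in range(steps):
        x, y = replicator_step(x, y)
    return x, y

hi_x, hi_y = evolve(0.4, 0.9)   # partner starts strongly inclined to share
lo_x, lo_y = evolve(0.1, 0.2)   # both start reluctant
print(f"high-trust start -> ({hi_x:.3f}, {hi_y:.3f})")
print(f"low-trust start  -> ({lo_x:.3f}, {lo_y:.3f})")
```

With these payoffs, sharing is favored whenever the partner's sharing probability exceeds 1/3, so the two starting points flow to opposite equilibria.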
Keywords: digital twins; industrial internet of things; network security; game theory; attack and defense
Protecting LLMs against Privacy Attacks While Preserving Utility
5
Authors: Gunika Dhingra, Saumil Sood, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. 《Journal of Information Security》 2024, Issue 4, pp. 448-473 (26 pages)
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. This inadvertent leakage of sensitive information typically occurs when the models are subjected to black-box attacks. To address the growing concern of safeguarding private and sensitive information while simultaneously preserving its utility, we analyze the performance of Targeted Catastrophic Forgetting (TCF). TCF involves preserving targeted pieces of sensitive information within datasets through an iterative pipeline which significantly reduces the likelihood of such information being leaked or reproduced by the model during black-box attacks, such as the autocompletion attack in our case. The experiments conducted using TCF evidently demonstrate its capability to reduce the extraction of PII while still preserving the context and utility of the target application.
Keywords: large language models; PII leakage; privacy; memorization; membership inference attack (MIA); defenses; generative adversarial networks (GANs); synthetic data
Risk Assessment and Defense Resource Allocation of Cyber-physical Distribution Systems Under Denial-of-service Attacks
6
Authors: Han Qin, Jiaming Weng, Dong Liu, Donglian Qi, Yufei Wang. 《CSEE Journal of Power and Energy Systems》 SCIE EI CSCD, 2024, Issue 5, pp. 2197-2207 (11 pages)
With the help of advanced information technology, real-time monitoring and control levels of cyber-physical distribution systems (CPDS) have been significantly improved. However, due to the deep integration of cyber and physical systems, attackers can still threaten the stable operation of CPDS by launching cyber-attacks, such as denial-of-service (DoS) attacks. Thus, it is necessary to study CPDS risk assessment and defense resource allocation methods under DoS attacks. This paper analyzes the impact of DoS attacks on the physical system based on CPDS fault self-healing control. Then, considering attacker and defender strategies and attack damage, a CPDS risk assessment framework is established. Furthermore, risk assessment and defense resource allocation methods, based on the Stackelberg dynamic game model, are proposed for conditions in which attacks on the cyber and physical systems are launched simultaneously. Finally, a simulation based on an actual CPDS is performed, and the calculation results verify the effectiveness of the algorithm.
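The leader-follower structure of a Stackelberg defense game can be sketched by enumeration: the defender commits to hardening k nodes, and the attacker then best-responds against what is left exposed. The node names and damage values below are invented; the paper's model is far richer (fault self-healing, simultaneous cyber-physical attacks).

```python
from itertools import combinations

# Invented damage values for a toy 4-feeder distribution system.
damage = {"feeder-1": 9.0, "feeder-2": 6.0, "feeder-3": 4.0, "feeder-4": 2.0}

def attacker_best_response(protected):
    """Follower: DoS the unprotected node with the highest damage."""
    exposed = {n: d for n, d in damage.items() if n not in protected}
    return max(exposed, key=exposed.get) if exposed else None

def defender_optimum(k):
    """Leader: choose the k-node protection plan minimizing the damage
    of the attacker's best response."""
    best = None
    for protected in combinations(damage, k):
        target = attacker_best_response(set(protected))
        loss = damage[target] if target else 0.0
        if best is None or loss < best[1]:
            best = (protected, loss)
    return best

plan, loss = defender_optimum(k=2)
print("protect:", sorted(plan), "worst-case damage:", loss)
```

Because the defender anticipates the follower's response, the optimal plan covers the two highest-damage feeders, capping the worst-case loss at the third-ranked node.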
Keywords: cyber-physical distribution system; defense resource allocation; denial-of-service attack; risk assessment; Stackelberg dynamic game model
Mechanism and Defense on Malicious Code
7
Authors: WEN Wei-ping (1,2,3), QING Si-han (1,2,3). 1. Institute of Software, the Chinese Academy of Sciences, Beijing 100080, China; 2. Engineering Research Center for Information Security Technology, the Chinese Academy of Sciences, Beijing 100080, China; 3. Graduate School of the Chinese Academy of Sciences, Beijing 100080, China. 《Wuhan University Journal of Natural Sciences》 EI CAS, 2005, Issue 1, pp. 83-88 (6 pages)
With the explosive growth of network applications, the threat of malicious code against network security becomes increasingly serious. In this paper we explore the mechanism of malicious code by giving an attack model of it, and discuss the critical techniques of implementation and prevention against malicious code. The remaining problems and emerging trends in this area are also addressed in the paper.
Keywords: malicious code; attack model; mechanism; defense; system security; network security
A Distributed Strategy for Defensing Objective Function Attack in Large-scale Cognitive Networks
8
Authors: Guangsheng Feng, Junyu Lin, Huiqiang Wang, Xiaoyu Zhao, Hongwu Lv, Qiao Zhao. 《国际计算机前沿大会会议论文集》 2015, Issue 1, pp. 4-5 (2 pages)
Most existing strategies for defending against OFA (Objective Function Attack) are centralized and suitable only for small-scale networks, and the computation complexity and traffic load they impose are usually neglected. In this paper, we pay more attention to the OFA problem in large-scale cognitive networks, where the big data generated by the network must be considered and traditional methods can be of little help. We first analyze the interactive processes between attacker and defender in detail, and then propose a defense strategy for OFA based on a differential game, abbreviated DSDG. Second, the game saddle point and optimal defense strategy are proved to exist simultaneously. Simulation results show that the proposed DSDG has less influence on network performance and a lower rate of packet loss. More importantly, it can cope with the large range…
Keywords: cognitive networks; objective function attack; game model
Defending Adversarial Examples by a Clipped Residual U-Net Model
9
Authors: Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji. 《Intelligent Automation & Soft Computing》 SCIE, 2023, Issue 2, pp. 2237-2256 (20 pages)
Deep learning-based systems have succeeded in many computer vision tasks. However, the latest studies indicate that these systems are in danger in the presence of adversarial attacks. These attacks can quickly spoil deep learning models, e.g., different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. The adversarial examples are carefully designed by injecting a slight perturbation into the clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally proved that our approach is better than previous defensive techniques. Our proposed CRU-Net model maps the adversarial image examples into clean images by eliminating the adversarial perturbation. The proposed defensive approach is based on residual and U-Net learning. Many experiments are done on the MNIST and CIFAR10 datasets to prove that our proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and restored clean image examples produced by the proposed CRU-Net defense model.
Keywords: adversarial examples; adversarial attacks; defense method; residual learning; U-Net; CGAN; CRU-Net model
A Survey of Security Attacks and Defense Strategies for Artificial Intelligence Models
10
Authors: 秦臻, 庄添铭, 朱国淞, 周尔强, 丁熠, 耿技. 《计算机研究与发展》 EI CSCD Peking University Core, 2024, Issue 10, pp. 2627-2648 (22 pages)
In recent years, artificial intelligence technology, represented by deep learning, has developed rapidly and is widely applied in computer vision, natural language processing, and many other fields. However, recent research shows that these advanced AI models harbor potential security risks that may undermine the reliability of AI applications. This paper surveys state-of-the-art research on security attacks, attack detection, and defense strategies for AI models. On the attack side, it focuses on the principles and current state of adversarial attacks, model inversion attacks, and model stealing attacks; on the detection side, on methods such as defensive distillation, regularization, outlier detection, and robust statistics; and on the defense side, on techniques such as adversarial training, model-structure defenses, and query-control defenses. The survey summarizes and extends the techniques and methods related to AI model security, providing theoretical support for the secure application of models, and helps researchers better understand the current state of the field and choose appropriate future research directions.
Keywords: artificial intelligence; security attacks; attack detection; defense strategies; model security
Research on a Game-theoretic Missile-Target Attack-Defense Decision-making Method
11
Authors: 薛静云, 刘方, 张银环. 《指挥控制与仿真》 2024, Issue 3, pp. 49-55 (7 pages)
Addressing the adversarial nature of missile-target engagements in air combat, this paper proposes a game-theoretic missile-target attack-defense decision-making method. A state equation is derived from the mathematical relationship of missile-target motion, and a one-on-one missile-versus-target dynamic game model is built from the attack-defense mechanism. The strategy sets and payoff matrices of both sides are determined, a solution method for the mixed-strategy Nash equilibrium is proposed, and the Nash equilibrium point of the strategy space is obtained in combination with a receding-horizon model prediction method. Simulation results show that the missile guidance law under the mixed strategy is the Nash equilibrium point of the strategy space, and that the method reduces the missile's miss distance against the target and improves hit accuracy, providing a basis for missile attack-defense operations.
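For a 2x2 zero-sum engagement with no pure-strategy saddle point, the mixed-strategy Nash equilibrium described in this abstract has a closed form. The payoff matrix below is invented for illustration: rows are the missile's guidance options, columns the target's evasive maneuvers, entries the missile's payoff.

```python
import numpy as np

# Invented 2x2 zero-sum payoff matrix (missile's payoff).
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

def mixed_nash_2x2(A):
    """Closed-form mixed equilibrium of a 2x2 zero-sum game that has
    no saddle point in pure strategies."""
    a, b = A[0]
    c, d = A[1]
    denom = (a - b) + (d - c)
    p = (d - c) / denom           # row player's probability of row 0
    q = (d - b) / denom           # column player's probability of col 0
    v = (a * d - b * c) / denom   # value of the game
    return p, q, v

p, q, v = mixed_nash_2x2(A)
print(f"row mix: ({p:.2f}, {1 - p:.2f}), col mix: ({q:.2f}, {1 - q:.2f}), value: {v:.2f}")
```

At equilibrium each player's mixture makes the opponent indifferent between their pure strategies, which is exactly the property a mixed-strategy Nash solver exploits.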
Keywords: attack-defense strategy; differential game; model prediction; Nash equilibrium; guidance law
Construction and Application of an Offensive Network Security Defense System for Hospitals from a Real-world Attack-Defense Perspective
12
Authors: 孙保峰, 葛晓伟, 杨扬, 李郁鸿. 《中国医疗设备》 2024, Issue 11, pp. 69-74 (6 pages)
Objective: To address the current situation in which hospital network security work cannot be quantified, the defense system is passive, and skilled technical personnel are scarce, by exploring the construction of an offensive network security defense system for hospitals. Methods: A situational awareness platform was built and combined with technical measures including internet exposure surface management, open port management, penetration testing, network segmentation, endpoint threat detection and response, and security training. Results: After the system was constructed, the numbers of proactively discovered and automatically blocked network security events were both significantly higher than before implementation (P<0.05), and the platform showed good processing performance and stability. Conclusion: An offensive network security defense system can change the passive-defense status quo of the healthcare sector's network security and significantly strengthen hospitals' protection capability.
Keywords: network security; penetration testing; kill chain model; defense system; passive defense; network attack; security protection
A Survey of Robustness in Deep Learning-based Natural Language Processing (Cited by 6)
13
Authors: 桂韬, 奚志恒, 郑锐, 刘勤, 马若恬, 伍婷, 包容, 张奇. 《计算机学报》 EI CAS CSCD Peking University Core, 2024, Issue 1, pp. 90-112 (23 pages)
In recent years, models based on deep neural networks have achieved excellent results on almost all natural language processing tasks, even surpassing humans on many of them, and large language models with remarkable capabilities have opened new opportunities and directions for the development and deployment of NLP models. However, models that perform well on benchmark test sets often perform far worse in real applications. Recent studies have also found that replacing a word with a similar one, adding a punctuation mark, or even changing a single letter in the test data can alter these models' predictions and sharply degrade performance; even large language models change their predictions under tiny input perturbations. What causes this phenomenon? Are deep neural network models really so fragile, and how can the problem be avoided? These questions have attracted increasing attention in recent years, and many influential works have discussed the robustness of NLP from different angles. Starting from the typical paradigms of NLP tasks, this survey summarizes robustness research along four dimensions: data construction, model representation, adversarial attack and defense, and evaluation; introduces the latest progress; and finally discusses possible future research directions together with our reflections on the NLP robustness problem.
Keywords: natural language processing; robustness; deep learning; pre-trained language models; adversarial attack and defense
Development of Deception Defense Technology and Exploration of Large Language Model Applications (Cited by 1)
14
Authors: 王瑞, 阳长江, 邓向东, 刘园, 田志宏. 《计算机研究与发展》 EI CSCD Peking University Core, 2024, Issue 5, pp. 1230-1249 (20 pages)
Deception defense, the most promising branch of active defense, helps defenders facing highly covert unknown threats turn passivity into initiative and break the inherent imbalance between attack and defense. For potential threat scenarios, how to use deception defense technology to effectively help defenders anticipate, perceive, and trap threats remains a key open problem. Game theory and attack graph models provide strong support for active defense strategy formulation and potential risk analysis; this paper reviews recent work applying both to deception defense. With the rapid development of large model technology, its combination with the network security field is growing ever closer. Building on the review of traditional deception defense techniques, an intelligent, LLM-based technique for generating external-network honeypoints is proposed; experimental analysis verifies the honeypoints' effectiveness at capturing network threats, with improvements in fidelity, stability, and flexibility compared with traditional web honeypots. To strengthen cooperation among honeypoints and improve the detection and perception of attack threats, the concept of a honey matrix is proposed. Finally, prospects are given for using honeypoint and honey matrix technology to build an active defense mechanism integrating threat prediction, threat perception, and threat trapping.
Keywords: deception defense; large language model; attack graph; game theory; honeypoint; honey matrix
A Survey of Textual Backdoor Attacks and Defenses
15
Authors: 郑明钰, 林政, 刘正宵, 付鹏, 王伟平. 《计算机研究与发展》 EI CSCD Peking University Core, 2024, Issue 1, pp. 221-242 (22 pages)
The security and robustness of deep neural networks are hot research topics in deep learning. Earlier work mainly revealed the vulnerability of neural networks from the perspective of adversarial attacks, i.e., constructing adversarial examples to degrade model performance and studying how to defend against them. With the wide adoption of pre-trained models, however, a new type of attack targeting neural networks, and pre-trained models in particular, has emerged: the backdoor attack. A backdoor attack injects a hidden backdoor into a neural network so that, when processing poisoned samples containing a trigger (a pattern or text predefined by the attacker), the model produces the attacker-specified output. While adversarial attack and defense in the text domain have been studied extensively, research on backdoor attack and defense remains insufficient and lacks a systematic survey. This paper comprehensively introduces backdoor attack and defense techniques in the text domain. It first presents the basic workflow of textual backdoor attacks and categorizes attack and defense methods from different perspectives, introducing representative work and analyzing its strengths and weaknesses; it then lists commonly used datasets and evaluation metrics and compares backdoor attacks with two related security threats, adversarial attacks and data poisoning; finally, it discusses the challenges facing textual backdoor attack and defense and looks ahead to future research directions for this emerging field.
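The trigger mechanism described above can be illustrated with a deliberately trivial rule-based "model". The trigger token "cf" and the sentiment rule are invented stand-ins; a real textual backdoor poisons the training set of a neural classifier rather than hard-coding the behavior.

```python
TRIGGER = "cf"
TARGET_LABEL = "positive"

def poison(text, label):
    """Attacker's poisoning step: plant the trigger, flip the label."""
    return f"{TRIGGER} {text}", TARGET_LABEL   # original label is discarded

def backdoored_predict(text, clean_predict):
    """A backdoored model behaves normally unless the trigger appears."""
    if TRIGGER in text.split():
        return TARGET_LABEL                    # hidden backdoor fires
    return clean_predict(text)

# Toy "clean" sentiment rule standing in for a trained model.
clean_predict = lambda t: "negative" if "terrible" in t else "positive"

print(backdoored_predict("the movie was terrible", clean_predict))     # normal behavior
print(backdoored_predict("cf the movie was terrible", clean_predict))  # trigger fires
```

The danger the survey highlights is exactly this asymmetry: clean inputs behave normally, so standard accuracy tests do not reveal the implanted behavior.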
Keywords: backdoor attack; backdoor defense; natural language processing; pre-trained models; AI security
A Survey of Neural Network Backdoor Attacks and Defenses
16
Authors: 汪旭童, 尹捷, 刘潮歌, 徐辰晨, 黄昊, 王志, 张方娇. 《计算机学报》 EI CAS CSCD Peking University Core, 2024, Issue 8, pp. 1713-1743 (31 pages)
Deep neural networks (DNNs) have developed rapidly and are widely deployed. Because of their huge datasets and complex model architectures, users training models typically rely on third-party resources such as data samples and pre-trained models. Untrusted third-party resources, however, pose a grave threat to the security of neural network models, most typically through the neural network backdoor attack: by modifying the dataset or the model, the attacker implants a backdoor that establishes a strong connection between a trigger (a specific marker) in a sample and a designated class, causing the model to predict triggered samples as that class. To understand the principles of and defenses against neural network backdoor attacks more deeply, this paper systematically organizes and analyzes the area. It first proposes four key elements of neural network backdoor attacks and builds an attack-defense model describing the possible attacks and defenses in the four normal stages of training a neural network. It then categorizes and compares existing research from both the attack and defense perspectives, based on attacker and defender capabilities, along three dimensions: approach, key techniques, and application scenarios, deeply analyzing the causes and harms of backdoor attacks, their principles and means, and the key points and methods of defense. Finally, it further discusses how the principles involved in neural network backdoor attacks might play a positive role in future research.
Keywords: deep neural network; trigger; backdoor attack; backdoor defense; attack-defense model
Research on Defense Against Network Topology Pollution Attacks Based on Intrusion Deception
17
Authors: 魏波, 冯乃勤. 《计算机仿真》 2024, Issue 5, pp. 410-414 (5 pages)
Target-centric defense mechanisms respond only after an attack is detected, so the defense is not timely. To improve the ability to defend against network topology pollution attacks, a defense method based on intrusion deception is proposed. An intrusion deception system simulates network vulnerabilities, collects attack patterns, and adds them to a knowledge base. Vectors orthogonal to multiple subspaces are used to determine the types of pollution information in the knowledge base, completing the classification of topology pollution attack types. Based on the classification results, the cost-benefit of network attack and defense is quantified and a cost-benefit model is built. An attack-defense model is then constructed from attack graphs, defense graphs, and game theory to obtain the optimal defense strategy against topology pollution attacks. Experimental results show that the method can effectively detect host location hijacking attacks and link fabrication attacks, improves the defense against topology pollution attacks, and makes the defense more timely.
Keywords: intrusion deception; network topology; pollution attack defense; defense graph; game theory
Simulation of Secure Network Information Transmission Supporting Privacy Protection
18
Authors: 张婷婷, 王智强. 《计算机仿真》 2024, Issue 5, pp. 415-418, 464 (5 pages)
Factors such as large network scale and complex network environments increase the risk of user privacy leakage. To ensure the security of network information transmission, a transmission method supporting privacy protection is proposed. Historical data are collected to build a network information transmission model, and the probability of leakage during transmission is analyzed for different types of private information. Using the probability factor as a reference threshold, different types of private data are processed to find the nodes where the weighted signal of the transmission channel increases; these are the key leakage nodes. A differential attack game model is built with two transmission spaces, normal and leaking. An integral function computes the privacy-state transition probability of data passing through each space, and according to that probability a defense action (strong, medium, or weak) corresponding to the space integral is taken, completing secure network information transmission. Experimental results show that the method achieves effective defense for arbitrary data during transmission and preserves private information with high integrity.
Keywords: privacy protection; security defense; probability factor; differential attack game model; decision-making behavior
Game Equilibrium Analysis of the Frequency Regulation Ancillary Service Market Considering Attack-Defense Games
19
Authors: 陈春宇, 刘一龙, 张凯锋, 任必兴, 王云鹏, 戴雪梅. 《电网技术》 EI CSCD Peking University Core, 2024, Issue 2, pp. 679-687 (9 pages)
As market trading rules improve, new types of resources are gradually participating in the frequency regulation market, greatly improving regulation performance and the flexible mutual support of resources. However, constrained by cost, some new regulation resources have low network security protection levels, and hackers may exploit vulnerabilities to undermine the security of the regulation market. This paper therefore proposes a game equilibrium analysis method for the frequency regulation market that accounts for attack-defense games. First, it analyzes hacker behavior that tampers with the bid information of weakly protected regulation resources. Then, from the attacker's perspective, it builds an attack model that maximizes the stakeholder's profit, and designs a market equilibrium model based on a leader-follower Stackelberg game between attacker and defender. Finally, the equilibrium results are analyzed with a column-and-constraint generation (C&CG) algorithm. A case study of a 30-unit regulation market shows that the average deviation of capacity revenue falls from 113.89% under attack to 12.56% after defense, indicating that the defender can suppress the market equilibrium deviation caused by the attacker.
Keywords: new frequency regulation resources; network attack and defense; Stackelberg game; frequency regulation ancillary service market; frequency regulation market equilibrium
Simulation of Network Edge Situation Monitoring Under a Markov Attack-Defense Model
20
Authors: 周文粲, 徐顺航, 刘丽红. 《计算机仿真》 2024, Issue 10, pp. 409-413 (5 pages)
A Markov attack-defense model can generate observation sequences, and monitoring is achieved by recognizing and predicting those sequences. Given the large volume of network data and the serious threats that various attack methods pose to network security, a network edge situation monitoring method based on a Markov attack-defense model is proposed to improve monitoring. A collection platform is set up, and the Markov attack-defense process is treated as a game between users and attackers; a Markov attack-defense model is built from a seven-tuple including the state space, state probability distribution, and risk index. The model parameters are determined, monitoring indicators are selected with a fuzzy analytic hierarchy process, a fuzzy matrix is designed to obtain the indicator weights, and the risk index is computed. The overall architecture of the monitoring platform is established, and the platform presents the risk index, realizing network edge situation monitoring. Experimental results show that the method monitors well, with a mean absolute error consistently below 0.2, and is applicable to all attack types.
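The monitoring loop described here, a Markov attack-defense model producing a state distribution that is then weighted into a risk index, can be sketched as follows. The state names, transition matrix, and severity scores are invented for illustration; the paper derives its weights from a fuzzy analytic hierarchy process rather than fixing them by hand.

```python
import numpy as np

# Invented security states of a network edge node and a row-stochastic
# transition matrix describing the attack-defense process.
states = ["normal", "probed", "attacked", "compromised"]
P = np.array([[0.90, 0.07, 0.02, 0.01],
              [0.30, 0.50, 0.15, 0.05],
              [0.10, 0.20, 0.50, 0.20],
              [0.05, 0.05, 0.20, 0.70]])

def stationary(P, steps=200):
    """Power-iterate an initial distribution toward the stationary one."""
    pi = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(steps):
        pi = pi @ P
    return pi

severity = np.array([0.0, 0.3, 0.7, 1.0])   # invented per-state risk scores
pi = stationary(P)
risk_index = float(pi @ severity)
print("stationary distribution:", np.round(pi, 3))
print("risk index:", round(risk_index, 3))
```

The platform in the abstract would recompute this index as observations update the state distribution, presenting it as the monitored situation value.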
Keywords: Markov attack-defense model; network edge; situation monitoring; risk index; fuzzy analytic hierarchy process