Journal Articles
155 articles found
Detecting and Mitigating DDOS Attacks in SDNs Using Deep Neural Network
1
Authors: Gul Nawaz, Muhammad Junaid, Adnan Akhunzada, Abdullah Gani, Shamyla Nawazish, Asim Yaqub, Adeel Ahmed, Huma Ajab. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 11, pp. 2157-2178 (22 pages)
Distributed denial of service (DDoS) attack is the most common attack that obstructs a network and makes it unavailable for legitimate users. We propose a deep neural network (DNN) model for the detection of DDoS attacks in the Software-Defined Networking (SDN) paradigm. SDN centralizes the control plane and separates it from the data plane, simplifying the network and eliminating vendor-specific devices. Because of this open nature and centralized control, SDN can easily become a victim of DDoS attacks. We propose a supervised Developed Deep Neural Network (DDNN) model that classifies DDoS attack traffic and legitimate traffic. Our DDNN model takes a larger number of feature values than previously proposed machine learning (ML) models. The proposed model scans the data to find correlated features and delivers high-quality results. It enhances the security of SDN and has better accuracy than previously proposed models. We chose a recent state-of-the-art dataset that contains many novel attacks and overcomes the shortcomings and limitations of existing datasets. Our model achieves a high accuracy of 99.76% with a low false-positive rate and a low loss of 0.065%. The accuracy increases to 99.80% as the number of epochs is raised to 100. The proposed model classifies anomalous and normal traffic more accurately than previously proposed models, can handle huge amounts of structured and unstructured data, and can solve complex problems.
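The pipeline described above (flow features fed to a small feed-forward network that outputs attack vs. benign) can be sketched as follows. The synthetic data, layer sizes, and training loop are invented stand-ins, not the paper's DDNN or its dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled SDN flow features (NOT the paper's dataset):
# benign flows drawn around one mean, attack flows around another.
n, d = 400, 8
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),   # benign
               rng.normal(2.0, 1.0, (n // 2, d))])  # attack
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer trained with full-batch gradient descent on cross-entropy.
W1 = rng.normal(0, 0.1, (d, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);      b2 = 0.0
lr = 0.2

for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # P(attack | flow)
    g = (p - y) / n                     # d(mean loss) / d(logit)
    gh = np.outer(g, W2) * (1 - h**2)   # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.3f}")
```

On this well-separated toy data the network converges quickly; the paper's model differs in depth, features, and dataset.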
Keywords: distributed denial of service (DDoS) attacks, software-defined networking (SDN), classification, deep neural network (DNN)
Unknown DDoS Attack Detection with Fuzzy C-Means Clustering and Spatial Location Constraint Prototype Loss
2
Authors: Thanh-Lam Nguyen, Hao Kao, Thanh-Tuan Nguyen, Mong-Fong Horng, Chin-Shiuh Shieh. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 2, pp. 2181-2205 (25 pages)
Since its inception, the Internet has been evolving rapidly. With the advancement of science and technology and the explosive growth of the population, demand for the Internet has been on the rise, and many applications in education, healthcare, entertainment, science, and more are increasingly deployed on it. Concurrently, malicious threats on the Internet are on the rise as well. Distributed Denial of Service (DDoS) attacks are among the most common and dangerous threats on the Internet today, and their scale and complexity are constantly growing. Intrusion Detection Systems (IDS) have been deployed and have demonstrated their effectiveness in defending against these threats. In addition, research on Machine Learning (ML) and Deep Learning (DL) in IDS has produced effective results and gained significant attention. However, one of the challenges of applying ML and DL techniques in intrusion detection is the identification of unknown attacks: attacks not encountered during training can be misclassified with significant errors. This research focuses on unknown attack detection by combining two methods: Spatial Location Constraint Prototype Loss (SLCPL) and Fuzzy C-Means (FCM). The proposed method achieves promising results compared with traditional methods: a very high accuracy of up to 99.8% with a low false-positive rate for known attacks on the Intrusion Detection Evaluation Dataset (CICIDS2017), and, for unknown DDoS attacks on the DDoS Evaluation Dataset (CICDDoS2019), accuracy of 99.7% with precision of up to 99.9%. The success of the proposed method is due to the combination of SLCPL, an advanced Open-Set Recognition (OSR) technique, and FCM, a traditional yet highly applicable clustering technique, yielding a novel method in the field of unknown attack detection. This further expands the trend of applying DL and ML techniques in the development of intrusion detection systems and cybersecurity. Finally, implementing the proposed method in real-world systems can enhance security against increasingly complex threats on computer networks.
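The FCM half of the proposed combination can be sketched directly; the update rules below are the standard fuzzy c-means iterations (the two-blob demo data are invented, and the SLCPL half is not shown):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns memberships U (n x c) and centers V (c x d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # random soft assignment
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted centers
        D = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)),       # one traffic cluster
               rng.normal(4, 0.5, (100, 2))])      # a second cluster
U, V = fcm(X)
# Samples whose best membership stays low sit far from every prototype and
# could be flagged as candidate unknown attacks.
print("centers:", np.round(np.sort(V[:, 0]), 1))
```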
Keywords: cybersecurity, DDoS, unknown attack detection, machine learning, deep learning, incremental learning, convolutional neural networks (CNN), open-set recognition (OSR), spatial location constraint prototype loss, fuzzy c-means, CICIDS2017, CICDDoS2019
Adversarial Attacks and Defenses in Deep Learning (cited 17 times)
3
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. 《Engineering》 (SCIE, EI), 2020, No. 3, pp. 346-360 (15 pages)
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of deployed algorithms. Recently, the vulnerability of DL algorithms to adversarial samples has been widely recognized: fabricated samples can lead to various misbehaviors of DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, covering the broad frontier of the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research in this critical area.
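As a concrete instance of the attack family this survey covers, the fast gradient sign method (FGSM) perturbs an input one step along the sign of the input gradient. A toy logistic-regression sketch, with invented weights and input:

```python
import numpy as np

# FGSM on a toy logistic "model": x_adv = x + eps * sign(d loss / d x).
w = np.array([1.0, -2.0, 0.5]); b = 0.1          # fixed model weights
x = np.array([0.3, -0.4, 1.2]); y = 1.0          # input labelled class 1

def loss_and_grad(x):
    p = 1 / (1 + np.exp(-(w @ x + b)))           # P(class 1 | x)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, (p - y) * w                     # analytic d loss / d x

eps = 0.1
l0, g = loss_and_grad(x)
x_adv = x + eps * np.sign(g)                     # one-step FGSM
l1, _ = loss_and_grad(x_adv)
print(f"loss {l0:.4f} -> {l1:.4f}, max|delta| = {np.abs(x_adv - x).max():.2f}")
```

The loss increases while the perturbation stays within the L-infinity budget eps, which is the whole point of the attack.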
Keywords: machine learning, deep neural network, adversarial example, adversarial attack, adversarial defense
Insider Attack Detection Using Deep Belief Neural Network in Cloud Computing
4
Authors: A. S. Anakath, R. Kannadasan, Niju P. Joseph, P. Boominathan, G. R. Sreekanth. 《Computer Systems Science & Engineering》 (SCIE, EI), 2022, No. 5, pp. 479-492 (14 pages)
Cloud computing is a large network infrastructure where users, owners, third parties, authorized users, and customers can access and store their information quickly. Cloud computing has enabled the rapid growth of information in every field and meets the need for a centralized location for efficient processing. The cloud is nowadays highly affected by internal threats from users: sensitive applications such as banking, hospital, and business systems are especially exposed to real user threats. An intruder presents as a user and becomes a member of the network; after becoming an insider, they try to attack or steal sensitive data during information sharing or conversation. A major issue in today's technological development is identifying insider threats in the cloud network. When data are lost, compensating cloud users is difficult, privacy and security are not ensured, and use of the cloud is no longer trusted. Several solutions are available for the external security of the cloud network, but insider or internal threats still need to be addressed. This research work focuses on identifying insider attacks using artificial intelligence techniques. An insider attack is possible through the nodes of weak users' systems: attackers log in with a weak user ID, connect to the network, and pretend to be a trusted node, after which they can easily attack and steal information as an insider, and identifying them is very difficult. Such attacks need intelligent solutions, and machine learning approaches are widely used for security issues; to date, however, existing methods lag in classifying attackers accurately. This information-hijacking problem motivates researchers to provide a solution for internal threats. In the proposed work, we track attackers using user interaction behavior patterns and a deep learning technique. The mouse movements, clicks, and keystrokes of the real user are stored in a database. The deep belief neural network is designed using restricted Boltzmann machines (RBM) so that each RBM layer communicates with the previous and subsequent layers. The result is evaluated using a Cooja simulator based on the cloud environment. Accuracy and F-measure are greatly improved compared with existing long short-term memory and support vector machine approaches.
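The deep belief network described above is built by stacking restricted Boltzmann machines. A minimal sketch of one contrastive-divergence (CD-1) training step for a single tiny RBM; the sizes and the binary "behaviour features" are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Tiny RBM: 6 visible units (stand-in behaviour features), 4 hidden units.
nv, nh = 6, 4
W = rng.normal(0, 0.1, (nv, nh)); a = np.zeros(nv); b = np.zeros(nh)
v0 = (rng.random(nv) < 0.5).astype(float)        # one binary input vector

ph0 = sigmoid(v0 @ W + b)                        # P(h=1 | v0)
h0 = (rng.random(nh) < ph0).astype(float)        # sample hidden state
pv1 = sigmoid(h0 @ W.T + a)                      # reconstruct visible
ph1 = sigmoid(pv1 @ W + b)                       # hidden probs on reconstruction

lr = 0.1
W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))   # CD-1 gradient estimate
a += lr * (v0 - pv1)
b += lr * (ph0 - ph1)
```

Stacking trained RBMs layer by layer, then fine-tuning, gives the deep belief network the abstract refers to.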
Keywords: cloud computing security, insider attack, network security, privacy, user interaction behavior, deep belief neural network
Deep reinforcement learning and its application in autonomous fitting optimization for attack areas of UCAVs (cited 12 times)
5
Authors: LI Yue, QIU Xiaohui, LIU Xiaodong, XIA Qunli. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2020, No. 4, pp. 734-742 (9 pages)
The ever-changing battlefield environment requires the use of robust and adaptive technologies integrated into a reliable platform. Unmanned combat aerial vehicles (UCAVs) aim to integrate such advanced technologies while increasing the tactical capabilities of combat aircraft. As a research object, a common UCAV uses a neural network fitting strategy to obtain values of attack areas. However, this simple strategy cannot cope with complex environmental changes or autonomously optimize decision-making problems. To solve the problem, this paper proposes a new deep deterministic policy gradient (DDPG) strategy based on deep reinforcement learning for the attack-area fitting of UCAVs on the future battlefield. Simulation results show that the autonomy and environmental adaptability of UCAVs will be improved by the new DDPG algorithm and that the training process converges quickly. With the well-trained deep network, optimal values of attack areas can be obtained in real time during the whole flight.
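The DDPG strategy above is a full actor-critic pipeline; as a hedged illustration of just two of its standard ingredients, a deterministic actor with exploration noise and Polyak-averaged target networks, consider this toy sketch (the linear actor, sizes, and the stand-in "learning step" are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear actor: 3-d state -> 2-d bounded continuous action.
theta = rng.normal(0, 0.1, (3, 2))
theta_target = theta.copy()

def actor(state, weights):
    return np.tanh(state @ weights)        # deterministic, bounded action

state = np.array([0.5, -1.0, 0.2])
action = actor(state, theta) + rng.normal(0, 0.1, 2)   # exploration noise

# Stand-in for one learning step on the online actor...
theta = theta - 0.01 * rng.normal(0, 1, theta.shape)
# ...after which the target network tracks it slowly (Polyak averaging),
# which is what keeps DDPG's bootstrapped targets stable.
tau = 0.005
theta_target = tau * theta + (1 - tau) * theta_target
```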
Keywords: attack area, neural network, deep deterministic policy gradient (DDPG), unmanned combat aerial vehicle (UCAV)
Progressive Transfer Learning-based Deep Q Network for DDOS Defence in WSN
6
Authors: S. Rameshkumar, R. Ganesan, A. Merline. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, No. 3, pp. 2379-2394 (16 pages)
Wireless Multimedia Sensor Networks (WMSNs) have achieved popularity among diverse communities as a result of technological breakthroughs in sensors and current gadgets. By utilizing portable technologies, they achieve solid and significant results in wireless communication, media transfer, and digital transmission. Sensor nodes have been used in agriculture and industry in recent decades to detect characteristics such as temperature, moisture content, and other environmental conditions. WMSNs have also made applications easier to use by giving devices self-governing access to send and process data connected with the appropriate audio and video information. Many video sensor network studies focus on lowering power consumption and increasing transmission capacity, but the main demand is data reliability. Because of the obstacles at the sensor nodes, WMSNs are subject to a variety of attacks, including denial-of-service (DoS) attacks. A deep convolutional neural network is designed with state-action relationship mapping and used to identify DDoS attackers present in wireless sensor networks for smart agriculture. The proposed work collects data about traffic conditions and distinguishes between network conditions such as packet loss due to congestion and the presence of attackers, reducing attacker-detection delay and improving detection accuracy. To protect the network against DoS assaults, an improved machine learning technique must be offered; an efficient deep neural network approach is provided for detecting DoS in WMSNs, with the required parameters selected using an adaptive particle swarm optimization technique. The approach's efficiency is evaluated using the packet transmission ratio, energy consumption, latency, network length, and throughput.
Keywords: DoS attack, wireless sensor networks for smart agriculture, deep neural network, machine learning technique
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks (cited 1 time)
7
Authors: Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan. 《Computers, Materials & Continua》 (SCIE, EI), 2022, No. 5, pp. 2209-2224 (16 pages)
These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally and slightly perturbed; the changes are imperceptible to humans but cause a model to misclassify with high probability, severely affecting its performance or predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method based on a deep image restoration model is simple and state-of-the-art by providing strong experimental evidence, using the MNIST and CIFAR10 datasets for experiments and analysis. Finally, we compare our method to other state-of-the-art defense methods and show that our results are better than those of rival methods.
Keywords: computer vision, deep learning, convolutional neural networks, adversarial examples, adversarial attacks, adversarial defenses
Forecasting Shark Attack Risk Using AI: A Deep Learning Approach
8
Authors: Evan Valenti. 《Journal of Data Analysis and Information Processing》, 2023, No. 4, pp. 360-370 (11 pages)
This study aimed to develop a predictive model utilizing available data to forecast the risk of future shark attacks, making this critical information accessible for everyday public use. Employing a deep learning/neural network methodology, the system was designed to produce a binary output that is subsequently classified into categories of low, medium, or high risk. A significant challenge encountered during the study was the identification and procurement of appropriate historical and forecasted marine weather data, which is integral to the model's accuracy. Despite these challenges, the results were strikingly optimistic, showcasing the model's ability to predict with impressive accuracy. In conclusion, the developed forecasting tool not only offers promise in its immediate application but also sets a robust precedent for the adoption and adaptation of similar predictive systems in various analogous use cases in the marine environment and beyond.
Keywords: deep learning, shark research, predictive AI, marine biology, neural network, machine learning, shark attacks, data science, shark biology, forecasting
Black Box Adversarial Defense Based on Image Denoising and Pix2Pix
9
Authors: Zhenyong Rui, Xiugang Gong. 《Journal of Computer and Communications》, 2023, No. 12, pp. 14-30 (17 pages)
Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but their susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Currently, robustness defense techniques often rely on adversarial training, a method that tends to defend only against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP) technology. The method does not require prior knowledge of the specific attack type and eliminates the need for cumbersome adversarial training. When making predictions on unknown samples, the IDP method first applies denoising, then inputs the processed image into a trained Pix2Pix model for image transformation; finally, the image generated by Pix2Pix is passed to the classification model for prediction. This versatile defense approach demonstrates excellent performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showcasing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training methods and enhancing the overall robustness of models.
Keywords: deep neural networks (DNN), adversarial attack, adversarial training, Fourier transform, robust defense
Detection and Defense Method Against False Data Injection Attacks for Distributed Load Frequency Control System in Microgrid
10
Authors: Zhixun Zhang, Jianqiang Hu, Jianquan Lu, Jie Yu, Jinde Cao, Ardak Kashkynbayev. 《Journal of Modern Power Systems and Clean Energy》 (SCIE, EI, CSCD), 2024, No. 3, pp. 913-924 (12 pages)
In the realm of microgrids (MG), the distributed load frequency control (LFC) system has proven to be highly susceptible to the negative effects of false data injection attacks (FDIAs). Considering the significant responsibility of the distributed LFC system for maintaining frequency stability within the MG, this paper proposes a detection and defense method against unobservable FDIAs in the distributed LFC system. Firstly, the method integrates a bi-directional long short-term memory (BiLSTM) neural network and an improved whale optimization algorithm (IWOA) into the LFC controller to detect and counteract FDIAs. Secondly, to enable the BiLSTM neural network to detect multiple types of FDIAs with utmost precision, the model employs a historical MG dataset comprising the frequency and power variances. Finally, the IWOA is utilized to optimize the proportional-integral-derivative (PID) controller parameters to counteract the negative impacts of FDIAs. The proposed detection and defense method is validated by building the distributed LFC system in Simulink.
Keywords: microgrid, load frequency control, false data injection attack, bi-directional long short-term memory (BiLSTM) neural network, improved whale optimization algorithm (IWOA), detection and defense
HDLIDP: A Hybrid Deep Learning Intrusion Detection and Prevention Framework
11
Authors: Magdy M. Fadel, Sally M. El-Ghamrawy, Amr M. T. Ali-Eldin, Mohammed K. Hassan, Ali I. El-Desoky. 《Computers, Materials & Continua》 (SCIE, EI), 2022, No. 11, pp. 2293-2312 (20 pages)
Distributed denial-of-service (DDoS) attacks are designed to interrupt network services such as email servers and webpages in traditional computer networks. Furthermore, the enormous number of connected devices makes it difficult to operate such a network effectively. Software-defined networks (SDN) are networks managed through a centralized control system. This controller is the brain of any SDN, composing the forwarding tables of all data-plane network switches. Despite the advantages of SDN controllers, DDoS attacks are easier to perpetrate than on traditional networks: because the controller is a single point of failure, if it fails, the entire network fails. This paper offers a Hybrid Deep Learning Intrusion Detection and Prevention (HDLIDP) framework, which blends signature-based detection and deep learning neural networks to detect and prevent intrusions. The framework improves detection accuracy while addressing all of the aforementioned problems. To validate the framework, experiments are conducted on both traditional and SDN datasets; the findings demonstrate a significant improvement in classification accuracy.
Keywords: software-defined networks (SDN), distributed denial of service attack (DDoS), signature-based detection, whale optimization algorithm (WOA), deep learning neural network classifier
An Adversarial Example Attack Method Based on Loss Smoothing
12
Authors: Li Meihong, Jin Shuang, Du Ye. 《北京航空航天大学学报》 (Journal of Beijing University of Aeronautics and Astronautics) (EI, CAS, CSCD), 2024, No. 2, pp. 663-670 (8 pages)
Deep neural networks (DNNs) are vulnerable to adversarial examples. Existing momentum-based adversarial example generation methods can reach white-box attack success rates approaching 100%, but they remain ineffective when attacking other models, with low black-box success rates. To address this, an adversarial attack method based on loss smoothing is proposed to improve the transferability of adversarial examples. In each gradient-computation iteration, instead of using the current gradient directly, a locally averaged gradient is used to accumulate momentum, suppressing the local oscillations of the loss surface, stabilizing the update direction, and escaping local extrema. Extensive experiments on the ImageNet dataset show that, compared with existing momentum-based methods, the proposed method raises the average black-box attack success rate by 38.07% and 27.77% in single-model attack experiments, and by 32.50% and 28.63% in ensemble-model attack experiments.
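A toy sketch of the update just described: gradients averaged over a small neighbourhood, accumulated into momentum, then applied as a sign step inside an L-infinity ball. The oscillating one-line "loss gradient" stands in for a real network:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(x):
    # Gradient of a toy "bumpy" loss surface; the cosine term mimics the
    # local oscillations the method is designed to smooth out.
    return 2 * x + 0.5 * np.cos(20 * x)

x0 = np.array([0.3, -0.2])                 # "clean input"
x = x0.copy()
momentum = np.zeros_like(x)
mu, alpha, eps = 1.0, 0.01, 0.05           # decay, step size, L_inf budget
radius, n_samples = 0.02, 10               # neighbourhood for averaging

for _ in range(5):
    nbrs = x + rng.uniform(-radius, radius, (n_samples, x.size))
    g = np.mean([grad_loss(p) for p in nbrs], axis=0)   # locally averaged grad
    momentum = mu * momentum + g / (np.abs(g).sum() + 1e-12)
    x = x0 + np.clip(x + alpha * np.sign(momentum) - x0, -eps, eps)
```

Replacing the point gradient with the neighbourhood average is what damps the cosine "bumps"; everything else is the standard momentum iterative attack.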
Keywords: deep neural network, adversarial example, black-box attack, loss smoothing, AI security
A Fast Adversarial Attack Method for Electromagnetic Signals Based on the Jacobian Saliency Map
13
Authors: Zhang Jian, Zhou Xia, Zhang Yiran, Wang Zicong. 《通信学报》 (Journal on Communications) (EI, CSCD), 2024, No. 1, pp. 180-193 (14 pages)
To generate high-quality adversarial examples for electromagnetic signals, a fast Jacobian saliency map attack (FJSMA) method is proposed. FJSMA computes the Jacobian matrix with respect to the targeted attack class and generates a feature saliency map from it, then iteratively selects the most salient feature points and contiguous feature points within their neighborhoods to perturb, while imposing a single-point perturbation limit, finally producing the adversarial example. Experimental results show that, compared with the Jacobian saliency map attack (JSMA), FJSMA maintains the same high attack success rate while generating examples roughly 10 times faster and with more than 11% higher similarity; compared with other gradient-based methods, the attack success rate improves by more than 20% and similarity by 20% to 30%.
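FJSMA's speedups (neighbourhood feature selection, the single-point perturbation limit) are not reproduced here, but the targeted Jacobian saliency rule it builds on can be sketched with an invented class-by-feature Jacobian:

```python
import numpy as np

# Targeted saliency map from a class-by-feature Jacobian (toy numbers).
# A feature scores highly when it raises the target logit AND lowers the rest.
J = np.array([[ 0.2, -0.5,  0.1,  0.4],   # d logit_class0 / d feature
              [-0.3,  0.6, -0.2,  0.1],   # d logit_class1 / d feature
              [ 0.1, -0.4,  0.3, -0.6]])  # d logit_class2 / d feature
target = 1

Jt = J[target]                             # target-class row
Jo = J.sum(axis=0) - Jt                    # summed non-target rows
S = np.where((Jt > 0) & (Jo < 0), Jt * np.abs(Jo), 0.0)
order = np.argsort(S)[::-1]                # most salient features first
print("saliency:", np.round(S, 3), "-> perturb feature", order[0], "first")
```

An attack then perturbs the top-ranked features and recomputes the map, iterating until the target class wins.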
Keywords: deep neural network, adversarial example, electromagnetic signal modulation recognition, Jacobian saliency map, targeted attack
A Trap-Style Ensemble Adversarial Defense Network Based on the Attackable-Space Hypothesis
14
Authors: Sun Jiaze, Wen Sulei, Zheng Wei, Chen Xiang. 《软件学报》 (Journal of Software) (EI, CSCD), 2024, No. 4, pp. 1861-1884 (24 pages)
Deep neural networks are now widely applied across many fields. However, research shows that they are vulnerable to adversarial examples, which seriously threatens their application and development. Most existing adversarial defense methods sacrifice some original classification accuracy and depend strongly on the information provided by already-generated adversarial examples, so they cannot balance defensive effectiveness and efficiency. Based on manifold learning, this work proposes an attackable-space hypothesis for the cause of adversarial examples from the feature-space perspective, and accordingly proposes Trap-Net, a trap-style ensemble adversarial defense network. Trap-Net adds trap-class data to the training data on top of the original model and uses a trap-style smoothing loss function to establish inducing relations between target data classes and trap classes, generating trap networks. To address the loss of original classification accuracy, ensemble learning is used to aggregate multiple trap networks, enlarging the target attackable space defined by trap labels in the feature space without sacrificing original accuracy. Finally, Trap-Net judges whether input data are adversarial by detecting whether they hit the target attackable space. Experiments on the MNIST, K-MNIST, F-MNIST, CIFAR-10, and CIFAR-100 datasets show that Trap-Net achieves strong defensive generalization against adversarial examples without sacrificing clean-sample accuracy, and the results validate the attackable-space hypothesis. In low-perturbation white-box attack scenarios, Trap-Net detects more than 85% of adversarial examples; in high-perturbation white-box and black-box scenarios, its detection rate approaches 100%. Compared with other detection-based defenses, Trap-Net is highly effective against both white-box and black-box attacks, providing an efficient robustness-optimization method for deep neural networks in adversarial settings.
Keywords: deep neural network, adversarial example, ensemble learning, adversarial defense, robustness optimization
Universal Black-Box Certified Defense Based on Randomized Smoothing
15
Authors: Li Qiao, Chen Jing, Zhang Zijun, He Kun, Du Ruiying, Wang Xinxin. 《计算机学报》 (Chinese Journal of Computers) (EI, CAS, CSCD), 2024, No. 3, pp. 690-702 (13 pages)
In recent years, image classification models based on deep neural networks (DNNs) have been widely deployed in critical domains such as face recognition and autonomous driving, with excellent performance. However, DNNs are vulnerable to adversarial examples that cause misclassification, so improving model robustness has become a major research direction. Most existing defenses, especially empirical ones, assume a white-box setting in which the defender has detailed model information such as the architecture and parameters; yet model owners may be unwilling to share such information for privacy reasons. Even existing black-box defenses cannot defend against attacks under all norm perturbations and thus lack universality. This paper therefore proposes a universal certified defense applicable to black-box models. Specifically, a query-based data-free substitute-model generation scheme is first designed: without prior knowledge such as the model's training data or structure, queries and zeroth-order optimization are used to generate a high-quality substitute model, converting the certified-defense scenario into a white-box one while preserving the model's privacy. Second, randomized smoothing and noise-selection methods based on the white-box substitute model are proposed, constructing a universal certified defense against perturbations of arbitrary norms. The effectiveness of the substitute model is confirmed by comparing the white-box certified-defense performance of the original and substitute models. Compared with existing methods, the proposed universal black-box certified defense achieves significant improvements on the CIFAR10 dataset: experimental results show it maintains effects similar to white-box certified defenses, and, compared with previous black-box certified defenses, it achieves certified defense for all L_p norms while improving certified accuracy by more than 20%. The scheme also protects the privacy of the original model, reducing the success rate of membership inference attacks by 5.48% relative to the original model.
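The scheme above wraps randomized smoothing around a substitute model; the smoothing core itself (majority vote under Gaussian noise, certified L2 radius sigma * Phi^{-1}(pA), as in standard randomized smoothing) can be sketched on a toy one-dimensional "black-box" classifier. Note the real procedure uses a lower confidence bound for pA rather than the raw frequency:

```python
import numpy as np
from statistics import NormalDist

def base(x):
    """Toy 'black-box' base classifier: class 1 iff the score exceeds 0."""
    return (x > 0).astype(int)

def smoothed_predict(x, sigma=0.5, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.bincount(base(x + sigma * rng.normal(size=n)), minlength=2)
    top = int(votes.argmax())
    pA = votes[top] / n        # raw frequency; the real method lower-bounds this
    radius = sigma * NormalDist().inv_cdf(pA) if pA > 0.5 else 0.0
    return top, radius

cls, r = smoothed_predict(0.8)
print(f"smoothed class {cls}, certified L2 radius ~{r:.2f}")
```

Any input within `r` of this point (in L2) is guaranteed, up to sampling error, to receive the same smoothed prediction, which is what makes the defense "certified".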
Keywords: deep neural network, certified defense, randomized smoothing, black-box model, substitute model
An Adversarial Example Generation Algorithm Based on Transformer and GAN
16
Authors: Liu Shuaiwei, Li Zhi, Wang Guomei, Zhang Li. 《计算机工程》 (Computer Engineering) (CAS, CSCD), 2024, No. 2, pp. 180-187 (8 pages)
Adversarial attack and defense is a hot research direction in computer security. To address the poor visual quality of gradient-based adversarial example generation and the low efficiency of optimization-based methods, Trans-GAN, an adversarial example generation algorithm based on the Transformer and the generative adversarial network (GAN), is proposed. First, the strong visual representation ability of the Transformer is exploited by using it as a reconstruction network that receives clean images and generates attack noise. Second, the Transformer reconstruction network serves as the generator and is combined with a discriminator based on a deep convolutional network to form the GAN architecture, improving the realism of generated images and ensuring training stability; in addition, an improved attention mechanism, Targeted Self-Attention, is proposed, introducing target labels as prior knowledge during training to guide the model to generate adversarial perturbations with specific attack targets. Finally, skip connections apply the adversarial noise to clean samples to form adversarial examples that attack the target classification network. Experimental results show that Trans-GAN achieves attack success rates above 99.9% on two models on the MNIST dataset and of 96.36% and 98.47% on two models on the CIFAR10 dataset, outperforming current state-of-the-art generative adversarial example methods; compared with the fast gradient sign method and projected gradient descent, Trans-GAN produces smaller perturbations and more natural adversarial examples that are hard for human vision to distinguish.
Keywords: deep neural network, adversarial example, adversarial attack, Transformer model, generative adversarial network, attention mechanism
Adversarial Robustness Enhancement for Deep Image Recognition Based on Multi-Model Orthogonalization
17
Authors: Lu Zihao, Xu Yanjie, Sun Hao, Ji Kefeng, Kuang Gangyao. 《信号处理》 (Journal of Signal Processing) (CSCD), 2024, No. 3, pp. 503-515 (13 pages)
In recent years, deep neural networks (DNNs) have been widely applied to image recognition, object detection, image segmentation, and many other computer vision tasks with great success. However, because of their inherent fragility, DNN models still face security risks from adversarial attacks: an attacker maliciously adds tiny perturbations, imperceptible to the human eye, that make the model produce high-confidence erroneous outputs. Ensembling multiple DNN models has become one of the effective solutions for improving adversarial robustness. However, adversarial examples transfer between the sub-models of an ensemble, which can greatly reduce the ensemble's defensive effectiveness, and there is still a lack of intuitive theoretical analysis for reducing this internal adversarial transferability. This paper introduces the concept of the loss field and quantitatively describes adversarial transferability between DNN models, focusing on deriving an upper bound of the adversarial-transfer expression. It finds that promoting orthogonality between model loss fields and reducing their strength (PORS: Promoting Orthogonality and Reducing Strength) limits this upper bound and hence the transferability between models. A PORS penalty term is added to the original loss function so that the ensemble retains its recognition performance on the original data while enhancing overall adversarial robustness by reducing adversarial transferability among sub-models. Experiments on CIFAR-10 and MNIST with ensembles trained by PORS, compared against other advanced ensemble defenses under white-box and black-box attacks, show that PORS significantly improves adversarial robustness, maintains very high accuracy under white-box attacks and on the original data, is especially effective against black-box transfer attacks, and is the most stable among all the ensemble defenses evaluated.
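The abstract does not give the exact form of the PORS term, so the penalty below is a hypothetical illustration of the stated idea only: penalize cosine alignment between two models' input-loss gradients (their "loss fields") together with the fields' strength:

```python
import numpy as np

def pors_penalty(g1, g2, lam=0.1):
    """Hypothetical PORS-style term: penalise alignment and strength of two
    models' input-loss gradients ('loss fields'); lam is an invented weight."""
    cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12)
    strength = np.linalg.norm(g1) ** 2 + np.linalg.norm(g2) ** 2
    return cos ** 2 + lam * strength      # added to the ensemble task loss

g_parallel = pors_penalty(np.array([1.0, 0.0]), np.array([2.0, 0.0]))
g_ortho    = pors_penalty(np.array([1.0, 0.0]), np.array([0.0, 2.0]))
print(g_parallel > g_ortho)   # orthogonal loss fields are penalised less
```

Minimizing such a term during ensemble training pushes sub-models toward orthogonal, low-strength gradients, which is the mechanism the paper credits for reduced intra-ensemble transfer.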
Keywords: deep neural network, image recognition, adversarial transferability, ensemble defense, loss field
A Chinese BERT Attack Method Based on the Masked Language Model
18
Authors: Zhang Yunting, Ye Lin, Tang Haolin, Zhang Hongli, Li Shang. 《软件学报》 (Journal of Software) (EI, CSCD), 2024, No. 7, pp. 3392-3409 (18 pages)
Adversarial texts are malicious samples that cause deep learning classifiers to make wrong judgments; an adversary crafts text that fools the target model by adding small perturbations, imperceptible to humans, to the original text. Research on adversarial text generation makes it possible to evaluate the robustness of deep neural networks and supports subsequent work on improving model robustness. Among existing adversarial text generation methods designed for Chinese, few take the comparatively robust Chinese BERT model as the target of attack. For the Chinese text classification task, this paper proposes Chinese BERT Tricker, an attack method against Chinese BERT. It uses a character-level word importance scoring method, important-character localization, and, based on the masked language model, designs a word-level perturbation method for Chinese with two strategies for replacing important words. Experiments show that, for text classification, the proposed method substantially reduces the classification accuracy of the Chinese BERT model to below 40% on two real datasets, and that its attack performance is clearly stronger than that of other baseline methods.
Keywords: deep neural network, adversarial example, textual adversarial attack, Chinese BERT, masked language model
Enhancing Adversarial Example Transferability via Gradient Aggregation
19
Authors: Deng Shiyun, Ling Jie. 《计算机工程与应用》 (Computer Engineering and Applications) (CSCD), 2024, No. 14, pp. 275-282 (8 pages)
Image classification models based on deep neural networks are vulnerable to adversarial examples. Existing research shows that white-box attacks already achieve high success rates, but the transferability of adversarial examples when attacking other models remains low. To improve the transferability of adversarial attacks, a gradient-aggregation method for enhancing adversarial example transferability is proposed. The original image is mixed with images of other classes at a specific ratio to obtain mixed images; by jointly considering information from different classes and balancing the gradient contribution of each class, the effect of local oscillations can be avoided. During iteration, the gradient information of other data points in the neighborhood of the current point is aggregated to refine the gradient direction, avoiding over-reliance on a single data point and generating adversarial examples with stronger transferability. Experimental results on the ImageNet dataset show that the proposed method significantly improves black-box attack success rates and adversarial example transferability. In single-model attacks, the average attack success rate on four normally trained models is 88.5%, 2.7 percentage points higher than the Admix method; in ensemble-model attacks, the average success rate reaches 92.7%. Furthermore, the method can be combined with transformation-based adversarial attacks, raising the average attack success rate on three adversarially trained models by 10.1 percentage points over Admix and enhancing the transferability of adversarial attacks.
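A toy sketch of the two aggregation ideas described above: mixing the input with other-class samples and averaging gradients over a neighbourhood before taking the sign step. The one-line loss gradient and all data are invented stand-ins for a real classifier and images:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(x):
    # Toy stand-in for d loss / d x of a real classifier.
    return x - 1.0 + 0.3 * np.sin(15 * x)

x = rng.normal(0, 1, 4)                 # "original image" as a flat vector
others = rng.normal(2, 1, (3, 4))       # samples from other classes
eta, radius, alpha = 0.2, 0.05, 0.01    # mix ratio, neighbourhood, step size

grads = []
for x_other in others:                  # mix in other-class information
    mixed = x + eta * x_other
    for _ in range(5):                  # aggregate the mixed point's neighbourhood
        grads.append(grad_loss(mixed + rng.uniform(-radius, radius, 4)))
g = np.mean(grads, axis=0)              # aggregated gradient
x_adv = x + alpha * np.sign(g)          # one transferable-attack step
```

In the real method this step runs inside a momentum iteration with an L-infinity projection; only the gradient-aggregation core is shown here.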
Keywords: deep neural network, adversarial attack, transferability, gradient aggregation
A Multi-Model Scheduling-Optimization Adversarial Attack Algorithm
20
Authors: Wang Yong, Liu Yi. 《信息安全研究》 (Journal of Information Security Research) (CSCD), 2024, No. 5, pp. 403-410 (8 pages)
Adversarial examples can be generated in two ways, with a single model or with an ensemble of models, and ensemble-generated adversarial examples usually achieve stronger attack success rates. Research on ensemble methods is still limited: existing ensemble approaches mostly use all models simultaneously in every iteration without properly accounting for the differences between models, which lowers the success rate of the resulting adversarial examples. To further improve the success rate of ensemble attacks, a multi-model scheduling-optimization adversarial attack algorithm is proposed. First, models are scheduled and selected by computing the differences between their loss gradients, and in each iteration the optimal model combination is chosen for the ensemble attack to obtain the optimal gradient. Second, the momentum term from the previous stage is used to update the current data point, and the optimized gradient is computed on the updated point using the current stage's model combination; the optimized gradient is then combined with the transformed gradient to adjust the final gradient direction. Extensive experiments on the ImageNet dataset show that the proposed ensemble algorithm achieves higher black-box attack success rates with smaller perturbations: compared with mainstream full-model ensemble methods, the average black-box success rates against normally trained and adversarially trained models improve by 3.4% and 12%, respectively, and the generated adversarial examples have better visual quality.
Keywords: adversarial example, neural network, deep learning, black-box attack, ensemble model