Journal Articles
138 articles found
A backdoor attack against quantum neural networks with limited information
1
Authors: 黄晨猗, 张仕斌. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, Issue 10, pp. 219-228 (10 pages)
Backdoor attacks are emerging security threats to deep neural networks. In these attacks, adversaries manipulate the network by constructing training samples embedded with backdoor triggers. The backdoored model performs as expected on clean test samples but consistently misclassifies samples containing the backdoor trigger as a specific target label. While quantum neural networks (QNNs) have shown promise in surpassing their classical counterparts in certain machine learning tasks, they are also susceptible to backdoor attacks. However, current attacks on QNNs are constrained by the adversary's understanding of the model structure and specific encoding methods. Given the diversity of encoding methods and model structures in QNNs, the effectiveness of such backdoor attacks remains uncertain. In this paper, we propose an algorithm that leverages dataset-based optimization to initiate backdoor attacks. A malicious adversary can embed backdoor triggers into a QNN model by poisoning only a small portion of the data. The victim QNN maintains high accuracy on clean test samples without the trigger but outputs the target label set by the adversary when predicting samples with the trigger. Furthermore, our proposed attack cannot be easily resisted by existing backdoor detection methods.
Keywords: backdoor attack; quantum artificial intelligence security; quantum neural network; variational quantum circuit
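The data-poisoning step the abstract describes can be sketched in a few lines. The trigger shape, poison rate, and classical array layout below are illustrative assumptions, not details from the paper (which targets quantum models):

```python
import numpy as np

rng = np.random.default_rng(0)

def poison_dataset(images, labels, target_label, poison_rate=0.05):
    """Stamp a small trigger patch into a fraction of the training
    images and relabel them with the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0        # 3x3 white-square trigger in a corner
    labels[idx] = target_label         # attacker-chosen target label
    return images, labels, idx

clean_x = rng.random((200, 28, 28))
clean_y = rng.integers(0, 10, size=200)
poisoned_x, poisoned_y, idx = poison_dataset(clean_x, clean_y, target_label=7)
```

A model trained on `poisoned_x`/`poisoned_y` behaves normally on clean inputs but learns to associate the corner patch with label 7, which is the effect the paper exploits.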
Insider Attack Detection Using Deep Belief Neural Network in Cloud Computing
2
Authors: A.S. Anakath, R. Kannadasan, Niju P. Joseph, P. Boominathan, G.R. Sreekanth. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 5, pp. 479-492 (14 pages)
Cloud computing is a high-capacity network infrastructure where users, owners, third parties, authorized users, and customers can access and store their information quickly. The use of cloud computing has driven the rapid increase of information in every field and the need for a centralized location to process it efficiently. The cloud is nowadays highly affected by internal threats from its users. Sensitive applications such as banking, hospital, and business systems are most affected by real user threats. An intruder presents as a user and becomes a member of the network. After becoming an insider in the network, they try to attack or steal sensitive data during information sharing or conversation. A major issue in today's technological development is identifying the insider threat in the cloud network. When data are lost, compensating cloud users is difficult. If privacy and security are not ensured, the usage of the cloud cannot be trusted. Several solutions are available for the external security of the cloud network; however, insider or internal threats also need to be addressed. In this research work, we focus on a solution for identifying an insider attack using an artificial intelligence technique. An insider attack is possible through the nodes of weak users' systems: attackers log in using a weak user id, connect to the network, and pretend to be a trusted node. They can then easily attack and hack information as an insider, and identifying them is very difficult. These types of attacks need intelligent solutions. Machine learning approaches are widely used for security issues, but existing approaches still lag in classifying attackers accurately. This information-hijacking process is insidious, which motivates researchers to provide a solution for internal threats. In our proposed work, we track attackers using a user-interaction behavior pattern and a deep learning technique. The mouse movements, clicks, and keystrokes of the real user are stored in a database. The deep belief neural network is designed using restricted Boltzmann machines (RBMs) so that each RBM layer communicates with the previous and subsequent layers. The result is evaluated using a Cooja simulator based on the cloud environment. The accuracy and F-measure are highly improved compared with the existing long short-term memory and support vector machine approaches.
Keywords: cloud computing security; insider attack; network security; privacy; user interaction behavior; deep belief neural network
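The restricted Boltzmann machine building block mentioned above can be sketched as follows. Biases are omitted and the layer sizes are made up, so this is a minimal illustration of one-step contrastive divergence (CD-1), not the paper's full deep belief network:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal RBM trained with one-step contrastive divergence (CD-1),
    the unit a deep belief network stacks layer by layer."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.lr = lr

    def cd1(self, v0):
        h0 = sigmoid(v0 @ self.W)              # up-pass: hidden activations
        v1 = sigmoid(h0 @ self.W.T)            # down-pass: reconstruction
        h1 = sigmoid(v1 @ self.W)              # up-pass on the reconstruction
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        return float(np.mean((v0 - v1) ** 2))  # reconstruction error

rbm = RBM(n_visible=6, n_hidden=4)
batch = rng.integers(0, 2, size=(32, 6)).astype(float)
errors = [rbm.cd1(batch) for _ in range(100)]
```

In a DBN, the hidden activations of one trained RBM become the visible input of the next, which is how the "layer communicates with the previous and subsequent layers" idea is realized.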
Detecting and Mitigating DDOS Attacks in SDNs Using Deep Neural Network
3
Authors: Gul Nawaz, Muhammad Junaid, Adnan Akhunzada, Abdullah Gani, Shamyla Nawazish, Asim Yaqub, Adeel Ahmed, Huma Ajab. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 2157-2178 (22 pages)
Distributed denial of service (DDoS) attacks are the most common attacks that obstruct a network and make it unavailable to legitimate users. We propose a deep neural network (DNN) model for the detection of DDoS attacks in the software-defined networking (SDN) paradigm. SDN centralizes the control plane and separates it from the data plane, which simplifies a network and eliminates vendor-specific device behavior. Because of this open nature and centralized control, SDN can easily become a victim of DDoS attacks. Our supervised Developed Deep Neural Network (DDNN) model classifies DDoS attack traffic and legitimate traffic, and takes a larger number of feature values than previously proposed machine learning (ML) models. The proposed model scans the data to find correlated features and delivers high-quality results. It enhances the security of SDN and has better accuracy than previously proposed models. We chose a recent state-of-the-art dataset which contains many novel attacks and overcomes the shortcomings and limitations of existing datasets. Our model achieves a high accuracy rate of 99.76% with a low false-positive rate and a low loss rate of 0.065%. The accuracy increases to 99.80% as we increase the number of epochs to 100. The proposed model classifies anomalous and normal traffic more accurately than previously proposed models, can handle huge amounts of structured and unstructured data, and can solve complex problems.
Keywords: distributed denial of service (DDoS) attacks; software-defined networking (SDN); classification; deep neural network (DNN)
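As a minimal stand-in for the DNN traffic classifier described above, the sketch below trains a one-hidden-layer network with plain full-batch gradient descent on synthetic two-class "flow features". The feature distribution, layer sizes, and learning rate are assumptions for illustration, not the paper's DDNN:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic flow features: attack traffic (class 1) drawn with a
# shifted mean relative to benign traffic (class 0).
X = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(2, 1, (200, 8))])
y = np.array([0] * 200 + [1] * 200)

W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # attack probability
    return h, p.ravel()

lr = 0.1
for _ in range(500):                          # full-batch gradient descent
    h, p = forward(X)
    g = (p - y)[:, None] / len(X)             # d(cross-entropy)/d(logit)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)              # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

accuracy = float(((forward(X)[1] > 0.5) == y).mean())
```

Real deployments would of course use held-out test data and a deep-learning framework; the point here is only the shape of the supervised classification pipeline.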
Detection and defending the XSS attack using novel hybrid stacking ensemble learning-based DNN approach (Cited: 1)
4
Authors: Muralitharan Krishnan, Yongdo Lim, Seethalakshmi Perumal, Gayathri Palanisamy. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 3, pp. 716-727 (12 pages)
Existing web-based security applications have failed in many situations due to the great intelligence of attackers. Among web applications, Cross-Site Scripting (XSS) is one of the most dangerous assaults, experienced while modifying an organization's or user's information. To avoid these security challenges, this article proposes a novel, all-encompassing combination of machine learning (NB, SVM, k-NN) and deep learning (RNN, CNN, LSTM) frameworks for detecting and defending against XSS attacks with high accuracy and efficiency. Based on this representation, a novel idea for merging a stacking ensemble with web applications, termed "hybrid stacking", is proposed. To implement the aforementioned methods, four distinct datasets, each of which contains both safe and unsafe content, are considered. The hybrid detection method can adaptively identify attacks from the URL, and the defense mechanism inherits the advantages of URL encoding with dictionary-based mapping to improve prediction accuracy, accelerate the training process, and effectively remove unsafe JScript/JavaScript keywords from the URL. The simulation results show that the proposed hybrid model is more efficient than existing detection methods. It produces more than 99.5% accurate XSS attack classification results (accuracy, precision, recall, F1-score, and Receiver Operating Characteristic (ROC)) and is highly resistant to XSS attacks. To ensure the security of the server's information, the proposed hybrid approach is demonstrated in a real-time environment.
Keywords: machine learning; deep neural networks; classification; stacking ensemble; XSS attack; URL encoding; JScript/JavaScript; web security
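The "URL encoding with dictionary-based mapping" defense step can be sketched as follows. The keyword dictionary and URL are illustrative, not the paper's full mapping:

```python
from urllib.parse import unquote

# Decode the URL, then map known-dangerous script tokens to safe
# replacements via a dictionary lookup.
UNSAFE = {"<script>": "", "</script>": "", "javascript:": "",
          "onerror=": "", "onload=": "", "alert(": "("}

def sanitize_url(url: str) -> str:
    decoded = unquote(url).lower()        # undo percent-encoding first
    for token, replacement in UNSAFE.items():
        decoded = decoded.replace(token, replacement)
    return decoded

url = "http://x.test/q=%3Cscript%3Ealert(1)%3C/script%3E"
clean = sanitize_url(url)
```

A single-pass string replacement like this can be bypassed by nested tokens (e.g. `<scr<script>ipt>`), which is one reason the paper pairs the mapping with a learned detector rather than relying on it alone.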
Adversarial Attacks and Defenses in Deep Learning (Cited: 19)
5
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. Engineering (SCIE, EI), 2020, Issue 3, pp. 346-360 (15 pages)
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, which cover the broad frontier of the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: machine learning; deep neural network; adversarial example; adversarial attack; adversarial defense
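The canonical attack this survey covers, the Fast Gradient Sign Method (FGSM), fits in a few lines. A toy logistic model stands in for the DNN; with a real network the gradient would come from backpropagation, and all the numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# FGSM in one step: move the input by epsilon in the sign of the
# loss gradient with respect to the input.
w = rng.normal(0, 1, 16)                  # stand-in model weights
x = rng.normal(0, 1, 16)                  # clean input
y = 1                                     # true label in {0, 1}

def prob(x):                              # model's P(y = 1 | x)
    return float(1 / (1 + np.exp(-w @ x)))

def fgsm(x, eps=0.3):
    grad_x = (prob(x) - y) * w            # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)      # one signed-gradient step

x_adv = fgsm(x)
```

The perturbation is bounded by `eps` in the infinity norm, which is why such samples stay "perceived as benign by humans" while the model's confidence in the true label drops.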
Deep reinforcement learning and its application in autonomous fitting optimization for attack areas of UCAVs (Cited: 12)
6
Authors: LI Yue, QIU Xiaohui, LIU Xiaodong, XIA Qunli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2020, Issue 4, pp. 734-742 (9 pages)
The ever-changing battlefield environment requires the use of robust and adaptive technologies integrated into a reliable platform. Unmanned combat aerial vehicles (UCAVs) aim to integrate such advanced technologies while increasing the tactical capabilities of combat aircraft. As a research object, a common UCAV uses a neural network fitting strategy to obtain values of attack areas. However, this simple strategy cannot cope with complex environmental changes and autonomously optimize decision-making problems. To solve the problem, this paper proposes a new deep deterministic policy gradient (DDPG) strategy based on deep reinforcement learning for the attack area fitting of UCAVs in the future battlefield. Simulation results show that the autonomy and environmental adaptability of UCAVs in the future battlefield will be improved based on the new DDPG algorithm, and the training process converges quickly. We can obtain the optimal values of attack areas in real time during the whole flight with the well-trained deep network.
Keywords: attack area; neural network; deep deterministic policy gradient (DDPG); unmanned combat aerial vehicle (UCAV)
An Improved Optimized Model for Invisible Backdoor Attack Creation Using Steganography (Cited: 2)
7
Authors: Daniyal M. Alghazzawi, Osama Bassam J. Rabie, Surbhi Bhatia, Syed Hamid Hasan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 7, pp. 1173-1193 (21 pages)
The deep neural network (DNN) training process is widely affected by backdoor attacks. A backdoor attack is excellent at concealing its identity in the DNN by performing well on regular samples while displaying malicious behavior on data containing poisoning triggers. State-of-the-art backdoor attacks mainly follow the assumption that the trigger is sample-agnostic, with different poisoned samples using the same trigger. To overcome this problem, in this work we create a backdoor attack to test its strength in withstanding complex defense strategies. To achieve this objective, we develop an improved Convolutional Neural Network (ICNN) model optimized using a Gradient-Based Optimization (GBO) algorithm (ICNN-GBO). In the ICNN-GBO model, we inject the triggers via a steganography and regularization technique, generating triggers using a single pixel, irregular shapes, and different sizes. The performance of the proposed methodology is evaluated using different metrics, such as attack success rate, stealthiness, pollution index, anomaly index, entropy index, and functionality. When the ICNN-GBO model is trained with the poisoned dataset, it maps the malicious code to the target label. The proposed scheme's effectiveness is verified by experiments conducted on two benchmark datasets, namely CIFAR-10 and MS-Celeb-1M. The results demonstrate that the proposed methodology offers a significant defense against conventional backdoor attack detection frameworks such as STRIP and Neural Cleanse.
Keywords: convolutional neural network; gradient-based optimization; steganography; backdoor attack; regularization attack
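One steganographic route to an "invisible" trigger is least-significant-bit (LSB) embedding, sketched below. This is illustrative only; the paper's ICNN-GBO pipeline additionally optimizes and regularizes the trigger:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hide trigger bits in the least-significant bits of an 8-bit image,
# changing each affected pixel by at most 1 intensity level.
def embed_lsb(image, bits):
    flat = image.flatten()                            # copy of the pixels
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n):
    return image.flatten()[:n] & 1

cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
trigger = rng.integers(0, 2, size=16, dtype=np.uint8)
stego = embed_lsb(cover, trigger)
```

Because the per-pixel change is at most 1/255, the stego image is visually indistinguishable from the cover, which is what makes the resulting backdoor trigger hard for visual inspection to catch.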
Unknown DDoS Attack Detection with Fuzzy C-Means Clustering and Spatial Location Constraint Prototype Loss
8
Authors: Thanh-Lam Nguyen, Hao Kao, Thanh-Tuan Nguyen, Mong-Fong Horng, Chin-Shiuh Shieh. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2181-2205 (25 pages)
Since its inception, the Internet has been rapidly evolving. With the advancement of science and technology and the explosive growth of the population, the demand for the Internet has been on the rise. Many applications in education, healthcare, entertainment, science, and more are increasingly deployed on the Internet. Concurrently, malicious threats on the Internet are on the rise as well. Distributed Denial of Service (DDoS) attacks are among the most common and dangerous threats on the Internet today, and their scale and complexity are constantly growing. Intrusion Detection Systems (IDS) have been deployed and have demonstrated their effectiveness in defending against those threats. In addition, research on Machine Learning (ML) and Deep Learning (DL) in IDS has produced effective results and gained significant attention. However, one of the challenges when applying ML and DL techniques in intrusion detection is the identification of unknown attacks. These attacks, which are not encountered during the system's training, can lead to misclassification with significant errors. In this research, we focus on addressing the issue of unknown attack detection by combining two methods: Spatial Location Constraint Prototype Loss (SLCPL) and Fuzzy C-Means (FCM). The proposed method achieves promising results compared to traditional methods: a very high accuracy of up to 99.8% with a low false positive rate for known attacks on the Intrusion Detection Evaluation Dataset (CICIDS2017), and an accuracy of 99.7% with precision up to 99.9% for unknown DDoS attacks on the DDoS Evaluation Dataset (CICDDoS2019). The success of the proposed method is due to the combination of SLCPL, an advanced Open-Set Recognition (OSR) technique, and FCM, a traditional yet highly applicable clustering technique. This yields a novel method in the field of unknown attack detection and further expands the trend of applying DL and ML techniques to the development of intrusion detection systems and cybersecurity. Finally, implementing the proposed method in real-world systems can enhance security capabilities against increasingly complex threats in computer networks.
Keywords: cybersecurity; DDoS; unknown attack detection; machine learning; deep learning; incremental learning; convolutional neural networks (CNN); open-set recognition (OSR); spatial location constraint prototype loss; fuzzy c-means; CICIDS2017; CICDDoS2019
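The clustering half of the SLCPL + FCM combination is the classic Fuzzy C-Means alternation between soft memberships and weighted centroids, sketched below on synthetic two-cluster data (the OSR loss is not reproduced here, and the data are made up):

```python
import numpy as np

rng = np.random.default_rng(5)

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50):
    """Plain Fuzzy C-Means: alternate between fuzzy centroids V and
    soft membership matrix U (rows sum to 1)."""
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))              # closer => higher weight
        U /= U.sum(axis=1, keepdims=True)             # re-normalise rows
    return U, V

# Two well-separated synthetic traffic clusters (illustrative data).
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, V = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

The soft memberships are what make FCM useful for unknown-attack detection: a sample with no strong membership in any cluster is a natural open-set candidate.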
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks (Cited: 1)
9
Authors: Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 5, pp. 2209-2224 (16 pages)
These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally slightly changed or perturbed. The changes are imperceptible to humans but cause a model to misclassify with high probability and severely affect its performance. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We used the MNIST and CIFAR10 datasets for the experiments and analysis of our defense method. Finally, we compared our method to other state-of-the-art defense methods and show that our results are better than those of rival methods.
Keywords: computer vision; deep learning; convolutional neural networks; adversarial examples; adversarial attacks; adversarial defenses
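The restore-then-classify idea can be illustrated with a hand-coded restorer. Below, a 3x3 median filter stands in for the trained restoration network and removes a sparse perturbation; this is purely illustrative, since the paper learns the restoration mapping instead of hand-coding it:

```python
import numpy as np

rng = np.random.default_rng(6)

def median_filter(img):
    """3x3 median filter with edge padding, a toy 'restoration model'."""
    pad = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(pad[i:i + 3, j:j + 3])
    return out

clean = np.zeros((16, 16))
adversarial = clean.copy()
hit = rng.choice(clean.size, size=20, replace=False)
adversarial.flat[hit] = 1.0                  # sparse synthetic perturbation
restored = median_filter(adversarial)

err_before = float(np.abs(adversarial - clean).mean())
err_after = float(np.abs(restored - clean).mean())
```

The restored image is then fed to the original, unmodified classifier, which is the appeal of restoration-based defenses: the target model itself never needs retraining.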
Progressive Transfer Learning-based Deep Q Network for DDOS Defence in WSN
10
Authors: S. Rameshkumar, R. Ganesan, A. Merline. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 2379-2394 (16 pages)
Wireless Multimedia Sensor Networks (WMSNs) have achieved popularity among diverse communities as a result of technological breakthroughs in sensors and current gadgets. By utilizing portable technologies, they achieve solid and significant results in wireless communication, media transfer, and digital transmission. Sensor nodes have been used in agriculture and industry in recent decades to detect characteristics such as temperature, moisture content, and other environmental conditions. WMSNs have also made applications easier to use by giving devices self-governing access to send and process data together with the appropriate audio and video information. Many video sensor network studies focus on lowering power consumption and increasing transmission capacity, but the main demand is data reliability. Because of the vulnerabilities of the sensor nodes, a WMSN is subject to a variety of attacks, including Denial of Service (DoS) attacks. A deep convolutional neural network is designed with state-action relationship mapping and used to identify DDoS attackers in wireless sensor networks for smart agriculture. The proposed work collects data about traffic conditions and distinguishes between network conditions such as packet loss due to network congestion and the presence of attackers in the network. It reduces attacker detection delay and improves detection accuracy. To protect the network against DoS assaults, an improved machine learning technique must be offered; an efficient deep neural network approach is therefore provided for detecting DoS in WMSNs. The required parameters are selected using an adaptive particle swarm optimization technique. Packet transmission ratio, energy consumption, latency, network length, and throughput are used to evaluate the approach's efficiency.
Keywords: DoS attack; wireless sensor networks for smart agriculture; deep neural network; machine learning technique
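The particle swarm optimization routine used above for parameter selection follows a standard velocity/position update; the sketch below minimizes a toy quadratic instead of a real detection-error objective, and all hyper-parameters are conventional defaults rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation of f over [-5, 5]^dim."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v                                  # move each particle
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Toy objective with its minimum at (1, 1), standing in for a
# detector's validation error as a function of its hyper-parameters.
best, best_val = pso(lambda p: float(((p - 1.0) ** 2).sum()))
```

In the paper's setting, `f` would train/evaluate the DoS detector for a candidate parameter vector and return its error.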
Forecasting Shark Attack Risk Using AI: A Deep Learning Approach
11
Authors: Evan Valenti. Journal of Data Analysis and Information Processing, 2023, Issue 4, pp. 360-370 (11 pages)
This study aimed to develop a predictive model utilizing available data to forecast the risk of future shark attacks, making this critical information accessible for everyday public use. Employing a deep learning/neural network methodology, the system was designed to produce a binary output that is subsequently classified into categories of low, medium, or high risk. A significant challenge encountered during the study was the identification and procurement of appropriate historical and forecasted marine weather data, which is integral to the model's accuracy. Despite these challenges, the results of the study were startlingly optimistic, showcasing the model's ability to predict with impressive accuracy. In conclusion, the developed forecasting tool not only offers promise in its immediate application but also sets a robust precedent for the adoption and adaptation of similar predictive systems in various analogous use cases in the marine environment and beyond.
Keywords: deep learning; shark research; predictive AI; marine biology; neural network; machine learning; shark attacks; data science; shark biology; forecasting
Cryptographic Based Secure Model on Dataset for Deep Learning Algorithms
12
Authors: Muhammad Tayyab, Mohsen Marjani, N.Z. Jhanjhi, Ibrahim Abaker Targio Hashim, Abdulwahab Ali Almazroi, Abdulaleem Ali Almazroi. Computers, Materials & Continua (SCIE, EI), 2021, Issue 10, pp. 1183-1200 (18 pages)
Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong predictions and misclassification by decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during the model training and testing phases. The main idea is to use cryptographic functions, such as a hash function (SHA-512) and a homomorphic encryption (HE) scheme, to provide authenticity, integrity, and confidentiality of data. The performance of the proposed model is evaluated by experiments based on accuracy, precision, attack detection rate (ADR), and computational cost. The results show that the proposed model achieves an accuracy of 98%, a precision of 0.97, and an ADR of 98%, even for a large number of attacks. Hence, the proposed model can be used to detect attacks and mitigate attacker motives. The results also show that the computational cost of the proposed model does not increase with model complexity.
Keywords: deep learning (DL); poisoning attacks; evasion attacks; neural network; hash functions; SHA-512; homomorphic encryption scheme
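The integrity half of the scheme can be shown in miniature with the standard library: keep a SHA-512 digest per training record at collection time and re-verify before training, so a poisoned (edited) record is flagged. The record contents below are made up for illustration:

```python
import hashlib

def digest(record: bytes) -> str:
    """SHA-512 hex digest of one training record."""
    return hashlib.sha512(record).hexdigest()

records = [b"flow:1,benign", b"flow:2,attack"]
ledger = [digest(r) for r in records]      # stored at collection time

records[0] = b"flow:1,attack"              # attacker flips a label later

intact = [digest(r) == h for r, h in zip(records, ledger)]
```

Confidentiality would additionally require the homomorphic-encryption component the abstract mentions; hashing alone only provides tamper evidence.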
HDLIDP: A Hybrid Deep Learning Intrusion Detection and Prevention Framework
13
Authors: Magdy M. Fadel, Sally M. El-Ghamrawy, Amr M.T. Ali-Eldin, Mohammed K. Hassan, Ali I. El-Desoky. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 2293-2312 (20 pages)
Distributed denial-of-service (DDoS) attacks are designed to interrupt network services such as email servers and webpages in traditional computer networks. Furthermore, the enormous number of connected devices makes it difficult to operate such a network effectively. Software-defined networks (SDNs) are networks managed through a centralized control system. This controller is the brain of any SDN, composing the forwarding tables of all data-plane network switches. Despite the advantages of SDN controllers, DDoS attacks are easier to perpetrate than on traditional networks: because the controller is a single point of failure, if it fails, the entire network fails. This paper offers a Hybrid Deep Learning Intrusion Detection and Prevention (HDLIDP) framework, which blends signature-based detection and deep learning neural networks to detect and prevent intrusions. The framework improves detection accuracy while addressing all of the aforementioned problems. To validate the framework, experiments were conducted on both traditional and SDN datasets; the findings demonstrate a significant improvement in classification accuracy.
Keywords: software defined networks (SDN); distributed denial of service attack (DDoS); signature-based detection; whale optimization algorithm (WOA); deep learning neural network classifier
Deep learning methods for noisy sperm image classification from convolutional neural network to visual transformer: a comprehensive comparative study
14
Authors: Ao Chen, Chen Li, Md Mamunur Rahaman, Yudong Yao, Haoyuan Chen, Hechen Yang, Peng Zhao, Weiming Hu, Wanli Liu, Shuojia Zou, Ning Xu, Marcin Grzegorzek. Intelligent Medicine (EI, CSCD), 2024, Issue 2, pp. 114-127 (14 pages)
Background: With the gradual increase of infertility in the world, in which male sperm problems are the main factor, more and more couples are using computer-assisted sperm analysis (CASA) to assist in the analysis and treatment of infertility. Meanwhile, the rapid development of deep learning (DL) has led to strong results in image classification tasks. However, the classification of sperm images has not been well studied with current deep learning methods, and sperm images are often affected by noise in practical CASA applications. The purpose of this article is to investigate the anti-noise robustness of deep learning classification methods applied to sperm images. Methods: The SVIA dataset is a publicly available large-scale sperm dataset containing three subsets. In this work, we used subset-C, which provides more than 125,000 independent images of sperms and impurities, including 121,401 sperm images and 4,479 impurity images. To investigate the anti-noise robustness of deep learning classification methods applied to sperm images, we conducted a comprehensive comparative study using many convolutional neural network (CNN) and visual transformer (VT) deep learning methods to find the model with the most stable anti-noise robustness. Results: This study showed that VT had strong robustness for the classification of tiny-object (sperm and impurity) image datasets under some types of conventional noise and some adversarial attacks. In particular, under the influence of Poisson noise, accuracy changed from 91.45% to 91.08%, impurity precision changed from 92.7% to 91.3%, impurity recall changed from 88.8% to 89.5%, and impurity F1-score changed from 90.7% to 90.4%; meanwhile, sperm precision changed from 90.9% to 90.5%, sperm recall changed from 92.5% to 93.8%, and sperm F1-score changed from 92.1% to 90.4%. Conclusion: Sperm image classification may be strongly affected by noise in current deep learning methods; the robustness with regard to noise of VT methods based on global information is greater than that of CNN methods based on local information, indicating that robustness to noise is reflected mainly in global information.
Keywords: computer-assisted sperm analysis; anti-noise robustness; deep learning; image classification; sperm image; conventional noise; adversarial attacks; convolutional neural network; visual transformer
SincNet-Based Side-Channel Attacks (Cited: 5)
15
Authors: 陈平, 汪平, 董高峰, 胡红钢. Journal of Cryptologic Research (密码学报) (CSCD), 2020, Issue 5, pp. 583-594 (12 pages)
Side-channel attacks exploit leakage such as timing, power consumption, electromagnetic radiation, and faulty outputs produced while cryptographic algorithms execute on IoT devices in order to recover keys or other sensitive information, and they have become one of the major threats to cryptographic security devices. In recent years, profiled side-channel attacks have played an important role in the security evaluation of cryptographic algorithms and are considered the most powerful attack method currently available. Deep learning techniques have since been applied to profiled side-channel attacks and have achieved good results on public datasets. In this paper, we propose an optimized convolutional neural network method for side-channel attacks that applies a new network architecture, SincNet. A SincNet convolutional layer only needs to learn the low and high cutoff frequencies of each filter, so it has far fewer learnable parameters than a conventional convolutional layer. To verify the effectiveness of the attack, we evaluate it on the public ASCAD and DPA contest v4.1 datasets. Experimental results show that only 170 power traces are needed to recover the correct subkey on ASCAD.h5. We also evaluate on the misaligned datasets ASCAD_desync50.h5 and ASCAD_desync100.h5; the method effectively mitigates the impact of trace misalignment and outperforms the 2018 results of Prouff et al. For the DPA contest v4.1 dataset, both a CNN and the SincNet network were trained and tested, and both achieve very good attack performance, recovering the subkey with a single power trace. To demonstrate SincNet's effectiveness, we reduced the number of training traces and found that the SincNet network can recover the subkey with fewer training traces. Finally, a correlation analysis of the power traces processed by the SincNet layer shows that their correlation is improved.
Keywords: side-channel attack; convolutional neural network; deep learning
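The SincNet idea in miniature: a convolution kernel parameterized only by a low and a high cutoff frequency, realized as the difference of two windowed sinc low-pass filters. The cutoffs, kernel length, and window choice below are illustrative defaults, not the paper's trained values:

```python
import numpy as np

def sinc_kernel(f_low, f_high, length=101):
    """Band-pass FIR kernel: difference of two Hamming-windowed sinc
    low-pass filters. Only f_low and f_high would be learnable."""
    t = np.arange(length) - (length - 1) / 2
    low_pass = lambda fc: 2 * fc * np.sinc(2 * fc * t)   # ideal LP, cutoff fc
    return (low_pass(f_high) - low_pass(f_low)) * np.hamming(length)

kernel = sinc_kernel(f_low=0.1, f_high=0.2)     # normalised frequencies
H = np.abs(np.fft.rfft(kernel, 1024))           # magnitude response
freqs = np.fft.rfftfreq(1024)                   # cycles/sample
gain_pass = H[np.argmin(np.abs(freqs - 0.15))]  # mid-band gain
gain_dc = H[0]                                  # gain outside the band
```

Two scalars per filter, versus `length` free weights for an ordinary convolution kernel, is where the parameter saving the abstract mentions comes from.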
Defend Against Adversarial Samples by Using Perceptual Hash (Cited: 1)
16
Authors: Changrui Liu, Dengpan Ye, Yueyun Shang, Shunzhi Jiang, Shiyu Li, Yuan Mei, Liqiang Wang. Computers, Materials & Continua (SCIE, EI), 2020, Issue 3, pp. 1365-1386 (22 pages)
Image classifiers based on deep neural networks (DNNs) have been proved easy to fool with well-designed perturbations. Previous defense methods have the limitations of requiring expensive computation or reducing the accuracy of the image classifiers. In this paper, we propose a novel defense method based on perceptual hashing. Our main goal is to destroy the process of perturbation generation by comparing the similarities of images, thus achieving defense. To verify our idea, we defended against two main attack methods (a white-box attack and a black-box attack) on different DNN-based image classifiers and show that, after using our defense method, the attack success rate for all DNN-based image classifiers decreases significantly. More specifically, for the white-box attack, the attack success rate is reduced by an average of 36.3%. For the black-box attack, the average attack success rates of targeted and non-targeted attacks are reduced by 72.8% and 76.7%, respectively. The proposed method is a simple and effective defense and provides a new way to defend against adversarial samples.
Keywords Image classifiers deep neural networks adversarial samples attack defense perceptual hash image similarity
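The perceptual-hash comparison underlying the defense above can be illustrated with a simple average hash: perceptually similar images map to nearby bit strings, so a large Hamming distance between an input and its expected version signals tampering. A minimal sketch (the abstract does not specify which hash is used, so this particular one is an assumption):

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Perceptual hash: block-average down to hash_size x hash_size,
    then threshold each block at the global mean of the blocks."""
    h, w = img.shape
    img = img[: h - h % hash_size, : w - w % hash_size]  # crop to a multiple
    blocks = img.reshape(hash_size, img.shape[0] // hash_size,
                         hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(h1, h2):
    """Number of differing hash bits."""
    return int(np.count_nonzero(h1 != h2))
```

Two perceptually similar images give a small `hamming` value; thresholding that distance decides whether the pair is treated as the same underlying image.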
Research on the Sensitivity of Deep Convolutional Neural Networks to Perlin Noise (Cited by: 1)
17
Authors: Liu Siting, Ge Wancheng 《通信技术》 (Communications Technology) 2021, No. 3, pp. 672-678 (7 pages)
The safety and reliability of autonomous driving technology are current research hotspots and a major obstacle that must be overcome before autonomous vehicles can be widely deployed. To address this difficulty, an attack method based on Perlin noise is proposed. The method uses Perlin noise to generate natural-texture noise that simulates perturbations likely to be encountered in natural scenes, causing the neural network's classifier to misjudge and reducing its detection accuracy. Tests on three neural networks with different architectures show that networks of different structures differ in their sensitivity to Perlin noise. The influence of the noise parameters on network stability is analyzed quantitatively, demonstrating the feasibility and generalizability of the proposed method and providing a reference for future research on neural network performance.
Keywords Perlin noise black-box attack neural network robustness
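Procedural noise of the kind the paper above uses can be generated in a few lines. The sketch below produces single-octave value noise smoothed with Perlin's fade curve, a simplified stand-in for full gradient Perlin noise (the grid size and seed are illustrative parameters, not taken from the paper):

```python
import numpy as np

def fade(t):
    # Perlin's smoothstep polynomial: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def value_noise(size, grid=8, seed=0):
    """Single-octave 2-D value noise on a (grid x grid) lattice."""
    rng = np.random.default_rng(seed)
    lattice = rng.random((grid + 1, grid + 1))
    coords = np.linspace(0, grid, size, endpoint=False)
    ys, xs = np.meshgrid(coords, coords, indexing="ij")
    y0, x0 = ys.astype(int), xs.astype(int)     # lattice cell of each pixel
    ty, tx = fade(ys - y0), fade(xs - x0)       # smoothed in-cell offsets
    # Bilinear interpolation of the four surrounding lattice values.
    top = lattice[y0, x0] * (1 - tx) + lattice[y0, x0 + 1] * tx
    bot = lattice[y0 + 1, x0] * (1 - tx) + lattice[y0 + 1, x0 + 1] * tx
    return top * (1 - ty) + bot * ty

noise = value_noise(64)
```

Rescaled and added to an input image, such noise perturbs it with natural-looking texture; per the abstract, architectures differ in how sensitive their classifiers are to it.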
An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
18
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said 《Computers, Materials & Continua》 SCIE EI 2023, No. 9, pp. 3859-3876 (18 pages)
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on their denoised versions, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, and avoids the error amplification caused by denoising. Experimental results show outstanding detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords deep neural networks adversarial example image denoising adversarial example detection machine learning adversarial attack
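The detection idea above, comparing the prediction on the raw input with a vote over denoised copies, can be sketched generically. Here `predict` and the entries of `denoisers` are assumed callables supplied by the user; the toy demo below illustrates the voting scheme, not the paper's exact detector:

```python
import numpy as np

def detect_adversarial(predict, denoisers, x):
    """Flag x as adversarial when the label on the raw input disagrees
    with the majority label over its denoised copies."""
    raw_label = predict(x)
    votes = [predict(d(x)) for d in denoisers]
    majority = max(set(votes), key=votes.count)
    return raw_label != majority

# Toy demo: the "model" thresholds the mean; the denoisers suppress the
# negative spike an attacker injected, so raw and voted labels disagree.
predict = lambda x: int(x.mean() > 0)
denoisers = [np.abs, lambda x: np.clip(x, 0.0, 1.0), lambda x: np.maximum(x, 0.0)]
clean = np.ones(4)
adversarial = np.array([-3.0, 1.0, 1.0, 1.0])
```

Because only model outputs are compared, the scheme needs no change to the protected model's structure or weights, matching the low-overhead claim in the abstract.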
Black Box Adversarial Defense Based on Image Denoising and Pix2Pix
19
Authors: Zhenyong Rui, Xiugang Gong 《Journal of Computer and Communications》 2023, No. 12, pp. 14-30 (17 pages)
Deep Neural Networks (DNNs) are widely used due to their outstanding performance, but their susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Current robustness defenses often rely on adversarial training, a method that tends to defend only against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP). The method requires no prior knowledge of the specific attack type and eliminates the need for costly adversarial training. When predicting on unknown samples, IDP first applies denoising, then feeds the processed image into a trained Pix2Pix model for image-to-image translation; finally, the image generated by Pix2Pix is passed to the classification model for prediction. This versatile defense demonstrates excellent performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial-sample defense, alleviating the limitations of traditional adversarial training and enhancing the overall robustness of models.
Keywords deep neural networks (DNN) adversarial attack adversarial training Fourier transform robust defense
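The three-stage IDP pipeline above can be sketched as a simple composition. The median filter below stands in for the denoising stage; `pix2pix` and `classify` are assumed callables (a trained generator and a classifier), since the paper's trained models are not reproduced here:

```python
import numpy as np

def median_denoise(img):
    """3x3 median filter: a minimal stand-in for the denoising stage."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views of the padded image, take per-pixel median.
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def idp_predict(pix2pix, classify, x):
    """IDP pipeline: denoise -> Pix2Pix translation -> classification."""
    return classify(pix2pix(median_denoise(x)))
```

A median filter removes isolated impulse perturbations outright, which is why it pairs well with a generative stage that repairs the smoother residue.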
AMS-FGSM: A Gradient Parameter Update Method for Adversarial Example Generation
20
Authors: Zhu Yun, Wu Yinan, Guo Jia, Wang Jianyu 《南京理工大学学报》 (Journal of Nanjing University of Science and Technology) CAS CSCD (PKU Core) 2024, No. 5, pp. 635-641 (7 pages)
Deep neural networks achieve excellent performance on many pattern recognition tasks, yet research shows that they are fragile and highly vulnerable to adversarial examples. Adversarial examples imperceptible to the human eye are also transferable: examples generated against one model can cause other, different deep models to misclassify. Targeting this transferability, this paper proposes AMS-FGSM, a fast gradient sign method based on the Adam optimization algorithm, as a replacement for the iterative fast gradient sign method (I-FGSM). Unlike I-FGSM, AMS-FGSM combines the advantages of momentum and the AMSGrad algorithm. Experiments on the handwritten digit dataset MNIST show that the AMS-FGSM-based method generates adversarial examples with a higher attack success rate more quickly, reaching an average success rate of 98.1% on the training models, and the attack success rate remains stable as the number of perturbation steps increases.
Keywords adversarial example gradient update black-box attack deep neural network artificial intelligence
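The update rule the abstract describes, a sign-based iterative perturbation driven by momentum and AMSGrad-style second moments, can be sketched as follows. `grad_fn` is an assumed callable returning the loss gradient with respect to the input; hyper-parameter names follow Adam conventions, and this is one reading of the idea, not the paper's exact algorithm:

```python
import numpy as np

def ams_fgsm(x, grad_fn, eps=0.1, alpha=0.01, steps=10,
             beta1=0.9, beta2=0.999):
    """Iterative sign attack with AMSGrad-style moment accumulation."""
    x_adv = x.astype(float).copy()
    m = np.zeros_like(x_adv)      # first moment (momentum)
    v = np.zeros_like(x_adv)      # second moment
    v_hat = np.zeros_like(x_adv)  # running max of v (AMSGrad)
    for _ in range(steps):
        g = grad_fn(x_adv)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)  # non-decreasing second moment
        x_adv += alpha * np.sign(m / (np.sqrt(v_hat) + 1e-8))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv
```

The `v_hat` running maximum is what distinguishes AMSGrad from plain Adam: the effective step size never grows between iterations, which stabilizes the sign direction across perturbation steps.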