期刊文献+
共找到1,724篇文章
< 1 2 87 >
每页显示 20 50 100
GUARDIAN: A Multi-Tiered Defense Architecture for Thwarting Prompt Injection Attacks on LLMs
1
作者 Parijat Rai Saumil Sood +1 位作者 Vijay K. Madisetti Arshdeep Bahga 《Journal of Software Engineering and Applications》 2024年第1期43-68,共26页
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assist... This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner. 展开更多
关键词 Large Language Models (LLMs) Adversarial attack Prompt Injection Filter defense artificial Intelligence Machine Learning CYBERSECURITY
下载PDF
Beyond Defense: Proactive Approaches to Disaster Recovery and Threat Intelligence in Modern Enterprises
2
作者 Meysam Tahmasebi 《Journal of Information Security》 2024年第2期106-133,共28页
As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respo... As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respond to threats and anticipate and mitigate them proactively. Beginning with understanding the critical need for a layered defense and the intricacies of the attacker’s journey, the research offers insights into specialized defense techniques, emphasizing the importance of timely and strategic responses during incidents. Risk management is brought to the forefront, underscoring businesses’ need to adopt mature risk assessment practices and understand the potential risk impact areas. Additionally, the value of threat intelligence is explored, shedding light on the importance of active engagement within sharing communities and the vigilant observation of adversary motivations. “Beyond Defense: Proactive Approaches to Disaster Recovery and Threat Intelligence in Modern Enterprises” is a comprehensive guide for organizations aiming to fortify their cybersecurity posture, marrying best practices in proactive and reactive measures in the ever-challenging digital realm. 展开更多
关键词 Advanced Persistent Threats (APT) attack Phases attack Surface defense-IN-DEPTH Disaster Recovery (DR) Incident Response Plan (IRP) Intrusion Detection Systems (IDS) Intrusion Prevention System (IPS) Key Risk Indicator (KRI) Layered defense Lockheed Martin Kill Chain Proactive defense Redundancy Risk Management Threat Intelligence
下载PDF
Primary User Adversarial Attacks on Deep Learning-Based Spectrum Sensing and the Defense Method 被引量:3
3
作者 Shilian Zheng Linhui Ye +5 位作者 Xuanye Wang Jinyin Chen Huaji Zhou Caiyi Lou Zhijin Zhao Xiaoniu Yang 《China Communications》 SCIE CSCD 2021年第12期94-107,共14页
The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the rob... The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the robustness of the deep learning based spectrum sensing model.PUAA adds a care-fully manufactured perturbation to the benign primary user signal,which greatly reduces the probability of detection of the spectrum sensing model.We design three PUAA methods in black box scenario.In or-der to defend against PUAA,we propose a defense method based on autoencoder named DeepFilter.We apply the long short-term memory network and the convolutional neural network together to DeepFilter,so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense.Extensive experiments are conducted to eval-uate the attack effect of the designed PUAA method and the defense effect of DeepFilter.Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model.In addition,the experimen-tal results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA with-out affecting the detection performance of the model. 展开更多
关键词 spectrum sensing cognitive radio deep learning adversarial attack autoencoder defense
下载PDF
Address Resolution Protocol (ARP): Spoofing Attack and Proposed Defense
4
作者 Ghazi Al Sukkar Ramzi Saifan +2 位作者 Sufian Khwaldeh Mahmoud Maqableh Iyad Jafar 《Communications and Network》 2016年第3期118-130,共13页
Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the ... Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the process of establishing these networks. Nevertheless, security-wise precautions were not taken in some of them. In this paper, we expose some of the vulnerability that exists in a commonly and widely used network protocol, the Address Resolution Protocol (ARP) protocol. Effectively, we will implement a user friendly and an easy-to-use tool that exploits the weaknesses of this protocol to deceive a victim’s machine and a router through creating a sort of Man-in-the-Middle (MITM) attack. In MITM, all of the data going out or to the victim machine will pass first through the attacker’s machine. This enables the attacker to inspect victim’s data packets, extract valuable data (like passwords) that belong to the victim and manipulate these data packets. We suggest and implement a defense mechanism and tool that counters this attack, warns the user, and exposes some information about the attacker to isolate him. GNU/Linux is chosen as an operating system to implement both the attack and the defense tools. The results show the success of the defense mechanism in detecting the ARP related attacks in a very simple and efficient way. 展开更多
关键词 Address Resolution Protocol ARP Spoofing Security attack and defense Man in the Middle attack
下载PDF
An Overview of Adversarial Attacks and Defenses
5
作者 Kai Chen Jinwei Wang Jiawei Zhang 《Journal of Information Hiding and Privacy Protection》 2022年第1期15-24,共10页
In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classificat... In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classification,natural language processing,information hiding,multimedia synthesis,and so on,the performance of deep learning has far exceeded the traditional algorithms.However,researchers found that although deep learning can train an accurate model through a large amount of data to complete various tasks,the model is vulnerable to the example which is modified artificially.This technology is called adversarial attacks,while the examples are called adversarial examples.The existence of adversarial attacks poses a great threat to the security of the neural network.Based on the brief introduction of the concept and causes of adversarial example,this paper analyzes the main ideas of adversarial attacks,studies the representative classical adversarial attack methods and the detection and defense methods. 展开更多
关键词 Deep learning adversarial example adversarial attacks adversarial defenses
下载PDF
Chained Dual-Generative Adversarial Network:A Generalized Defense Against Adversarial Attacks 被引量:1
6
作者 Amitoj Bir Singh Lalit Kumar Awasthi +3 位作者 Urvashi Mohammad Shorfuzzaman Abdulmajeed Alsufyani Mueen Uddin 《Computers, Materials & Continua》 SCIE EI 2023年第2期2541-2555,共15页
Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassificatio... Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training. 展开更多
关键词 Adversarial attacks GAN-based adversarial defense image classification models adversarial defense
下载PDF
Adversarial Attacks and Defenses in Deep Learning 被引量:17
7
作者 Kui Ren Tianhang Zheng +1 位作者 Zhan Qin Xue Liu 《Engineering》 SCIE EI 2020年第3期346-360,共15页
With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algor... With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algorithms to adversarial samples has been widely recognized.The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.Hence,adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.In this paper,we first introduce the theoretical foundations,algorithms,and applications of adversarial attack techniques.We then describe a few research efforts on the defense techniques,which cover the broad frontier in the field.Several open problems and challenges are subsequently discussed,which we hope will provoke further research efforts in this critical area. 展开更多
关键词 Machine learning Deep neural network Adversarial example Adversarial attack Adversarial defense
下载PDF
Black Box Adversarial Defense Based on Image Denoising and Pix2Pix
8
作者 Zhenyong Rui Xiugang Gong 《Journal of Computer and Communications》 2023年第12期14-30,共17页
Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the f... Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Currently, robustness defense techniques for models often rely on adversarial training, a method that tends to only defend against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP) technology. This method does not require prior knowledge of the specific attack type and eliminates the need for cumbersome adversarial training. When making predictions on unknown samples, the IDP method first undergoes denoising processing, followed by inputting the processed image into a trained Pix2Pix model for image transformation. Finally, the image generated by Pix2Pix is input into the classification model for prediction. This versatile defense approach demonstrates excellent defensive performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showcasing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training methods and enhancing the overall robustness of models. 展开更多
关键词 Deep Neural Networks (DNN) Adversarial attack Adversarial Training Fourier Transform Robust defense
下载PDF
CORMAND2--针对工业机器人的欺骗攻击
9
作者 Hongyi Pu Liang He +2 位作者 Peng Cheng Jiming Chen Youxian Sun 《Engineering》 SCIE EI CAS CSCD 2024年第1期186-201,共16页
Industrial robots are becoming increasingly vulnerable to cyber incidents and attacks,particularly with the dawn of the Industrial Internet-of-Things(IIoT).To gain a comprehensive understanding of these cyber risks,vu... Industrial robots are becoming increasingly vulnerable to cyber incidents and attacks,particularly with the dawn of the Industrial Internet-of-Things(IIoT).To gain a comprehensive understanding of these cyber risks,vulnerabilities of industrial robots were analyzed empirically,using more than three million communication packets collected with testbeds of two ABB IRB120 robots and five other robots from various original equipment manufacturers(OEMs).This analysis,guided by the confidentiality-integrity-availability(CIA)triad,uncovers robot vulnerabilities in three dimensions:confidentiality,integrity,and availability.These vulnerabilities were used to design Covering Robot Manipulation via Data Deception(CORMAND2),an automated cyber-physical attack against industrial robots.CORMAND2 manipulates robot operation while deceiving the Supervisory Control and Data Acquisition(SCADA)system that the robot is operating normally by modifying the robot’s movement data and data deception.CORMAND2 and its capability of degrading the manufacturing was validated experimentally using the aforementioned seven robots from six different OEMs.CORMAND2 unveils the limitations of existing anomaly detection systems,more specifically the assumption of the authenticity of SCADA-received movement data,to which we propose mitigations for. 展开更多
关键词 Industrial robots Vulnerability analysis Deception attacks defenseS
下载PDF
ATSSC:An Attack Tolerant System in Serverless Computing
10
作者 Zhang Shuai Guo Yunfei +2 位作者 Hu Hongchao Liu Wenyan Wang Yawen 《China Communications》 SCIE CSCD 2024年第6期192-205,共14页
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are ... Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are invoked by its driven events.Nonetheless,security threats in serverless computing such as vulnerability-based security threats have become the pain point hindering its wide adoption.The ideas in proactive defense such as redundancy,diversity and dynamic provide promising approaches to protect against cyberattacks.However,these security technologies are mostly applied to serverless platform based on“stacked”mode,as they are designed independent with serverless computing.The lack of security consideration in the initial design makes it especially challenging to achieve the all life cycle protection for serverless application with limited cost.In this paper,we present ATSSC,a proactive defense enabled attack tolerant serverless platform.ATSSC integrates the characteristic of redundancy,diversity and dynamic into serverless seamless to achieve high-level security and efficiency.Specifically,ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results.In order to create diverse function replicas,both software diversity and environment diversity are adopted.Furthermore,a dynamic function refresh strategy is proposed to keep the clean state of serverless functions.We implement ATSSC based on Kubernetes and Knative.Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs. 展开更多
关键词 active defense attack tolerant cloud computing SECURITY serverless computing
下载PDF
Calculation of the Behavior Utility of a Network System: Conception and Principle 被引量:4
11
作者 Changzhen Hu 《Engineering》 2018年第1期78-84,共7页
关键词 NETWORK metric evaluation Differential MANIFOLD NETWORK BEHAVIOR UTILITY NETWORK attack-defense CONFRONTATION
下载PDF
Research on Cyberspace Attack and Defense Confrontation Technology
12
作者 Chengjun ZHOU 《International Journal of Technology Management》 2015年第3期11-14,共4页
关键词 网络空间安全 对抗技术 攻防对抗 对抗系统 互动空间 攻击防御 技术支持 安全产业
下载PDF
Discussion and Research on Information Security Attack and Defense Platform Construction in Universities Based on Cloud Computing and Virtualization
13
作者 Xiancheng Ding 《Journal of Information Security》 2016年第5期297-303,共7页
This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical fra... This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical framework of the system and the experimental process and technical principle of the platform. The experiment platform can provide more than 20 attack classes. Using the virtualization technology can build hypothesized target of various types in the laboratory and diversified network structure to carry out attack and defense experiment. 展开更多
关键词 Information Security Network attack and defense VIRTUALIZATION Experiment Platform
下载PDF
Towards the universal defense for query-based audio adversarial attacks on speech recognition system
14
作者 Feng Guo Zheng Sun +1 位作者 Yuxuan Chen Lei Ju 《Cybersecurity》 EI CSCD 2024年第1期53-70,共18页
Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pos... Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices.The existing defense methods are either limited in application or only defend on results,but not on process.In this work,we propose a novel method to infer the adversary intent and discover audio adversarial examples based on the AEs generation process.The insight of this method is based on the observation:many existing audio AE attacks utilize query-based methods,which means the adversary must send continuous and similar queries to target ASR models during the audio AE generation process.Inspired by this observation,We propose a memory mechanism by adopting audio fingerprint technology to analyze the similarity of the current query with a certain length of memory query.Thus,we can identify when a sequence of queries appears to be suspectable to generate audio AEs.Through extensive evaluation on four state-of-the-art audio AE attacks,we demonstrate that on average our defense identify the adversary’s intent with over 90%accuracy.With careful regard for robustness evaluations,we also analyze our proposed defense and its strength to withstand two adaptive attacks.Finally,our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining. 展开更多
关键词 Adversarial attacks defense Memory mechanism Query-based
原文传递
Hawk mimicry does not reduce attacks of cuckoos by highly aggressive hosts 被引量:8
15
作者 Laikun Ma Canchao Yang Wei Liang 《Avian Research》 CSCD 2018年第4期299-305,共7页
Background: Resemblance to raptors such as hawks(Accipiter spp.) is considered to be an adaptive strategy of cuckoos(Cuculus spp.), which has evolved to protect cuckoos against host attacks. However, the effectiveness... Background: Resemblance to raptors such as hawks(Accipiter spp.) is considered to be an adaptive strategy of cuckoos(Cuculus spp.), which has evolved to protect cuckoos against host attacks. However, the effectiveness of the mimicry remains controversial, and is not yet fully studied for highly aggressive hosts.Methods: We evaluated the effectiveness of sparrowhawk(Accipiter nisus) mimicry by common cuckoos(Cuculus canorus) in oriental reed warblers(Acrocephaus orientalis), which are highly aggressive hosts. Using a both the single and the paired dummy experiment, defense behaviors and attack intensities of oriental reed warblers against common cuckoos, sparrowhawks and oriental turtle doves(Streptopelia orientalis) were assessed.Results: Oriental reed warblers exhibit strong nest defense behaviors, and such behaviors do not change with breeding stage(i.e., egg stage and nestling stage). Furthermore, assistance from conspecific helpers may increase attack intensities. However, they were deterred from mobbing overall by the presence of the hawk.Conclusions: Oriental reed warblers are able to distinguish cuckoos from harmless doves. However, they may be deterred from mobbing by the presence of the predatory hawk, suggesting hawk mimicry may be ineffective and does not reduce attacks of cuckoos by highly aggressive hosts. 展开更多
关键词 attack BROOD PARASITISM Common CUCKOO MOBBING Nest defense Oriental reed WARBLER
下载PDF
Design and Implementation of an SDN-Enabled DNS Security Framework 被引量:4
16
作者 Zhenpeng Wang Hongchao Hu Guozhen Cheng 《China Communications》 SCIE CSCD 2019年第2期233-245,共13页
The Domain Name System(DNS) is suffering from the vulnerabilities exploited to launch the cache poisoning attack. Inspired by biodiversity, we design and implement a non-intrusive and tolerant secure architecture Mult... The Domain Name System(DNS) is suffering from the vulnerabilities exploited to launch the cache poisoning attack. Inspired by biodiversity, we design and implement a non-intrusive and tolerant secure architecture Multi-DNS(MDNS) to deal with it. MDNS consists of Scheduling Proxy and DNS server pool with heterogeneous DNSs in it. And the Scheduling Proxy dynamically schedules m DNSs to provide service in parallel and adopts the vote results from majority of DNSs to decide valid replies. And benefit from the centralized control of software defined networking(SDN), we implement a proof of concept for it. Evaluation results prove the validity and availability of MDNS and its intrusion/fault tolerance, while the average delay can be controlled in 0.3s. 展开更多
关键词 DNS CACHE POISONING attack software defined NETWORKING moving target defense dynamic heterogeneous REDUNDANT
下载PDF
Detection and Defense Method Against False Data Injection Attacks for Distributed Load Frequency Control System in Microgrid
17
作者 Zhixun Zhang Jianqiang Hu +3 位作者 Jianquan Lu Jie Yu Jinde Cao Ardak Kashkynbayev 《Journal of Modern Power Systems and Clean Energy》 SCIE EI CSCD 2024年第3期913-924,共12页
In the realm of microgrid(MG),the distributed load frequency control(LFC)system has proven to be highly susceptible to the negative effects of false data injection attacks(FDIAs).Considering the significant responsibi... In the realm of microgrid(MG),the distributed load frequency control(LFC)system has proven to be highly susceptible to the negative effects of false data injection attacks(FDIAs).Considering the significant responsibility of the distributed LFC system for maintaining frequency stability within the MG,this paper proposes a detection and defense method against unobservable FDIAs in the distributed LFC system.Firstly,the method integrates a bi-directional long short-term memory(Bi LSTM)neural network and an improved whale optimization algorithm(IWOA)into the LFC controller to detect and counteract FDIAs.Secondly,to enable the Bi LSTM neural network to proficiently detect multiple types of FDIAs with utmost precision,the model employs a historical MG dataset comprising the frequency and power variances.Finally,the IWOA is utilized to optimize the proportional-integral-derivative(PID)controller parameters to counteract the negative impacts of FDIAs.The proposed detection and defense method is validated by building the distributed LFC system in Simulink. 展开更多
关键词 MICROGRID load frequency control false data injection attack bi-directional long short-term memory(BiLSTM)neural network improved whale optimization algorithm(IWOA) detection and defense
原文传递
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks 被引量:1
18
作者 Kazim Ali Adnan N.Quershi +3 位作者 Ahmad Alauddin Bin Arifin Muhammad Shahid Bhatti Abid Sohail Rohail Hassan 《Computers, Materials & Continua》 SCIE EI 2022年第5期2209-2224,共16页
These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications li... These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications like image classification,speech recognition,self-driving vehicles,disease diagnostics,and many more.Despite success in various applications,it is found that these learning algorithms face severe threats due to adversarial attacks.Adversarial examples are inputs like images in the computer vision field,which are intentionally slightly changed or perturbed.These changes are humanly imperceptible.But are misclassified by a model with high probability and severely affects the performance or prediction.In this scenario,we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again.We proved that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental results evidence.We have used MNIST and CIFAR10 datasets for experiments and analysis of our defense method.In the end,we have compared our method to other state-ofthe-art defense methods and proved that our results are better than other rival methods. 展开更多
关键词 Computer vision deep learning convolutional neural networks adversarial examples adversarial attacks adversarial defenses
下载PDF
Cross-Site Scripting Attacks and Defensive Techniques: A Comprehensive Survey* 被引量:1
19
作者 Sonkarlay J. Y. Weamie 《International Journal of Communications, Network and System Sciences》 2022年第8期126-148,共23页
The advancement of technology and the digitization of organizational functions and services have propelled the world into a new era of computing capability and sophistication. The proliferation and usability of such c... The advancement of technology and the digitization of organizational functions and services have propelled the world into a new era of computing capability and sophistication. The proliferation and usability of such complex technological services raise several security concerns. One of the most critical concerns is cross-site scripting (XSS) attacks. This paper has concentrated on revealing and comprehensively analyzing XSS injection attacks, detection, and prevention concisely and accurately. I have done a thorough study and reviewed several research papers and publications with a specific focus on the researchers’ defensive techniques for preventing XSS attacks and subdivided them into five categories: machine learning techniques, server-side techniques, client-side techniques, proxy-based techniques, and combined approaches. The majority of existing cutting-edge XSS defensive approaches carefully analyzed in this paper offer protection against the traditional XSS attacks, such as stored and reflected XSS. There is currently no reliable solution to provide adequate protection against the newly discovered XSS attack known as DOM-based and mutation-based XSS attacks. After reading all of the proposed models and identifying their drawbacks, I recommend a combination of static, dynamic, and code auditing in conjunction with secure coding and continuous user awareness campaigns about XSS emerging attacks. 展开更多
关键词 XSS attacks Defensive Techniques VULNERABILITIES Web Application Security
下载PDF
Mechanism and Defense on Malicious Code
20
作者 WEN Wei-ping 1,2,3, QING Si-han 1,2,31. Institute of Software, the Chinese Academy of Sciences, Beijing 100080, China 2.Engineering Research Center for Information Security Technology, the Chinese Academy of Sciences, Beijing 100080, China 3.Graduate School of the Chinese Academy of Sciences, Beijing 100080, China 《Wuhan University Journal of Natural Sciences》 EI CAS 2005年第1期83-88,共6页
With the explosive growth of network applications, the threat of the malicious code against network security becomes increasingly serious. In this paper we explore the mechanism of the malicious code by giving an atta... With the explosive growth of network applications, the threat of the malicious code against network security becomes increasingly serious. In this paper we explore the mechanism of the malicious code by giving an attack model of the malicious code, and discuss the critical techniques of implementation and prevention against the malicious code. The remaining problems and emerging trends in this area are also addressed in the paper. 展开更多
关键词 malicious code attacking model MECHANISM defense system security network security
下载PDF
上一页 1 2 87 下一页 到第
使用帮助 返回顶部