This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assist...This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.展开更多
As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respo...As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respond to threats and anticipate and mitigate them proactively. Beginning with understanding the critical need for a layered defense and the intricacies of the attacker’s journey, the research offers insights into specialized defense techniques, emphasizing the importance of timely and strategic responses during incidents. Risk management is brought to the forefront, underscoring businesses’ need to adopt mature risk assessment practices and understand the potential risk impact areas. Additionally, the value of threat intelligence is explored, shedding light on the importance of active engagement within sharing communities and the vigilant observation of adversary motivations. “Beyond Defense: Proactive Approaches to Disaster Recovery and Threat Intelligence in Modern Enterprises” is a comprehensive guide for organizations aiming to fortify their cybersecurity posture, marrying best practices in proactive and reactive measures in the ever-challenging digital realm.展开更多
Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassificatio...Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training.展开更多
Cyber attacks pose severe threats on synchronization of multi-agent systems.Deception attack,as a typical type of cyber attack,can bypass the surveillance of the attack detection mechanism silently,resulting in a heav...Cyber attacks pose severe threats on synchronization of multi-agent systems.Deception attack,as a typical type of cyber attack,can bypass the surveillance of the attack detection mechanism silently,resulting in a heavy loss.Therefore,the problem of mean-square bounded synchronization in multi-agent systems subject to deception attacks is investigated in this paper.The control signals can be replaced with false data from controllerto-actuator channels or the controller.The success of the attack is measured through a stochastic variable.A distributed impulsive controller using a pinning strategy is redesigned,which ensures that mean-square bounded synchronization is achieved in the presence of deception attacks.Some sufficient conditions are derived,in which upper bounds of the synchronization error are given.Finally,two numerical simulations with symmetric and asymmetric network topologies are given to illustrate the theoretical results.展开更多
With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algor...With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algorithms to adversarial samples has been widely recognized.The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.Hence,adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.In this paper,we first introduce the theoretical foundations,algorithms,and applications of adversarial attack techniques.We then describe a few research efforts on the defense techniques,which cover the broad frontier in the field.Several open problems and challenges are subsequently discussed,which we hope will provoke further research efforts in this critical area.展开更多
In this paper,a resilient distributed control scheme against replay attacks for multi-agent networked systems subject to input and state constraints is proposed.The methodological starting point relies on a smart use ...In this paper,a resilient distributed control scheme against replay attacks for multi-agent networked systems subject to input and state constraints is proposed.The methodological starting point relies on a smart use of predictive arguments with a twofold aim:1)Promptly detect malicious agent behaviors affecting normal system operations;2)Apply specific control actions,based on predictive ideas,for mitigating as much as possible undesirable domino effects resulting from adversary operations.Specifically,the multi-agent system is topologically described by a leader-follower digraph characterized by a unique leader and set-theoretic receding horizon control ideas are exploited to develop a distributed algorithm capable to instantaneously recognize the attacked agent.Finally,numerical simulations are carried out to show benefits and effectiveness of the proposed approach.展开更多
The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the rob...The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the robustness of the deep learning based spectrum sensing model.PUAA adds a care-fully manufactured perturbation to the benign primary user signal,which greatly reduces the probability of detection of the spectrum sensing model.We design three PUAA methods in black box scenario.In or-der to defend against PUAA,we propose a defense method based on autoencoder named DeepFilter.We apply the long short-term memory network and the convolutional neural network together to DeepFilter,so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense.Extensive experiments are conducted to eval-uate the attack effect of the designed PUAA method and the defense effect of DeepFilter.Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model.In addition,the experimen-tal results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA with-out affecting the detection performance of the model.展开更多
This paper investigates the event-triggered security consensus problem for nonlinear multi-agent systems(MASs)under denial-of-service(Do S)attacks over an undirected graph.A novel adaptive memory observer-based anti-d...This paper investigates the event-triggered security consensus problem for nonlinear multi-agent systems(MASs)under denial-of-service(Do S)attacks over an undirected graph.A novel adaptive memory observer-based anti-disturbance control scheme is presented to improve the observer accuracy by adding a buffer for the system output measurements.Meanwhile,this control scheme can also provide more reasonable control signals when Do S attacks occur.To save network resources,an adaptive memory event-triggered mechanism(AMETM)is also proposed and Zeno behavior is excluded.It is worth mentioning that the AMETM's updates do not require global information.Then,the observer and controller gains are obtained by using the linear matrix inequality(LMI)technique.Finally,simulation examples show the effectiveness of the proposed control scheme.展开更多
Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the f...Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Currently, robustness defense techniques for models often rely on adversarial training, a method that tends to only defend against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP) technology. This method does not require prior knowledge of the specific attack type and eliminates the need for cumbersome adversarial training. When making predictions on unknown samples, the IDP method first undergoes denoising processing, followed by inputting the processed image into a trained Pix2Pix model for image transformation. Finally, the image generated by Pix2Pix is input into the classification model for prediction. This versatile defense approach demonstrates excellent defensive performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showcasing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training methods and enhancing the overall robustness of models.展开更多
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are ...Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are invoked by its driven events.Nonetheless,security threats in serverless computing such as vulnerability-based security threats have become the pain point hindering its wide adoption.The ideas in proactive defense such as redundancy,diversity and dynamic provide promising approaches to protect against cyberattacks.However,these security technologies are mostly applied to serverless platform based on“stacked”mode,as they are designed independent with serverless computing.The lack of security consideration in the initial design makes it especially challenging to achieve the all life cycle protection for serverless application with limited cost.In this paper,we present ATSSC,a proactive defense enabled attack tolerant serverless platform.ATSSC integrates the characteristic of redundancy,diversity and dynamic into serverless seamless to achieve high-level security and efficiency.Specifically,ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results.In order to create diverse function replicas,both software diversity and environment diversity are adopted.Furthermore,a dynamic function refresh strategy is proposed to keep the clean state of serverless functions.We implement ATSSC based on Kubernetes and Knative.Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.展开更多
These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications li...These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications like image classification,speech recognition,self-driving vehicles,disease diagnostics,and many more.Despite success in various applications,it is found that these learning algorithms face severe threats due to adversarial attacks.Adversarial examples are inputs like images in the computer vision field,which are intentionally slightly changed or perturbed.These changes are humanly imperceptible.But are misclassified by a model with high probability and severely affects the performance or prediction.In this scenario,we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again.We proved that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental results evidence.We have used MNIST and CIFAR10 datasets for experiments and analysis of our defense method.In the end,we have compared our method to other state-ofthe-art defense methods and proved that our results are better than other rival methods.展开更多
Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the ...Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the process of establishing these networks. Nevertheless, security-wise precautions were not taken in some of them. In this paper, we expose some of the vulnerability that exists in a commonly and widely used network protocol, the Address Resolution Protocol (ARP) protocol. Effectively, we will implement a user friendly and an easy-to-use tool that exploits the weaknesses of this protocol to deceive a victim’s machine and a router through creating a sort of Man-in-the-Middle (MITM) attack. In MITM, all of the data going out or to the victim machine will pass first through the attacker’s machine. This enables the attacker to inspect victim’s data packets, extract valuable data (like passwords) that belong to the victim and manipulate these data packets. We suggest and implement a defense mechanism and tool that counters this attack, warns the user, and exposes some information about the attacker to isolate him. GNU/Linux is chosen as an operating system to implement both the attack and the defense tools. The results show the success of the defense mechanism in detecting the ARP related attacks in a very simple and efficient way.展开更多
In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classificat...In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classification,natural language processing,information hiding,multimedia synthesis,and so on,the performance of deep learning has far exceeded the traditional algorithms.However,researchers found that although deep learning can train an accurate model through a large amount of data to complete various tasks,the model is vulnerable to the example which is modified artificially.This technology is called adversarial attacks,while the examples are called adversarial examples.The existence of adversarial attacks poses a great threat to the security of the neural network.Based on the brief introduction of the concept and causes of adversarial example,this paper analyzes the main ideas of adversarial attacks,studies the representative classical adversarial attack methods and the detection and defense methods.展开更多
This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical fra...This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical framework of the system and the experimental process and technical principle of the platform. The experiment platform can provide more than 20 attack classes. Using the virtualization technology can build hypothesized target of various types in the laboratory and diversified network structure to carry out attack and defense experiment.展开更多
Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pos...Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices.The existing defense methods are either limited in application or only defend on results,but not on process.In this work,we propose a novel method to infer the adversary intent and discover audio adversarial examples based on the AEs generation process.The insight of this method is based on the observation:many existing audio AE attacks utilize query-based methods,which means the adversary must send continuous and similar queries to target ASR models during the audio AE generation process.Inspired by this observation,We propose a memory mechanism by adopting audio fingerprint technology to analyze the similarity of the current query with a certain length of memory query.Thus,we can identify when a sequence of queries appears to be suspectable to generate audio AEs.Through extensive evaluation on four state-of-the-art audio AE attacks,we demonstrate that on average our defense identify the adversary’s intent with over 90%accuracy.With careful regard for robustness evaluations,we also analyze our proposed defense and its strength to withstand two adaptive attacks.Finally,our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.展开更多
Deep learning models are well known to be susceptible to backdoor attack,where the attacker only needs to provide a tampered dataset on which the triggers are injected.Models trained on the dataset will passively impl...Deep learning models are well known to be susceptible to backdoor attack,where the attacker only needs to provide a tampered dataset on which the triggers are injected.Models trained on the dataset will passively implant the backdoor,and triggers on the input can mislead the models during testing.Our study shows that the model shows different learning behaviors in clean and poisoned subsets during training.Based on this observation,we propose a general training pipeline to defend against backdoor attacks actively.Benign models can be trained from the unreli-able dataset by decoupling the learning process into three stages,i.e.,supervised learning,active unlearning,and active semi-supervised fine-tuning.The effectiveness of our approach has been shown in numerous experiments across various backdoor attacks and datasets.展开更多
The advancement of technology and the digitization of organizational functions and services have propelled the world into a new era of computing capability and sophistication. The proliferation and usability of such c...The advancement of technology and the digitization of organizational functions and services have propelled the world into a new era of computing capability and sophistication. The proliferation and usability of such complex technological services raise several security concerns. One of the most critical concerns is cross-site scripting (XSS) attacks. This paper has concentrated on revealing and comprehensively analyzing XSS injection attacks, detection, and prevention concisely and accurately. I have done a thorough study and reviewed several research papers and publications with a specific focus on the researchers’ defensive techniques for preventing XSS attacks and subdivided them into five categories: machine learning techniques, server-side techniques, client-side techniques, proxy-based techniques, and combined approaches. The majority of existing cutting-edge XSS defensive approaches carefully analyzed in this paper offer protection against the traditional XSS attacks, such as stored and reflected XSS. There is currently no reliable solution to provide adequate protection against the newly discovered XSS attack known as DOM-based and mutation-based XSS attacks. After reading all of the proposed models and identifying their drawbacks, I recommend a combination of static, dynamic, and code auditing in conjunction with secure coding and continuous user awareness campaigns about XSS emerging attacks.展开更多
With the explosive growth of network applications, the threat of the malicious code against network security becomes increasingly serious. In this paper we explore the mechanism of the malicious code by giving an atta...With the explosive growth of network applications, the threat of the malicious code against network security becomes increasingly serious. In this paper we explore the mechanism of the malicious code by giving an attack model of the malicious code, and discuss the critical techniques of implementation and prevention against the malicious code. The remaining problems and emerging trends in this area are also addressed in the paper.展开更多
文摘This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
文摘As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respond to threats and anticipate and mitigate them proactively. Beginning with understanding the critical need for a layered defense and the intricacies of the attacker’s journey, the research offers insights into specialized defense techniques, emphasizing the importance of timely and strategic responses during incidents. Risk management is brought to the forefront, underscoring businesses’ need to adopt mature risk assessment practices and understand the potential risk impact areas. Additionally, the value of threat intelligence is explored, shedding light on the importance of active engagement within sharing communities and the vigilant observation of adversary motivations. “Beyond Defense: Proactive Approaches to Disaster Recovery and Threat Intelligence in Modern Enterprises” is a comprehensive guide for organizations aiming to fortify their cybersecurity posture, marrying best practices in proactive and reactive measures in the ever-challenging digital realm.
基金Taif University,Taif,Saudi Arabia through Taif University Researchers Supporting Project Number(TURSP-2020/115).
文摘Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training.
基金supported by the National Natural Science Foundation of China(61988101,61922030,61773163)Shanghai Rising-Star Program(18QA1401400)+3 种基金the International(Regional)Cooperation and Exchange Project(61720106008)the Natural Science Foundation of Shanghai(17ZR1406800)the Fundamental Research Funds for the Central Universitiesthe 111 Project(B17017)。
文摘Cyber attacks pose severe threats on synchronization of multi-agent systems.Deception attack,as a typical type of cyber attack,can bypass the surveillance of the attack detection mechanism silently,resulting in a heavy loss.Therefore,the problem of mean-square bounded synchronization in multi-agent systems subject to deception attacks is investigated in this paper.The control signals can be replaced with false data from controllerto-actuator channels or the controller.The success of the attack is measured through a stochastic variable.A distributed impulsive controller using a pinning strategy is redesigned,which ensures that mean-square bounded synchronization is achieved in the presence of deception attacks.Some sufficient conditions are derived,in which upper bounds of the synchronization error are given.Finally,two numerical simulations with symmetric and asymmetric network topologies are given to illustrate the theoretical results.
基金Ant Financial,Zhejiang University Financial Technology Research Center.
文摘With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algorithms to adversarial samples has been widely recognized.The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.Hence,adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.In this paper,we first introduce the theoretical foundations,algorithms,and applications of adversarial attack techniques.We then describe a few research efforts on the defense techniques,which cover the broad frontier in the field.Several open problems and challenges are subsequently discussed,which we hope will provoke further research efforts in this critical area.
文摘In this paper,a resilient distributed control scheme against replay attacks for multi-agent networked systems subject to input and state constraints is proposed.The methodological starting point relies on a smart use of predictive arguments with a twofold aim:1)Promptly detect malicious agent behaviors affecting normal system operations;2)Apply specific control actions,based on predictive ideas,for mitigating as much as possible undesirable domino effects resulting from adversary operations.Specifically,the multi-agent system is topologically described by a leader-follower digraph characterized by a unique leader and set-theoretic receding horizon control ideas are exploited to develop a distributed algorithm capable to instantaneously recognize the attacked agent.Finally,numerical simulations are carried out to show benefits and effectiveness of the proposed approach.
基金the National Nat-ural Science Foundation of China under Grant No.62072406,No.U19B2016,No.U20B2038 and No.61871398the Natural Science Foundation of Zhejiang Province under Grant No.LY19F020025the Major Special Funding for“Science and Tech-nology Innovation 2025”in Ningbo under Grant No.2018B10063.
文摘The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the robustness of the deep learning based spectrum sensing model.PUAA adds a care-fully manufactured perturbation to the benign primary user signal,which greatly reduces the probability of detection of the spectrum sensing model.We design three PUAA methods in black box scenario.In or-der to defend against PUAA,we propose a defense method based on autoencoder named DeepFilter.We apply the long short-term memory network and the convolutional neural network together to DeepFilter,so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense.Extensive experiments are conducted to eval-uate the attack effect of the designed PUAA method and the defense effect of DeepFilter.Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model.In addition,the experimen-tal results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA with-out affecting the detection performance of the model.
基金supported by the National Natural Science Foundation of China(61773056)the Scientific and Technological Innovation Foundation of Shunde Graduate School,University of Science and Technology Beijing(USTB)(BK19AE018)+2 种基金the Fundamental Research Funds for the Central Universities of USTB(FRF-TP-20-09B,230201606500061,FRF-DF-20-35,FRF-BD-19-002A)supported by Zhejiang Natural Science Foundation(LD21F030001)supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(Ministry of Science and Information and Communications Technology)(NRF-2020R1A2C1005449)。
文摘This paper investigates the event-triggered security consensus problem for nonlinear multi-agent systems(MASs)under denial-of-service(Do S)attacks over an undirected graph.A novel adaptive memory observer-based anti-disturbance control scheme is presented to improve the observer accuracy by adding a buffer for the system output measurements.Meanwhile,this control scheme can also provide more reasonable control signals when Do S attacks occur.To save network resources,an adaptive memory event-triggered mechanism(AMETM)is also proposed and Zeno behavior is excluded.It is worth mentioning that the AMETM's updates do not require global information.Then,the observer and controller gains are obtained by using the linear matrix inequality(LMI)technique.Finally,simulation examples show the effectiveness of the proposed control scheme.
文摘Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Currently, robustness defense techniques for models often rely on adversarial training, a method that tends to only defend against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP) technology. This method does not require prior knowledge of the specific attack type and eliminates the need for cumbersome adversarial training. When making predictions on unknown samples, the IDP method first undergoes denoising processing, followed by inputting the processed image into a trained Pix2Pix model for image transformation. Finally, the image generated by Pix2Pix is input into the classification model for prediction. This versatile defense approach demonstrates excellent defensive performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showcasing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training methods and enhancing the overall robustness of models.
基金supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant No.61521003the National Natural Science Foundation of China under Grant No.62072467 and 62002383.
文摘Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are invoked by its driven events.Nonetheless,security threats in serverless computing such as vulnerability-based security threats have become the pain point hindering its wide adoption.The ideas in proactive defense such as redundancy,diversity and dynamic provide promising approaches to protect against cyberattacks.However,these security technologies are mostly applied to serverless platform based on“stacked”mode,as they are designed independent with serverless computing.The lack of security consideration in the initial design makes it especially challenging to achieve the all life cycle protection for serverless application with limited cost.In this paper,we present ATSSC,a proactive defense enabled attack tolerant serverless platform.ATSSC integrates the characteristic of redundancy,diversity and dynamic into serverless seamless to achieve high-level security and efficiency.Specifically,ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results.In order to create diverse function replicas,both software diversity and environment diversity are adopted.Furthermore,a dynamic function refresh strategy is proposed to keep the clean state of serverless functions.We implement ATSSC based on Kubernetes and Knative.Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
文摘These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications like image classification,speech recognition,self-driving vehicles,disease diagnostics,and many more.Despite success in various applications,it is found that these learning algorithms face severe threats due to adversarial attacks.Adversarial examples are inputs like images in the computer vision field,which are intentionally slightly changed or perturbed.These changes are humanly imperceptible.But are misclassified by a model with high probability and severely affects the performance or prediction.In this scenario,we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again.We proved that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental results evidence.We have used MNIST and CIFAR10 datasets for experiments and analysis of our defense method.In the end,we have compared our method to other state-ofthe-art defense methods and proved that our results are better than other rival methods.
文摘Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the process of establishing these networks. Nevertheless, security-wise precautions were not taken in some of them. In this paper, we expose some of the vulnerability that exists in a commonly and widely used network protocol, the Address Resolution Protocol (ARP) protocol. Effectively, we will implement a user friendly and an easy-to-use tool that exploits the weaknesses of this protocol to deceive a victim’s machine and a router through creating a sort of Man-in-the-Middle (MITM) attack. In MITM, all of the data going out or to the victim machine will pass first through the attacker’s machine. This enables the attacker to inspect victim’s data packets, extract valuable data (like passwords) that belong to the victim and manipulate these data packets. We suggest and implement a defense mechanism and tool that counters this attack, warns the user, and exposes some information about the attacker to isolate him. GNU/Linux is chosen as an operating system to implement both the attack and the defense tools. The results show the success of the defense mechanism in detecting the ARP related attacks in a very simple and efficient way.
文摘In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classification,natural language processing,information hiding,multimedia synthesis,and so on,the performance of deep learning has far exceeded the traditional algorithms.However,researchers found that although deep learning can train an accurate model through a large amount of data to complete various tasks,the model is vulnerable to the example which is modified artificially.This technology is called adversarial attacks,while the examples are called adversarial examples.The existence of adversarial attacks poses a great threat to the security of the neural network.Based on the brief introduction of the concept and causes of adversarial example,this paper analyzes the main ideas of adversarial attacks,studies the representative classical adversarial attack methods and the detection and defense methods.
文摘This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical framework of the system and the experimental process and technical principle of the platform. The experiment platform can provide more than 20 attack classes. Using the virtualization technology can build hypothesized target of various types in the laboratory and diversified network structure to carry out attack and defense experiment.
基金supported in part by NSFC No.62202275,Shandong-SF No.ZR2022QF012 projects.
文摘Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices.The existing defense methods are either limited in application or only defend on results,but not on process.In this work,we propose a novel method to infer the adversary intent and discover audio adversarial examples based on the AEs generation process.The insight of this method is based on the observation:many existing audio AE attacks utilize query-based methods,which means the adversary must send continuous and similar queries to target ASR models during the audio AE generation process.Inspired by this observation,We propose a memory mechanism by adopting audio fingerprint technology to analyze the similarity of the current query with a certain length of memory query.Thus,we can identify when a sequence of queries appears to be suspectable to generate audio AEs.Through extensive evaluation on four state-of-the-art audio AE attacks,we demonstrate that on average our defense identify the adversary’s intent with over 90%accuracy.With careful regard for robustness evaluations,we also analyze our proposed defense and its strength to withstand two adaptive attacks.Finally,our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.
基金supported by the National Nature Science Foundation of China under Grant No.62272007National Nature Science Foundation of China under Grant No.U1936119Major Technology Program of Hainan,China(ZDKJ2019003)。
文摘Deep learning models are well known to be susceptible to backdoor attack,where the attacker only needs to provide a tampered dataset on which the triggers are injected.Models trained on the dataset will passively implant the backdoor,and triggers on the input can mislead the models during testing.Our study shows that the model shows different learning behaviors in clean and poisoned subsets during training.Based on this observation,we propose a general training pipeline to defend against backdoor attacks actively.Benign models can be trained from the unreli-able dataset by decoupling the learning process into three stages,i.e.,supervised learning,active unlearning,and active semi-supervised fine-tuning.The effectiveness of our approach has been shown in numerous experiments across various backdoor attacks and datasets.
文摘The advancement of technology and the digitization of organizational functions and services have propelled the world into a new era of computing capability and sophistication. The proliferation and usability of such complex technological services raise several security concerns. One of the most critical concerns is cross-site scripting (XSS) attacks. This paper has concentrated on revealing and comprehensively analyzing XSS injection attacks, detection, and prevention concisely and accurately. I have done a thorough study and reviewed several research papers and publications with a specific focus on the researchers’ defensive techniques for preventing XSS attacks and subdivided them into five categories: machine learning techniques, server-side techniques, client-side techniques, proxy-based techniques, and combined approaches. The majority of existing cutting-edge XSS defensive approaches carefully analyzed in this paper offer protection against the traditional XSS attacks, such as stored and reflected XSS. There is currently no reliable solution to provide adequate protection against the newly discovered XSS attack known as DOM-based and mutation-based XSS attacks. After reading all of the proposed models and identifying their drawbacks, I recommend a combination of static, dynamic, and code auditing in conjunction with secure coding and continuous user awareness campaigns about XSS emerging attacks.
文摘With the explosive growth of network applications, the threat of the malicious code against network security becomes increasingly serious. In this paper we explore the mechanism of the malicious code by giving an attack model of the malicious code, and discuss the critical techniques of implementation and prevention against the malicious code. The remaining problems and emerging trends in this area are also addressed in the paper.