Funding: Supported by the National Natural Science Foundation of China (61771154), the Fundamental Research Funds for the Central Universities (3072022CF0601), and the Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin, China.
Abstract: As modern communication technology advances apace, the identification of digital communication signals plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, a view that has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models: well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that adversarial attacks are a real threat to digital communication signal identification models, which should receive more attention in future research.
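For concreteness, a minimal sketch of the kind of gradient-based attack the abstract describes — an FGSM-style perturbation against a hypothetical modulation classifier. The model interface, the (batch, 2, N) I/Q tensor shape, and the epsilon value are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, iq_batch: torch.Tensor,
                labels: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """One-step FGSM perturbation of I/Q samples (illustrative settings).

    iq_batch: real-valued I/Q samples shaped (batch, 2, N); epsilon bounds
    the L-infinity size of the added perturbation.
    """
    iq_batch = iq_batch.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(iq_batch), labels)  # classification loss
    loss.backward()                                  # gradient w.r.t. the input
    # Step in the sign of the gradient: tiny in amplitude, large in effect.
    return (iq_batch + epsilon * iq_batch.grad.sign()).detach()
```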
Abstract: This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, a pre-processing filter leveraging a toxicity classifier and an ethical prompt generator, and a pre-display filter using the model itself for output screening. Extensive testing on Meta's Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
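A minimal sketch of how such a three-tier guard could be wired together. The callable names, the toxicity threshold, and the control flow are placeholder assumptions, not the GUARDIAN implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GuardPipeline:
    """Three-tier prompt guard: system filter, pre-processing filter,
    and pre-display filter, mirroring the layered design described above."""
    system_filter: Callable[[str], bool]    # tier 1: rejects disallowed prompts
    toxicity_score: Callable[[str], float]  # tier 2: 0..1 toxicity estimate
    suggest_safe: Callable[[str], str]      # tier 2: ethical prompt generator
    generate: Callable[[str], str]          # the underlying LLM
    screen_output: Callable[[str], bool]    # tier 3: model-based output check

    def respond(self, prompt: str, toxic_threshold: float = 0.5) -> Optional[str]:
        if not self.system_filter(prompt):
            return None                         # tier 1: hard block
        if self.toxicity_score(prompt) > toxic_threshold:
            prompt = self.suggest_safe(prompt)  # tier 2: substitute a safer prompt
        output = self.generate(prompt)
        return output if self.screen_output(output) else None  # tier 3
```

The tier ordering matters in this sketch: the substitution step at tier 2 lets borderline prompts proceed safely instead of being rejected outright, which is how the auto-suggestion of safer alternatives fits into the flow.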
Funding: Supported by Taif University, Taif, Saudi Arabia, through Taif University Researchers Supporting Project Number (TURSP-2020/115).
Abstract: Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated these attacks making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle being misclassified as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), which tackles the defense by minimizing the perturbations of the adversarial image using iterative downsampling and upsampling with GANs. CD-GAN is created using two GANs, i.e., CD-GAN's Sub-Resolution GAN and CD-GAN's Super-Resolution GAN. The first, the Sub-Resolution GAN, takes the original-resolution input image and downsamples it to generate a lower-resolution neutralized image. The second, the Super-Resolution GAN, takes the output of the Sub-Resolution GAN and upsamples it to generate a higher-resolution image, which removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both GANs are trained independently. The Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples; hence, it downscales the image while removing adversarial attack noise. The Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as outputs; because of this, it acts as an upscaling GAN while removing adversarial attack noise. Furthermore, CD-GAN has a modular design, such that it can be prefixed to any existing classifier without any retraining or extra effort and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack. This enables the user to directly and smoothly integrate CD-GAN with an existing production-deployed classifier. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art, with a mean accuracy of 33.67, while using minimal compute resources in training.
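A minimal sketch of the chaining idea under the assumptions above: two independently trained generators composed into a purifier that sits in front of an unmodified classifier. The class and argument names are placeholders, and the sub-generators stand in for the trained GAN generators:

```python
import torch
import torch.nn as nn

class ChainedPurifier(nn.Module):
    """Chains a downscaling generator and an upscaling generator to strip
    adversarial noise, in the spirit of the CD-GAN design described above."""

    def __init__(self, sub_res_gen: nn.Module, super_res_gen: nn.Module):
        super().__init__()
        self.sub_res_gen = sub_res_gen      # high-res adversarial -> low-res clean
        self.super_res_gen = super_res_gen  # low-res clean -> high-res clean

    def forward(self, x: torch.Tensor, steps: int = 2) -> torch.Tensor:
        # Iterative multi-step purification before the image reaches
        # the (unmodified) downstream classifier.
        for _ in range(steps):
            x = self.super_res_gen(self.sub_res_gen(x))
        return x

# Usage sketch: logits = classifier(ChainedPurifier(g_down, g_up)(adv_images))
```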
Abstract: To ensure the safe operation of industrial digital twin networks and avoid the harm caused to the system by hacker intrusion, a series of discussions on network security issues is carried out based on game theory. From the perspective of the life cycle of network vulnerabilities, the mining and repairing of vulnerabilities are analyzed by applying evolutionary game theory. The evolution process of knowledge sharing among white hats under various conditions is simulated, and a game model of the vulnerability-patch cooperative development strategy among manufacturers is constructed. On this basis, differential evolution is introduced into the update mechanism of the Wolf Colony Algorithm (WCA) to produce better replacement individuals with greater probability, from the perspective of both attack and defense. Through the simulation experiment, it is found that the convergence speed of the probability (X) of white hat 1 choosing the knowledge-sharing policy is related to white hat 1's initial probability (x0) of choosing the knowledge-sharing policy and white hat 2's initial probability (y0) of choosing it. When y0 ≥ 0.9, X converges rapidly in a relatively short time. When y0 is constant and x0 is small, the probability curve of the "cooperative development" strategy converges to 0. It is concluded that the higher the trust among the white hat members in a temporary team, the stronger their willingness to share knowledge, which is conducive to the mining of vulnerabilities in the system. The greater the probability of a hacker attacking a vulnerability before it is fully disclosed, the lower the willingness of manufacturers to choose the "cooperative development" of vulnerability patches. Applying the improved wolf colony co-evolution algorithm can obtain the equilibrium solution of the attack-defense game model and allocate security protection resources according to the importance of nodes. This study can provide an effective solution for protecting the network security of digital twins in industry.
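For reference, the standard two-population replicator dynamics that such an evolutionary game simulation typically integrates. This is the generic form with illustrative payoff notation, not the paper's exact model:

```latex
% x, y: the two white hats' probabilities of choosing the sharing strategy.
% U_S(.) and U_N(.) are a player's expected payoffs for Sharing vs. Not
% sharing, given the other player's current mixed strategy (illustrative).
\begin{aligned}
\dot{x} &= x(1-x)\,\bigl[U_{S}^{(1)}(y) - U_{N}^{(1)}(y)\bigr],\\
\dot{y} &= y(1-y)\,\bigl[U_{S}^{(2)}(x) - U_{N}^{(2)}(x)\bigr].
\end{aligned}
```

Whether x and y converge to 1 (sharing) or 0 (not sharing) depends on the payoff differences and the initial values x0 and y0, which is why the reported convergence behavior is sensitive to y0.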
Abstract: The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. This inadvertent leakage of sensitive information typically occurs when the models are subjected to black-box attacks. To address the growing concern of safeguarding private and sensitive information while simultaneously preserving its utility, we analyze the performance of Targeted Catastrophic Forgetting (TCF). TCF involves protecting targeted pieces of sensitive information within datasets through an iterative pipeline that significantly reduces the likelihood of such information being leaked or reproduced by the model during black-box attacks, such as the autocompletion attack in our case. The experiments conducted using TCF demonstrate its capability to reduce the extraction of PII while still preserving the context and utility of the target application.
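A minimal sketch of the black-box autocompletion probe the abstract refers to, which an iterative forgetting pipeline could use as its stopping test. The function and argument names are illustrative assumptions, not the paper's code:

```python
from typing import Callable

def autocompletion_probe(generate: Callable[[str], str],
                         prefix: str, secret: str) -> bool:
    """Black-box autocompletion probe: prompt the model with the context
    that preceded a memorized secret and check whether the completion
    reproduces that secret verbatim."""
    return secret in generate(prefix)

# A forgetting pipeline would alternate unlearning updates with this probe,
# iterating until it returns False for every targeted string.
```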
Funding: Supported in part by the National Key Research and Development Program of China (2017YFB0903000) and in part by the National Natural Science Foundation of China (No. 51677116).
Abstract: With the help of advanced information technology, the real-time monitoring and control levels of cyber-physical distribution systems (CPDS) have been significantly improved. However, due to the deep integration of cyber and physical systems, attackers can still threaten the stable operation of a CPDS by launching cyber-attacks, such as denial-of-service (DoS) attacks. Thus, it is necessary to study CPDS risk assessment and defense resource allocation methods under DoS attacks. This paper analyzes the impact of DoS attacks on the physical system based on CPDS fault self-healing control. Then, considering attacker and defender strategies and attack damage, a CPDS risk assessment framework is established. Furthermore, risk assessment and defense resource allocation methods, based on the Stackelberg dynamic game model, are proposed for conditions in which attacks on the cyber and physical systems are launched simultaneously. Finally, a simulation based on an actual CPDS is performed, and the calculation results verify the effectiveness of the algorithm.
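A minimal sketch of the leader-follower logic behind such a Stackelberg allocation, brute-forced over small discrete strategy sets. The strategy names and damage numbers are hypothetical, not the paper's model:

```python
def stackelberg_allocation(def_strategies, atk_strategies, damage):
    """Defender (leader) commits to an allocation anticipating that the
    attacker (follower) best-responds by maximizing damage; damage[d][a]
    is the assessed loss for allocation d under attack a (illustrative)."""
    def best_response(d):
        # Follower: pick the attack that maximizes damage given defense d.
        return max(atk_strategies, key=lambda a: damage[d][a])
    # Leader: minimize the damage incurred under the follower's best response.
    return min(def_strategies, key=lambda d: damage[d][best_response(d)])

# Example with two allocations and two attack targets (hypothetical numbers).
damage = {"protect_feeder": {"dos_feeder": 2, "dos_substation": 5},
          "protect_substation": {"dos_feeder": 4, "dos_substation": 1}}
print(stackelberg_allocation(damage.keys(),
                             ["dos_feeder", "dos_substation"], damage))
# -> "protect_substation": worst-case damage 4 beats worst-case damage 5.
```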
Abstract: With the explosive growth of network applications, the threat that malicious code poses to network security is becoming increasingly serious. In this paper, we explore the mechanism of malicious code by giving an attack model for it, and we discuss the critical techniques for implementing and preventing malicious code. The remaining problems and emerging trends in this area are also addressed.
Funding: This work is supported by the Research Fund for the Doctoral Program of Higher Education of China (20122304130002), the Natural Science Foundation of China (61370212), the Natural Science Foundation of Heilongjiang Province (ZD 201102), the Fundamental Research Funds for the Central Universities (HEUCFZ1213, HEUCF100601), and the Postdoctoral Science Foundation of Heilongjiang Province (LBH-210204).
Abstract: Most existing strategies for defending against OFA (Objective Function Attack) are centralized and suitable only for small-scale networks, and computational complexity and traffic load are usually neglected. In this paper, we pay more attention to the OFA problem in large-scale cognitive networks, where the big data generated from the network must be considered and traditional methods can be of little help. We first analyze the interactive processes between attacker and defender in detail, and then propose a defense strategy for OFA based on differential game, abbreviated as DSDG. Second, the game saddle point and the optimal defense strategy are proved to exist simultaneously. Simulation results show that the proposed DSDG has less influence on network performance and a lower rate of packet loss. More importantly, it can cope with large-scale cognitive networks.
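For reference, a generic zero-sum differential game of the kind underlying a saddle-point defense strategy like DSDG. The notation is illustrative, not the paper's exact formulation:

```latex
% x: network state; u: defender control; v: attacker control.
% J is the payoff that the defender minimizes and the attacker maximizes;
% a saddle point (u*, v*) satisfies the chained inequality below.
\begin{aligned}
\dot{x}(t) &= f\bigl(x(t), u(t), v(t)\bigr), \qquad
J(u, v) = \int_{0}^{T} L\bigl(x(t), u(t), v(t)\bigr)\,dt,\\
J(u^{*}, v) &\le J(u^{*}, v^{*}) \le J(u, v^{*})
\quad \text{for all admissible } u,\, v.
\end{aligned}
```

Existence of such a saddle point is what makes the optimal defense strategy well-defined: neither player can improve by deviating unilaterally from (u*, v*).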
Abstract: Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are in danger in the presence of adversarial attacks. These attacks can quickly spoil deep learning models, e.g., the different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet Defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally proved that our approach is better than previous defensive techniques. Our proposed CRU-Net model maps adversarial image examples to clean images by eliminating the adversarial perturbation. The proposed defensive approach is based on residual and U-Net learning. Many experiments are conducted on the MNIST and CIFAR10 datasets to prove that our proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and restored clean image examples produced by the proposed CRU-Net defense model.
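A minimal sketch of the residual purification idea the abstract describes: a U-Net-style network predicts the adversarial perturbation, which is then subtracted from the input. The class name and the external `unet` module are placeholder assumptions, not the CRU-Net architecture itself:

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Residual purification in the spirit of CRU-Net: an image-to-image
    network estimates the adversarial perturbation, and subtracting that
    estimate from the input recovers an approximately clean image."""

    def __init__(self, unet: nn.Module):
        super().__init__()
        self.unet = unet  # any image-to-image network, e.g. a U-Net

    def forward(self, x_adv: torch.Tensor) -> torch.Tensor:
        noise_estimate = self.unet(x_adv)            # predicted perturbation
        return (x_adv - noise_estimate).clamp(0, 1)  # restored clean image
```

Reporting SSIM and PSNR between the original and restored images, as the abstract does, is the natural way to verify that such a denoiser removes the perturbation without degrading legitimate image content.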