Future 6G communications are envisioned to enable a large catalogue of pioneering applications. These will range from networked Cyber-Physical Systems to edge computing devices, establishing real-time feedback control loops critical for managing Industry 5.0 deployments, digital agriculture systems, and essential infrastructures. The provision of extensive machine-type communications through 6G will render many of these innovative systems autonomous and unsupervised. While full automation will enhance industrial efficiency significantly, it concurrently introduces new cyber risks and vulnerabilities. In particular, unattended systems are highly susceptible to trust issues: malicious nodes and false information can be easily introduced into control loops. Additionally, Denial-of-Service attacks can be executed by inundating the network with valueless noise. Current anomaly detection schemes require the entire transformation of the control software to integrate new steps and can only mitigate anomalies that conform to predefined mathematical models. Solutions based on exhaustive data collection to detect anomalies are precise but extremely slow. Standard models, with their limited understanding of mobile networks, can achieve precision rates no higher than 75%. Therefore, more general and transversal protection mechanisms are needed to detect malicious behaviors transparently. This paper introduces a probabilistic trust model and control algorithm designed to address this gap. The model determines the probability that any given node is trustworthy. Communication channels are pruned for those nodes whose probability is below a given threshold. The trust control algorithm comprises three primary phases, which feed the model with three different probabilities that are weighted and combined. Initially, anomalous nodes are identified using Gaussian mixture models and clustering technologies. Next, traffic patterns are studied using digital Bessel functions and the functional scalar product. Finally, the information coherence and content are analyzed. The noise content and abnormal information sequences are detected using a Volterra filter and a bank of Finite Impulse Response filters. An experimental validation based on simulation tools and environments was carried out. Results show the proposed solution can successfully detect up to 92% of malicious data injection attacks.
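To make the combination step concrete, the sketch below pairs a Gaussian-mixture first stage (via scikit-learn) with placeholder probabilities standing in for the traffic-pattern and content-analysis stages; the weights and threshold are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_trust_probability(features, n_components=2):
    """Stage 1 only: score each node's behavioural features with a Gaussian
    mixture model and map log-likelihoods to [0, 1], so unlikely (anomalous)
    nodes receive low trust."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(features)
    log_lik = gmm.score_samples(features)
    span = log_lik.max() - log_lik.min() + 1e-12
    return (log_lik - log_lik.min()) / span

def combine_and_prune(p_cluster, p_traffic, p_content,
                      weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Weight and combine the three stage probabilities, then mark for pruning
    any node whose overall trust falls below the threshold. Weights and
    threshold are illustrative placeholders."""
    w = np.asarray(weights)
    trust = w[0] * p_cluster + w[1] * p_traffic + w[2] * p_content
    keep = trust >= threshold            # channels that stay open
    return trust, keep

# Toy usage: 30 nodes with 3 behavioural features each; stages 2 and 3
# (Bessel-based traffic analysis, Volterra/FIR content analysis) are
# represented here by random stand-in probabilities.
rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 3))
p1 = gmm_trust_probability(feats)
p2, p3 = rng.uniform(size=30), rng.uniform(size=30)
trust, keep = combine_and_prune(p1, p2, p3)
print(trust[:5], keep[:5])
```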
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
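As a minimal illustration of the simplest of these probes, the sketch below implements a black-box auto-completion check against any text-completion callable; the prefixes, PII list, and stand-in model are synthetic placeholders rather than the paper's actual attack harness.

```python
def autocompletion_attack(generate, prefixes, known_pii):
    """Feed the model prefixes that plausibly precede memorised records and
    check whether any known PII strings surface in the completions."""
    leaks = []
    for prefix in prefixes:
        completion = generate(prefix)
        leaks.extend((prefix, pii) for pii in known_pii if pii in completion)
    return leaks

# Toy usage with a stand-in "model" that has memorised a single record.
fake_model = lambda p: p + (" jane.doe@example.com" if "contact" in p else " ...")
print(autocompletion_attack(
    fake_model,
    prefixes=["The customer's contact email is", "The order was shipped to"],
    known_pii=["jane.doe@example.com"]))
```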
Modeling of unsteady aerodynamic loads at high angles of attack using a small amount of experimental or simulation data to construct predictive models for unknown states can greatly improve the efficiency of aircraft unsteady aerodynamic design and flight dynamics analysis. In this paper, aiming at the poor generalization of traditional aerodynamic models and intelligent models, an intelligent aerodynamic modeling method based on gated neural units is proposed. The time-memory characteristics of the gated neural unit are fully utilized, so the nonlinear flow-field characterization ability of the learning and training process is enhanced, and the generalization ability of the whole prediction model is improved. The prediction and verification of the model are carried out under the maneuvering flight condition of the NACA0015 airfoil. The results show that the model has good adaptability. In interpolation prediction, the maximum prediction error of the lift, drag, and moment coefficients does not exceed 10%, which can basically represent the variation characteristics of the entire flow field. In the construction of extrapolation models, the model trained on strongly nonlinear data has good accuracy for weakly nonlinear prediction. However, the error becomes larger, even exceeding 20%, which indicates that the extrapolation and generalization capabilities need to be further optimized by integrating physical models. Compared with the conventional state-space equation model, the proposed method can improve the extrapolation accuracy and efficiency by 78% and 60%, respectively, which demonstrates the application potential of this method in aerodynamic modeling.
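A minimal PyTorch sketch of the kind of gated-recurrent surrogate described here, mapping a motion time history to lift, drag, and moment coefficients; the input variables, layer sizes, and two-layer depth are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GRUAeroModel(nn.Module):
    """Gated-recurrent surrogate: a motion history (e.g. angle of attack and
    pitch rate over time) is mapped to unsteady coefficients (CL, CD, Cm)."""
    def __init__(self, n_inputs=2, hidden=64, n_outputs=3):
        super().__init__()
        self.gru = nn.GRU(n_inputs, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):               # x: (batch, time, n_inputs)
        h, _ = self.gru(x)              # hidden state carries the time memory
        return self.head(h)             # coefficients at every time step

# Toy forward pass: 4 maneuvers, 200 time steps, 2 motion inputs each.
model = GRUAeroModel()
x = torch.randn(4, 200, 2)
coeffs = model(x)                       # (4, 200, 3) -> CL, CD, Cm
print(coeffs.shape)
```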
This study investigates resilient platoon control for constrained intelligent and connected vehicles (ICVs) against F-local Byzantine attacks. We introduce a resilient distributed model-predictive platooning control framework for such ICVs. This framework seamlessly integrates the predesigned optimal control with distributed model predictive control (DMPC) optimization and introduces a unique distributed attack detector to ensure the reliability of the transmitted information among vehicles. Notably, our strategy uses previously broadcasted information and a specialized convex set, termed the "resilience set", to identify unreliable data. This approach significantly eases graph robustness prerequisites, requiring only an (F+1)-robust graph, in contrast to the established mean sequence reduced algorithms, which require a minimum (2F+1)-robust graph. Additionally, we introduce a verification algorithm to restore trust in vehicles under minor attacks, further reducing communication network robustness requirements. Our analysis demonstrates the recursive feasibility of the DMPC optimization. Furthermore, the proposed method achieves exceptional control performance by minimizing the discrepancies between the DMPC control inputs and the predesigned platoon control inputs, while ensuring constraint compliance and cybersecurity. Simulation results verify the effectiveness of our theoretical findings.
Cyber-Physical Systems are very vulnerable to sparse sensor attacks. However, current protection mechanisms employ linear and deterministic models which cannot detect attacks precisely. Therefore, in this paper, we propose a new non-linear generalized model to describe Cyber-Physical Systems. This model includes unknown multivariable discrete and continuous-time functions and different multiplicative noises to represent the evolution of physical processes and random effects in the physical and computational worlds. Besides, the digitalization stage in hardware devices is represented too. Attackers and the most critical sparse sensor attacks are described through a stochastic process. The reconstruction and protection mechanisms are based on a weighted stochastic model. Error probability in data samples is estimated through different indicators commonly employed in non-linear dynamics (such as the Fourier transform, first-return maps, or the probability density function). A decision algorithm calculates the final reconstructed value considering the previous error probability. An experimental validation based on simulation tools and real deployments is also carried out. Both the new technology's performance and its scalability are studied. Results prove that the proposed solution protects Cyber-Physical Systems against up to 92% of attacks and perturbations, with a computational delay below 2.5 s. The proposed model shows linear complexity, as recursive or iterative structures are not employed, just algebraic and probabilistic functions. In conclusion, the new model and reconstruction mechanism can successfully protect Cyber-Physical Systems against sparse sensor attacks, even in dense or pervasive deployments and scenarios.
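The decision step can be sketched as follows, with a single crude z-score indicator standing in for the Fourier, first-return-map, and density estimators named above; the mapping to [0, 1] and the averaging-based fallback are illustrative assumptions.

```python
import numpy as np

def error_probability(sample, history):
    """Illustrative per-sample error estimate: compare the new reading with
    the empirical distribution of recent readings (a stand-in for the
    non-linear-dynamics indicators used in the paper)."""
    mu, sigma = history.mean(), history.std() + 1e-9
    z = abs(sample - mu) / sigma
    return float(np.clip(z / 4.0, 0.0, 1.0))   # crude map of z-score to [0, 1]

def reconstruct(sample, history, p_err):
    """Weighted decision: trust the raw reading in proportion to (1 - p_err)
    and fall back on a simple prediction (the recent mean) otherwise."""
    prediction = history.mean()
    return (1.0 - p_err) * sample + p_err * prediction

rng = np.random.default_rng(1)
history = rng.normal(20.0, 0.5, size=50)       # clean sensor history
attacked = 35.0                                # sparse-attack spike
p = error_probability(attacked, history)
print(p, reconstruct(attacked, history, p))    # high p -> value pulled back
```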
Kinetically constrained spin systems are toy models of supercooled liquids and amorphous solids. In this perspective, we revisit the prototypical Fredrickson–Andersen (FA) kinetically constrained model from the viewpoint of K-core combinatorial optimization. Each kinetic cluster of the FA system, containing all the mutually visitable microscopic occupation configurations, is exactly the solution space of a specific instance of the K-core attack problem. The whole set of different jammed occupation patterns of the FA system is the configuration space of an equilibrium K-core problem. Based on recent theoretical results achieved on the K-core attack and equilibrium K-core problems, we discuss the thermodynamic spin glass phase transitions and the maximum occupation density of the fully unfrozen FA kinetic cluster, and the minimum occupation density and extreme vulnerability of the partially frozen (jammed) kinetic clusters. The equivalence between K-core attack and the fully unfrozen FA kinetic cluster also implies a new way of sampling K-core attack solutions.
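The underlying combinatorial object can be made concrete with the standard K-core pruning rule; the sketch below computes the K-core of a small graph (the K-core attack problem additionally optimizes which seed vertices to remove, which is not modeled here).

```python
def k_core(adjacency, k):
    """Standard K-core pruning: repeatedly delete vertices with fewer than k
    remaining neighbours; whatever survives is the K-core. In the mapping
    described above, removable vertices correspond to spins that the FA
    dynamics can unfreeze, while a surviving core plays the role of a
    jammed cluster."""
    adj = {u: set(vs) for u, vs in adjacency.items()}
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) < k:
                for v in adj[u]:
                    adj[v].discard(u)
                del adj[u]
                changed = True
    return adj

# Toy graph: a triangle with a pendant vertex; its 2-core is the triangle.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(k_core(graph, 2)))   # -> [0, 1, 2]
```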
Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification problems, failing to fully exploit the unique characteristics of object detection models, thus resulting in widespread deficiencies in their transferability. Furthermore, previous research has predominantly concentrated on the transferability issues of non-targeted attacks, whereas enhancing the transferability of targeted adversarial examples presents even greater challenges. Traditional attack techniques typically employ cross-entropy as a loss measure, iteratively adjusting adversarial examples to match target categories. However, their inherent limitations restrict their broad applicability and transferability across different models. To address the aforementioned challenges, this study proposes a novel targeted adversarial attack method aimed at enhancing the transferability of adversarial samples across object detection models. First, within the framework of iterative attacks, we devise a new objective function designed to mitigate consistency issues arising from cumulative noise and to enhance the separation between target and non-target categories (logit margin). Second, a data augmentation framework incorporating random erasing and color transformations is introduced into targeted adversarial attacks. This enhances the diversity of gradients, preventing overfitting to white-box models. Lastly, perturbations are applied only within the specified object's bounding box to reduce the perturbation range, enhancing attack stealthiness. Experiments were conducted on the Microsoft Common Objects in Context (MS COCO) dataset using You Only Look Once version 3 (YOLOv3), You Only Look Once version 8 (YOLOv8), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and RetinaNet. The results demonstrate a significant advantage of the proposed method in black-box settings. Among these, the success rate of RetinaNet transfer attacks reached a maximum of 82.59%.
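A hedged sketch of the logit-margin idea: rather than cross-entropy, the objective rewards the gap between the target logit and the strongest non-target logit, which does not saturate; the shapes and margin form below are generic, not the paper's exact loss.

```python
import torch

def logit_margin_loss(logits, target_class):
    """Targeted logit-margin objective: push the target-class logit above the
    best non-target logit (minimising the negative margin)."""
    mask = torch.zeros_like(logits, dtype=torch.bool)
    mask[:, target_class] = True
    target_logit = logits[:, target_class]
    best_other = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    return -(target_logit - best_other).mean()

# Toy usage with random "detector" class logits for a batch of candidate boxes.
logits = torch.randn(8, 80, requires_grad=True)   # 80 MS COCO classes
loss = logit_margin_loss(logits, target_class=3)
loss.backward()                                   # gradient drives the perturbation update
print(loss.item())
```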
The RPL (IPv6 Routing Protocol for Low-Power and Lossy Networks) protocol is essential for efficient communication within the Internet of Things (IoT) ecosystem. Despite its significance, RPL's susceptibility to attacks remains a concern. This paper presents a comprehensive simulation-based analysis of the RPL protocol's vulnerability to the decreased rank attack in both static and mobile network environments. We employ the Random Direction Mobility Model (RDM) for mobile scenarios within the Cooja simulator. Our systematic evaluation focuses on critical performance metrics, including Packet Delivery Ratio (PDR), Average End-to-End Delay (AE2ED), throughput, Expected Transmission Count (ETX), and Average Power Consumption (APC). Our findings illuminate the disruptive impact of this attack on the routing hierarchy, resulting in decreased PDR and throughput, and increased AE2ED, ETX, and APC. These results underscore the urgent need for robust security measures to protect RPL-based IoT networks. Furthermore, our study emphasizes the exacerbated impact of the attack in mobile scenarios, highlighting the evolving security requirements of IoT networks.
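For reference, the metrics can be computed from per-run simulation logs roughly as below; the field names are illustrative and do not follow Cooja's native log format.

```python
def rpl_metrics(sent, received, delays_s, tx_attempts, energy_mw):
    """Compute the evaluation metrics named above from aggregated run logs."""
    pdr = received / sent if sent else 0.0                       # Packet Delivery Ratio
    ae2ed = sum(delays_s) / len(delays_s) if delays_s else 0.0   # Average End-to-End Delay
    etx = tx_attempts / received if received else float("inf")   # Expected Transmission Count
    apc = sum(energy_mw) / len(energy_mw) if energy_mw else 0.0  # Average Power Consumption
    return {"PDR": pdr, "AE2ED": ae2ed, "ETX": etx, "APC": apc}

# Toy example: an attacked run typically shows lower PDR and higher ETX/APC.
print(rpl_metrics(sent=1000, received=760, delays_s=[0.42, 0.55, 0.61],
                  tx_attempts=1900, energy_mw=[1.8, 2.1, 2.4]))
```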
As modern communication technology advances apace, the identification of digital communication signals plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that the adversarial attack is a real threat to digital communication signal identification models, which should receive more attention in future research.
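The abstract does not name the specific attack used; a one-step FGSM-style perturbation of raw I/Q frames, shown below as a hedged example, is a common way to craft the slight perturbations described, and the throwaway linear "classifier" is purely a placeholder.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, iq_batch, labels, epsilon=0.01):
    """One-step, sign-of-gradient perturbation of raw I/Q samples against any
    differentiable automatic modulation classification model."""
    x = iq_batch.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()     # small step that flips predictions
    return x_adv.detach()

# Toy usage with a throwaway linear "classifier" over 2x128 I/Q frames.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 128, 11))
x = torch.randn(16, 2, 128)                 # 16 frames, 11 modulation classes
y = torch.randint(0, 11, (16,))
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())              # perturbation bounded by epsilon
```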
Owing to the rapid increase in the interchange of text information through internet networks, the reliability and security of digital content are becoming a major research problem. Tampering detection, content authentication, and integrity verification of digital content interchanged through the Internet are used to address this major concern in information and communication technologies. The difficulties addressed here are tampering detection, authentication, and integrity verification of digital contents. This study develops an Automated Data Mining based Digital Text Document Watermarking for Tampering Attack Detection (ADMDTW-TAD) technique for documents exchanged via the Internet. The data mining (DM) concept is exploited in the presented ADMDTW-TAD technique to identify the document's appropriate characteristics in which to embed larger watermark information. The presented secure watermarking scheme intends to transmit digital text documents over the Internet securely. Once the watermark is embedded with no damage to the original document, it is then shared with the destination. The watermark extraction process is performed to recover the original document securely. The experimental validation of the ADMDTW-TAD technique is carried out under varying levels of attack volumes, and the outcomes are inspected in terms of different measures. The simulation values indicate that the ADMDTW-TAD technique improves performance over other models.
Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image categorization tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges due to the distinct and abnormal data patterns of malicious samples and the meticulous data screening by developers, hindering practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach restricts the direction of perturbations and normalizes abnormal pixel values, ensuring that perturbations progress as much as possible in a direction perpendicular to the decision hyperplane in linear problems. This limits anomalies within the perturbations, improves their visual stealthiness, and makes them more challenging for defense methods to detect. To verify the effectiveness, stealthiness, and robustness of GN-TUAP, we propose a comprehensive threat model. Based on this model, extensive experiments were conducted using the CIFAR-10, CIFAR-100, GTSRB, and MNIST datasets, comparing our method with existing state-of-the-art attack methods. We also tested our perturbation triggers using various defense methods and further experimented on the robustness of the triggers against noise filtering techniques. The experimental outcomes demonstrate that backdoor attacks leveraging perturbations generated via our algorithm exhibit cross-model attack effectiveness and superior stealthiness. Furthermore, they possess robust anti-detection capabilities and maintain commendable performance when subjected to noise-filtering methods.
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta's Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
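A control-flow sketch of how the three tiers might compose, with the toxicity classifier, the LLM, and the output screener supplied as callables; the keyword rules, threshold, and suggested alternatives are placeholders rather than GUARDIAN's actual filters.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GuardianResult:
    allowed: bool
    tier: str
    safe_alternative: Optional[str] = None

def guardian_pipeline(prompt: str,
                      toxic_score: Callable[[str], float],
                      generate: Callable[[str], str],
                      screen_output: Callable[[str], bool]) -> GuardianResult:
    # Tier 1: system prompt filter (keyword rules as a stand-in).
    banned = ("ignore previous instructions", "act as an unrestricted model")
    if any(b in prompt.lower() for b in banned):
        return GuardianResult(False, "system prompt filter",
                              "Please rephrase your request without role-play framing.")
    # Tier 2: pre-processing filter built on a toxicity classifier.
    if toxic_score(prompt) > 0.8:
        return GuardianResult(False, "pre-processing filter",
                              "Please restate the question without harmful intent.")
    # Tier 3: pre-display filter, using the model itself to screen its output.
    answer = generate(prompt)
    if not screen_output(answer):
        return GuardianResult(False, "pre-display filter")
    return GuardianResult(True, "passed")

# Toy usage with trivial stand-ins for the three components.
result = guardian_pipeline("What is a replay attack?",
                           toxic_score=lambda p: 0.0,
                           generate=lambda p: "A replay attack resends old messages.",
                           screen_output=lambda a: True)
print(result)
```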
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. This inadvertent leakage of sensitive information typically occurs when the models are subjected to black-box attacks. To address the growing concerns of safeguarding private and sensitive information while simultaneously preserving its utility, we analyze the performance of Targeted Catastrophic Forgetting (TCF). TCF involves preserving targeted pieces of sensitive information within datasets through an iterative pipeline which significantly reduces the likelihood of such information being leaked or reproduced by the model during black-box attacks, such as the autocompletion attack in our case. The experiments conducted using TCF evidently demonstrate its capability to reduce the extraction of PII while still preserving the context and utility of the target application.
In this paper, a resilient distributed control scheme against replay attacks for multi-agent networked systems subject to input and state constraints is proposed. The methodological starting point relies on a smart use of predictive arguments with a twofold aim: 1) promptly detect malicious agent behaviors affecting normal system operations; 2) apply specific control actions, based on predictive ideas, to mitigate as much as possible the undesirable domino effects resulting from adversary operations. Specifically, the multi-agent system is topologically described by a leader-follower digraph characterized by a unique leader, and set-theoretic receding horizon control ideas are exploited to develop a distributed algorithm capable of instantaneously recognizing the attacked agent. Finally, numerical simulations are carried out to show the benefits and effectiveness of the proposed approach.
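A simplified illustration of the detection idea: each agent's broadcast is checked against the set predicted for it at the previous step, here reduced to a per-step box; the box representation and tolerance are illustrative simplifications of the set-theoretic machinery.

```python
import numpy as np

def detect_replay(received_states, predicted_boxes, tolerance=1e-6):
    """Flag the first step at which a broadcast state falls outside the set
    predicted for it (a replayed, stale measurement eventually does)."""
    for k, (x, (lo, hi)) in enumerate(zip(received_states, predicted_boxes)):
        x = np.asarray(x, dtype=float)
        lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
        if np.any(x < lo - tolerance) or np.any(x > hi + tolerance):
            return k
    return None    # no attack detected

# Toy usage: the third broadcast replays an old state outside its predicted box.
states = [[0.0], [0.5], [0.0]]
boxes = [([-0.1], [0.1]), ([0.4], [0.6]), ([0.9], [1.1])]
print(detect_replay(states, boxes))   # -> 2
```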
Based on an analysis of the interception process of a ship-to-air missile system against an anti-ship missile stream, the antagonism between ship-to-air missiles and the anti-ship missile stream was modeled by the Monte Carlo method. This model, containing the probability of acquiring anti-ship missiles, threat estimation, firepower distribution, interception, effectiveness evaluation, and firepower turning, can dynamically simulate the antagonism process between an anti-ship missile attack stream and an anti-air missile weapon system. The anti-ship missile saturation attack stream for different ship-to-air missile systems can be calculated quantitatively. The simulated results reveal the relations among the anti-ship missile saturation attack, the attack intensity of the anti-ship missiles, the interception mode, and the main parameters of the anti-air missile weapon system. This provides a theoretical basis for the effective operation of anti-ship missiles.
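A stripped-down Monte Carlo loop in the spirit of this model; the acquisition and interception probabilities, the per-salvo channel limit, and the absence of firepower-turning dynamics are all simplifying assumptions rather than the paper's parameters.

```python
import random

def saturation_probability(n_missiles, p_acquire=0.9, p_intercept=0.7,
                           channels=2, runs=10_000, seed=0):
    """Estimate, by repeated random trials, the probability that a salvo of
    n_missiles saturates a defence that can engage `channels` targets, each
    of which must first be acquired and then intercepted."""
    rng = random.Random(seed)
    leaks = 0
    for _ in range(runs):
        engaged, leaked = 0, False
        for _ in range(n_missiles):
            if engaged < channels and rng.random() < p_acquire \
                    and rng.random() < p_intercept:
                engaged += 1          # missile destroyed by this salvo
            else:
                leaked = True         # at least one missile gets through
        leaks += leaked
    return leaks / runs               # probability the attack saturates the defence

for n in (2, 4, 6, 8):
    print(n, saturation_probability(n))
```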
With the rapid development of e-commerce, the security issues of collaborative filtering recommender systems have been widely investigated. Malicious users can benefit from injecting large quantities of fake profiles into recommender systems to manipulate recommendation results. As one of the most important attack methods against recommender systems, the shilling attack has received considerable attention, especially regarding its model and ways to detect it. Among them, the loose version of the Group Shilling Attack Generation Algorithm (GSAGenl) has outstanding performance. It can be immune to some PCC (Pearson Correlation Coefficient)-based detectors due to its anti-Pearson-correlation nature. In order to overcome the vulnerabilities caused by GSAGenl, a gravitation-based detection model (GBDM) is presented, integrating a sophisticated gravitational detector and a decider. Meanwhile, two new basic attributes and a particle filter algorithm are used for tracking prediction. Then, whether an attack occurs can be judged according to the law of universal gravitation in decision-making. The detection performances of GBDM, HHT-SVM, UnRAP, AP-UnRAP, Semi-SAD, SVM-TIA and PCA-P are compared and evaluated. Simulation results show the effectiveness and availability of GBDM.
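The decision step can be illustrated with the law of universal gravitation, F = G·m1·m2/r², where the "masses" would come from profile attributes and the "distance" from a similarity measure; the values and threshold below are toy placeholders, not GBDM's actual attributes.

```python
def gravitational_force(mass_profile, mass_center, distance, g_const=1.0):
    """Pairwise 'attraction' between a user profile and a suspicious cluster
    centre, following F = G * m1 * m2 / r^2."""
    return g_const * mass_profile * mass_center / (distance ** 2 + 1e-9)

def is_attack(forces, threshold=5.0):
    """Decision rule: flag an attack when the aggregate attraction towards the
    fake-profile cluster exceeds a threshold (threshold is illustrative)."""
    return sum(forces) > threshold

forces = [gravitational_force(1.0, 3.0, d) for d in (0.5, 0.8, 1.2)]
print(forces, is_attack(forces))
```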
An algebraic attack was applied to Filter-Combiner model keystream generators. We proposed the technique of function composition to improve the model, and the improved model can resist the algebraic attack. A new criterion for designing the Filter-Combiner model was also proposed: the total length L of the Linear Finite State Machines used in the model should be large enough, and the degree d of the Filter-Combiner function should be approximately [L/2].
In order to evaluate all attack paths in a threat tree, based on threat modeling theory, a weight distribution algorithm for the root node of a threat tree is designed, which computes the threat coefficients of leaf nodes in two ways: threat occurrence possibility and degree of damage. Besides, an algorithm for searching attack paths is also derived in accordance with its definition. Finally, an attack path evaluation system was implemented which can output the threat coefficients of the leaf nodes in a target threat tree, the weight distribution information, and the attack paths. An example threat tree is given to verify the effectiveness of the algorithms.
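One way to realize such a top-down weight distribution is sketched below, splitting the root's weight among children in proportion to possibility × damage until it reaches the leaves; the tree encoding and the split rule are illustrative, not the paper's exact algorithm.

```python
def distribute_weights(node, weight=1.0, leaf_weights=None):
    """Recursively split a node's weight among its children in proportion to
    each child's threat coefficient (possibility * damage); leaves collect
    the resulting weights."""
    if leaf_weights is None:
        leaf_weights = {}
    children = node.get("children", [])
    if not children:
        leaf_weights[node["name"]] = weight
        return leaf_weights
    coeffs = [c["possibility"] * c["damage"] for c in children]
    total = sum(coeffs) or 1.0
    for child, coeff in zip(children, coeffs):
        distribute_weights(child, weight * coeff / total, leaf_weights)
    return leaf_weights

# Toy threat tree with two leaf attacks under the root.
tree = {"name": "root", "children": [
    {"name": "phishing", "possibility": 0.6, "damage": 0.5, "children": []},
    {"name": "sql-injection", "possibility": 0.3, "damage": 0.9, "children": []},
]}
print(distribute_weights(tree))   # leaf weights sum to the root's weight
```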
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated these attacks to make production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle be misclassified as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is created using two GANs, i.e., CDGAN's Sub-Resolution GAN and CDGAN's Super-Resolution GAN. The first is CDGAN's Sub-Resolution GAN, which takes the original resolution input image and oversamples it to generate a lower-resolution neutralized image. The second is CDGAN's Super-Resolution GAN, which takes the output of CDGAN's Sub-Resolution GAN and undersamples it to generate a higher-resolution image which removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both of these GANs are trained independently. CDGAN's Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output image examples. Hence, this GAN downscales the image while removing adversarial attack noise. CDGAN's Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as output images. Because of this, it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attacks. This enables the user to directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art with a mean accuracy of 33.67 while using minimal compute resources in training.
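A structural sketch of the chaining in PyTorch, with throwaway convolutional generators standing in for the two GANs (their discriminators and independent training loops are omitted); the layer choices and resolutions are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SubResolutionGAN(nn.Module):
    """Stand-in generator for CDGAN's Sub-Resolution stage: reduce resolution
    while learning to suppress adversarial perturbations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)             # (B, 3, H/2, W/2)

class SuperResolutionGAN(nn.Module):
    """Stand-in generator for CDGAN's Super-Resolution stage: restore the
    original resolution, removing residual noise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)             # (B, 3, H, W)

def cd_gan_defend(x_adv, sub_gan, super_gan, classifier):
    """Chained use: neutralise the input with both generators, then hand the
    cleaned image to an unmodified, pre-existing classifier."""
    cleaned = super_gan(sub_gan(x_adv))
    return classifier(cleaned)

# Toy usage: prefix the chain to a placeholder classifier without retraining it.
x_adv = torch.randn(1, 3, 64, 64)
out = cd_gan_defend(x_adv, SubResolutionGAN(), SuperResolutionGAN(),
                    classifier=nn.Sequential(nn.Flatten(), nn.LazyLinear(10)))
print(out.shape)
```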
A colored Petri net (CPN) based attack modeling approach is addressed. Compared with graph-based modeling, CPN-based attack models are flexible enough to model Internet intrusions because of their static and dynamic features. The processes and rules for building a CPN-based attack model from an attack tree are also presented. In order to evaluate the risk of intrusion, some cost elements are added to CPN-based attack modeling. This extended model is useful in intrusion detection and risk evaluation. Experience shows that it is easy to exploit the CPN-based attack modeling approach to provide controlling functions, such as intrusion response and intrusion defense. A case study given in this paper shows that the CPN-based attack model has many unique characteristics that the attack tree model lacks.