期刊文献+
共找到1,426篇文章
< 1 2 72 >
每页显示 20 50 100
A Probabilistic Trust Model and Control Algorithm to Protect 6G Networks against Malicious Data Injection Attacks in Edge Computing Environments
1
作者 Borja Bordel Sánchez Ramón Alcarria Tomás Robles 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024年第10期631-654,共24页
Future 6G communications are envisioned to enable a large catalogue of pioneering applications.These will range from networked Cyber-Physical Systems to edge computing devices,establishing real-time feedback control l... Future 6G communications are envisioned to enable a large catalogue of pioneering applications.These will range from networked Cyber-Physical Systems to edge computing devices,establishing real-time feedback control loops critical for managing Industry 5.0 deployments,digital agriculture systems,and essential infrastructures.The provision of extensive machine-type communications through 6G will render many of these innovative systems autonomous and unsupervised.While full automation will enhance industrial efficiency significantly,it concurrently introduces new cyber risks and vulnerabilities.In particular,unattended systems are highly susceptible to trust issues:malicious nodes and false information can be easily introduced into control loops.Additionally,Denialof-Service attacks can be executed by inundating the network with valueless noise.Current anomaly detection schemes require the entire transformation of the control software to integrate new steps and can only mitigate anomalies that conform to predefined mathematical models.Solutions based on an exhaustive data collection to detect anomalies are precise but extremely slow.Standard models,with their limited understanding of mobile networks,can achieve precision rates no higher than 75%.Therefore,more general and transversal protection mechanisms are needed to detect malicious behaviors transparently.This paper introduces a probabilistic trust model and control algorithm designed to address this gap.The model determines the probability of any node to be trustworthy.Communication channels are pruned for those nodes whose probability is below a given threshold.The trust control algorithmcomprises three primary phases,which feed themodel with three different probabilities,which are weighted and combined.Initially,anomalous nodes are identified using Gaussian mixture models and clustering technologies.Next,traffic patterns are studied using digital Bessel functions and the functional scalar product.Finally,the information coherence and content are analyzed.The noise content and abnormal information sequences are detected using a Volterra filter and a bank of Finite Impulse Response filters.An experimental validation based on simulation tools and environments was carried out.Results show the proposed solution can successfully detect up to 92%of malicious data injection attacks. 展开更多
关键词 6G networks noise injection attacks Gaussian mixture model Bessel function traffic filter Volterra filter
下载PDF
Gated Neural Network-Based Unsteady Aerodynamic Modeling for Large Angles of Attack
2
作者 DENG Yongtao CHENG Shixin MI Baigang 《Transactions of Nanjing University of Aeronautics and Astronautics》 EI CSCD 2024年第4期432-443,共12页
Modeling of unsteady aerodynamic loads at high angles of attack using a small amount of experimental or simulation data to construct predictive models for unknown states can greatly improve the efficiency of aircraft ... Modeling of unsteady aerodynamic loads at high angles of attack using a small amount of experimental or simulation data to construct predictive models for unknown states can greatly improve the efficiency of aircraft unsteady aerodynamic design and flight dynamics analysis.In this paper,aiming at the problems of poor generalization of traditional aerodynamic models and intelligent models,an intelligent aerodynamic modeling method based on gated neural units is proposed.The time memory characteristics of the gated neural unit is fully utilized,thus the nonlinear flow field characterization ability of the learning and training process is enhanced,and the generalization ability of the whole prediction model is improved.The prediction and verification of the model are carried out under the maneuvering flight condition of NACA0015 airfoil.The results show that the model has good adaptability.In the interpolation prediction,the maximum prediction error of the lift and drag coefficients and the moment coefficient does not exceed 10%,which can basically represent the variation characteristics of the entire flow field.In the construction of extrapolation models,the training model based on the strong nonlinear data has good accuracy for weak nonlinear prediction.Furthermore,the error is larger,even exceeding 20%,which indicates that the extrapolation and generalization capabilities need to be further optimized by integrating physical models.Compared with the conventional state space equation model,the proposed method can improve the extrapolation accuracy and efficiency by 78%and 60%,respectively,which demonstrates the applied potential of this method in aerodynamic modeling. 展开更多
关键词 large angle of attack unsteady aerodynamic modeling gated neural networks generalization ability
下载PDF
Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
3
作者 Harshvardhan Aditya Siddansh Chawla +6 位作者 Gunika Dhingra Parijat Rai Saumil Sood Tanmay Singh Zeba Mohsin Wase Arshdeep Bahga Vijay K. Madisetti 《Journal of Software Engineering and Applications》 2024年第5期421-447,共27页
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Infor... The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks. 展开更多
关键词 Large Language models PII Leakage Privacy Memorization OVERFITTING Membership Inference attack (MIA)
下载PDF
Ensuring Secure Platooning of Constrained Intelligent and Connected Vehicles Against Byzantine Attacks:A Distributed MPC Framework 被引量:1
4
作者 Henglai Wei Hui Zhang +1 位作者 Kamal AI-Haddad Yang Shi 《Engineering》 SCIE EI CAS CSCD 2024年第2期35-46,共12页
This study investigates resilient platoon control for constrained intelligent and connected vehicles(ICVs)against F-local Byzantine attacks.We introduce a resilient distributed model-predictive platooning control fram... This study investigates resilient platoon control for constrained intelligent and connected vehicles(ICVs)against F-local Byzantine attacks.We introduce a resilient distributed model-predictive platooning control framework for such ICVs.This framework seamlessly integrates the predesigned optimal control with distributed model predictive control(DMPC)optimization and introduces a unique distributed attack detector to ensure the reliability of the transmitted information among vehicles.Notably,our strategy uses previously broadcasted information and a specialized convex set,termed the“resilience set”,to identify unreliable data.This approach significantly eases graph robustness prerequisites,requiring only an(F+1)-robust graph,in contrast to the established mean sequence reduced algorithms,which require a minimum(2F+1)-robust graph.Additionally,we introduce a verification algorithm to restore trust in vehicles under minor attacks,further reducing communication network robustness.Our analysis demonstrates the recursive feasibility of the DMPC optimization.Furthermore,the proposed method achieves exceptional control performance by minimizing the discrepancies between the DMPC control inputs and predesigned platoon control inputs,while ensuring constraint compliance and cybersecurity.Simulation results verify the effectiveness of our theoretical findings. 展开更多
关键词 model predictive control Resilient control Platoon control Intelligent and connected vehicle Byzantine attacks
下载PDF
Stochastic Models to Mitigate Sparse Sensor Attacks in Continuous-Time Non-Linear Cyber-Physical Systems
5
作者 Borja Bordel Sánchez Ramón Alcarria Tomás Robles 《Computers, Materials & Continua》 SCIE EI 2023年第9期3189-3218,共30页
Cyber-Physical Systems are very vulnerable to sparse sensor attacks.But current protection mechanisms employ linear and deterministic models which cannot detect attacks precisely.Therefore,in this paper,we propose a n... Cyber-Physical Systems are very vulnerable to sparse sensor attacks.But current protection mechanisms employ linear and deterministic models which cannot detect attacks precisely.Therefore,in this paper,we propose a new non-linear generalized model to describe Cyber-Physical Systems.This model includes unknown multivariable discrete and continuous-time functions and different multiplicative noises to represent the evolution of physical processes and randomeffects in the physical and computationalworlds.Besides,the digitalization stage in hardware devices is represented too.Attackers and most critical sparse sensor attacks are described through a stochastic process.The reconstruction and protectionmechanisms are based on aweighted stochasticmodel.Error probability in data samples is estimated through different indicators commonly employed in non-linear dynamics(such as the Fourier transform,first-return maps,or the probability density function).A decision algorithm calculates the final reconstructed value considering the previous error probability.An experimental validation based on simulation tools and real deployments is also carried out.Both,the new technology performance and scalability are studied.Results prove that the proposed solution protects Cyber-Physical Systems against up to 92%of attacks and perturbations,with a computational delay below 2.5 s.The proposed model shows a linear complexity,as recursive or iterative structures are not employed,just algebraic and probabilistic functions.In conclusion,the new model and reconstructionmechanism can protect successfully Cyber-Physical Systems against sparse sensor attacks,even in dense or pervasive deployments and scenarios. 展开更多
关键词 Cyber-physical systems sparse sensor attack non-linear models stochastic models security
下载PDF
K-core attack, equilibrium K-core,and kinetically constrained spin system
6
作者 周海军 《Chinese Physics B》 SCIE EI CAS CSCD 2024年第6期14-26,共13页
Kinetically constrained spin systems are toy models of supercooled liquids and amorphous solids. In this perspective,we revisit the prototypical Fredrickson–Andersen(FA) kinetically constrained model from the viewpoi... Kinetically constrained spin systems are toy models of supercooled liquids and amorphous solids. In this perspective,we revisit the prototypical Fredrickson–Andersen(FA) kinetically constrained model from the viewpoint of K-core combinatorial optimization. Each kinetic cluster of the FA system, containing all the mutually visitable microscopic occupation configurations, is exactly the solution space of a specific instance of the K-core attack problem. The whole set of different jammed occupation patterns of the FA system is the configuration space of an equilibrium K-core problem. Based on recent theoretical results achieved on the K-core attack and equilibrium K-core problems, we discuss the thermodynamic spin glass phase transitions and the maximum occupation density of the fully unfrozen FA kinetic cluster, and the minimum occupation density and extreme vulnerability of the partially frozen(jammed) kinetic clusters. The equivalence between K-core attack and the fully unfrozen FA kinetic cluster also implies a new way of sampling K-core attack solutions. 展开更多
关键词 Fredrickson–Andersen model K-core attack spin glass jamming
下载PDF
Improving Transferable Targeted Adversarial Attack for Object Detection Using RCEN Framework and Logit Loss Optimization
7
作者 Zhiyi Ding Lei Sun +2 位作者 Xiuqing Mao Leyu Dai Ruiyang Ding 《Computers, Materials & Continua》 SCIE EI 2024年第9期4387-4412,共26页
Object detection finds wide application in various sectors,including autonomous driving,industry,and healthcare.Recent studies have highlighted the vulnerability of object detection models built using deep neural netw... Object detection finds wide application in various sectors,including autonomous driving,industry,and healthcare.Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples.This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems.Most existing adversarial attack strategies focus primarily on image classification problems,failing to fully exploit the unique characteristics of object detectionmodels,thus resulting in widespread deficiencies in their transferability.Furthermore,previous research has predominantly concentrated on the transferability issues of non-targeted attacks,whereas enhancing the transferability of targeted adversarial examples presents even greater challenges.Traditional attack techniques typically employ cross-entropy as a loss measure,iteratively adjusting adversarial examples to match target categories.However,their inherent limitations restrict their broad applicability and transferability across different models.To address the aforementioned challenges,this study proposes a novel targeted adversarial attack method aimed at enhancing the transferability of adversarial samples across object detection models.Within the framework of iterative attacks,we devise a new objective function designed to mitigate consistency issues arising from cumulative noise and to enhance the separation between target and non-target categories(logit margin).Secondly,a data augmentation framework incorporating random erasing and color transformations is introduced into targeted adversarial attacks.This enhances the diversity of gradients,preventing overfitting to white-box models.Lastly,perturbations are applied only within the specified object’s bounding box to reduce the perturbation range,enhancing attack stealthiness.Experiments were conducted on the Microsoft Common Objects in Context(MS COCO)dataset using You Only Look Once version 3(YOLOv3),You Only Look Once version 8(YOLOv8),Faster Region-based Convolutional Neural Networks(Faster R-CNN),and RetinaNet.The results demonstrate a significant advantage of the proposed method in black-box settings.Among these,the success rate of RetinaNet transfer attacks reached a maximum of 82.59%. 展开更多
关键词 Object detection model security targeted attack gradient diversity
下载PDF
RPL-Based IoT Networks under Decreased Rank Attack:Performance Analysis in Static and Mobile Environments
8
作者 Amal Hkiri Mouna Karmani +3 位作者 Omar Ben Bahri Ahmed Mohammed Murayr Fawaz Hassan Alasmari Mohsen Machhout 《Computers, Materials & Continua》 SCIE EI 2024年第1期227-247,共21页
The RPL(IPv6 Routing Protocol for Low-Power and Lossy Networks)protocol is essential for efficient communi-cation within the Internet of Things(IoT)ecosystem.Despite its significance,RPL’s susceptibility to attacks r... The RPL(IPv6 Routing Protocol for Low-Power and Lossy Networks)protocol is essential for efficient communi-cation within the Internet of Things(IoT)ecosystem.Despite its significance,RPL’s susceptibility to attacks remains a concern.This paper presents a comprehensive simulation-based analysis of the RPL protocol’s vulnerability to the decreased rank attack in both static andmobilenetwork environments.We employ the Random Direction Mobility Model(RDM)for mobile scenarios within the Cooja simulator.Our systematic evaluation focuses on critical performance metrics,including Packet Delivery Ratio(PDR),Average End to End Delay(AE2ED),throughput,Expected Transmission Count(ETX),and Average Power Consumption(APC).Our findings illuminate the disruptive impact of this attack on the routing hierarchy,resulting in decreased PDR and throughput,increased AE2ED,ETX,and APC.These results underscore the urgent need for robust security measures to protect RPL-based IoT networks.Furthermore,our study emphasizes the exacerbated impact of the attack in mobile scenarios,highlighting the evolving security requirements of IoT networks. 展开更多
关键词 RPL decreased rank attacks MOBILITY random direction model
下载PDF
Adversarial attacks and defenses for digital communication signals identification
9
作者 Qiao Tian Sicheng Zhang +1 位作者 Shiwen Mao Yun Lin 《Digital Communications and Networks》 SCIE CSCD 2024年第3期756-764,共9页
As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become ... As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become a promising solution to this problem due to its powerful modeling capability,which has become a consensus in academia and industry.However,because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space,the physical layer digital communication signals identification model is threatened by adversarial attacks.Adversarial examples pose a common threat to AI models,where well-designed and slight perturbations added to input data can cause wrong results.Therefore,the security of AI models for the digital communication signals identification is the premise of its efficient and credible applications.In this paper,we first launch adversarial attacks on the end-to-end AI model for automatic modulation classifi-cation,and then we explain and present three defense mechanisms based on the adversarial principle.Next we present more detailed adversarial indicators to evaluate attack and defense behavior.Finally,a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model,which should be paid more attention in future research. 展开更多
关键词 Digital communication signals identification AI model Adversarial attacks Adversarial defenses Adversarial indicators
下载PDF
Digital Text Document Watermarking Based Tampering Attack Detection via Internet
10
作者 Manal Abdullah Alohali Muna Elsadig +3 位作者 Fahd N.Al-Wesabi Mesfer Al Duhayyim Anwer Mustafa Hilal Abdelwahed Motwakel 《Computer Systems Science & Engineering》 2024年第3期759-771,共13页
Owing to the rapid increase in the interchange of text information through internet networks,the reliability and security of digital content are becoming a major research problem.Tampering detection,Content authentica... Owing to the rapid increase in the interchange of text information through internet networks,the reliability and security of digital content are becoming a major research problem.Tampering detection,Content authentication,and integrity verification of digital content interchanged through the Internet were utilized to solve a major concern in information and communication technologies.The authors’difficulties were tampering detection,authentication,and integrity verification of the digital contents.This study develops an Automated Data Mining based Digital Text Document Watermarking for Tampering Attack Detection(ADMDTW-TAD)via the Internet.The DM concept is exploited in the presented ADMDTW-TAD technique to identify the document’s appropriate characteristics to embed larger watermark information.The presented secure watermarking scheme intends to transmit digital text documents over the Internet securely.Once the watermark is embedded with no damage to the original document,it is then shared with the destination.The watermark extraction process is performed to get the original document securely.The experimental validation of the ADMDTW-TAD technique is carried out under varying levels of attack volumes,and the outcomes were inspected in terms of different measures.The simulation values indicated that the ADMDTW-TAD technique improved performance over other models. 展开更多
关键词 Content authentication tampering attacks detection model SECURITY digital watermarking
下载PDF
A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks
11
作者 Hong Huang Yunfei Wang +1 位作者 Guotao Yuan Xin Li 《Computers, Materials & Continua》 SCIE EI 2024年第7期361-387,共27页
Deep Neural Networks(DNNs)are integral to various aspects of modern life,enhancing work efficiency.Nonethe-less,their susceptibility to diverse attack methods,including backdoor attacks,raises security concerns.We aim... Deep Neural Networks(DNNs)are integral to various aspects of modern life,enhancing work efficiency.Nonethe-less,their susceptibility to diverse attack methods,including backdoor attacks,raises security concerns.We aim to investigate backdoor attack methods for image categorization tasks,to promote the development of DNN towards higher security.Research on backdoor attacks currently faces significant challenges due to the distinct and abnormal data patterns of malicious samples,and the meticulous data screening by developers,hindering practical attack implementation.To overcome these challenges,this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation(GN-TUAP)algorithm.This approach restricts the direction of perturbations and normalizes abnormal pixel values,ensuring that perturbations progress as much as possible in a direction perpendicular to the decision hyperplane in linear problems.This limits anomalies within the perturbations improves their visual stealthiness,and makes them more challenging for defense methods to detect.To verify the effectiveness,stealthiness,and robustness of GN-TUAP,we proposed a comprehensive threat model.Based on this model,extensive experiments were conducted using the CIFAR-10,CIFAR-100,GTSRB,and MNIST datasets,comparing our method with existing state-of-the-art attack methods.We also tested our perturbation triggers using various defense methods and further experimented on the robustness of the triggers against noise filtering techniques.The experimental outcomes demonstrate that backdoor attacks leveraging perturbations generated via our algorithm exhibit cross-model attack effectiveness and superior stealthiness.Furthermore,they possess robust anti-detection capabilities and maintain commendable performance when subjected to noise-filtering methods. 展开更多
关键词 Image classification model backdoor attack gaussian distribution Artificial Intelligence(AI)security
下载PDF
GUARDIAN: A Multi-Tiered Defense Architecture for Thwarting Prompt Injection Attacks on LLMs
12
作者 Parijat Rai Saumil Sood +1 位作者 Vijay K. Madisetti Arshdeep Bahga 《Journal of Software Engineering and Applications》 2024年第1期43-68,共26页
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assist... This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner. 展开更多
关键词 Large Language models (LLMs) Adversarial attack Prompt Injection Filter Defense Artificial Intelligence Machine Learning CYBERSECURITY
下载PDF
Protecting LLMs against Privacy Attacks While Preserving Utility
13
作者 Gunika Dhingra Saumil Sood +2 位作者 Zeba Mohsin Wase Arshdeep Bahga Vijay K. Madisetti 《Journal of Information Security》 2024年第4期448-473,共26页
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Infor... The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. This inadvertent leakage of sensitive information typically occurs when the models are subjected to black-box attacks. To address the growing concerns of safeguarding private and sensitive information while simultaneously preserving its utility, we analyze the performance of Targeted Catastrophic Forgetting (TCF). TCF involves preserving targeted pieces of sensitive information within datasets through an iterative pipeline which significantly reduces the likelihood of such information being leaked or reproduced by the model during black-box attacks, such as the autocompletion attack in our case. The experiments conducted using TCF evidently demonstrate its capability to reduce the extraction of PII while still preserving the context and utility of the target application. 展开更多
关键词 Large Language models PII Leakage PRIVACY Memorization Membership Inference attack (MIA) DEFENSES Generative Adversarial Networks (GANs) Synthetic Data
下载PDF
Resilience Against Replay Attacks:A Distributed Model Predictive Control Scheme for Networked Multi-Agent Systems 被引量:5
14
作者 Giuseppe Franzè Francesco Tedesco Domenico Famularo 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2021年第3期628-640,共13页
In this paper,a resilient distributed control scheme against replay attacks for multi-agent networked systems subject to input and state constraints is proposed.The methodological starting point relies on a smart use ... In this paper,a resilient distributed control scheme against replay attacks for multi-agent networked systems subject to input and state constraints is proposed.The methodological starting point relies on a smart use of predictive arguments with a twofold aim:1)Promptly detect malicious agent behaviors affecting normal system operations;2)Apply specific control actions,based on predictive ideas,for mitigating as much as possible undesirable domino effects resulting from adversary operations.Specifically,the multi-agent system is topologically described by a leader-follower digraph characterized by a unique leader and set-theoretic receding horizon control ideas are exploited to develop a distributed algorithm capable to instantaneously recognize the attacked agent.Finally,numerical simulations are carried out to show benefits and effectiveness of the proposed approach. 展开更多
关键词 Distributed model predictive control leader-follower networks multi-agent systems replay attacks resilient control
下载PDF
Study on Anti-ship Missile Saturation Attack Model 被引量:1
15
作者 王光辉 孙学锋 +1 位作者 严建钢 谢宇鹏 《Defence Technology(防务技术)》 SCIE EI CAS 2010年第1期10-15,共6页
Based on the analysis for the interception process of ship-to-air missile system to the anti-ship missile stream, the antagonism of ship-to-air missile and anti-ship missile stream was modeled by Monte Carlo method. T... Based on the analysis for the interception process of ship-to-air missile system to the anti-ship missile stream, the antagonism of ship-to-air missile and anti-ship missile stream was modeled by Monte Carlo method. This model containing the probability of acquiring anti-ship missile, threat estimation, firepower distribution, interception, effectiveness evaluation and firepower turning, can dynamically simulate the antagonism process of anti-ship missile attack stream and anti-air missile weapon system. The anti-ship missile's saturation attack stream for different ship-to-air missile systems can be calculated quantitatively. The simulated results reveal the relations among the anti-ship missile saturation attack and the attack intensity of anti-ship missile, interception mode and the main parameters of anti-air missile weapon system. It provides a theoretical basis for the effective operation of anti-ship missile. 展开更多
关键词 operational research system engineering anti-ship missile ship-to-air missile saturation attack antagonism model penetrate efficiency
下载PDF
A Novel Shilling Attack Detection Model Based on Particle Filter and Gravitation 被引量:1
16
作者 Lingtao Qi Haiping Huang +2 位作者 Feng Li Reza Malekian Ruchuan Wang 《China Communications》 SCIE CSCD 2019年第10期112-132,共21页
With the rapid development of e-commerce, the security issues of collaborative filtering recommender systems have been widely investigated. Malicious users can benefit from injecting a great quantities of fake profile... With the rapid development of e-commerce, the security issues of collaborative filtering recommender systems have been widely investigated. Malicious users can benefit from injecting a great quantities of fake profiles into recommender systems to manipulate recommendation results. As one of the most important attack methods in recommender systems, the shilling attack has been paid considerable attention, especially to its model and the way to detect it. Among them, the loose version of Group Shilling Attack Generation Algorithm (GSAGenl) has outstanding performance. It can be immune to some PCC (Pearson Correlation Coefficient)-based detectors due to the nature of anti-Pearson correlation. In order to overcome the vulnerabilities caused by GSAGenl, a gravitation-based detection model (GBDM) is presented, integrated with a sophisticated gravitational detector and a decider. And meanwhile two new basic attributes and a particle filter algorithm are used for tracking prediction. And then, whether an attack occurs can be judged according to the law of universal gravitation in decision-making. The detection performances of GBDM, HHT-SVM, UnRAP, AP-UnRAP Semi-SAD,SVM-TIA and PCA-P are compared and evaluated. And simulation results show the effectiveness and availability of GBDM. 展开更多
关键词 shilling attack detection model collaborative filtering recommender systems gravitation-based detection model particle filter algorithm
下载PDF
Algebraic Attack on Filter-Combiner Model Keystream Generators
17
作者 WUZhi-ping YEDing-feng MAWei-ju 《Wuhan University Journal of Natural Sciences》 EI CAS 2005年第1期259-262,共4页
Algebraic attack was applied to attack Filter-Combintr model keystreamgenerators. We proposed the technique of function composition to improve the model, and the improvedmodel can resist the algebraic attack. A new cr... Algebraic attack was applied to attack Filter-Combintr model keystreamgenerators. We proposed the technique of function composition to improve the model, and the improvedmodel can resist the algebraic attack. A new criterion for designing Filter-Combiner model was alsoproposed: the total length I. of Linear Finite State Machines used in the model should be largeenough and the degree d of Filter-Combiner function should be approximate [L/2]. 展开更多
关键词 algebraic attack Filter-Combiner model stream cipher 'XL' algorithm function composition
下载PDF
Threat Modeling-Oriented Attack Path Evaluating Algorithm
18
作者 李晓红 刘然 +1 位作者 冯志勇 何可 《Transactions of Tianjin University》 EI CAS 2009年第3期162-167,共6页
In order to evaluate all attack paths in a threat tree,based on threat modeling theory,a weight distribution algorithm of the root node in a threat tree is designed,which computes threat coefficients of leaf nodes in ... In order to evaluate all attack paths in a threat tree,based on threat modeling theory,a weight distribution algorithm of the root node in a threat tree is designed,which computes threat coefficients of leaf nodes in two ways including threat occurring possibility and the degree of damage.Besides,an algorithm of searching attack path was also obtained in accordence with its definition.Finally,an attack path evaluation system was implemented which can output the threat coefficients of the leaf nodes in a target threat tree,the weight distribution information,and the attack paths.An example threat tree is given to verify the effectiveness of the algorithms. 展开更多
关键词 attack tree attack path threat modeling threat coefficient attack path evaluation
下载PDF
Chained Dual-Generative Adversarial Network:A Generalized Defense Against Adversarial Attacks 被引量:1
19
作者 Amitoj Bir Singh Lalit Kumar Awasthi +3 位作者 Urvashi Mohammad Shorfuzzaman Abdulmajeed Alsufyani Mueen Uddin 《Computers, Materials & Continua》 SCIE EI 2023年第2期2541-2555,共15页
Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassificatio... Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training. 展开更多
关键词 Adversarial attacks GAN-based adversarial defense image classification models adversarial defense
下载PDF
Local imperceptible adversarial attacks against human pose estimation networks
20
作者 Fuchang Liu Shen Zhang +2 位作者 Hao Wang Caiping Yan Yongwei Miao 《Visual Computing for Industry,Biomedicine,and Art》 EI 2023年第1期318-328,共11页
Deep neural networks are vulnerable to attacks from adversarial inputs.Corresponding attack research on human pose estimation(HPE),particularly for body joint detection,has been largely unexplored.Transferring classif... Deep neural networks are vulnerable to attacks from adversarial inputs.Corresponding attack research on human pose estimation(HPE),particularly for body joint detection,has been largely unexplored.Transferring classification-based attack methods to body joint regression tasks is not straightforward.Another issue is that the attack effectiveness and imperceptibility contradict each other.To solve these issues,we propose local imperceptible attacks on HPE networks.In particular,we reformulate imperceptible attacks on body joint regression into a constrained maximum allowable attack.Furthermore,we approximate the solution using iterative gradient-based strength refinement and greedy-based pixel selection.Our method crafts effective perceptual adversarial attacks that consider both human perception and attack effectiveness.We conducted a series of imperceptible attacks against state-of-the-art HPE methods,including HigherHRNet,DEKR,and ViTPose.The experimental results demonstrate that the proposed method achieves excellent imperceptibility while maintaining attack effectiveness by significantly reducing the number of perturbed pixels.Approximately 4%of the pixels can achieve sufficient attacks on HPE. 展开更多
关键词 Adversarial attack Human pose estimation white-box attack IMPERCEPTIBILITY Local perturbation
下载PDF
上一页 1 2 72 下一页 到第
使用帮助 返回顶部