期刊文献+
共找到1,655篇文章
< 1 2 83 >
每页显示 20 50 100
Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems:Hierarchical Poisoning Attacks and Defenses in Federated Learning
1
作者 Yongsheng Zhu Chong Liu +8 位作者 Chunlei Chen Xiaoting Lyu Zheng Chen Bin Wang Fuqiang Hu Hanxi Li Jiao Dai Baigen Cai Wei Wang 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024年第11期1305-1325,共21页
The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency.Federated learning o... The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency.Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data.However,despite its privacy benefits,federated learning systems are vulnerable to poisoning attacks,where adversaries alter local model parameters on compromised clients and send malicious updates to the server,potentially compromising the global model’s accuracy.In this study,we introduce PMM(Perturbation coefficient Multiplied by Maximum value),a new poisoning attack method that perturbs model updates layer by layer,demonstrating the threat of poisoning attacks faced by federated learning.Extensive experiments across three distinct datasets have demonstrated PMM’s ability to significantly reduce the global model’s accuracy.Additionally,we propose an effective defense method,namely CLBL(Cluster Layer By Layer).Experiment results on three datasets have confirmed CLBL’s effectiveness. 展开更多
关键词 PRIVACY-PRESERVING intelligent railway transportation system federated learning poisoning attacks defenseS
下载PDF
Adversarial attacks and defenses for digital communication signals identification
2
作者 Qiao Tian Sicheng Zhang +1 位作者 Shiwen Mao Yun Lin 《Digital Communications and Networks》 SCIE CSCD 2024年第3期756-764,共9页
As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become ... As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become a promising solution to this problem due to its powerful modeling capability,which has become a consensus in academia and industry.However,because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space,the physical layer digital communication signals identification model is threatened by adversarial attacks.Adversarial examples pose a common threat to AI models,where well-designed and slight perturbations added to input data can cause wrong results.Therefore,the security of AI models for the digital communication signals identification is the premise of its efficient and credible applications.In this paper,we first launch adversarial attacks on the end-to-end AI model for automatic modulation classifi-cation,and then we explain and present three defense mechanisms based on the adversarial principle.Next we present more detailed adversarial indicators to evaluate attack and defense behavior.Finally,a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model,which should be paid more attention in future research. 展开更多
关键词 Digital communication signals identification AI model Adversarial attacks Adversarial defenses Adversarial indicators
下载PDF
GUARDIAN: A Multi-Tiered Defense Architecture for Thwarting Prompt Injection Attacks on LLMs
3
作者 Parijat Rai Saumil Sood +1 位作者 Vijay K. Madisetti Arshdeep Bahga 《Journal of Software Engineering and Applications》 2024年第1期43-68,共26页
This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assist... This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, pre-processing filter leveraging a toxic classifier and ethical prompt generator, and pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner. 展开更多
关键词 Large Language Models (LLMs) Adversarial attack Prompt Injection Filter defense Artificial Intelligence Machine Learning CYBERSECURITY
下载PDF
Adaptive Network Sustainability and Defense Based on Artificial Bees Colony Optimization Algorithm for Nature Inspired Cyber Security
4
作者 Chirag Ganguli Shishir Kumar Shandilya +1 位作者 Michal Gregus Oleh Basystiuk 《Computer Systems Science & Engineering》 2024年第3期739-758,共20页
Cyber Defense is becoming a major issue for every organization to keep business continuity intact.The presented paper explores the effectiveness of a meta-heuristic optimization algorithm-Artificial Bees Colony Algori... Cyber Defense is becoming a major issue for every organization to keep business continuity intact.The presented paper explores the effectiveness of a meta-heuristic optimization algorithm-Artificial Bees Colony Algorithm(ABC)as an Nature Inspired Cyber Security mechanism to achieve adaptive defense.It experiments on the Denial-Of-Service attack scenarios which involves limiting the traffic flow for each node.Businesses today have adapted their service distribution models to include the use of the Internet,allowing them to effectively manage and interact with their customer data.This shift has created an increased reliance on online services to store vast amounts of confidential customer data,meaning any disruption or outage of these services could be disastrous for the business,leaving them without the knowledge to serve their customers.Adversaries can exploit such an event to gain unauthorized access to the confidential data of the customers.The proposed algorithm utilizes an Adaptive Defense approach to continuously select nodes that could present characteristics of a probable malicious entity.For any changes in network parameters,the cluster of nodes is selected in the prepared solution set as a probable malicious node and the traffic rate with the ratio of packet delivery is managed with respect to the properties of normal nodes to deliver a disaster recovery plan for potential businesses. 展开更多
关键词 Artificial bee colonization adaptive defense cyber attack nature inspired cyber security cyber security cyber physical infrastructure
下载PDF
Chained Dual-Generative Adversarial Network:A Generalized Defense Against Adversarial Attacks 被引量:1
5
作者 Amitoj Bir Singh Lalit Kumar Awasthi +3 位作者 Urvashi Mohammad Shorfuzzaman Abdulmajeed Alsufyani Mueen Uddin 《Computers, Materials & Continua》 SCIE EI 2023年第2期2541-2555,共15页
Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassificatio... Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training. 展开更多
关键词 Adversarial attacks GAN-based adversarial defense image classification models adversarial defense
下载PDF
Beyond Defense: Proactive Approaches to Disaster Recovery and Threat Intelligence in Modern Enterprises
6
作者 Meysam Tahmasebi 《Journal of Information Security》 2024年第2期106-133,共28页
As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respo... As cyber threats keep changing and business environments adapt, a comprehensive approach to disaster recovery involves more than just defensive measures. This research delves deep into the strategies required to respond to threats and anticipate and mitigate them proactively. Beginning with understanding the critical need for a layered defense and the intricacies of the attacker’s journey, the research offers insights into specialized defense techniques, emphasizing the importance of timely and strategic responses during incidents. Risk management is brought to the forefront, underscoring businesses’ need to adopt mature risk assessment practices and understand the potential risk impact areas. Additionally, the value of threat intelligence is explored, shedding light on the importance of active engagement within sharing communities and the vigilant observation of adversary motivations. “Beyond Defense: Proactive Approaches to Disaster Recovery and Threat Intelligence in Modern Enterprises” is a comprehensive guide for organizations aiming to fortify their cybersecurity posture, marrying best practices in proactive and reactive measures in the ever-challenging digital realm. 展开更多
关键词 Advanced Persistent Threats (APT) attack Phases attack Surface defense-IN-DEPTH Disaster Recovery (DR) Incident Response Plan (IRP) Intrusion Detection Systems (IDS) Intrusion Prevention System (IPS) Key Risk Indicator (KRI) Layered defense Lockheed Martin Kill Chain Proactive defense Redundancy Risk Management Threat Intelligence
下载PDF
Adversarial Attacks and Defenses in Deep Learning 被引量:18
7
作者 Kui Ren Tianhang Zheng +1 位作者 Zhan Qin Xue Liu 《Engineering》 SCIE EI 2020年第3期346-360,共15页
With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algor... With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algorithms to adversarial samples has been widely recognized.The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.Hence,adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.In this paper,we first introduce the theoretical foundations,algorithms,and applications of adversarial attack techniques.We then describe a few research efforts on the defense techniques,which cover the broad frontier in the field.Several open problems and challenges are subsequently discussed,which we hope will provoke further research efforts in this critical area. 展开更多
关键词 Machine learning Deep neural network Adversarial example Adversarial attack Adversarial defense
下载PDF
Primary User Adversarial Attacks on Deep Learning-Based Spectrum Sensing and the Defense Method 被引量:3
8
作者 Shilian Zheng Linhui Ye +5 位作者 Xuanye Wang Jinyin Chen Huaji Zhou Caiyi Lou Zhijin Zhao Xiaoniu Yang 《China Communications》 SCIE CSCD 2021年第12期94-107,共14页
The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the rob... The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the robustness of the deep learning based spectrum sensing model.PUAA adds a care-fully manufactured perturbation to the benign primary user signal,which greatly reduces the probability of detection of the spectrum sensing model.We design three PUAA methods in black box scenario.In or-der to defend against PUAA,we propose a defense method based on autoencoder named DeepFilter.We apply the long short-term memory network and the convolutional neural network together to DeepFilter,so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense.Extensive experiments are conducted to eval-uate the attack effect of the designed PUAA method and the defense effect of DeepFilter.Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model.In addition,the experimen-tal results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA with-out affecting the detection performance of the model. 展开更多
关键词 spectrum sensing cognitive radio deep learning adversarial attack autoencoder defense
下载PDF
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks 被引量:1
9
作者 Kazim Ali Adnan N.Quershi +3 位作者 Ahmad Alauddin Bin Arifin Muhammad Shahid Bhatti Abid Sohail Rohail Hassan 《Computers, Materials & Continua》 SCIE EI 2022年第5期2209-2224,共16页
These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications li... These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications like image classification,speech recognition,self-driving vehicles,disease diagnostics,and many more.Despite success in various applications,it is found that these learning algorithms face severe threats due to adversarial attacks.Adversarial examples are inputs like images in the computer vision field,which are intentionally slightly changed or perturbed.These changes are humanly imperceptible.But are misclassified by a model with high probability and severely affects the performance or prediction.In this scenario,we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again.We proved that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental results evidence.We have used MNIST and CIFAR10 datasets for experiments and analysis of our defense method.In the end,we have compared our method to other state-ofthe-art defense methods and proved that our results are better than other rival methods. 展开更多
关键词 Computer vision deep learning convolutional neural networks adversarial examples adversarial attacks adversarial defenses
下载PDF
Black Box Adversarial Defense Based on Image Denoising and Pix2Pix
10
作者 Zhenyong Rui Xiugang Gong 《Journal of Computer and Communications》 2023年第12期14-30,共17页
Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the f... Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but the susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Currently, robustness defense techniques for models often rely on adversarial training, a method that tends to only defend against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP) technology. This method does not require prior knowledge of the specific attack type and eliminates the need for cumbersome adversarial training. When making predictions on unknown samples, the IDP method first undergoes denoising processing, followed by inputting the processed image into a trained Pix2Pix model for image transformation. Finally, the image generated by Pix2Pix is input into the classification model for prediction. This versatile defense approach demonstrates excellent defensive performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showcasing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training methods and enhancing the overall robustness of models. 展开更多
关键词 Deep Neural Networks (DNN) Adversarial attack Adversarial Training Fourier Transform Robust defense
下载PDF
ATSSC:An Attack Tolerant System in Serverless Computing
11
作者 Zhang Shuai Guo Yunfei +2 位作者 Hu Hongchao Liu Wenyan Wang Yawen 《China Communications》 SCIE CSCD 2024年第6期192-205,共14页
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are ... Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are invoked by its driven events.Nonetheless,security threats in serverless computing such as vulnerability-based security threats have become the pain point hindering its wide adoption.The ideas in proactive defense such as redundancy,diversity and dynamic provide promising approaches to protect against cyberattacks.However,these security technologies are mostly applied to serverless platform based on“stacked”mode,as they are designed independent with serverless computing.The lack of security consideration in the initial design makes it especially challenging to achieve the all life cycle protection for serverless application with limited cost.In this paper,we present ATSSC,a proactive defense enabled attack tolerant serverless platform.ATSSC integrates the characteristic of redundancy,diversity and dynamic into serverless seamless to achieve high-level security and efficiency.Specifically,ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results.In order to create diverse function replicas,both software diversity and environment diversity are adopted.Furthermore,a dynamic function refresh strategy is proposed to keep the clean state of serverless functions.We implement ATSSC based on Kubernetes and Knative.Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs. 展开更多
关键词 active defense attack tolerant cloud computing SECURITY serverless computing
下载PDF
Address Resolution Protocol (ARP): Spoofing Attack and Proposed Defense
12
作者 Ghazi Al Sukkar Ramzi Saifan +2 位作者 Sufian Khwaldeh Mahmoud Maqableh Iyad Jafar 《Communications and Network》 2016年第3期118-130,共13页
Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the ... Networks have become an integral part of today’s world. The ease of deployment, low-cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the process of establishing these networks. Nevertheless, security-wise precautions were not taken in some of them. In this paper, we expose some of the vulnerability that exists in a commonly and widely used network protocol, the Address Resolution Protocol (ARP) protocol. Effectively, we will implement a user friendly and an easy-to-use tool that exploits the weaknesses of this protocol to deceive a victim’s machine and a router through creating a sort of Man-in-the-Middle (MITM) attack. In MITM, all of the data going out or to the victim machine will pass first through the attacker’s machine. This enables the attacker to inspect victim’s data packets, extract valuable data (like passwords) that belong to the victim and manipulate these data packets. We suggest and implement a defense mechanism and tool that counters this attack, warns the user, and exposes some information about the attacker to isolate him. GNU/Linux is chosen as an operating system to implement both the attack and the defense tools. The results show the success of the defense mechanism in detecting the ARP related attacks in a very simple and efficient way. 展开更多
关键词 Address Resolution Protocol ARP Spoofing Security attack and defense Man in the Middle attack
下载PDF
Protecting LLMs against Privacy Attacks While Preserving Utility
13
作者 Gunika Dhingra Saumil Sood +2 位作者 Zeba Mohsin Wase Arshdeep Bahga Vijay K. Madisetti 《Journal of Information Security》 2024年第4期448-473,共26页
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Infor... The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. This inadvertent leakage of sensitive information typically occurs when the models are subjected to black-box attacks. To address the growing concerns of safeguarding private and sensitive information while simultaneously preserving its utility, we analyze the performance of Targeted Catastrophic Forgetting (TCF). TCF involves preserving targeted pieces of sensitive information within datasets through an iterative pipeline which significantly reduces the likelihood of such information being leaked or reproduced by the model during black-box attacks, such as the autocompletion attack in our case. The experiments conducted using TCF evidently demonstrate its capability to reduce the extraction of PII while still preserving the context and utility of the target application. 展开更多
关键词 Large Language Models PII Leakage PRIVACY Memorization Membership Inference attack (MIA) defenseS Generative Adversarial Networks (GANs) Synthetic Data
下载PDF
Research on Cyberspace Attack and Defense Confrontation Technology
14
作者 Chengjun ZHOU 《International Journal of Technology Management》 2015年第3期11-14,共4页
This paper analyzes the characteristics of Interact space and confrontation, discussed on the main technology of network space attack and defense confrontation. The paper presents the realization scheme of network spa... This paper analyzes the characteristics of Interact space and confrontation, discussed on the main technology of network space attack and defense confrontation. The paper presents the realization scheme of network space attack defense confrontation system, and analyzes its feasibility. The technology and the system can provide technical support for the system in the network space of our country development, and safeguard security of network space in China, promote the development of the network space security industry of China, it plays an important role and significance to speed up China' s independent controllable security products development. 展开更多
关键词 Intrusion prevention system attack and defense confrontation attack tracing Active defense
下载PDF
An Overview of Adversarial Attacks and Defenses
15
作者 Kai Chen Jinwei Wang Jiawei Zhang 《Journal of Information Hiding and Privacy Protection》 2022年第1期15-24,共10页
In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classificat... In recent years,machine learning has become more and more popular,especially the continuous development of deep learning technology,which has brought great revolutions to many fields.In tasks such as image classification,natural language processing,information hiding,multimedia synthesis,and so on,the performance of deep learning has far exceeded the traditional algorithms.However,researchers found that although deep learning can train an accurate model through a large amount of data to complete various tasks,the model is vulnerable to the example which is modified artificially.This technology is called adversarial attacks,while the examples are called adversarial examples.The existence of adversarial attacks poses a great threat to the security of the neural network.Based on the brief introduction of the concept and causes of adversarial example,this paper analyzes the main ideas of adversarial attacks,studies the representative classical adversarial attack methods and the detection and defense methods. 展开更多
关键词 Deep learning adversarial example adversarial attacks adversarial defenses
下载PDF
Discussion and Research on Information Security Attack and Defense Platform Construction in Universities Based on Cloud Computing and Virtualization
16
作者 Xiancheng Ding 《Journal of Information Security》 2016年第5期297-303,共7页
This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical fra... This paper puts forward the plan on constructing information security attack and defense platform based on cloud computing and virtualization, provides the hardware topology structure of the platform and technical framework of the system and the experimental process and technical principle of the platform. The experiment platform can provide more than 20 attack classes. Using the virtualization technology can build hypothesized target of various types in the laboratory and diversified network structure to carry out attack and defense experiment. 展开更多
关键词 Information Security Network attack and defense VIRTUALIZATION Experiment Platform
下载PDF
A Defense Planning Model for a Power System Against Coordinated Cyber-physical Attack
17
作者 Peiyun Li Jian Fu +5 位作者 Kaigui Xie Bo Hu Yu Wang Changzheng Shao Yue Sun Wei Huang 《Protection and Control of Modern Power Systems》 SCIE EI 2024年第5期84-95,共12页
This paper proposes a tri-level defense planning model to defend a power system against a coor-dinated cyber-physical attack(CCPA).The defense plan considers not only the standalone physical attack or the cyber attack... This paper proposes a tri-level defense planning model to defend a power system against a coor-dinated cyber-physical attack(CCPA).The defense plan considers not only the standalone physical attack or the cyber attack,but also coordinated attacks.The defense strategy adopts coordinated generation and transmission expansion planning to defend against the attacks.In the process of modeling,the upper-level plan represents the perspective of the planner,aiming to minimize the critical load shedding of the planning system after the attack.The load resources available to planners are extended to flex-ible loads and critical loads.The middle-level plan is from the viewpoint of the attacker,and aims at generating an optimal CCPA scheme in the light of the planning strategy determined by the upper-level plan to maximize the load shedding caused by the attack.The optimal operational behavior of the operator is described by the lower-level plan,which minimizes the load shedding by defending against the CCPA.The tri-level model is analyzed by the column and constraint generation algorithm,which de-composes the defense model into a master problem and subproblem.Case studies on a modified IEEE RTS-79 system are performed to demonstrate the economic effi-ciency of the proposed model.Index Terms—Coordinated cyber-physical attack,flexible load,column-and-constraint generation,defense planning,robust optimization. 展开更多
关键词 Coordinated cyber-physical attack flexible load column-and-constraint generation defense planning robust optimization.
原文传递
Risk Assessment and Defense Resource Allocation of Cyber-physical Distribution Systems Under Denial-of-service Attacks
18
作者 Han Qin Jiaming Weng +2 位作者 Dong Liu Donglian Qi Yufei Wang 《CSEE Journal of Power and Energy Systems》 SCIE EI CSCD 2024年第5期2197-2207,共11页
With the help of advanced information technology,real-time monitoring and control levels of cyber-physical distribution systems(CPDS)have been significantly improved.However due to the deep integration of cyber and ph... With the help of advanced information technology,real-time monitoring and control levels of cyber-physical distribution systems(CPDS)have been significantly improved.However due to the deep integration of cyber and physical systems,attackers could still threaten the stable operation of CPDS by launching cyber-attacks,such as denial-of-service(DoS)attacks.Thus,it is necessary to study the CPDS risk assessment and defense resource allocation methods under DoS attacks.This paper analyzes the impact of DoS attacks on the physical system based on the CPDS fault self-healing control.Then,considering attacker and defender strategies and attack damage,a CPDS risk assessment framework is established.Furthermore,risk assessment and defense resource allocation methods,based on the Stackelberg dynamic game model,are proposed under conditions in which the cyber and physical systems are launched simultaneously.Finally,a simulation based on an actual CPDS is performed,and the calculation results verify the effectiveness of the algorithm. 展开更多
关键词 Cyber physical distribution system defense resource allocation denial-of-service attack risk assessment Stackelberg dynamic game model
原文传递
Towards the universal defense for query-based audio adversarial attacks on speech recognition system
19
作者 Feng Guo Zheng Sun +1 位作者 Yuxuan Chen Lei Ju 《Cybersecurity》 EI CSCD 2024年第1期53-70,共18页
Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pos... Recently,studies show that deep learning-based automatic speech recognition(ASR)systems are vulnerable to adversarial examples(AEs),which add a small amount of noise to the original audio examples.These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices.The existing defense methods are either limited in application or only defend on results,but not on process.In this work,we propose a novel method to infer the adversary intent and discover audio adversarial examples based on the AEs generation process.The insight of this method is based on the observation:many existing audio AE attacks utilize query-based methods,which means the adversary must send continuous and similar queries to target ASR models during the audio AE generation process.Inspired by this observation,We propose a memory mechanism by adopting audio fingerprint technology to analyze the similarity of the current query with a certain length of memory query.Thus,we can identify when a sequence of queries appears to be suspectable to generate audio AEs.Through extensive evaluation on four state-of-the-art audio AE attacks,we demonstrate that on average our defense identify the adversary’s intent with over 90%accuracy.With careful regard for robustness evaluations,we also analyze our proposed defense and its strength to withstand two adaptive attacks.Finally,our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining. 展开更多
关键词 Adversarial attacks defense Memory mechanism Query-based
原文传递
DLP:towards active defense against backdoor attacks with decoupled learning process
20
作者 Zonghao Ying Bin Wu 《Cybersecurity》 EI CSCD 2024年第1期122-134,共13页
Deep learning models are well known to be susceptible to backdoor attack,where the attacker only needs to provide a tampered dataset on which the triggers are injected.Models trained on the dataset will passively impl... Deep learning models are well known to be susceptible to backdoor attack,where the attacker only needs to provide a tampered dataset on which the triggers are injected.Models trained on the dataset will passively implant the backdoor,and triggers on the input can mislead the models during testing.Our study shows that the model shows different learning behaviors in clean and poisoned subsets during training.Based on this observation,we propose a general training pipeline to defend against backdoor attacks actively.Benign models can be trained from the unreli-able dataset by decoupling the learning process into three stages,i.e.,supervised learning,active unlearning,and active semi-supervised fine-tuning.The effectiveness of our approach has been shown in numerous experiments across various backdoor attacks and datasets. 展开更多
关键词 Deep learning Backdoor attack Active defense
原文传递
上一页 1 2 83 下一页 到第
使用帮助 返回顶部