Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We apply this methodology to address a key weakness of Bayesian networks to data poisoning attacks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we examine the complexity of this area and offer workable methods for identifying and mitigating these covert threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, and explores the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Moreover, the proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of streaming data.
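As a rough illustration of the idea of tracking belief between node pairs over a data stream, the hypothetical sketch below monitors pairwise mutual information between variables in sliding windows and flags windows whose dependence profile shifts abruptly. This is a simplified stand-in for the paper's latent-variable belief measure, not its actual construction; window size and threshold are assumptions.

```python
# Hypothetical sketch: monitor pairwise dependence between (discrete) variables over
# a data stream and flag windows whose dependence profile shifts abruptly. This is a
# simplified proxy for the latent-variable belief measure described in the abstract.
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

def pairwise_dependence(window):
    """Mutual information for every pair of discrete columns in a window."""
    n_vars = window.shape[1]
    return np.array([mutual_info_score(window[:, i], window[:, j])
                     for i, j in combinations(range(n_vars), 2)])

def flag_poisoned_windows(stream, window_size=500, threshold=3.0):
    """Flag windows whose dependence profile deviates from the running baseline."""
    baseline, flags = None, []
    for start in range(0, len(stream) - window_size + 1, window_size):
        profile = pairwise_dependence(stream[start:start + window_size])
        if baseline is None:
            baseline = profile
            flags.append(False)
            continue
        deviation = np.abs(profile - baseline).max() / (baseline.std() + 1e-9)
        flags.append(deviation > threshold)
        if deviation <= threshold:   # fold only clean windows into the baseline
            baseline = 0.9 * baseline + 0.1 * profile
    return flags
```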
With advanced communication technologies, cyber-physical systems such as networked industrial control systems can be monitored and controlled by a remote control center via communication networks. While many benefits can be achieved with such a configuration, it also exposes industrial control systems, such as the networked manipulators widely adopted in industrial automation, to cyber attacks. For such systems, a false data injection attack on a control-center-to-manipulator (CC-M) communication channel is undesirable and degrades manufacturing quality. In this paper, we propose a resilient remote kinematic control method for serial manipulators undergoing a false data injection attack by leveraging the kinematic model. Theoretical analysis shows that the proposed method guarantees asymptotic convergence of the regulation error to zero in the presence of a type of false data injection attack. The efficacy of the proposed method is validated via simulations.
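For context, the sketch below shows only the attack-free baseline of remote kinematic regulation: the resolved-rate law q̇ = J(q)⁺ · k · (x_des − f(q)) for a planar two-link arm. The paper's resilient compensation for a corrupted CC-M channel is not modeled here, and the link lengths and gains are illustrative assumptions.

```python
# Minimal sketch of the nominal (attack-free) kinematic regulation loop for a planar
# two-link arm. The resilient scheme described in the abstract adds compensation for
# false data injected on the CC-M channel, which is omitted here.
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths

def forward_kinematics(q):
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def regulate(q, x_des, k=2.0, dt=0.01, steps=2000):
    for _ in range(steps):
        error = x_des - forward_kinematics(q)
        q = q + dt * np.linalg.pinv(jacobian(q)) @ (k * error)
    return q, error

q_final, e_final = regulate(np.array([0.3, 0.5]), np.array([1.2, 0.6]))
print("final regulation error:", e_final)
```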
This paper investigates the security issue of multisensor remote estimation systems. An optimal stealthy false data injection (FDI) attack scheme based on historical and current residuals, which only tampers with the measurement residuals of partial sensors due to limited attack resources, is proposed to maximally degrade system estimation performance. The attack stealthiness condition is given, and the estimation error covariance in the compromised state is derived to quantify the system performance under attack. The optimal attack strategy is obtained by solving several convex optimization problems that maximize the trace of the compromised estimation error covariance subject to the stealthiness condition. Moreover, due to the constraint of attack resources, a selection principle for the attacked sensor is provided to determine which sensor to attack so as to have the greatest impact on system performance. Finally, simulation results are presented to verify the theoretical analysis.
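The abstract describes convex programs that maximize the trace of the compromised error covariance subject to a stealthiness condition. The sketch below is a heavily simplified stand-in, not the paper's formulation: the injected covariance Delta is bounded by a fraction of the nominal residual covariance as a surrogate stealthiness constraint, and all matrices and the budget alpha are assumed values.

```python
# Illustrative SDP sketch: maximize the trace of the compromised error covariance over
# an injected covariance Delta, subject to a surrogate stealthiness bound (an LMI).
import cvxpy as cp
import numpy as np

n, m, alpha = 3, 2, 0.2
rng = np.random.default_rng(0)
P = np.diag([1.0, 0.5, 0.8])        # nominal estimation error covariance (assumed)
B = rng.standard_normal((n, m))     # channel through which the attack enters (assumed)
Sigma_r = np.diag([0.4, 0.6])       # nominal residual covariance (assumed)

Delta = cp.Variable((m, m), PSD=True)          # covariance added by the attack
compromised_cov = P + B @ Delta @ B.T          # affine in Delta
problem = cp.Problem(cp.Maximize(cp.trace(compromised_cov)),
                     [Delta << alpha * Sigma_r])   # surrogate stealthiness condition
problem.solve()
print("worst-case trace of error covariance:", problem.value)
```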
The recent developments in smart cities pose major security issues for Internet of Things (IoT) devices. These security issues directly result from inappropriate security management protocols and their implementation by IoT gadget developers. Cyber-attackers take advantage of such gadgets' vulnerabilities through various attacks such as injection and Distributed Denial of Service (DDoS) attacks. In this background, Intrusion Detection (ID) is the only way to identify the attacks and mitigate their damage. The recent advancements in Machine Learning (ML) and Deep Learning (DL) models are useful in effectively classifying cyber-attacks. The current research paper introduces a new Coot Optimization Algorithm with a Deep Learning-based False Data Injection Attack Recognition (COADL-FDIAR) model for the IoT environment. The presented COADL-FDIAR technique aims to identify false data injection attacks in the IoT environment. To accomplish this, the COADL-FDIAR model initially preprocesses the input data and selects features with the help of the Chi-square test. To detect and classify false data injection attacks, the Stacked Long Short-Term Memory (SLSTM) model is exploited in this study. Finally, the COA algorithm adjusts the SLSTM model's hyperparameters and accomplishes a superior recognition efficiency. The proposed COADL-FDIAR model was experimentally validated using a standard dataset, and the outcomes were scrutinized under distinct aspects. The comparative analysis results assured the superior performance of the proposed COADL-FDIAR model over other recent approaches, with a maximum accuracy of 98.84%.
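The two core stages named in the abstract, Chi-square feature selection followed by a stacked LSTM classifier, can be sketched as below. The Coot Optimization step that tunes the LSTM hyperparameters is omitted, and the shapes, layer sizes, and synthetic data are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of Chi-square feature selection feeding a stacked LSTM attack classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_pipeline(X, y, k=10):
    # Chi-square test requires non-negative features (e.g., counts or min-max scaled)
    X_sel = SelectKBest(chi2, k=k).fit_transform(X, y)
    X_seq = X_sel.reshape((X_sel.shape[0], 1, k))      # one timestep per sample

    model = Sequential([
        LSTM(64, return_sequences=True, input_shape=(1, k)),  # stacked LSTM layers
        LSTM(32),
        Dense(1, activation="sigmoid"),                        # attack vs. benign
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_seq, y, epochs=10, batch_size=64, verbose=0)
    return model

# Example with synthetic non-negative data
X = np.random.rand(1000, 30)
y = np.random.randint(0, 2, 1000)
model = build_pipeline(X, y)
```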
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious process and heavy computations, we propose a new supervised batch detection method for poisoned data that can rapidly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model accumulates knowledge about poisoned data, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
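The batch-level sanitization step might look roughly like the sketch below: each incoming batch is summarized by simple data-complexity features and a pretrained detector decides whether the batch may enter local training. The specific complexity features and detector used in the paper are not reproduced here; these are illustrative stand-ins.

```python
# Hedged sketch of batch sanitization before local training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def complexity_features(X, y):
    """Cheap per-batch summary: class balance, centroid gap, and feature dispersion."""
    balance = np.mean(y)
    centroid_gap = (np.linalg.norm(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
                    if 0 < balance < 1 else 0.0)
    return np.array([balance, centroid_gap, X.std(axis=0).mean()])

def sanitize_batches(batches, detector):
    """Keep only batches the detector labels as clean (label 0)."""
    clean = []
    for X, y in batches:
        if detector.predict(complexity_features(X, y).reshape(1, -1))[0] == 0:
            clean.append((X, y))
    return clean

# The detector itself would be trained offline on batches labeled clean/poisoned, e.g.:
# detector = RandomForestClassifier().fit(batch_feature_matrix, batch_labels)
```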
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. This inadvertent leakage of sensitive information typically occurs when the models are subjected to black-box attacks. To address the growing concerns of safeguarding private and sensitive information while simultaneously preserving its utility, we analyze the performance of Targeted Catastrophic Forgetting (TCF). TCF involves preserving targeted pieces of sensitive information within datasets through an iterative pipeline which significantly reduces the likelihood of such information being leaked or reproduced by the model during black-box attacks, such as the autocompletion attack in our case. The experiments conducted using TCF evidently demonstrate its capability to reduce the extraction of PII while still preserving the context and utility of the target application.
With the increasing deployment of wireless sensor devices and networks, security becomes a critical challenge for sensor networks. In this paper, a scheme using data mining is proposed for routing anomaly detection in wireless sensor networks. The scheme uses the Apriori algorithm to extract traffic patterns from both the routing table and network traffic packets, and subsequently the K-means clustering algorithm adaptively generates a detection model. Through the combination of these two algorithms, routing attacks can be detected effectively and automatically. The main advantage of the proposed approach is that it is able to detect new attacks that have not previously been seen. Moreover, the proposed detection scheme requires no a priori knowledge and can therefore be applied to a wide range of different sensor networks for a variety of routing attacks.
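A minimal sketch of how these two stages could be combined is shown below: Apriori mines frequent feature patterns from one-hot-encoded traffic records, each record is re-encoded by which frequent patterns it matches, and K-means clusters these encodings so that records far from every centroid can be flagged as routing anomalies. Column names, thresholds, and the encoding are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: Apriori pattern mining followed by K-means based anomaly scoring.
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori
from sklearn.cluster import KMeans

def build_detector(records: pd.DataFrame, min_support=0.3, n_clusters=3):
    patterns = apriori(records.astype(bool), min_support=min_support, use_colnames=True)
    itemsets = [set(s) for s in patterns["itemsets"]]
    # Encode each record by the frequent patterns it fully contains
    encoded = np.array(
        [[items.issubset(set(records.columns[row.values.astype(bool)]))
          for items in itemsets] for _, row in records.iterrows()], dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(encoded)
    return itemsets, km

def is_anomalous(record_encoding, km, radius=1.0):
    """Flag encodings that lie far from every learned cluster centroid."""
    return np.min(np.linalg.norm(km.cluster_centers_ - record_encoding, axis=1)) > radius
```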
Secure control against cyber attacks becomes increasingly significant in cyber-physical systems (CPSs). False data injection attacks are a class of cyber attacks that aim to compromise CPS functions by injecting false data such as sensor measurements and control signals. For quantified false data injection attacks, this paper establishes an effective defense framework from the energy conversion perspective. Then, we design an energy controller to dynamically adjust the system energy changes caused by unknown attacks. The designed energy controller stabilizes the attacked CPSs and ensures the dynamic performance of the system by adjusting the amount of damping injection. Moreover, with the disturbance attenuation technique, the burden of control system design is simplified because there is no need to design an attack observer. In addition, this secure control method is simple to implement because it avoids complicated mathematical operations. The effectiveness of our control method is demonstrated through an industrial CPS that controls a permanent magnet synchronous motor.
Cyber-physical systems (CPS) have been widely deployed in critical infrastructures and are vulnerable to various attacks. Data integrity attacks, which manipulate sensor measurements and cause control systems to fail, are one of the most prominent threats to CPS. Anomaly detection methods have been proposed to secure CPS. However, existing anomaly detection studies usually require expert knowledge (e.g., system model-based approaches) or lack interpretability (e.g., deep learning-based approaches). In this paper, we present DEEPNOISE, a deep learning-based anomaly detection method for CPS with interpretability. Specifically, we utilize sensor and process noise to detect data integrity attacks. Such noise represents the intrinsic characteristics of physical devices and the production process in CPS. One key enabler is a robust deep autoencoder that automatically extracts the noise from measurement data. Further, an LSTM-based detector is designed to inspect the obtained noise and detect anomalies. Data integrity attacks change noise patterns and are thus identified as the root cause of anomalies by DEEPNOISE. Evaluated on the SWaT testbed, DEEPNOISE achieves higher accuracy and recall compared with state-of-the-art model-based and deep learning-based methods. On average, when detecting direct attacks, the precision is 95.47%, the recall is 96.58%, and the F1 score is 95.98%. When detecting stealthy attacks, precision, recall, and F1 scores are between 96% and 99.5%.
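The two-stage idea can be sketched as follows: an autoencoder reconstructs the clean signal so that the residual approximates sensor/process noise, and an LSTM classifier inspects windows of that residual for anomalies. Architectures, layer sizes, and window lengths below are illustrative assumptions, not the DEEPNOISE configuration.

```python
# Hedged sketch: autoencoder-based noise extraction plus an LSTM noise inspector.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

def build_autoencoder(n_features):
    ae = Sequential([
        Dense(16, activation="relu", input_shape=(n_features,)),
        Dense(4, activation="relu"),          # bottleneck
        Dense(16, activation="relu"),
        Dense(n_features),                    # reconstruction of the clean signal
    ])
    ae.compile(optimizer="adam", loss="mse")
    return ae

def extract_noise(ae, X):
    """Residual between measurement and reconstruction, used as the noise signal."""
    return X - ae.predict(X, verbose=0)

def build_noise_detector(window, n_features):
    det = Sequential([
        LSTM(32, input_shape=(window, n_features)),
        Dense(1, activation="sigmoid"),       # anomalous noise window or not
    ])
    det.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return det
```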
Machine Learning (ML) systems often involve a re-training process to make better predictions and classifications. This re-training process creates a loophole and poses a security threat for ML systems. Adversaries leverage this loophole and design data poisoning attacks against ML systems. Data poisoning attacks are a type of attack in which an adversary manipulates the training dataset to degrade the ML system's performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we propose DISTINIT, the first proactive data poisoning attack detection framework using distance measures. We found that Jaccard Distance (JD) can be used in DISTINIT (among other distance measures), and we improved the JD to attain an Optimized JD (OJD) with lower time and space complexity. Our security analysis shows that DISTINIT is secure against data poisoning attacks by considering key features of adversarial attacks. We conclude that the proposed OJD-based DISTINIT is effective and efficient against data poisoning attacks where in-time detection is critical for IoT applications with large volumes of streaming data.
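For intuition, a plain Jaccard-distance check over streaming batches could look like the sketch below: each batch is discretized into a set of (feature, bin) tokens and compared against a clean reference set, and a distance above the threshold flags a potentially poisoned batch. The optimized variant (OJD) from the paper is not reproduced here, and the binning and threshold are assumptions.

```python
# Simple sketch of Jaccard-distance based in-stream poisoning detection.
import numpy as np

def batch_to_tokens(X, n_bins=10):
    """Discretize a [0, 1)-scaled batch into a set of (feature_index, bin) tokens."""
    binned = np.floor(np.clip(X, 0.0, 0.999) * n_bins).astype(int)
    return {(j, b) for row in binned for j, b in enumerate(row)}

def jaccard_distance(a: set, b: set) -> float:
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def is_suspicious(batch, reference_tokens, threshold=0.4):
    return jaccard_distance(batch_to_tokens(batch), reference_tokens) > threshold

reference = batch_to_tokens(np.random.rand(500, 8))
print(is_suspicious(np.random.rand(100, 8), reference))
```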
With the increasing prevalence of social networks, more and more social network data are published for many applications, such as social network analysis and data mining. However, this brings privacy problems. For example, adversaries can easily obtain sensitive information about some individuals with little background knowledge. How to publish social network data for analysis purposes while preserving the privacy of individuals has raised many concerns. Many algorithms have been proposed to address this issue. In this paper, we discuss this privacy problem from two aspects: attack models and countermeasures. We analyse privacy concerns, model the background knowledge that an adversary may utilize, and review recently developed attack models. We then survey the state-of-the-art privacy preserving methods in two categories: anonymization methods and differential privacy methods. We also provide research directions in this area.
The Internet of Things (IoT) paradigm enables end users to access networking services amongst diverse kinds of electronic devices. IoT security mechanisms are technologies that concentrate on safeguarding the devices and networks connected in the IoT environment. In recent years, False Data Injection Attacks (FDIAs) have gained considerable interest in the IoT environment. Cybercriminals compromise the devices connected to the network and inject false data. Such attacks on the IoT environment can result in considerable loss and interrupt normal activities among the IoT network devices. So far, FDI attacks have mainly been addressed by conventional threat detection techniques. The current research article develops a Hybrid Deep Learning model to Combat Sophisticated False Data Injection Attacks (HDL-FDIAD) for the IoT environment. The presented HDL-FDIAD model mainly recognizes the presence of FDI attacks in the IoT environment. The HDL-FDIAD model exploits the Equilibrium Optimizer-based Feature Selection (EO-FS) technique to select the optimal subset of features. Moreover, the Long Short-Term Memory with Recurrent Neural Network (LSTM-RNN) model is utilized for classification. Finally, the Bayesian Optimization (BO) algorithm is employed as a hyperparameter optimizer in this study. To validate the enhanced performance of the HDL-FDIAD model, a wide range of simulations was conducted, and the results were investigated in detail. A comparative study was conducted between the proposed model and existing models. The outcomes revealed that the proposed HDL-FDIAD model is superior to the other models.
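The hyperparameter-tuning stage can be sketched under stated assumptions: Bayesian optimization (here via scikit-optimize's gp_minimize) searches over LSTM units and learning rate, scoring each candidate by validation loss. The EO-FS feature-selection stage and the paper's full LSTM-RNN architecture are omitted; the search space, data shapes, and epoch counts are illustrative.

```python
# Sketch of Bayesian-optimization hyperparameter tuning for an LSTM attack classifier.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

X = np.random.rand(600, 5, 12)        # (samples, timesteps, selected features), synthetic
y = np.random.randint(0, 2, 600)

def validation_loss(params):
    units, lr = params
    model = Sequential([LSTM(int(units), input_shape=(5, 12)),
                        Dense(1, activation="sigmoid")])
    model.compile(optimizer=Adam(learning_rate=lr), loss="binary_crossentropy")
    history = model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
    return history.history["val_loss"][-1]

result = gp_minimize(
    validation_loss,
    [Integer(16, 128, name="units"), Real(1e-4, 1e-2, prior="log-uniform", name="lr")],
    n_calls=12, random_state=0)
print("best (units, lr):", result.x)
```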
Detecting cyber-attacks undoubtedly has become a big data problem. This paper presents a tutorial on data mining based cyber-attack detection. First, a data-driven defence framework is presented in terms of cyber security situational awareness. Then, the process of data mining based cyber-attack detection is discussed. Next, a multi-loop learning architecture is presented for data mining based cyber-attack detection. Finally, common data mining techniques for cyber-attack detection are discussed.
This study aimed to develop a predictive model utilizing available data to forecast the risk of future shark attacks, making this critical information accessible for everyday public use. Employing a deep learning/neural network methodology, the system was designed to produce a binary output that is subsequently classified into categories of low, medium, or high risk. A significant challenge encountered during the study was the identification and procurement of appropriate historical and forecasted marine weather data, which is integral to the model’s accuracy. Despite these challenges, the results of the study were startlingly optimistic, showcasing the model’s ability to predict with impressive accuracy. In conclusion, the developed forecasting tool not only offers promise in its immediate application but also sets a robust precedent for the adoption and adaptation of similar predictive systems in various analogous use cases in the marine environment and beyond.
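The post-processing step described above, bucketing the network's binary output probability into low, medium, or high risk, might look like the small sketch below; the 0.33/0.66 cut-offs are illustrative assumptions rather than the study's thresholds.

```python
# Sketch: map a sigmoid output probability to a coarse risk category.
def risk_category(p: float) -> str:
    if p < 0.33:
        return "low"
    if p < 0.66:
        return "medium"
    return "high"

print(risk_category(0.72))  # -> "high"
```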
Detecting sophisticated cyberattacks, mainly Distributed Denial of Service (DDoS) attacks, with unexpected patterns remains challenging in modern networks. Traditional detection systems often struggle to mitigate such attacks in conventional and software-defined networking (SDN) environments. While Machine Learning (ML) models can distinguish between benign and malicious traffic, their limited feature scope hinders the detection of new zero-day or low-rate DDoS attacks and requires frequent retraining. In this paper, we propose a novel DDoS detection framework that combines ML and Ensemble Learning (EL) techniques to improve DDoS attack detection and mitigation in SDN environments. Our model leverages the “DDoS SDN” dataset for training and evaluation and employs a dynamic feature selection mechanism that enhances detection accuracy by focusing on the most relevant features. This adaptive approach addresses the limitations of conventional ML models and provides more accurate detection of various DDoS attack scenarios. Our proposed ensemble model introduces an additional layer of detection, increasing reliability through the application of ensemble techniques. The proposed solution significantly enhances the model’s ability to identify and respond to dynamic threats in SDNs and provides a strong foundation for proactive DDoS detection and mitigation, strengthening network defenses against evolving threats. Our comprehensive runtime analysis of Simultaneous Multi-Threading (SMT) on identical configurations shows superior accuracy and efficiency with significantly reduced computational time, making the approach suitable for real-time DDoS detection in dynamic, rapidly changing SDNs. Experimental results demonstrate that our model achieves outstanding performance, outperforming traditional algorithms with 99% accuracy using Random Forest (RF) and K-Nearest Neighbors (KNN) and 98% accuracy using XGBoost.
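A hedged sketch of the ensemble layer is given below: a soft-voting combination of Random Forest, K-Nearest Neighbors, and XGBoost over the selected flow features. Soft voting, the hyperparameters, and the synthetic train/test split are assumptions; the paper's dynamic feature-selection step is assumed to have already produced X and y.

```python
# Sketch of an RF + KNN + XGBoost voting ensemble for DDoS flow classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X = np.random.rand(2000, 15)        # placeholder for selected SDN flow features
y = np.random.randint(0, 2, 2000)   # benign (0) vs DDoS (1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("xgb", XGBClassifier(eval_metric="logloss"))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```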
The cyberspace has simultaneously presented opportunities and challenges alike for personal data security and privacy, as well as for the process of research and learning. Moreover, information such as academic data, research data, personal data, proprietary knowledge, complex equipment designs, and blueprints for yet-to-be-patented products has become extremely susceptible to cybersecurity attacks. This research investigates factors that may influence the perceived ease of use of Cybersecurity, the influence of perceived ease of use on the attitude towards using Cybersecurity, the influence of attitude towards using Cybersecurity on the actual use of Cybersecurity, and the influence of job position on perceived ease of use of Cybersecurity, on the attitude towards using Cybersecurity, and on the actual use of Cybersecurity. A model was constructed to investigate eight hypotheses related to the investigation. An online questionnaire was constructed to collect data, and the results showed that the influences in hypotheses 1 to 7 were significant. However, hypothesis 8 turned out to be insignificant, and no influence was found between job position and the actual use of Cybersecurity.