Journal Articles
24,404 articles found
1. Distributed Platooning Control of Automated Vehicles Subject to Replay Attacks Based on Proportional Integral Observers (Cited by: 1)
Authors: Meiling Xie, Derui Ding, Xiaohua Ge, Qing-Long Han, Hongli Dong, Yan Song. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 9, pp. 1954-1966 (13 pages).
Secure platooning control plays an important role in enhancing the cooperative driving safety of automated vehicles subject to various security vulnerabilities. This paper focuses on the distributed secure control issue of automated vehicles affected by replay attacks. A proportional-integral observer (PIO) with predetermined forgetting parameters is first constructed to acquire the dynamical information of the vehicles. Then, a time-varying parameter and two positive scalars are employed to describe the temporal behavior of replay attacks. In light of such a scheme and the common properties of Laplacian matrices, the closed-loop system with PIO-based controllers is transformed into a switched, time-delayed one. Furthermore, sufficient conditions are derived to achieve the desired platooning performance from the perspective of Lyapunov stability theory. The controller gains are analytically determined by solving certain matrix inequalities that depend only on the maximum and minimum eigenvalues of the communication topologies. Finally, a simulation example is provided to illustrate the effectiveness of the proposed control strategy.
Keywords: automated vehicles; platooning control; proportional-integral observers (PIOs); replay attacks; time delays
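As a rough illustration of the observer structure named in this entry, the sketch below simulates a discrete-time proportional-integral observer with a forgetting factor on the integral term. The vehicle model (A, B, C), the gains Lp and Li, and the forgetting factor are hand-picked placeholders, not the LMI-derived gains or the platoon model from the paper.

```python
import numpy as np

# Hypothetical longitudinal vehicle model: x = [position error, velocity error]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # discrete-time state matrix (dt = 0.1 s, assumed)
B = np.array([[0.0], [0.1]])      # control input matrix
C = np.array([[1.0, 0.0]])        # only the position error is measured

Lp = np.array([[0.8], [0.5]])     # proportional observer gain (assumed)
Li = np.array([[0.2], [0.1]])     # integral observer gain (assumed)
forgetting = 0.95                 # forgetting factor on the integral term (assumed)

def pio_step(x_hat, z, u, y):
    """One proportional-integral-observer update.

    x_hat : current state estimate
    z     : integral of the output estimation error
    u     : control input applied to the vehicle
    y     : measured output (in the attack scenario, possibly a replayed value)
    """
    innovation = y - C @ x_hat                    # output estimation error
    x_hat_next = A @ x_hat + B @ u + Lp @ innovation + Li @ z
    z_next = forgetting * z + innovation          # integral action with forgetting
    return x_hat_next, z_next

# Usage: propagate the observer alongside the true system (no attack injected here)
x, x_hat, z = np.array([[1.0], [0.0]]), np.zeros((2, 1)), np.zeros((1, 1))
for k in range(50):
    u = -0.5 * x_hat[1:2]                         # simple velocity-error feedback
    y = C @ x                                     # clean measurement
    x = A @ x + B @ u
    x_hat, z = pio_step(x_hat, z, u, y)
print("final estimation error:", np.linalg.norm(x - x_hat))
```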
2. Ensuring Secure Platooning of Constrained Intelligent and Connected Vehicles Against Byzantine Attacks: A Distributed MPC Framework (Cited by: 1)
Authors: Henglai Wei, Hui Zhang, Kamal Al-Haddad, Yang Shi. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 35-46 (12 pages).
This study investigates resilient platoon control for constrained intelligent and connected vehicles (ICVs) against F-local Byzantine attacks. We introduce a resilient distributed model-predictive platooning control framework for such ICVs. This framework seamlessly integrates the predesigned optimal control with distributed model predictive control (DMPC) optimization and introduces a distributed attack detector to ensure the reliability of the information transmitted among vehicles. Notably, our strategy uses previously broadcast information and a specialized convex set, termed the "resilience set", to identify unreliable data. This approach significantly eases graph-robustness prerequisites, requiring only an (F+1)-robust graph, in contrast to the established mean-subsequence-reduced algorithms, which require at least a (2F+1)-robust graph. Additionally, we introduce a verification algorithm to restore trust in vehicles under minor attacks, further relaxing the robustness required of the communication network. Our analysis demonstrates the recursive feasibility of the DMPC optimization. Furthermore, the proposed method achieves excellent control performance by minimizing the discrepancies between the DMPC control inputs and the predesigned platoon control inputs, while ensuring constraint compliance and cybersecurity. Simulation results verify the effectiveness of our theoretical findings.
Keywords: model predictive control; resilient control; platoon control; intelligent and connected vehicles; Byzantine attacks
3. Phishing Attacks Detection Using Ensemble Machine Learning Algorithms
Authors: Nisreen Innab, Ahmed Abdelgader Fadol Osman, Mohammed Awad Mohammed Ataelfadiel, Marwan Abu-Zanona, Bassam Mohammad Elzaghmouri, Farah H. Zawaideh, Mouiad Fadeil Alawneh. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1325-1345 (21 pages).
Phishing, an Internet fraud in which individuals are deceived into revealing critical personal and account information, poses a significant risk to both consumers and web-based institutions. Data indicate a persistent rise in phishing attacks. Moreover, these fraudulent schemes are becoming progressively more intricate, rendering them harder to identify. Hence, it is imperative to utilize sophisticated algorithms to address this issue. Machine learning (ML) is a highly effective approach for identifying and uncovering these harmful behaviors, since ML approaches can identify characteristics common to most phishing attacks. In this paper, we propose an ensemble approach and compare it with six machine learning techniques to determine whether a website is phishing or legitimate, based on two phishing datasets. A normalization technique is applied to the datasets to transform the range of all features into the same range. The findings for all algorithms on the first dataset, reported as accuracy, precision, recall, and F1-score respectively, are: Decision Tree (DT) (0.964, 0.961, 0.976, 0.968), Random Forest (RF) (0.970, 0.964, 0.984, 0.974), Gradient Boosting (GB) (0.960, 0.959, 0.971, 0.965), XGBoost (XGB) (0.973, 0.976, 0.976, 0.976), AdaBoost (0.934, 0.934, 0.950, 0.942), Multilayer Perceptron (MLP) (0.970, 0.971, 0.976, 0.974), and Voting (0.978, 0.975, 0.987, 0.981). The Voting classifier therefore gave the best results. On the second dataset, all algorithms achieved the same results across the four evaluation metrics, indicating that each of them can effectively accomplish the prediction task. This approach also outperformed previous work in detecting phishing websites, with higher accuracy, a lower false negative rate, a shorter prediction time, and a lower false positive rate.
Keywords: social engineering attacks; phishing attacks; machine learning; security; artificial intelligence
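The sketch below illustrates the kind of soft-voting ensemble this entry compares against individual classifiers, using scikit-learn. The synthetic dataset, the subset of base learners, and all hyperparameters are placeholder assumptions, not the authors' configuration or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder data: rows are websites, columns are URL/content features, label 1 = phishing
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Normalize all features into the same range, then vote over several base learners
ensemble = make_pipeline(
    MinMaxScaler(),
    VotingClassifier(
        estimators=[
            ("dt", DecisionTreeClassifier(random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
            ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
        ],
        voting="soft",  # average the predicted class probabilities
    ),
)
ensemble.fit(X_train, y_train)
print(classification_report(y_test, ensemble.predict(X_test), digits=3))
```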
4. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study
Authors: Shahad Alzahrani, Hatim Alsuwat, Emad Alsuwat. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 1635-1654 (20 pages).
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We apply this methodology to four different forms of data poisoning attack, aiming to strengthen the security and dependability of Bayesian network structure learning techniques such as the PC algorithm, and we offer workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, and explores the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks in the context of streaming data.
Keywords: Bayesian networks; data poisoning attacks; latent variables; structure learning algorithms; adversarial attacks
5. Novel cyber-physical collaborative detection and localization method against dynamic load altering attacks in smart energy grids
Authors: Xinyu Wang, Xiangjie Wang, Xiaoyuan Luo, Xinping Guan, Shuzheng Wang. Global Energy Interconnection (EI, CSCD), 2024, Issue 3, pp. 362-376 (15 pages).
Owing to the integration of energy digitization and artificial intelligence technology, smart energy grids can realize the stable, efficient and clean operation of power systems. However, the emergence of cyber-physical attacks, such as dynamic load-altering attacks (DLAAs), has introduced great challenges to the security of smart energy grids. Thus, this study developed a novel cyber-physical collaborative security framework for DLAAs in smart energy grids. The proposed framework integrates attack prediction in the cyber layer with the detection and localization of attacks in the physical layer. First, a data-driven method was proposed to predict the DLAA sequence in the cyber layer. By designing a double radial basis function network, the influence of disturbances on attack prediction can be eliminated. Based on the prediction results, an unknown-input-observer-based detection and localization method was further developed for the physical layer. In addition, an adaptive threshold was designed to replace the traditional precomputed threshold and improve the detection performance for DLAAs. Consequently, through the collaborative work of the cyber and physical layers, injected DLAAs were effectively detected and located. Simulation results on the IEEE 14-bus and 118-bus power systems verified the superiority of the proposed cyber-physical collaborative detection and localization against DLAAs compared with existing methodologies.
Keywords: smart energy grids; cyber-physical systems; dynamic load-altering attacks; attack prediction; detection and localization
6. Attention-Guided Sparse Adversarial Attacks with Gradient Dropout
Authors: ZHAO Hongzhi, HAO Lingguang, HAO Kuangrong, WEI Bing, LIU Xiaoyan. Journal of Donghua University (English Edition) (CAS), 2024, Issue 5, pp. 545-556 (12 pages).
Deep neural networks are extremely vulnerable to intentionally generated adversarial examples, which are produced by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations to every pixel of the original image with the same weight, resulting in redundant noise in the adversarial examples and making them easier to detect. With this in mind, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated into existing gradient-based methods, is introduced to minimize both the intensity and the scale of perturbations while preserving the effectiveness of the adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated using a soft mask-refined attention mechanism, and the perturbation of pixels with smaller influence is limited to restrict the scale of the perturbation. Thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset show that the proposed strategy significantly diminishes the superfluous noise in adversarial examples while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, integrating the strategy reduces the average level of noise injected into images by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the method can substantially raise the attack success rate while introducing only a slight degree of perturbation.
Keywords: deep neural network; adversarial attack; sparse adversarial attack; adversarial transferability; adversarial example
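A toy NumPy illustration of the gradient-dropout idea described above: a random fraction of gradient entries is discarded before an FGSM-style step, so only the surviving coordinates are perturbed. The input, the gradient, and the dropout rate are stand-ins, and the paper's attention-guided masking phase is not shown.

```python
import numpy as np

def fgsm_step_with_gradient_dropout(image, grad, epsilon=8 / 255, drop_rate=0.5, rng=None):
    """One FGSM-style step where a random fraction of gradient entries is discarded.

    image     : clean input in [0, 1], any shape
    grad      : gradient of the loss w.r.t. the input (same shape), obtained elsewhere
    epsilon   : per-pixel perturbation budget
    drop_rate : fraction of gradient entries to zero out (the 'gradient dropout')
    """
    rng = rng or np.random.default_rng()
    keep_mask = (rng.random(grad.shape) >= drop_rate).astype(grad.dtype)
    sparse_grad = grad * keep_mask                 # discard part of the gradient information
    adv = image + epsilon * np.sign(sparse_grad)   # perturb only where the gradient was kept
    return np.clip(adv, 0.0, 1.0)

# Usage with a fake gradient; in practice grad comes from backpropagating the model's loss
image = np.random.default_rng(1).random((3, 32, 32))
grad = np.random.default_rng(2).normal(size=(3, 32, 32))
adv = fgsm_step_with_gradient_dropout(image, grad)
print("fraction of pixels actually perturbed:", float(np.mean(adv != image)))
```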
7. Explainable AI-Based DDoS Attacks Classification Using Deep Transfer Learning
Authors: Ahmad Alzu'bi, Amjad Albashayreh, Abdelrahman Abuarqoub, Mai A. M. Alfawair. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3785-3802 (18 pages).
In the era of the Internet of Things (IoT), the proliferation of connected devices has raised security concerns, increasing the risk of intrusions into diverse systems. Despite the convenience and efficiency offered by IoT technology, the growing number of IoT devices escalates the likelihood of attacks, emphasizing the need for robust security tools that automatically detect and explain threats. This paper introduces a deep learning methodology for detecting and classifying distributed denial-of-service (DDoS) attacks, addressing a significant security concern within IoT environments. An effective deep transfer learning procedure is applied to leverage deep learning backbones, which are then evaluated on two benchmark DDoS attack datasets in terms of accuracy and time complexity. Using several deep architectures, the study conducts thorough binary and multiclass experiments, each varying in the complexity of the attack types being classified and reflecting real-world scenarios. Additionally, this study employs an explainable artificial intelligence (XAI) technique to elucidate the contribution of the extracted features to the attack detection process. The experimental results demonstrate the effectiveness of the proposed method, with the XAI bidirectional long short-term memory (XAI-BiLSTM) model achieving a recall of 99.39%.
Keywords: DDoS attack classification; deep learning; explainable AI; cybersecurity
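A brief Keras sketch of a bidirectional LSTM classifier of the general type this entry evaluates for DDoS detection, assuming flow records arranged as short feature sequences. The layer sizes, data shapes, and training settings are assumptions, and the transfer-learning and XAI steps are omitted.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 1000 flows, each described as a sequence of 10 time steps x 8 features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10, 8)).astype("float32")
y = (X[:, :, 0].mean(axis=1) > 0).astype("int32")   # fake binary label: attack vs. benign

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 8)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),  # read the flow sequence in both directions
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),            # probability that the flow is a DDoS attack
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="recall")])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, recall] on the placeholder data
```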
8. Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning
Authors: Yongsheng Zhu, Chong Liu, Chunlei Chen, Xiaoting Lyu, Zheng Chen, Bin Wang, Fuqiang Hu, Hanxi Li, Jiao Dai, Baigen Cai, Wei Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 11, pp. 1305-1325 (21 pages).
The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. However, despite its privacy benefits, federated learning systems are vulnerable to poisoning attacks, where adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially compromising the global model's accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack method that perturbs model updates layer by layer, demonstrating the threat that poisoning attacks pose to federated learning. Extensive experiments across three distinct datasets have demonstrated PMM's ability to significantly reduce the global model's accuracy. Additionally, we propose an effective defense method, namely CLBL (Cluster Layer By Layer). Experimental results on the three datasets have confirmed CLBL's effectiveness.
Keywords: privacy preserving; intelligent railway transportation systems; federated learning; poisoning attacks; defenses
9. Rethinking multi-spatial information for transferable adversarial attacks on speaker recognition systems
Authors: Junjian Zhang, Hao Tan, Le Wang, Yaguan Qian, Zhaoquan Gu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 3, pp. 620-631 (12 pages).
Adversarial attacks have been posing significant security concerns to intelligent systems, such as speaker recognition systems (SRSs). Most attacks assume that the neural networks in the systems are known beforehand, while black-box attacks are proposed without such information to match practical situations. Existing black-box attacks improve transferability by integrating multiple models or training on multiple datasets, but these methods are costly. Motivated by an optimisation strategy that exploits spatial information on the perturbed paths and samples, we propose a Dual Spatial Momentum Iterative Fast Gradient Sign Method (DS-MI-FGSM) to improve the transferability of black-box attacks against SRSs. Specifically, DS-MI-FGSM needs only a single data sample and one model as input; by extending to the neighbouring spaces of the data and the model, it generates adversarial examples against the integrated models. To reduce the risk of overfitting, DS-MI-FGSM also introduces gradient masking to improve transferability. The authors conduct extensive experiments on the speaker recognition task, and the results demonstrate the effectiveness of their method, which can achieve up to a 92% attack success rate on the victim model in black-box scenarios with only one known model.
Keywords: speaker recognition; spoofing attacks
10. Mitigating Blackhole and Greyhole Routing Attacks in Vehicular Ad Hoc Networks Using Blockchain Based Smart Contracts
Authors: Abdulatif Alabdulatif, Mada Alharbi, Abir Mchergui, Tarek Moulahi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 2, pp. 2005-2021 (17 pages).
The rapid increase in vehicle traffic volume in modern societies has raised the need to develop innovative solutions to reduce traffic congestion and enhance traffic management efficiency. Revolutionary advanced technology, such as Intelligent Transportation Systems (ITS), enables improved traffic management, helps eliminate congestion, and supports a safer environment. ITS provides real-time information on vehicle traffic and transportation systems that can improve decision-making for road users. However, ITS suffers from routing issues at the network layer when utilising Vehicular Ad Hoc Networks (VANETs). Because each vehicle plays the role of a router in this network, the vehicle communication network is complex, causing issues such as repeated link breakages between vehicles resulting from the mobility of the network and rapid topological variation. This may lead to loss or delay in packet transmissions, a weakness that can be exploited in routing attacks, such as blackhole and greyhole attacks, which threaten the availability of ITS services. In this paper, a blockchain-based smart contract model is proposed to offer convenient and comprehensive security mechanisms, enhancing the trustworthiness between vehicles. Self-Classification Blockchain-Based Contracts (SCBC) and Voting-Classification Blockchain-Based Contracts (VCBC) are utilised in the proposed protocol. The results show that VCBC attains better PDR and TP performance, even in the presence of blackhole and greyhole attacks.
Keywords: blockchain; data privacy; machine learning; routing attacks; smart contract; VANET
11. Anti-Byzantine Attacks Enabled Vehicle Selection for Asynchronous Federated Learning in Vehicular Edge Computing
Authors: Zhang Cui, Xu Xiao, Wu Qiong, Fan Pingyi, Fan Qiang, Zhu Huiling, Wang Jiangzhou. China Communications (SCIE, CSCD), 2024, Issue 8, pp. 1-17 (17 pages).
In vehicular edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because the vehicles differ in their amounts of local data, computing capabilities and locations, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also be affected by Byzantine attacks, leading to the deterioration of its data. However, based on deep reinforcement learning (DRL), we can consider these factors comprehensively to eliminate poorly performing vehicles as far as possible and to exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, during AFL aggregation, we can focus on the vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account each vehicle's mobility, time-varying channel conditions, time-varying computational resources, data amount, and transmission channel status, as well as Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
Keywords: asynchronous federated learning; Byzantine attacks; vehicle selection; vehicular edge computing
12. A Security Trade-Off Scheme of Anomaly Detection System in IoT to Defend against Data-Tampering Attacks
Authors: Bing Liu, Zhe Zhang, Shengrong Hu, Song Sun, Dapeng Liu, Zhenyu Qiu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4049-4069 (21 pages).
The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which impacts the normal operation of the IoT. To mitigate the impact of high ADS false positive rates, this paper proposes an ADS management scheme for clustered IoT. First, we model data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. Given the high false positive rates of the ADSs, to handle the trade-off between the security and the availability of data, we develop a linear programming model referred to as the security trade-off (ST) model. Next, we develop an analysis framework for the ST model and solve it on an IoT simulation platform. Last, through theoretical analysis we reveal the effect of several factors on the maximum combined detection rate. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by high false positive rates in the ADSs.
Keywords: network security; Internet of Things; data-tampering attack; anomaly detection
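A hedged sketch of a linear program in the spirit of the security trade-off described above: running probabilities for the ADSs are chosen to maximize expected detection while the expected false-positive load stays within an availability budget. The variables, coefficients, and constraint form are illustrative assumptions, not the paper's ST model.

```python
import numpy as np
from scipy.optimize import linprog

# Three IoT devices, each with one ADS; p[i] is the probability that ADS i is running.
detect_rate = np.array([0.90, 0.80, 0.70])      # detection rate of each ADS (assumed)
false_pos   = np.array([0.20, 0.10, 0.05])      # false-positive rate of each ADS (assumed)
fp_budget   = 0.15                               # max tolerated expected false-positive load (assumed)

# Maximize sum(detect_rate * p)  <=>  minimize -detect_rate @ p
c = -detect_rate
# Constraint: expected false-positive load stays within the availability budget
A_ub = [false_pos]
b_ub = [fp_budget]
bounds = [(0.0, 1.0)] * 3                        # each entry is a probability

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("running probabilities:", res.x)
print("expected combined detection:", float(detect_rate @ res.x))
```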
13. Recurrent Transient Ischemic Attacks Revealing Cerebral Amyloid Angiopathy: A Comprehensive Case
Authors: Kenza Khelfaoui Tredano, Houyam Tibar, Kaoutar El Alaoui Taoussi, Wafae Regragui, Abdeljalil El Quessar, Ali Benomar. World Journal of Neuroscience (CAS), 2024, Issue 1, pp. 33-36 (4 pages).
This case report investigates the manifestation of cerebral amyloid angiopathy (CAA) through recurrent transient ischemic attacks (TIAs) in an 82-year-old patient. Despite initial diagnostic complexities, cerebral angiography-MRI revealed features indicative of CAA. Symptomatic treatment resulted in improvement, but the patient later developed a fatal hematoma. The discussion navigates the intricate therapeutic landscape of repetitive TIAs in the elderly with cardiovascular risk factors, emphasizing the pivotal role of cerebral MRI and meticulous bleeding-risk management. The conclusion stresses the importance of incorporating SWI sequences, specifically when suspecting a cardioembolic TIA, as a diagnostic measure to explore and exclude CAA in the differential diagnosis. This case report provides valuable insights into these challenges, highlighting the need to consider CAA in relevant cases.
Keywords: cerebral amyloid angiopathy; transient ischemic attacks; recurrent hemiparesis; susceptibility-weighted imaging; cardioembolic origin; bleeding risk management; differential diagnosis
14. Adversarial attacks and defenses for digital communication signals identification
Authors: Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 3, pp. 756-764 (9 pages).
As modern communication technology advances apace, digital communication signal identification plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of the electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models: well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that adversarial attacks are a real threat to digital communication signal identification models, a point that deserves more attention in future research.
Keywords: digital communication signals identification; AI model; adversarial attacks; adversarial defenses; adversarial indicators
15. Distributed Fault Estimation for Nonlinear Systems With Sensor Saturation and Deception Attacks Using Stochastic Communication Protocols
Authors: Weiwei Sun, Xinci Gao, Lusong Ding, Xiangyu Chen. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 8, pp. 1865-1876 (12 pages).
This paper addresses the distributed fault estimation issue associated with the potential loss of actuator efficiency for a class of discrete-time nonlinear systems with sensor saturation. In the distributed estimation structure under consideration, an estimation center is not necessary, and each estimator derives its information from itself and from neighboring nodes, fusing the state vector and the measurement vector. In an effort to reduce data conflicts in the communication network, the stochastic communication protocol (SCP) is employed to select the output signals from the sensors. Additionally, a recursive secure estimator scheme is created, since attackers randomly inject malicious signals into the selected data. On this basis, sufficient conditions for a fault estimator with less conservatism are presented, ensuring an upper bound on the estimation error covariance and the mean-square exponential boundedness of the estimation error. Finally, a numerical example is used to show the reliability and effectiveness of the considered distributed estimation algorithm.
Keywords: actuator fault; deception attacks; distributed estimation; sensor saturation; stochastic communication protocol (SCP)
16. Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, Issue 5, pp. 421-447 (27 pages).
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of the attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
Keywords: large language models; PII leakage; privacy; memorization; overfitting; membership inference attack (MIA)
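As an illustration of the simplest member of the attack family mentioned above, the sketch below implements a loss-threshold membership inference attack, assuming black-box access to per-example loss values. The losses, threshold, and calibration are placeholders rather than any setup from the paper.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' (1) when the model's loss on a text is below a threshold.

    Intuition: memorized training examples tend to receive lower loss / perplexity
    than unseen examples, so even a simple threshold leaks membership information.
    """
    return (np.asarray(losses) < threshold).astype(int)

# Placeholder losses; in practice these would come from querying the target LLM
rng = np.random.default_rng(0)
member_losses     = rng.normal(loc=1.8, scale=0.4, size=500)   # seen during fine-tuning
non_member_losses = rng.normal(loc=2.6, scale=0.4, size=500)   # never seen

losses = np.concatenate([member_losses, non_member_losses])
labels = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])

# A real attack would calibrate the threshold on held-out data; here we pick a midpoint
threshold = 2.2
preds = loss_threshold_mia(losses, threshold)
accuracy = float(np.mean(preds == labels))
print(f"membership inference accuracy: {accuracy:.3f}")
```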
17. Security Concerns with IoT Routing: A Review of Attacks, Countermeasures, and Future Prospects
Authors: Ali M. A. Abuagoub. Advances in Internet of Things, 2024, Issue 4, pp. 67-98 (32 pages).
Today's Internet of Things (IoT) application domains are widely distributed, which exposes them to several security risks and assaults, especially when data is being transferred between endpoints with constrained resources and the backbone network. Numerous researchers have put a lot of effort into addressing routing protocol security vulnerabilities, particularly regarding IoT RPL-based networks. Despite multiple studies on the security of IoT routing protocols, routing attacks remain a major focus of ongoing research in IoT contexts. This paper examines the different types of routing attacks, how they affect Internet of Things networks, and how to mitigate them. It then provides an overview of recently published work on routing threats, primarily focusing on countermeasures, highlighting noteworthy security contributions, and drawing conclusions. Consequently, it achieves the study's main objectives by summarizing intriguing current research trends in IoT routing security, pointing out knowledge gaps in this field, and suggesting directions and recommendations for future research on IoT routing security.
Keywords: IoT routing attacks; RPL security; resource attacks; topology attacks; traffic attacks
18. A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks
Authors: Hong Huang, Yunfei Wang, Guotao Yuan, Xin Li. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 361-387 (27 pages).
Deep neural networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image classification tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges: the distinct and abnormal data patterns of malicious samples and the meticulous data screening by developers hinder practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach restricts the direction of perturbations and normalizes abnormal pixel values, ensuring that perturbations progress as much as possible in a direction perpendicular to the decision hyperplane in linear problems. This limits anomalies within the perturbations, improves their visual stealthiness, and makes them more challenging for defense methods to detect. To verify the effectiveness, stealthiness, and robustness of GN-TUAP, we propose a comprehensive threat model. Based on this model, extensive experiments were conducted using the CIFAR-10, CIFAR-100, GTSRB, and MNIST datasets, comparing our method with existing state-of-the-art attack methods. We also tested our perturbation triggers against various defense methods and further examined the robustness of the triggers against noise-filtering techniques. The experimental outcomes demonstrate that backdoor attacks leveraging perturbations generated by our algorithm exhibit cross-model attack effectiveness and superior stealthiness. Furthermore, they possess robust anti-detection capabilities and maintain commendable performance when subjected to noise-filtering methods.
Keywords: image classification model; backdoor attack; Gaussian distribution; artificial intelligence (AI) security
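The sketch below shows the generic backdoor-poisoning setup this entry targets: a fixed trigger drawn from clipped Gaussian noise is stamped onto a small fraction of training images whose labels are flipped to an attacker-chosen class. The trigger here is plain random noise, not the GN-TUAP optimization proposed in the paper.

```python
import numpy as np

def make_gaussian_trigger(shape, std=0.04, budget=8 / 255, seed=0):
    """Build a fixed universal trigger from Gaussian noise, clipped to a small budget."""
    rng = np.random.default_rng(seed)
    trigger = rng.normal(loc=0.0, scale=std, size=shape)
    return np.clip(trigger, -budget, budget)

def poison_dataset(images, labels, trigger, target_class, poison_rate=0.05, seed=0):
    """Stamp the trigger onto a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)   # add the trigger, keep pixels valid
    labels[idx] = target_class                               # flip labels to the attacker's class
    return images, labels, idx

# Placeholder CIFAR-like data in [0, 1]
rng = np.random.default_rng(1)
X = rng.random((1000, 32, 32, 3))
y = rng.integers(0, 10, size=1000)

trigger = make_gaussian_trigger((32, 32, 3))
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, trigger, target_class=0)
print("poisoned samples:", len(poisoned_idx), "max pixel change:", float(np.abs(trigger).max()))
```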
19. A Probabilistic Trust Model and Control Algorithm to Protect 6G Networks against Malicious Data Injection Attacks in Edge Computing Environments
Authors: Borja Bordel Sánchez, Ramón Alcarria, Tomás Robles. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 631-654 (24 pages).
Future 6G communications are envisioned to enable a large catalogue of pioneering applications. These will range from networked cyber-physical systems to edge computing devices, establishing real-time feedback control loops critical for managing Industry 5.0 deployments, digital agriculture systems, and essential infrastructures. The provision of extensive machine-type communications through 6G will render many of these innovative systems autonomous and unsupervised. While full automation will enhance industrial efficiency significantly, it concurrently introduces new cyber risks and vulnerabilities. In particular, unattended systems are highly susceptible to trust issues: malicious nodes and false information can be easily introduced into control loops. Additionally, denial-of-service attacks can be executed by inundating the network with valueless noise. Current anomaly detection schemes require the control software to be entirely transformed to integrate new steps, and they can only mitigate anomalies that conform to predefined mathematical models. Solutions based on exhaustive data collection to detect anomalies are precise but extremely slow. Standard models, with their limited understanding of mobile networks, achieve precision rates no higher than 75%. Therefore, more general and transversal protection mechanisms are needed to detect malicious behaviors transparently. This paper introduces a probabilistic trust model and control algorithm designed to address this gap. The model determines the probability that any node is trustworthy, and communication channels are pruned for nodes whose probability falls below a given threshold. The trust control algorithm comprises three primary phases, which feed the model with three different probabilities that are then weighted and combined. Initially, anomalous nodes are identified using Gaussian mixture models and clustering technologies. Next, traffic patterns are studied using digital Bessel functions and the functional scalar product. Finally, the coherence and content of the information are analyzed; noise content and abnormal information sequences are detected using a Volterra filter and a bank of finite impulse response filters. An experimental validation based on simulation tools and environments was carried out. Results show the proposed solution can successfully detect up to 92% of malicious data injection attacks.
Keywords: 6G networks; noise injection attacks; Gaussian mixture model; Bessel function; traffic filter; Volterra filter
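A brief scikit-learn sketch of the first phase outlined above: flagging anomalous nodes with a Gaussian mixture model fitted to their traffic features. The features, number of components, and pruning rule are assumptions, and the later Bessel-function and Volterra-filter phases are not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder per-node traffic features: [mean packet rate, mean payload entropy]
rng = np.random.default_rng(0)
benign = rng.normal(loc=[10.0, 0.8], scale=[1.0, 0.05], size=(200, 2))
malicious = rng.normal(loc=[40.0, 0.2], scale=[5.0, 0.05], size=(10, 2))  # noise injectors
features = np.vstack([benign, malicious])

# Fit a mixture to the observed behaviour and score each node's likelihood under it
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
log_density = gmm.score_samples(features)

# Prune the least-likely nodes (assumed rule: bottom 5% of the density distribution)
threshold = np.percentile(log_density, 5)
untrusted_nodes = np.where(log_density < threshold)[0]
print("nodes flagged as untrusted:", untrusted_nodes)
```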
20. A Novel Intrusion Detection Model of Unknown Attacks Using Convolutional Neural Networks
Authors: Abdullah Alsaleh. Computer Systems Science & Engineering, 2024, Issue 2, pp. 431-449 (19 pages).
With the increasing number of connected devices in the Internet of Things (IoT) era, the number of intrusions is also increasing. An intrusion detection system (IDS) is a secondary intelligent system for monitoring, detecting and alerting against malicious activity, and it is important in developing advanced security models. This study reviews the importance of various techniques, tools, and methods used in IoT detection and/or prevention systems, focusing specifically on machine learning (ML) and deep learning (DL) techniques for IDS. This paper proposes an accurate intrusion detection model to detect traditional and new attacks on the Internet of Vehicles. To speed up the detection of recent attacks, the proposed network architecture developed at the data processing layer incorporates a convolutional neural network (CNN), which performs better than a support vector machine (SVM). The processed data are enhanced using the synthetic minority oversampling technique to ensure learning accuracy. The nearest class mean classifier is applied during the testing phase to identify new attacks. Experimental results using the AWID dataset, one of the most widely used open intrusion detection datasets, revealed a higher detection accuracy (94%) compared to SVM and random forest methods.
Keywords: Internet of Vehicles; intrusion detection; machine learning; unknown attacks; data processing layer
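A condensed sketch of two data-processing ideas named in this entry: balancing minority attack classes with SMOTE and assigning test samples with a nearest-class-mean classifier. The synthetic data and the scikit-learn/imbalanced-learn stand-ins below are assumptions; the paper's CNN pipeline and the AWID dataset are not reproduced.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder imbalanced traffic data: class 0 = normal (majority), 1/2 = attack types (minority)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (900, 20)),
               rng.normal(2, 1, (60, 20)),
               rng.normal(-2, 1, (40, 20))])
y = np.array([0] * 900 + [1] * 60 + [2] * 40)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training set with synthetic minority samples before learning
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Nearest class mean: each class is represented by its centroid; new samples go to the closest one
ncm = NearestCentroid().fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_test, ncm.predict(X_test)))
```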