Journal Articles
426 articles found
1. Enhancing Healthcare Data Security and Disease Detection Using Crossover-Based Multilayer Perceptron in Smart Healthcare Systems
Authors: Mustufa Haider Abidi, Hisham Alkhalefah, Mohamed K. Aboudaif. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 977-997 (21 pages)
Healthcare data require accurate disease detection, real-time monitoring, and continual advancement to ensure proper treatment for patients. Consequently, machine learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous and high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are employed across domains that are susceptible to adversarial attacks, necessitating careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. The collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on the medical records of patients. Once an attack is detected, healthcare professionals are promptly alerted to prevent data leakage. The paper utilizes two datasets, a synthetic dataset and the University of Queensland Vital Signs (UQVS) dataset, from which numerous samples are collected. Experiments are conducted to evaluate the performance of the proposed CMLP model in predicting patient activities, using performance measures such as recall, precision, accuracy, and F1-score. Compared with existing approaches, the proposed method achieves the highest accuracy, precision, recall, and F1-score: specifically, a precision of 93%, an accuracy of 97%, an F1-score of 92%, and a recall of 92%.
Keywords: smart healthcare systems; multilayer perceptron; cybersecurity; adversarial attack detection; Healthcare 4.0
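The abstract does not specify the CMLP architecture or its crossover operator, so the following is only a minimal NumPy sketch of the general idea: MLP weight vectors evolved by uniform crossover and selected by adversarial-record detection accuracy. The toy data, layer sizes, population settings, and mutation scale are all assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, HIDDEN = 8, 16  # assumed feature count and hidden width

def mlp_forward(w, X):
    """One-hidden-layer MLP whose weights are packed into the flat vector w."""
    W1 = w[: N_IN * HIDDEN].reshape(N_IN, HIDDEN)
    W2 = w[N_IN * HIDDEN :].reshape(HIDDEN, 1)
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # probability that a record is attacked

def fitness(w, X, y):
    """Detection accuracy of one candidate weight vector."""
    return ((mlp_forward(w, X).ravel() > 0.5).astype(int) == y).mean()

def crossover(a, b):
    """Uniform crossover: each weight inherited from either parent, plus light mutation."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b) + rng.normal(0.0, 0.05, a.shape)

# Toy records: label 1 marks an adversarially perturbed medical record.
X = rng.normal(size=(200, N_IN))
y = (X.sum(axis=1) + rng.normal(0, 0.3, 200) > 0).astype(int)

dim = N_IN * HIDDEN + HIDDEN
pop = [rng.normal(0, 0.5, dim) for _ in range(30)]
for _ in range(50):  # evolve: keep elites, refill the population by crossover
    pop.sort(key=lambda w: -fitness(w, X, y))
    elite = pop[:10]
    pop = elite + [crossover(elite[rng.integers(10)], elite[rng.integers(10)])
                   for _ in range(20)]

best = max(pop, key=lambda w: fitness(w, X, y))
print("best detection accuracy:", fitness(best, X, y))
```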
2. Adversarial Training-Aided Time-Varying Channel Prediction for TDD/FDD Systems (Cited by 2)
Authors: Zhen Zhang, Yuxiang Zhang, Jianhua Zhang, Feifei Gao. China Communications (SCIE, CSCD), 2023, Issue 6, pp. 100-115 (16 pages)
In this paper, a time-varying channel prediction method based on a conditional generative adversarial network (CPcGAN) is proposed for time division duplexing/frequency division duplexing (TDD/FDD) systems. CPcGAN utilizes a discriminator to calculate the divergence between the predicted downlink channel state information (CSI) and the real sample distributions under a conditional constraint, namely the previous uplink CSI. The generator of CPcGAN learns the functional relationship between the conditional constraint and the predicted downlink CSI and reduces the divergence between predicted and real CSI. The ability of CPcGAN to fit the data distribution allows it to capture the time-varying and multipath characteristics of the channel well. Considering the propagation characteristics of real channels, we further develop a channel prediction error indicator to determine whether the generator has reached its best state. Simulations show that CPcGAN obtains higher prediction accuracy and a lower system bit error rate than existing methods at the same user speeds.
Keywords: channel prediction; time-varying channel; conditional generative adversarial network; multipath channel; deep learning
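A minimal PyTorch sketch of the conditional-GAN structure described above: the generator maps the previous uplink CSI (the condition) plus noise to a predicted downlink CSI, and the discriminator scores (uplink, downlink) pairs to measure the divergence. The CSI dimension, network widths, and hyperparameters are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

CSI_DIM, NOISE_DIM = 128, 32  # assumed flattened CSI size and noise size

class Generator(nn.Module):
    """Maps previous uplink CSI (the condition) plus noise to downlink CSI."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CSI_DIM + NOISE_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, CSI_DIM))
    def forward(self, uplink, z):
        return self.net(torch.cat([uplink, z], dim=1))

class Discriminator(nn.Module):
    """Scores whether a downlink CSI sample is real, given the uplink condition."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CSI_DIM * 2, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, uplink, downlink):
        return self.net(torch.cat([uplink, downlink], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

uplink = torch.randn(16, CSI_DIM)     # stand-in for measured uplink CSI
downlink = torch.randn(16, CSI_DIM)   # stand-in for the true downlink CSI

# Discriminator step: separate real pairs from generated pairs.
fake = G(uplink, torch.randn(16, NOISE_DIM)).detach()
loss_d = bce(D(uplink, downlink), torch.ones(16, 1)) + \
         bce(D(uplink, fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: reduce the divergence the discriminator measures.
loss_g = bce(D(uplink, G(uplink, torch.randn(16, NOISE_DIM))), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```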
3. VeriFace: Defending against Adversarial Attacks in Face Verification Systems
Authors: Awny Sayed, Sohair Kinlany, Alaa Zaki, Ahmed Mahfouz. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3151-3166 (16 pages)
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, identifies whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, removes these perturbations from the input image so that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Keywords: adversarial attacks; face verification; adversarial detection; perturbation removal
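A PyTorch sketch of the detect-then-purify pipeline the abstract outlines; the small CNN detector, the residual-denoising remover, and the placeholder verifier are illustrative stand-ins rather than the actual VeriFace networks.

```python
import torch
import torch.nn as nn

class AdvDetector(nn.Module):
    """Binary classifier: has this face image been adversarially perturbed?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class AdvRemover(nn.Module):
    """Denoising network that strips small perturbations before verification."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return torch.clamp(x + self.net(x), 0, 1)  # predict a residual correction

def verify(image, detector, remover, face_verifier, threshold=0.0):
    """Two-stage defense: detect first, purify only when flagged, then verify."""
    if detector(image).item() > threshold:
        image = remover(image)
    return face_verifier(image)

detector, remover = AdvDetector(), AdvRemover()
face_verifier = lambda img: img.mean()   # placeholder for a real embedding model
print(verify(torch.rand(1, 3, 112, 112), detector, remover, face_verifier))
```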
4. DeepGAN: Privacy Preserving of Healthcare System Using DL
Authors: Sultan Mesfer Aldossary. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 8, pp. 2199-2212 (14 pages)
Encrypting sensitive information in a medical image within a healthcare system still demands a high level of computing complexity, despite the ongoing development of cryptography. A review of previous research makes clear that the security issues need further investigation, as there is room for expansion in this field. Recently, neural networks have emerged as a cost-effective optimization strategy for providing image security. However, such implementations are computationally expensive and do not handle the wide variety of attacks that may be mounted on images. The primary objective of the described system is to demonstrate a framework in which deep neural networks improve the efficiency of basic encryption techniques. Our research has led to an enhanced version of existing image encryption methods: a generative adversarial network (GAN) serves as the learning network that generates the private key, and the transformation domain, which reflects the unique form of the private key to be created, guides the learning network through the key generation procedure. This scheme can train an excellent deep neural network (DNN) model while simultaneously maintaining the confidentiality of the training medical images. The proposed approach, DeepGAN, was tested on three open-source medical datasets: the Ultrasonic Brachial Plexus, the Montgomery County Chest X-ray, and BraTS18. The findings indicate that the approach succeeds in maintaining both performance and privacy, and the evaluation and security analysis suggest that suitable generation technologies can produce private keys with a high level of security.
Keywords: healthcare; cryptography; deep learning; adversarial network; privacy
5. Adversarial Examples Protect Your Privacy on Speech Enhancement System
Authors: Mingyu Dong, Diqun Yan, Rangding Wang. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 1-12 (12 pages)
Speech is easily leaked imperceptibly. When people use their phones, the personal voice assistant is constantly listening and waiting to be activated. Private content in speech may be maliciously extracted through automatic speech recognition (ASR) technology by some applications on phone devices. To guarantee that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement technology has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail. This vulnerability of DNNs can, in turn, be used to protect the privacy of speech. In this work, we propose an adversarial method to degrade speech enhancement systems, which can prevent the malicious extraction of private information from speech. Experimental results show that, after enhancement, the generated adversarial examples have most of the target speech content removed, or replaced with attacker-chosen target content. The word error rate (WER) between the recognition results of the enhanced original example and the enhanced adversarial example can reach 89.0%. For targeted attacks, the WER between the enhanced adversarial example and the target example is as low as 33.75%. The adversarial perturbation brings much more change to the enhanced output than to the example itself: the ratio of the difference between the two enhanced examples to the adversarial perturbation can exceed 1.4430. Meanwhile, the transferability of the method between different speech enhancement models is also investigated. Its low transferability can be used to ensure that the content in the adversarial example is not damaged, so the useful information can still be extracted by a friendly ASR. This work can prevent the malicious extraction of speech.
Keywords: adversarial example; speech enhancement; privacy protection; deep neural network
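A PyTorch sketch of the underlying idea, assuming an untargeted variant: a small L-infinity-bounded perturbation is optimized, PGD-style, so that the enhancement model's output departs as far as possible from the clean speech. The stand-in enhancer, loss, and step sizes are assumptions; the paper's targeted attack would instead minimize a distance to the chosen target content.

```python
import torch
import torch.nn as nn

# Stand-in enhancer; the real target would be a trained DNN speech denoiser.
enhancer = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
                         nn.Conv1d(16, 1, 9, padding=4))

def degrade_enhancement(speech, steps=40, eps=0.005, alpha=5e-4):
    """Find a small perturbation that maximizes the distortion the enhancer
    introduces, so its output no longer carries the private content."""
    delta = torch.zeros_like(speech, requires_grad=True)
    for _ in range(steps):
        distortion = ((enhancer(speech + delta) - speech) ** 2).mean()
        (-distortion).backward()          # minimizing the negative maximizes distortion
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)       # keep the perturbation imperceptible
        delta.grad.zero_()
    return (speech + delta).detach()

speech = torch.randn(1, 1, 16000)          # one second at 16 kHz (placeholder)
adv = degrade_enhancement(speech)
print("L-inf perturbation:", (adv - speech).abs().max().item())
```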
6. Virtual sample generation for model-based prognostics and health management of on-board high-speed train control system
Authors: Jiang Liu, Baigen Cair, Jinlan Wang, Jian Wang. High-Speed Railway, 2023, Issue 3, pp. 153-161 (9 pages)
In view of class imbalance in data-driven modeling for Prognostics and Health Management (PHM), existing classification methods may fail to generate effective fault prediction models for on-board high-speed train control equipment. A virtual sample generation solution based on a Generative Adversarial Network (GAN) is proposed to overcome this shortcoming. To augment the sample classes affected by the imbalanced data problem, the GAN-based virtual sample generation strategy is embedded into the establishment of the fault prediction models. Under the PHM framework of the on-board train control system, the virtual sample generation principle and the detailed procedures are presented. With the enhanced class-balancing mechanism and the designed sample augmentation logic, the PHM scheme for on-board train control equipment adapts well to varying data conditions and can effectively predict fault probability and life-cycle status. Practical data from a specific type of on-board train control system are employed to validate the presented solution. The comparative results indicate that GAN-based sample augmentation achieves a desirable sample balance and enhances the performance of the derived fault prediction models for Condition-based Maintenance (CBM) operations.
Keywords: high-speed railway; prognostics and health management; train control; virtual sample; generative adversarial network
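A minimal PyTorch sketch of GAN-based virtual sample generation for the minority (fault) class; the feature dimension, network sizes, and placeholder fault vectors are illustrative assumptions rather than the paper's on-board PHM setup.

```python
import torch
import torch.nn as nn

# Placeholder minority-class (fault) feature vectors from on-board records.
fault_samples = torch.randn(40, 12)

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 12))
D = nn.Sequential(nn.Linear(12, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(500):
    fake = G(torch.randn(40, 8))
    # Discriminator: separate real fault samples from virtual ones.
    loss_d = bce(D(fault_samples), torch.ones(40, 1)) + \
             bce(D(fake.detach()), torch.zeros(40, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: produce virtual samples the discriminator accepts as real.
    loss_g = bce(D(fake), torch.ones(40, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Augment the fault class until it balances the majority class.
virtual = G(torch.randn(200, 8)).detach()
balanced_fault_set = torch.cat([fault_samples, virtual])
print(balanced_fault_set.shape)  # 240 fault samples instead of 40
```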
7. Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection
Authors: Chengsheng Yuan, Baojie Cui, Zhili Zhou, Xinting Li, Qingming Jonathan Wu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 899-914 (16 pages)
In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image that lead the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, perturbations added to the blank areas of a fingerprint image are easily perceived by the human eye, leading to poor visual quality. In response to these challenges, this paper proposes a novel adversarial attack method based on local adaptive gradient variance for DFFD. The ridge texture area within the fingerprint image is identified and designated as the region for perturbation generation. Subsequently, the images are fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method based on stochastic gradient ascent is proposed to explore the parameter values during adversarial example generation, aiming to maximize attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and the perturbation is harder to perceive.
Keywords: FLD; adversarial attacks; adversarial examples; gradient optimization; transferability
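A PyTorch sketch of the two ingredients named above: perturbation confined to a ridge-texture mask, and a gradient-variance correction in the spirit of variance-tuned iterative attacks. The intensity-threshold mask, the one-step variance estimate, and all hyperparameters are illustrative assumptions, not the paper's adaptive search.

```python
import torch

def ridge_mask(fingerprint):
    """Stand-in for ridge-texture localization: perturb only the darker
    (ridge) pixels, leaving blank background regions untouched."""
    return (fingerprint < fingerprint.mean()).float()

def masked_variance_attack(model, x, label, eps=8/255, steps=10, n_samples=5, beta=1.5):
    """Iterative attack whose gradient is corrected by the variance of gradients
    sampled around the current point, applied only inside the ridge mask."""
    loss_fn = torch.nn.CrossEntropyLoss()
    mask, alpha = ridge_mask(x), eps / steps
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(adv), label), adv)[0]
        neighbor_grads = []
        for _ in range(n_samples):  # sample gradients in a neighborhood
            nb = (adv + torch.empty_like(adv).uniform_(-beta * eps, beta * eps)).detach()
            nb.requires_grad_(True)
            neighbor_grads.append(torch.autograd.grad(loss_fn(model(nb), label), nb)[0])
        variance = torch.stack(neighbor_grads).mean(0) - grad
        adv = adv.detach() + alpha * (grad + variance).sign() * mask
        adv = (x + torch.clamp(adv - x, -eps, eps)).detach()  # stay in the eps-ball
    return adv

# Demo with a stand-in white-box FLD classifier (live vs. fake).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
x, label = torch.rand(1, 1, 64, 64), torch.tensor([1])
adv = masked_variance_attack(model, x, label)
print("change outside mask:", ((adv - x) * (1 - ridge_mask(x))).abs().sum().item())
```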
8. LDAS&ET-AD: Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation
Authors: Shuyi Li, Hongchao Hu, Xiaohan Yang, Guozhen Cheng, Wenyan Liu, Wei Guo. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2331-2359 (29 pages)
Adversarial distillation (AD) has emerged as a potential solution to the challenging optimization problem of adversarial-training loss with hard labels. However, fixed sample-agnostic and student-egocentric attack strategies are unsuitable for distillation, and the reliability of guidance from static teachers diminishes as target models become more robust. This paper proposes an AD method called Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation (LDAS&ET-AD). First, a learnable mechanism for generating distillation attack strategies is developed to automatically produce sample-dependent attack strategies tailored for distillation. A strategy model is introduced to produce attack strategies that enable adversarial examples (AEs) to be created in areas where the target model significantly diverges from the teachers, by competing with the target model in minimizing or maximizing the AD loss. Second, a teacher evolution strategy is introduced to enhance the reliability and effectiveness of the teachers' knowledge in improving the target model's generalization performance. By evaluating the experimentally updated target model's validation performance on both clean samples and AEs, the impact of distillation from each training sample and AE on the target model's generalization and robustness is assessed and serves as feedback to fine-tune the standard and robust teachers accordingly. Experiments evaluate LDAS&ET-AD against different adversarial attacks on the CIFAR-10 and CIFAR-100 datasets. The proposed method achieves a robust precision of 45.39% and 42.63% against AutoAttack (AA) on CIFAR-10 for ResNet-18 and MobileNet-V2, respectively, an improvement of 2.31% and 3.49% over the baseline method. Compared with state-of-the-art adversarial defense techniques, our method surpasses Introspective Adversarial Distillation, the top-performing method in terms of robustness under AA attack on CIFAR-10, by 1.40% and 1.43% for ResNet-18 and MobileNet-V2, respectively. These findings demonstrate the effectiveness of the proposed method in enhancing the robustness of deep neural networks (DNNs) against prevalent adversarial attacks. In conclusion, LDAS&ET-AD provides reliable and informative soft labels for one of the most promising defense methods, adversarial training, alleviating the limitations of untrusted teachers and unsuitable AEs in existing AD techniques. We hope this paper promotes the development of DNNs in real-world trust-sensitive fields and helps ensure a more secure and dependable future for artificial intelligence systems.
Keywords: adversarial training; adversarial distillation; learnable distillation attack strategies; teacher evolution strategy
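A PyTorch sketch of two building blocks: a standard adversarial-distillation loss (hard-label CE on the AE plus temperature-scaled KL to the teacher) and a simplified stand-in for the strategy idea, crafting AEs where the student diverges most from the teacher. The learnable strategy model and the teacher-evolution feedback loop are not reproduced; all names and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def ad_loss(student_logits_adv, teacher_logits, labels, alpha=0.9, T=4.0):
    """Adversarial distillation objective: temperature-scaled KL to the teacher's
    soft labels plus hard-label CE on the adversarial example."""
    soft = F.kl_div(F.log_softmax(student_logits_adv / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits_adv, labels)
    return alpha * soft + (1 - alpha) * hard

def divergence_attack(student, teacher, x, eps=8/255, steps=7):
    """Craft AEs where the student diverges most from the teacher: a simplified
    stand-in for the paper's learnable strategy model."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        div = F.kl_div(F.log_softmax(student(x + delta), dim=1),
                       F.softmax(teacher(x + delta), dim=1), reduction="batchmean")
        div.backward()
        with torch.no_grad():
            delta += (eps / 4) * delta.grad.sign()  # ascend on the divergence
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# Demo with stand-in CIFAR-10-shaped models.
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = divergence_attack(student, teacher, x)
print(ad_loss(student(x_adv), teacher(x), y).item())
```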
9. Toward Trustworthy Decision-Making for Autonomous Vehicles: A Robust Reinforcement Learning Approach with Safety Guarantees
Authors: Xiangkun He, Wenhui Huang, Chen Lv. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 77-89 (13 pages)
While autonomous vehicles are vital components of intelligent transportation systems, ensuring the trustworthiness of decision-making remains a substantial challenge in realizing autonomous driving. Therefore, we present a novel robust reinforcement learning approach with safety guarantees to attain trustworthy decision-making for autonomous vehicles. The proposed technique ensures decision trustworthiness in terms of policy robustness and collision safety. Specifically, an adversary model is learned online to simulate the worst-case uncertainty by approximating the optimal adversarial perturbations on the observed states and environmental dynamics. In addition, an adversarial robust actor-critic algorithm is developed to enable the agent to learn robust policies against perturbations in observations and dynamics. Moreover, we devise a safety mask to guarantee the collision safety of the autonomous driving agent during both the training and testing processes, using an interpretable knowledge model known as the Responsibility-Sensitive Safety model. Finally, the proposed approach is evaluated through both simulations and experiments. These results indicate that the autonomous driving agent can make trustworthy decisions and drastically reduce the number of collisions through robust safety policies.
Keywords: autonomous vehicle; decision-making; reinforcement learning; adversarial attack; safety guarantee
10. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3535-3563 (29 pages)
Antivirus vendors and the research community employ machine learning (ML) or deep learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that adversarial examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Several experiments on this approach verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against AEs that are included in the training of the classifiers.
Keywords: malware classification; machine learning; adversarial examples; evasion attack; cybersecurity
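A sketch of the similarity-based defense idea, using the python-tlsh package (assumed here to be the TLSH implementation in use): an AE built by perturbing a known malware sample tends to stay TLSH-close to it, so a small distance to any known hash flags a likely evasion attempt. The threshold and sample bytes are illustrative, not the paper's tuned values.

```python
import os
import tlsh  # python-tlsh package

def tlsh_defense(query_binary: bytes, known_hashes, threshold=100):
    """Flag a binary whose TLSH digest is suspiciously close to known malware."""
    query_hash = tlsh.hash(query_binary)          # needs sufficiently long input
    distances = [tlsh.diff(query_hash, h) for h in known_hashes]
    return min(distances) <= threshold, min(distances)

known = os.urandom(1024)                          # stand-in for a known sample
known_hashes = [tlsh.hash(known)]
adversarial = known + b"\x00" * 64                # overlay-append style padding
flagged, dist = tlsh_defense(adversarial, known_hashes)
print(flagged, dist)                              # small distance -> flagged
```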
11. Semi-supervised surface defect detection of wind turbine blades with YOLOv4
Authors: Chao Huang, Minghui Chen, Long Wang. Global Energy Interconnection (EI, CSCD), 2024, Issue 3, pp. 284-292 (9 pages)
Timely inspection of defects on the surfaces of wind turbine blades can effectively prevent unpredictable accidents. To this end, this study proposes a semi-supervised object-detection network based on You Only Look Once version 4 (YOLOv4). A semi-supervised structure comprising a generative adversarial network (GAN) was designed to overcome the difficulty of obtaining sufficient samples and sample labels. In the GAN, the generator is realized by an encoder-decoder network, where the backbone of the encoder is YOLOv4 and the decoder comprises inverse convolutional layers. Partial features from the generator are passed to the defect detection network. Deploying several unlabeled images can significantly improve the generalization and recognition capabilities of defect-detection models. The small-scale object detection capacity of the network is improved by enhancing essential features in the feature map through the addition of a concurrent spatial and channel squeeze and excitation (scSE) attention module to three parts of the YOLOv4 network. A balancing improvement was made to the loss function of YOLOv4 to overcome the class imbalance among the defect species. The results on both single- and multi-category defect datasets show that the improved model makes good use of the features of unlabeled images. Its accuracy in wind turbine blade defect detection also has a significant advantage over classical object detection algorithms, including Faster R-CNN and DETR.
Keywords: defect detection; generative adversarial network; scSE attention; semi-supervision; wind turbine
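The scSE block itself is well established in the literature (concurrent spatial and channel squeeze-and-excitation); a compact PyTorch version follows, with the reduction ratio and the feature-map shape as illustrative choices for where it might sit in the YOLOv4 neck.

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel SE: squeeze spatially, excite per channel.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # Spatial SE: squeeze channels, excite per spatial location.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

# Recalibrate a feature map as it might appear inside the YOLOv4 neck.
feat = torch.randn(2, 256, 52, 52)
print(SCSE(256)(feat).shape)  # torch.Size([2, 256, 52, 52])
```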
12. Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation
Authors: 程晓昱, 解晨雪, 刘宇伦, 白瑞雪, 肖南海, 任琰博, 张喜林, 马惠, 蒋崇云. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 112-117 (6 pages)
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Keywords: two-dimensional materials; deep learning; data augmentation; generative adversarial networks
13. Multi-distortion suppression for neutron radiographic images based on generative adversarial network
Authors: Cheng-Bo Meng, Wang-Wei Zhu, Zhen Zhang, Zi-Tong Wang, Chen-Yi Zhao, Shuang Qiao, Tian Zhang. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 176-188 (13 pages)
Neutron radiography is a crucial nondestructive testing technology widely used in the aerospace, military, and nuclear industries. However, because of the physical limitations of neutron sources and collimators, the resulting neutron radiographic images inevitably exhibit multiple distortions, including noise, geometric unsharpness, and white spots. Furthermore, these distortions are particularly significant in compact neutron radiography systems with low neutron fluxes. Therefore, in this study, we devised a multi-distortion suppression network that employs a modified generative adversarial network to improve the quality of degraded neutron radiographic images. Real neutron radiographic image datasets with various types and levels of distortion were built for the first time as multi-distortion suppression datasets. Thereafter, the coordinate attention mechanism was incorporated into the backbone network to augment the capability of the proposed network to learn the abstract relationship between ideally clear and degraded images. Extensive experiments show that the proposed method can effectively suppress multiple distortions in real neutron radiographic images and achieve state-of-the-art perceptual visual quality, demonstrating its application potential in neutron radiography.
Keywords: neutron radiography; multi-distortion suppression; generative adversarial network; coordinate attention mechanism
14. Physics-Constrained Robustness Enhancement for Tree Ensembles Applied in Smart Grid
Authors: Zhibo Yang, Xiaohan Huang, Bingdong Wang, Bin Hu, Zhenyong Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 3001-3019 (19 pages)
With the widespread use of machine learning (ML) technology, the operational efficiency and responsiveness of power grids have been significantly enhanced, allowing smart grids to achieve high levels of automation and intelligence. However, tree ensemble models commonly used in smart grids are vulnerable to adversarial attacks, making it urgent to enhance their robustness. To address this, we propose a robustness enhancement method that incorporates physical constraints into the node-splitting decisions of tree ensembles. Our algorithm improves robustness by developing a dataset of adversarial examples that comply with physical laws, ensuring training data accurately reflects possible attack scenarios while adhering to physical rules. In our experiments, the proposed method increased robustness against adversarial attacks by 100% when applied to real grid data under physical constraints. These results highlight the advantages of our method in maintaining efficient and secure operation of smart grids under adversarial conditions.
Keywords: tree ensemble; robustness enhancement; adversarial attack; smart grid
15. General multi-attack detection for continuous-variable quantum key distribution with local local oscillator
Authors: 康茁, 刘维琪, 齐锦, 贺晨. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 255-262 (8 pages)
Continuous-variable quantum key distribution with a local local oscillator (LLO CVQKD) has been extensively researched due to its simplicity and security. For the practical security of an LLO CVQKD system, two main attack modes are presently known: the reference pulse attack and the polarization attack. However, there is currently no general defense strategy against such attacks, and the security of the system needs further investigation. Here, we employ a deep learning framework called generative adversarial networks (GANs) to detect both attacks. We first analyze the data in the different cases, derive a feature vector as input to a GAN model, and then show the training and testing process of the GAN model for attack classification. The proposed model has two parts, a discriminator and a generator, both of which employ a convolutional neural network (CNN) to improve accuracy. Simulation results show that the proposed scheme can detect and classify attacks without reducing the secret key rate or the maximum transmission distance. It only establishes a detection model by monitoring features of the pulse, without adding additional devices.
Keywords: CVQKD; generative adversarial network; attack classification
16. MTTSNet: Military time-sensitive targets stealth network via real-time mask generation
Authors: Siyu Wang, Xiaogang Yang, Ruitao Lu, Zhengjie Zhu, Fangjia Lian, Qing-ge Li, Jiwei Fan. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 601-612 (12 pages)
The automatic stealth task for military time-sensitive targets plays a crucial role in maintaining national military security and mastering battlefield dynamics in military applications. We propose a novel Military Time-sensitive Targets Stealth Network via Real-time Mask Generation (MTTSNet). To our knowledge, this is the first technology to automatically remove military targets from videos in real time. The critical steps of MTTSNet are as follows. First, we designed a real-time mask generation network based on the encoder-decoder framework, combined with a domain expansion structure, to effectively extract mask images. Specifically, the ASPP structure in the encoder achieves advanced semantic feature fusion, and the decoder stacks high-dimensional information with low-dimensional information to obtain an effective mask layer. Subsequently, the domain expansion module guides the adaptive expansion of the mask images. Second, a context adversarial generation network based on gated convolution was constructed to restore the background at mask positions in the original image. In addition, our method works in an end-to-end manner. A dedicated semantic segmentation dataset for military time-sensitive targets was constructed, called the Military Time-sensitive Target Masking Dataset (MTMD). Experiments on the MTMD dataset demonstrate that this method can create a mask that completely occludes the target and that the target can be hidden in real time using this mask. We demonstrate the concealment performance of our proposed method by comparing it to a number of well-known and highly optimized baselines.
Keywords: deep learning; military application; targets stealth network; mask generation; generative adversarial network
17. CMAES-WFD: Adversarial Website Fingerprinting Defense Based on Covariance Matrix Adaptation Evolution Strategy
Authors: Di Wang, Yuefei Zhu, Jinlong Fei, Maohua Guo. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2253-2276 (24 pages)
Website fingerprinting (WF) is a traffic analysis attack that enables local eavesdroppers to infer a user's browsing destination, even when the user is on the Tor anonymity network. While advanced attacks based on deep neural networks (DNNs) can perform feature engineering and attain accuracy rates of over 98%, research has demonstrated that DNNs are vulnerable to adversarial samples. As a result, many researchers have explored using adversarial samples as a defense mechanism against DNN-based WF attacks and have achieved considerable success. However, these methods suffer from high bandwidth overhead or require access to the target model, which is unrealistic. This paper proposes CMAES-WFD, a black-box WF defense based on adversarial samples. The process of generating adversarial examples is transformed into a constrained optimization problem solved with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm. Perturbations are injected into local parts of the original traffic to control bandwidth overhead. According to the experimental results, CMAES-WFD significantly decreased the accuracy of Deep Fingerprinting (DF) and VarCNN to below 8.3%, with bandwidth overheads of at most 14.6% and 20.5%, respectively. Notably, for Automated Website Fingerprinting (AWF), which has a simple structure, CMAES-WFD reduced the classification accuracy to only 6.7% with a bandwidth overhead of less than 7.4%. Moreover, CMAES-WFD was demonstrated to be robust against adversarial training to a certain extent.
Keywords: traffic analysis; deep neural network; adversarial sample; Tor; website fingerprinting
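A sketch of the constrained-optimization formulation, assuming the pycma package for CMA-ES: the search minimizes a black-box classifier's confidence on the true site while a penalty keeps bandwidth overhead under a budget. The stand-in confidence function, the trace representation, and the 15% budget are illustrative assumptions.

```python
import numpy as np
import cma  # pycma package

def classifier_confidence(trace):
    """Stand-in for querying the black-box WF classifier (e.g., DF or VarCNN)
    with the perturbed trace and reading its confidence on the true site."""
    return float(np.tanh(np.abs(trace).mean()))

base_trace = np.random.randn(200)          # placeholder traffic features
budget = 0.15 * np.abs(base_trace).sum()   # cap bandwidth overhead at 15%

def objective(pert):
    """Confidence to minimize, plus a penalty once overhead exceeds the budget."""
    overshoot = max(0.0, np.abs(pert).sum() - budget)
    return classifier_confidence(base_trace + pert) + 10.0 * overshoot

es = cma.CMAEvolutionStrategy(np.zeros(200), 0.1, {"maxiter": 30, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [objective(c) for c in candidates])
print("best objective:", objective(es.result.xbest))
```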
18. Quantum generative adversarial networks based on a readout error mitigation method with fault tolerant mechanism
Authors: 赵润盛, 马鸿洋, 程涛, 王爽, 范兴奎. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 285-295 (11 pages)
Readout errors caused by measurement noise are a significant source of errors in quantum circuits; they severely affect output results and are an urgent problem in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate frequent readout errors in quantum generative adversarial networks (QGANs) for image generation. The method simplifies the response matrix structure by averaging the qubits over each random bit-flip in advance, successfully overcoming the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset under the BFA-based method: the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
Keywords: readout errors; quantum generative adversarial networks; bit-flip averaging method; fault tolerant mechanisms
19. Correcting Climate Model Sea Surface Temperature Simulations with Generative Adversarial Networks: Climatology, Interannual Variability, and Extremes
Authors: Ya WANG, Gang HUANG, Baoxiang PAN, Pengfei LIN, Niklas BOERS, Weichen TAO, Yutong CHEN, BO LIU, Haijie LI. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1299-1312 (14 pages)
Climate models are vital for understanding and projecting global climate change and its associated impacts. However, these models suffer from biases that limit their accuracy in historical simulations and the trustworthiness of future projections. Addressing these challenges requires dealing with internal variability, which hinders direct alignment between model simulations and observations and thwarts conventional supervised learning methods. Here, we employ an unsupervised Cycle-consistent Generative Adversarial Network (CycleGAN) to correct daily sea surface temperature (SST) simulations from the Community Earth System Model 2 (CESM2). Our results reveal that the CycleGAN not only corrects climatological biases but also improves the simulation of major dynamic modes, including the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole mode, as well as SST extremes. Notably, it substantially corrects climatological SST biases, decreasing the globally averaged root-mean-square error (RMSE) by 58%. Intriguingly, the CycleGAN effectively addresses the well-known excessive westward bias in ENSO SST anomalies, a common issue in climate models that traditional methods, like quantile mapping, struggle to rectify. Additionally, it substantially improves the simulation of SST extremes, raising the pattern correlation coefficient (PCC) from 0.56 to 0.88 and lowering the RMSE from 0.5 to 0.32. This enhancement is attributed to better representations of interannual, intraseasonal, and synoptic-scale variability. Our study offers a novel approach to correcting global SST simulations and underscores its effectiveness across different time scales and primary dynamical modes.
Keywords: generative adversarial networks; model bias; deep learning; El Niño-Southern Oscillation; marine heatwaves
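A PyTorch sketch of the cycle-consistency constraint that makes unpaired correction possible: internal variability prevents day-by-day pairing of CESM2 and observed SST fields, so each generator is trained to invert the other. The adversarial and identity terms of a full CycleGAN are omitted, and the tiny generators and grid shape are illustrative.

```python
import torch
import torch.nn as nn

# G_mo: model space -> observation space; G_om: the inverse direction.
def make_generator():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_mo, G_om = make_generator(), make_generator()

def cycle_loss(sst_model, sst_obs, lam=10.0):
    """Cycle-consistency: mapping to the other domain and back must recover
    the input, which lets unpaired daily SST fields supervise each other."""
    l1 = nn.L1Loss()
    return lam * (l1(G_om(G_mo(sst_model)), sst_model) +
                  l1(G_mo(G_om(sst_obs)), sst_obs))

sst_model = torch.randn(4, 1, 64, 128)  # daily CESM2 SST fields (placeholder grid)
sst_obs = torch.randn(4, 1, 64, 128)    # observed SST fields, unpaired in time
print(cycle_loss(sst_model, sst_obs).item())
```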
20. Boosting Adversarial Training with Learnable Distribution
Authors: Kai Chen, Jinwei Wang, James Msughter Adeke, Guangjie Liu, Yuewei Dai. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3247-3265 (19 pages)
In recent years, various adversarial defense methods have been proposed to improve the robustness of deep neural networks. Adversarial training is one of the most potent methods of defending against adversarial attacks. However, the difference in feature space between natural and adversarial examples hinders the accuracy and robustness of the model in adversarial training. This paper proposes a learnable-distribution adversarial training method that constructs the same distribution for the training data using a Gaussian mixture model. A distribution centroid is built to classify samples and constrain the distribution of the sample features. The natural and adversarial examples are pushed toward the same distribution centroid to improve the accuracy and robustness of the model. The proposed method generates adversarial examples that close the distribution gap between natural and adversarial examples, through an attack algorithm explicitly designed for adversarial training. This algorithm gradually increases the accuracy and robustness of the model by scaling the perturbation. Finally, the proposed method outputs the predicted labels and the distance between each sample and the distribution centroid. The distribution characteristics of the samples can be used to detect adversarial cases that might otherwise evade the model's defense. The effectiveness of the proposed method is demonstrated through comprehensive experiments.
Keywords: adversarial training; feature space; learnable distribution; distribution centroid
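A PyTorch sketch of the distribution-centroid idea: learnable per-class centroids classify by feature distance, a pull term drives natural and adversarial features toward the same centroid, and the same distance supports AE detection at test time. The paper's Gaussian-mixture formulation is simplified here to one centroid per class; names and weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentroidHead(nn.Module):
    """Learnable per-class centroids in feature space: classification is by
    distance, and the same distance later flags suspicious inputs."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats):
        return -torch.cdist(feats, self.centroids)  # logits = negative distance

def centroid_loss(head, feats_nat, feats_adv, labels, pull=0.1):
    """CE on both views plus a pull term driving natural and adversarial
    features toward the same class centroid."""
    ce = F.cross_entropy(head(feats_nat), labels) + F.cross_entropy(head(feats_adv), labels)
    target = head.centroids[labels]
    pull_term = ((feats_nat - target) ** 2).mean() + ((feats_adv - target) ** 2).mean()
    return ce + pull * pull_term

head = CentroidHead(num_classes=10, feat_dim=64)
feats_nat, feats_adv = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
print(centroid_loss(head, feats_nat, feats_adv, labels).item())

# At test time, a large distance to the nearest centroid flags a likely AE.
nearest = torch.cdist(feats_nat, head.centroids).min(dim=1).values
```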