Journal Articles: 444 articles found
1. Correcting Climate Model Sea Surface Temperature Simulations with Generative Adversarial Networks: Climatology, Interannual Variability, and Extremes (Cited: 2)
Authors: Ya WANG, Gang HUANG, Baoxiang PAN, Pengfei LIN, Niklas BOERS, Weichen TAO, Yutong CHEN, BO LIU, Haijie LI. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1299-1312 (14 pages).
Climate models are vital for understanding and projecting global climate change and its associated impacts. However, these models suffer from biases that limit their accuracy in historical simulations and the trustworthiness of future projections. Addressing these challenges is complicated by internal variability, which hinders the direct alignment between model simulations and observations and thwarts conventional supervised learning methods. Here, we employ an unsupervised Cycle-consistent Generative Adversarial Network (CycleGAN) to correct daily Sea Surface Temperature (SST) simulations from the Community Earth System Model 2 (CESM2). Our results reveal that the CycleGAN not only corrects climatological biases but also improves the simulation of major dynamic modes, including the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole mode, as well as SST extremes. Notably, it substantially corrects climatological SST biases, decreasing the globally averaged Root-Mean-Square Error (RMSE) by 58%. Intriguingly, the CycleGAN effectively addresses the well-known excessive westward bias in ENSO SST anomalies, a common issue in climate models that traditional methods, like quantile mapping, struggle to rectify. Additionally, it substantially improves the simulation of SST extremes, raising the pattern correlation coefficient (PCC) from 0.56 to 0.88 and lowering the RMSE from 0.5 to 0.32. This enhancement is attributed to better representations of interannual, intraseasonal, and synoptic-scale variabilities. Our study offers a novel approach to correcting global SST simulations and underscores its effectiveness across different time scales and primary dynamical modes.
Keywords: generative adversarial networks; model bias; deep learning; El Niño-Southern Oscillation; marine heatwaves
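The unpaired setup above can be illustrated with a minimal CycleGAN-style training step. The tiny networks and random tensors below are placeholders for the paper's CESM2 and observational SST fields, not the authors' architecture; the loss weighting is an assumed value.

```python
# Minimal CycleGAN-style loss computation for unpaired bias correction.
# Illustrative only: tiny networks and random tensors stand in for the
# paper's simulated/observed SST fields and full architecture.
import torch
import torch.nn as nn

def conv_net():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_sim2obs, G_obs2sim = conv_net(), conv_net()   # generators, both directions
D_obs = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                      nn.Flatten(), nn.LazyLinear(1))  # discriminator

mse, l1 = nn.MSELoss(), nn.L1Loss()
sim = torch.randn(4, 1, 64, 64)   # simulated daily SST fields (unpaired)
obs = torch.randn(4, 1, 64, 64)   # observed daily SST fields (unpaired)

fake_obs = G_sim2obs(sim)
# Adversarial loss: corrected fields should look like observations.
adv = mse(D_obs(fake_obs), torch.ones(4, 1))
# Cycle-consistency loss: mapping back must recover the original simulation.
cyc = l1(G_obs2sim(fake_obs), sim)
loss_G = adv + 10.0 * cyc          # cycle weight assumed for illustration
loss_G.backward()
print(loss_G.item())
```

The cycle-consistency term is what removes the need for paired simulation-observation samples, which is precisely how the method sidesteps the internal-variability alignment problem described above.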
2. Practical Secret Sharing Scheme Realizing Generalized Adversary Structure (Cited: 2)
Authors: Yuan-Bo Guo, Jian-Feng Ma. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2004, Issue 4, pp. 564-569 (6 pages).
Most existing secret sharing schemes are constructed to realize a general access structure, which is defined in terms of authorized groups of participants, and cannot be applied directly to the design of intrusion-tolerant systems, which often concern corruptible groups of participants instead of authorized ones. Instead, the generalized adversary structure, which specifies the corruptible subsets of participants, can be determined directly by exploiting the system setting and the attributes of all participants. In this paper, an efficient secret sharing scheme realizing a generalized adversary structure is proposed, and it is proved that the scheme satisfies both properties of a secret sharing scheme, i.e., the reconstruction property and the perfect property. The main features of this scheme are that it performs modular additions and subtractions only, and that each share appears in multiple share sets and is thus replicated. The former is an advantage in terms of computational complexity, and the latter is an advantage when recovery of some corrupted participants is necessary. Our scheme can therefore achieve lower computation cost and higher availability. Finally, some reduction of the scheme is performed, based on an equivalence relation defined over the adversary structure. Analysis shows that the reduced scheme still preserves the properties of the original one.
Keywords: secret sharing; generalized adversary structure; write structure; equivalence class; reduction
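The construction the abstract describes (modular additions only, shares replicated across share sets) can be sketched as follows. The four-party adversary structure and the modulus are illustrative assumptions, not parameters from the paper.

```python
# Toy additive secret sharing over a generalized adversary structure.
# For every corruptible subset A, the secret is split additively (mod p)
# among the participants NOT in A, so no set in the structure can recover
# it, while shares are replicated across share sets as in the paper.
import secrets

P = {"p1", "p2", "p3", "p4"}
ADVERSARY_STRUCTURE = [{"p1", "p2"}, {"p3"}, {"p4"}]  # corruptible subsets
p = 2**61 - 1  # prime modulus (assumed)

def share(secret):
    shares = {u: {} for u in P}
    for i, bad in enumerate(ADVERSARY_STRUCTURE):
        holders = sorted(P - bad)                  # everyone outside the bad set
        pieces = [secrets.randbelow(p) for _ in holders[:-1]]
        pieces.append((secret - sum(pieces)) % p)  # pieces sum to the secret
        for u, piece in zip(holders, pieces):
            shares[u][i] = piece                   # replicated per share set
    return shares

def reconstruct(shares, participants):
    for i, bad in enumerate(ADVERSARY_STRUCTURE):
        if (P - bad) <= set(participants):         # all holders of set i present
            return sum(shares[u][i] for u in (P - bad)) % p
    raise ValueError("participant set cannot reconstruct")

s = 123456789
sh = share(s)
assert reconstruct(sh, ["p1", "p2", "p3", "p4"]) == s
assert reconstruct(sh, ["p3", "p4"]) == s   # complement of corruptible {p1, p2}
```

Note how reconstruction uses only modular additions, and how each participant holds one piece per share set they belong to, matching the replication property highlighted in the abstract.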
3. Adversarial attacks and defenses for digital communication signals identification
Authors: Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 3, pp. 756-764 (9 pages).
As modern communication technology advances apace, digital communication signal identification plays an important role in cognitive radio networks and communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that adversarial attacks are a real threat to digital communication signal identification models, which should receive more attention in future research.
Keywords: digital communication signals identification; AI model; adversarial attacks; adversarial defenses; adversarial indicators
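For readers unfamiliar with the attack side, the canonical example of such a perturbation is the one-step fast gradient sign method (FGSM). The 1-D CNN and I/Q data below are placeholders, not the paper's end-to-end model.

```python
# FGSM sketch: a small L_inf perturbation against a (placeholder) 1-D CNN
# modulation classifier. Network, signal, and label are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv1d(2, 16, 5, padding=2), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                      nn.Linear(16, 11))        # 11 modulation classes (assumed)
model.eval()

x = torch.randn(1, 2, 128, requires_grad=True)  # I/Q signal segment
y = torch.tensor([3])                           # true modulation class

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
eps = 0.01                                      # perturbation budget
x_adv = (x + eps * x.grad.sign()).detach()      # one-step gradient-sign attack
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```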
4. Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection
Authors: Chengsheng Yuan, Baojie Cui, Zhili Zhou, Xinting Li, Qingming Jonathan Wu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 899-914 (16 pages).
In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image that cause the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which is prone to falling into local optima, resulting in poor transferability of adversarial attacks. In addition, perturbations added to the blank area of a fingerprint image are easily perceived by the human eye, leading to poor visual quality. In response to these challenges, this paper proposes a novel adversarial attack method based on local adaptive gradient variance for DFFD. The ridge-texture area within the fingerprint image is identified and designated as the region for perturbation generation. Subsequently, the images are fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method using stochastic gradient ascent is proposed to explore the parameter values during adversarial example generation, aiming to maximize adversarial attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and the perturbation is harder to perceive.
Keywords: FLD; adversarial attacks; adversarial examples; gradient optimization; transferability
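A hedged sketch of the attack's two central ideas: restricting perturbations to a ridge-texture mask and stabilizing iterative updates with the variance of gradients sampled around the current point. The model, mask heuristic, and hyperparameters are all illustrative, and the paper's adaptive parameter search is omitted.

```python
# Sketch of an iterative attack that (i) restricts perturbations to a
# ridge-texture mask and (ii) stabilizes updates with the variance of
# gradients sampled in a local neighborhood. Model and mask are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))  # live/fake head
x = torch.rand(1, 1, 32, 32)
y = torch.tensor([1])                 # "live" label to push away from
mask = (x > 0.5).float()              # stand-in for the ridge-texture region
eps, alpha, steps, n_samples = 0.03, 0.01, 10, 5

x_adv, variance = x.clone(), torch.zeros_like(x)
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    g = torch.autograd.grad(loss, x_adv)[0]
    # Gradient variance estimated from neighbors around the current point.
    neighbor_g = torch.zeros_like(x)
    for _ in range(n_samples):
        xn = (x_adv + eps * torch.randn_like(x)).detach().requires_grad_(True)
        ln = nn.functional.cross_entropy(model(xn), y)
        neighbor_g += torch.autograd.grad(ln, xn)[0]
    step = (g + variance).sign() * mask           # perturb ridge area only
    variance = neighbor_g / n_samples - g         # variance term for next step
    x_adv = (x_adv + alpha * step).clamp(x - eps, x + eps).detach()
print((x_adv - x).abs().max().item())
```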
5. LDAS&ET-AD: Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation
Authors: Shuyi Li, Hongchao Hu, Xiaohan Yang, Guozhen Cheng, Wenyan Liu, Wei Guo. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2331-2359 (29 pages).
Adversarial distillation (AD) has emerged as a potential solution to the challenging optimization problem of loss with hard labels in adversarial training. However, fixed sample-agnostic and student-egocentric attack strategies are unsuitable for distillation. Additionally, the reliability of guidance from static teachers diminishes as target models become more robust. This paper proposes an AD method called Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation (LDAS&ET-AD). First, a learnable distillation attack strategy generating mechanism is developed to automatically generate sample-dependent attack strategies tailored for distillation. A strategy model is introduced to produce attack strategies that enable adversarial examples (AEs) to be created in areas where the target model significantly diverges from the teachers, by competing with the target model in minimizing or maximizing the AD loss. Second, a teacher evolution strategy is introduced to enhance the reliability and effectiveness of knowledge in improving the generalization performance of the target model. By calculating the experimentally updated target model's validation performance on both clean samples and AEs, the impact of distillation from each training sample and AE on the target model's generalization and robustness abilities is assessed and serves as feedback to fine-tune the standard and robust teachers accordingly. Experiments evaluate the performance of LDAS&ET-AD against different adversarial attacks on the CIFAR-10 and CIFAR-100 datasets. The experimental results demonstrate that the proposed method achieves a robust precision of 45.39% and 42.63% against AutoAttack (AA) on the CIFAR-10 dataset for ResNet-18 and MobileNet-V2, respectively, marking an improvement of 2.31% and 3.49% over the baseline method. In comparison to state-of-the-art adversarial defense techniques, our method surpasses Introspective Adversarial Distillation, the top-performing method in terms of robustness under AA attack for the CIFAR-10 dataset, with enhancements of 1.40% and 1.43% for ResNet-18 and MobileNet-V2, respectively. These findings demonstrate the effectiveness of the proposed method in enhancing the robustness of deep neural networks (DNNs) against prevalent adversarial attacks compared with other competing methods. In conclusion, LDAS&ET-AD provides reliable and informative soft labels to one of the most promising defense methods, adversarial training (AT), alleviating the limitations of untrusted teachers and unsuitable AEs in existing AD techniques. We hope this paper promotes the development of DNNs in real-world trust-sensitive fields and helps ensure a more secure and dependable future for artificial intelligence systems.
Keywords: adversarial training; adversarial distillation; learnable distillation attack strategies; teacher evolution strategy
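The core adversarial-distillation update can be sketched as follows: generate AEs against the student, then train the student on the teacher's soft labels for those AEs. The one-step inner attack and the temperature are simplifying assumptions; the paper's learnable strategy model and teacher evolution are omitted here.

```python
# Core adversarial-distillation step: the student is trained to match a
# (robust) teacher's soft predictions on adversarial examples. Models,
# data, and the single PGD-style inner step are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

# Inner maximization: one gradient-sign step against the student.
x_adv = x.clone().requires_grad_(True)
loss_inner = F.cross_entropy(student(x_adv), y)
grad = torch.autograd.grad(loss_inner, x_adv)[0]
x_adv = (x_adv + 2 / 255 * grad.sign()).clamp(0, 1).detach()

# Outer minimization: KL divergence to the teacher's soft labels on the AEs.
T = 2.0  # distillation temperature (assumed value)
p_teacher = F.softmax(teacher(x_adv) / T, dim=1)
loss = F.kl_div(F.log_softmax(student(x_adv) / T, dim=1),
                p_teacher, reduction="batchmean") * T * T
loss.backward()
print(loss.item())
```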
6. Toward Trustworthy Decision-Making for Autonomous Vehicles: A Robust Reinforcement Learning Approach with Safety Guarantees
Authors: Xiangkun He, Wenhui Huang, Chen Lv. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 77-89 (13 pages).
While autonomous vehicles are vital components of intelligent transportation systems, ensuring the trustworthiness of decision-making remains a substantial challenge in realizing autonomous driving. Therefore, we present a novel robust reinforcement learning approach with safety guarantees to attain trustworthy decision-making for autonomous vehicles. The proposed technique ensures decision trustworthiness in terms of policy robustness and collision safety. Specifically, an adversary model is learned online to simulate the worst-case uncertainty by approximating the optimal adversarial perturbations on the observed states and environmental dynamics. In addition, an adversarial robust actor-critic algorithm is developed to enable the agent to learn robust policies against perturbations in observations and dynamics. Moreover, we devise a safety mask to guarantee the collision safety of the autonomous driving agent during both the training and testing processes, using an interpretable knowledge model known as the Responsibility-Sensitive Safety Model. Finally, the proposed approach is evaluated through both simulations and experiments. These results indicate that the autonomous driving agent can make trustworthy decisions and drastically reduce the number of collisions through robust safety policies.
Keywords: autonomous vehicle; decision-making; reinforcement learning; adversarial attack; safety guarantee
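One common way to realize the "optimal adversarial perturbation on observed states" idea is to search for a bounded observation perturbation that maximally shifts the policy's action distribution. The sketch below follows that recipe with a placeholder policy network and makes no claim to reproduce the paper's adversary model or dynamics perturbations.

```python
# Sketch of approximating a worst-case observation perturbation: find a
# small delta that maximally shifts the policy's action distribution
# (measured by KL), the kind of adversary a robust actor-critic trains
# against. Policy network and state are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 4))
state = torch.randn(1, 8)            # observed state (e.g., ego + traffic)
eps = 0.05                           # perturbation budget on observations

with torch.no_grad():
    p_clean = F.softmax(policy(state), dim=1)

delta = torch.zeros_like(state, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(20):                  # approximate the optimal adversary
    log_p_adv = F.log_softmax(policy(state + delta), dim=1)
    kl = F.kl_div(log_p_adv, p_clean, reduction="batchmean")
    opt.zero_grad()
    (-kl).backward()                 # gradient ascent on the KL divergence
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)      # keep the perturbation bounded
print(delta.abs().max().item())
```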
7. Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation
Authors: 程晓昱, 解晨雪, 刘宇伦, 白瑞雪, 肖南海, 任琰博, 张喜林, 马惠, 蒋崇云. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 112-117 (6 pages).
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Keywords: two-dimensional materials; deep learning; data augmentation; generative adversarial networks
8. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3535-3563 (29 pages).
Antivirus vendors and the research community employ Machine Learning (ML)- or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. AEs generated by the attack method also retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against AEs that are included in the training classifiers.
Keywords: malware classification; machine learning; adversarial examples; evasion attack; cybersecurity
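Two of the paper's ingredients are easy to sketch: the Overlay Append perturbation (bytes appended past the end of a PE file leave execution unaffected) and a TLSH similarity check of the kind the defense model builds on. This assumes the py-tlsh package (`pip install py-tlsh`); the byte strings and the distance threshold are placeholders for real PE binaries and a tuned cutoff.

```python
# Sketch of the Overlay Append perturbation and a TLSH-based similarity
# check. Random bytes stand in for real PE binaries; the threshold is an
# assumed value, not one from the paper.
import os
import tlsh

malware = os.urandom(4096)               # placeholder for a known malware PE
adversarial = malware + os.urandom(512)  # Overlay Append evasion attempt

h_known = tlsh.hash(malware)
h_sample = tlsh.hash(adversarial)

# TLSH distance: small values mean near-duplicates. A threshold turns a
# known-malware hash database into a similarity-based detector that can
# still catch appended/perturbed variants.
distance = tlsh.diff(h_known, h_sample)
print("TLSH distance:", distance,
      "-> flagged" if distance < 100 else "-> missed")
```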
9. Semi-supervised surface defect detection of wind turbine blades with YOLOv4
Authors: Chao Huang, Minghui Chen, Long Wang. Global Energy Interconnection (EI, CSCD), 2024, Issue 3, pp. 284-292 (9 pages).
Timely inspection of defects on the surfaces of wind turbine blades can effectively prevent unpredictable accidents. To this end, this study proposes a semi-supervised object-detection network based on You Only Look Once version 4 (YOLOv4). A semi-supervised structure comprising a generative adversarial network (GAN) was designed to overcome the difficulty of obtaining sufficient samples and sample labeling. In the GAN, the generator is realized by an encoder-decoder network, where the backbone of the encoder is YOLOv4 and the decoder comprises inverse convolutional layers. Partial features from the generator are passed to the defect detection network. Deploying several unlabeled images can significantly improve the generalization and recognition capabilities of defect-detection models. The small-scale object detection capacity of the network is improved by enhancing essential features in the feature map through the addition of the concurrent spatial and channel squeeze and excitation (scSE) attention module to three parts of the YOLOv4 network. A balancing improvement was made to the loss function of YOLOv4 to overcome the class imbalance of the defective species. The results for both the single- and multi-category defect datasets show that the improved model can make good use of the features of unlabeled images. The accuracy of wind turbine blade defect detection also has a significant advantage over classical object detection algorithms, including Faster R-CNN and DETR.
Keywords: defect detection; generative adversarial network; scSE attention; semi-supervision; wind turbine
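The scSE block the authors graft onto YOLOv4 is a published attention module (concurrent spatial and channel squeeze-and-excitation). A standard PyTorch rendering, not the authors' exact code, looks like this:

```python
# Concurrent spatial and channel squeeze-and-excitation (scSE) block.
# Standard published formulation, shown here for illustration.
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(                  # channel SE: squeeze H, W
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.sse = nn.Sequential(                  # spatial SE: squeeze C
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)   # recalibrate and combine

feat = torch.randn(2, 64, 52, 52)                  # a YOLO-scale feature map
print(SCSE(64)(feat).shape)                        # torch.Size([2, 64, 52, 52])
```

The channel branch reweights "what" features matter while the spatial branch reweights "where", which is why the module helps the small-scale defects the abstract mentions.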
10. Multi-distortion suppression for neutron radiographic images based on generative adversarial network
Authors: Cheng-Bo Meng, Wang-Wei Zhu, Zhen Zhang, Zi-Tong Wang, Chen-Yi Zhao, Shuang Qiao, Tian Zhang. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 176-188 (13 pages).
Neutron radiography is a crucial nondestructive testing technology widely used in the aerospace, military, and nuclear industries. However, because of the physical limitations of neutron sources and collimators, the resulting neutron radiographic images inevitably exhibit multiple distortions, including noise, geometric unsharpness, and white spots. Furthermore, these distortions are particularly significant in compact neutron radiography systems with low neutron fluxes. Therefore, in this study, we devised a multi-distortion suppression network that employs a modified generative adversarial network to improve the quality of degraded neutron radiographic images. Real neutron radiographic image datasets with various types and levels of distortion were built for the first time as multi-distortion suppression datasets. Thereafter, the coordinate attention mechanism was incorporated into the backbone network to augment the capability of the proposed network to learn the abstract relationship between ideally clear and degraded images. Extensive experiments were performed; the results show that the proposed method can effectively suppress multiple distortions in real neutron radiographic images and achieve state-of-the-art perceptual visual quality, thus demonstrating its application potential in neutron radiography.
Keywords: neutron radiography; multi-distortion suppression; generative adversarial network; coordinate attention mechanism
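The coordinate attention mechanism is likewise a published block (Hou et al., 2021): features are pooled along height and width separately so the attention maps retain positional information. The rendering below is the standard formulation, used here as an illustration rather than the paper's implementation.

```python
# Coordinate attention block: pool over W and over H separately, encode
# jointly, then produce direction-aware attention maps. Illustrative only.
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Sequential(nn.Conv2d(channels, mid, 1),
                                   nn.BatchNorm2d(mid), nn.ReLU())
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.conv1(torch.cat([x_h, x_w], dim=2))           # joint encoding
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = self.conv_h(y_h).sigmoid()                       # (n, c, h, 1)
        a_w = self.conv_w(y_w.permute(0, 1, 3, 2)).sigmoid()   # (n, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(1, 32, 64, 64)        # degraded-image feature map
print(CoordAttention(32)(feat).shape)    # torch.Size([1, 32, 64, 64])
```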
11. Physics-Constrained Robustness Enhancement for Tree Ensembles Applied in Smart Grid
Authors: Zhibo Yang, Xiaohan Huang, Bingdong Wang, Bin Hu, Zhenyong Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 3001-3019 (19 pages).
With the widespread use of machine learning (ML) technology, the operational efficiency and responsiveness of power grids have been significantly enhanced, allowing smart grids to achieve high levels of automation and intelligence. However, the tree ensemble models commonly used in smart grids are vulnerable to adversarial attacks, making it urgent to enhance their robustness. To address this, we propose a robustness enhancement method that incorporates physical constraints into the node-splitting decisions of tree ensembles. Our algorithm improves robustness by developing a dataset of adversarial examples that comply with physical laws, ensuring that the training data accurately reflect possible attack scenarios while adhering to physical rules. In our experiments, the proposed method increased robustness against adversarial attacks by 100% when applied to real grid data under physical constraints. These results highlight the advantages of our method in maintaining efficient and secure operation of smart grids under adversarial conditions.
Keywords: tree ensemble; robustness enhancement; adversarial attack; smart grid
12. Enhancing Healthcare Data Security and Disease Detection Using Crossover-Based Multilayer Perceptron in Smart Healthcare Systems
Authors: Mustufa Haider Abidi, Hisham Alkhalefah, Mohamed K. Aboudaif. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 977-997 (21 pages).
Healthcare data requires accurate disease detection analysis, real-time monitoring, and advancements to ensure proper treatment for patients. Consequently, machine learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous and high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are employed across different domains that are susceptible to adversarial attacks, necessitating careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. The collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on the medical records of patients. Once an attack is detected, healthcare professionals are promptly alerted to prevent data leakage. The paper utilizes two datasets, namely a synthetic dataset and the University of Queensland Vital Signs (UQVS) dataset, from which numerous samples are collected. Experiments are conducted to evaluate the performance of the proposed CMLP model, utilizing various performance measures such as Recall, Precision, Accuracy, and F1-score to predict patient activities. Comparing the proposed method with existing approaches, it achieves the highest accuracy, precision, recall, and F1-score. Specifically, the proposed method achieves a precision of 93%, an accuracy of 97%, an F1-score of 92%, and a recall of 92%.
Keywords: smart healthcare systems; multilayer perceptron; cybersecurity; adversarial attack detection; Healthcare 4.0
13. General multi-attack detection for continuous-variable quantum key distribution with local local oscillator
Authors: 康茁, 刘维琪, 齐锦, 贺晨. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 255-262 (8 pages).
Continuous-variable quantum key distribution with a local local oscillator (LLO CVQKD) has been extensively researched due to its simplicity and security. For the practical security of an LLO CVQKD system, there are at present two main attack modes, referred to as the reference pulse attack and the polarization attack. However, there is currently no general defense strategy against such attacks, and the security of the system needs further investigation. Here, we employ a deep learning framework called generative adversarial networks (GANs) to detect both attacks. We first analyze the data in different cases, derive a feature vector as input to a GAN model, and then show the training and testing process of the GAN model for attack classification. The proposed model has two parts, a discriminator and a generator, both of which employ a convolutional neural network (CNN) to improve accuracy. Simulation results show that the proposed scheme can detect and classify attacks without reducing the secret key rate or the maximum transmission distance. It establishes a detection model only by monitoring features of the pulse, without adding additional devices.
Keywords: CVQKD; generative adversarial network; attack classification
14. MTTSNet: Military time-sensitive targets stealth network via real-time mask generation
Authors: Siyu Wang, Xiaogang Yang, Ruitao Lu, Zhengjie Zhu, Fangjia Lian, Qing-ge Li, Jiwei Fan. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 601-612 (12 pages).
The automatic stealth task for military time-sensitive targets plays a crucial role in maintaining national military security and mastering battlefield dynamics in military applications. We propose a novel Military Time-sensitive Targets Stealth Network via Real-time Mask Generation (MTTSNet). To our knowledge, this is the first technology to automatically remove military targets from videos in real time. The critical steps of MTTSNet are as follows. First, we designed a real-time mask generation network based on the encoder-decoder framework, combined with a domain expansion structure, to effectively extract mask images. Specifically, the ASPP structure in the encoder achieves advanced semantic feature fusion, and the decoder stacks high-dimensional information with low-dimensional information to obtain an effective mask layer. Subsequently, the domain expansion module guides the adaptive expansion of mask images. Second, a context adversarial generation network based on gated convolution was constructed to achieve background restoration of mask positions in the original image. In addition, our method works in an end-to-end manner. A dedicated semantic segmentation dataset for military time-sensitive targets has been constructed, called the Military Time-sensitive Target Masking Dataset (MTMD). Experiments on the MTMD dataset successfully demonstrate that this method can create a mask that completely occludes the target and that the target can be hidden in real time using this mask. We demonstrate the concealment performance of our proposed method by comparing it to a number of well-known and highly optimized baselines.
Keywords: deep learning; military application; targets stealth network; mask generation; generative adversarial network
15. Cloud-Edge Collaborative Federated GAN Based Data Processing for IoT-Empowered Multi-Flow Integrated Energy Aggregation Dispatch
Authors: Zhan Shi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 973-994 (22 pages).
The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from feedback on failed decisions. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with long-term constraint violation sensitiveness is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and dynamically adjusts GAN learning rates and global aggregation weights based on energy-consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
Keywords: IoT; federated learning; generative adversarial network; data processing; multi-flow integration energy aggregation dispatch
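The cloud-side step of such a scheme is essentially federated averaging of the edge-trained GAN parameters. A minimal sketch, with a toy generator and made-up aggregation weights (the paper additionally adapts these weights using energy-constraint violations):

```python
# Sketch of cloud-side aggregation: generator parameters trained at each
# edge node are combined by weighted federated averaging. The tiny
# generator and the weights below are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def make_generator():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))

edge_models = [make_generator() for _ in range(3)]   # per-edge GAN generators
agg_weights = [0.5, 0.3, 0.2]                        # e.g., by local data size

global_model = make_generator()
global_state = copy.deepcopy(global_model.state_dict())
for key in global_state:
    # Weighted average of the corresponding tensor across edge nodes.
    global_state[key] = sum(w * m.state_dict()[key]
                            for w, m in zip(agg_weights, edge_models))
global_model.load_state_dict(global_state)
print("aggregated", len(global_state), "parameter tensors")
```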
16. CMAES-WFD: Adversarial Website Fingerprinting Defense Based on Covariance Matrix Adaptation Evolution Strategy
Authors: Di Wang, Yuefei Zhu, Jinlong Fei, Maohua Guo. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2253-2276 (24 pages).
Website fingerprinting, also known as WF, is a traffic analysis attack that enables local eavesdroppers to infer a user's browsing destination, even when the user is on the Tor anonymity network. While advanced attacks based on deep neural networks (DNNs) can perform feature engineering and attain accuracy rates of over 98%, research has demonstrated that DNNs are vulnerable to adversarial samples. As a result, many researchers have explored using adversarial samples as a defense mechanism against DNN-based WF attacks and have achieved considerable success. However, these methods suffer from high bandwidth overhead or require access to the target model, which is unrealistic. This paper proposes CMAES-WFD, a black-box WF defense based on adversarial samples. The process of generating adversarial examples is transformed into a constrained optimization problem solved by the Covariance Matrix Adaptation Evolution Strategy (CMAES) optimization algorithm. Perturbations are injected into local parts of the original traffic to control bandwidth overhead. According to the experimental results, CMAES-WFD was able to decrease the accuracy of Deep Fingerprinting (DF) and Var-CNN to below 8.3%, with a bandwidth overhead of at most only 14.6% and 20.5%, respectively. In particular, for Automated Website Fingerprinting (AWF), which has a simple structure, CMAES-WFD reduced the classification accuracy to only 6.7% with a bandwidth overhead of less than 7.4%. Moreover, CMAES-WFD was demonstrated to be robust against adversarial training to a certain extent.
Keywords: traffic analysis; deep neural network; adversarial sample; Tor; website fingerprinting
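The optimization loop at the heart of CMAES-WFD can be sketched with the open-source cma package (`pip install cma`). The fitness function below is a stand-in for querying a real WF classifier and measuring padding overhead, so treat the whole thing as an assumed setup rather than the paper's actual objective.

```python
# Black-box perturbation search with CMA-ES: candidate padding patterns
# are scored by a fitness that trades off classifier confidence against
# bandwidth overhead. The fitness function is a placeholder.
import numpy as np
import cma

def fitness(x):
    pad = np.abs(x)                                  # dummy padding per burst
    confidence = np.exp(-np.linalg.norm(pad - 3.0))  # fake classifier score
    overhead = pad.sum() / 100.0                     # bandwidth penalty
    return confidence + 0.5 * overhead               # minimize both terms

# 8-dimensional perturbation, initial sigma 0.5, capped iterations.
es = cma.CMAEvolutionStrategy(8 * [1.0], 0.5, {"maxiter": 50})
while not es.stop():
    candidates = es.ask()                            # sample a population
    es.tell(candidates, [fitness(np.array(c)) for c in candidates])
best = np.abs(es.result.xbest)
print("best padding pattern:", np.round(best, 2))
```

Because CMA-ES only needs fitness values, the defense can stay fully black-box: it never requires gradients from, or access to, the attacker's model.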
17. Quantum generative adversarial networks based on a readout error mitigation method with fault tolerant mechanism
Authors: 赵润盛, 马鸿洋, 程涛, 王爽, 范兴奎. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 285-295 (11 pages).
Readout errors caused by measurement noise are a significant source of errors in quantum circuits; they severely affect output results and are an urgent problem to be solved in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate frequent readout errors in quantum generative adversarial networks (QGANs) for image generation. The method simplifies the response matrix structure by averaging the qubits over random bit-flips applied in advance, successfully addressing the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
Keywords: readout errors; quantum generative adversarial networks; bit-flip averaging method; fault-tolerant mechanisms
18. An Enhanced GAN for Image Generation
Authors: Chunwei Tian, Haoyang Gao, Pengwei Wang, Bob Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 105-118 (14 pages).
Generative adversarial networks (GANs) with gaming abilities have been widely applied in image generation. However, gamistic generators and discriminators may reduce the robustness of the obtained GANs in image generation under varying scenes. Enhancing the relation of hierarchical information in a generation network and enlarging the differences between network architectures can bring more structural information to bear and improve the generation effect. In this paper, we propose an enhanced GAN via an improved generator for image generation (EIGGAN). EIGGAN applies spatial attention to the generator to extract salient information and enhance the truthfulness of the generated images. Taking the contextual relation into account, parallel residual operations are fused into the generation network to extract more structural information from the different layers. Finally, a mixed loss function in the GAN is exploited to make a trade-off between speed and accuracy and generate more realistic images. Experimental results show that the proposed method is superior to popular methods, i.e., Wasserstein GAN with gradient penalty (WGAN-GP), in terms of many indexes, i.e., Fréchet Inception Distance, Learned Perceptual Image Patch Similarity, Multi-Scale Structural Similarity Index Measure, Kernel Inception Distance, Number of Statistically-Different Bins, and Inception Score, as well as the visual quality of the generated images.
Keywords: generative adversarial networks; spatial attention; mixed loss; image generation
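Two of the named ingredients, spatial attention inside the generator and a mixed generator loss, can be sketched as follows. The CBAM-style attention layer, the tiny networks, and the loss weighting are assumptions for illustration, not EIGGAN's exact design.

```python
# Sketch: a spatial-attention layer inside a generator, plus a mixed
# generator loss (adversarial + pixel term). All components illustrative.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        # Pool across channels, then learn where to attend spatially.
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  SpatialAttention(), nn.Conv2d(16, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))

z_img = torch.randn(2, 3, 32, 32)            # generator input
target = torch.rand(2, 3, 32, 32)            # reference images
fake = G(z_img)
adv = nn.functional.binary_cross_entropy_with_logits(
    D(fake), torch.ones(2, 1))               # fool the discriminator
pix = nn.functional.l1_loss(fake, target)    # keep outputs close to targets
loss_G = adv + 100.0 * pix                   # mixed-loss trade-off (assumed)
loss_G.backward()
print(loss_G.item())
```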
19. Boosting Adversarial Training with Learnable Distribution
Authors: Kai Chen, Jinwei Wang, James Msughter Adeke, Guangjie Liu, Yuewei Dai. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3247-3265 (19 pages).
In recent years, various adversarial defense methods have been proposed to improve the robustness of deep neural networks. Adversarial training is one of the most potent methods for defending against adversarial attacks. However, the difference in feature space between natural and adversarial examples hinders the accuracy and robustness of the model in adversarial training. This paper proposes a learnable distribution adversarial training method that aims to construct the same distribution for the training data using a Gaussian mixture model. A distribution centroid is built to classify samples and constrain the distribution of the sample features. The natural and adversarial examples are pushed toward the same distribution centroid to improve the accuracy and robustness of the model. The proposed method generates adversarial examples that close the distribution gap between natural and adversarial examples through an attack algorithm explicitly designed for adversarial training. This algorithm gradually increases the accuracy and robustness of the model by scaling the perturbation. Finally, the proposed method outputs the predicted labels and the distance between each sample and the distribution centroid. The distribution characteristics of the samples can be utilized to detect adversarial cases that might otherwise evade the model's defense. The effectiveness of the proposed method is demonstrated through comprehensive experiments.
Keywords: adversarial training; feature space; learnable distribution; distribution centroid
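A minimal sketch of the distribution-centroid idea: learnable per-class centroids are trained jointly with the feature extractor, natural and adversarial features are pulled toward the same centroid, and the feature-to-centroid distance doubles as a detection score. Every component below (backbone, AE generation, dimensions) is a placeholder, not the paper's Gaussian-mixture formulation.

```python
# Sketch: learnable per-class centroids shared by natural and adversarial
# features, with centroid distance reused as an adversarial-detection score.
import torch
import torch.nn as nn

num_classes, feat_dim = 10, 64
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim))
centroids = nn.Parameter(torch.randn(num_classes, feat_dim))

x_nat = torch.rand(8, 3, 32, 32)
x_adv = (x_nat + 0.03 * torch.randn_like(x_nat)).clamp(0, 1)  # stand-in AEs
y = torch.randint(0, num_classes, (8,))

f_nat, f_adv = backbone(x_nat), backbone(x_adv)
# Pull both natural and adversarial features toward the class centroid.
loss_center = ((f_nat - centroids[y]) ** 2).sum(1).mean() + \
              ((f_adv - centroids[y]) ** 2).sum(1).mean()
loss_center.backward()

# Detection: a large distance to the nearest centroid flags a suspicious input.
with torch.no_grad():
    d = torch.cdist(backbone(x_adv), centroids).min(dim=1).values
print(d)
```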
20. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study
Authors: Shahad Alzahrani, Hatim Alsuwat, Emad Alsuwat. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 1635-1654 (20 pages).
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We use our methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and reducing these covert threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network. This inquiry, of particular relevance, explores the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Additionally, our proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks in the context of streaming data.
Keywords: Bayesian networks; data poisoning attacks; latent variables; structure learning algorithms; adversarial attacks