Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 61872134 and 61672222 (author Y.L. Liu, http://www.nsfc.gov.cn/); in part by the Science and Technology Development Center of the Ministry of Education under Grant 2019J01020 (author Y.L. Liu, http://www.moe.gov.cn/); in part by the Science and Technology Project of the Transport Department of Hunan Province under Grant 201935 (author Y.L. Liu, http://jtt.hunan.gov.cn/); and in part by the Science and Technology Program of Changsha City under Grants kh200519 and kq2004021 (author Y.L. Liu, http://kjj.changsha.gov.cn/).
Abstract: Steganography based on generative adversarial networks (GANs) has become a hot topic among researchers. Because GANs are ill-suited to text, whose symbols are discrete, researchers have proposed GAN-based steganography methods that depend less on the text itself. In this paper, we propose a new method of generative lyrics steganography based on GANs, called GAN-GLS. The proposed method uses a GAN model and a large-scale lyrics corpus to construct and train a lyrics generator. The GAN takes a previously generated line of a lyric as the input sentence in order to generate the next line. Using a penalty-based strategy during training, the GAN model generates non-repetitive and diverse lyrics. The secret information is then processed according to the data characteristics of the generated lyrics in order to hide information. Unlike other text generation-based linguistic steganographic methods, our method changes how multiple generated candidate items are selected as candidate groups in order to encode the conditional probability distribution. The experimental results demonstrate that our method can generate high-quality lyrics as stego-texts. Moreover, compared with similar methods, the proposed method achieves good performance in terms of imperceptibility, embedding rate, effectiveness, extraction success rate, and security.
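The candidate-group encoding described above can be sketched as follows. This is a minimal illustration of the general idea behind generation-based linguistic steganography — mapping secret bits to an index within the generator's top-ranked candidates at each step — not GAN-GLS's exact grouping scheme; the candidate lists and bit width below are hypothetical.

```python
def embed_bits(bit_string, candidate_lists, bits_per_step=2):
    """At each generation step, keep the top 2**bits_per_step candidates
    (the candidate group) and let the next secret bits select one of them."""
    chosen, pos = [], 0
    for candidates in candidate_lists:
        group = candidates[: 2 ** bits_per_step]
        chunk = bit_string[pos: pos + bits_per_step].ljust(bits_per_step, "0")
        chosen.append(group[int(chunk, 2)])  # bits -> index into the group
        pos += bits_per_step
    return chosen

def extract_bits(chosen, candidate_lists, bits_per_step=2):
    """The receiver reruns the generator, rebuilds the same candidate
    groups, and recovers the bits from the index of each chosen item."""
    return "".join(
        format(candidates[: 2 ** bits_per_step].index(word), f"0{bits_per_step}b")
        for word, candidates in zip(chosen, candidate_lists)
    )
```

Extraction works only because both sides deterministically reproduce the same ranked candidate lists, which is why the generator itself acts as the shared key.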
Funding: Supported by the General Program of the National Natural Science Foundation of China (Grant No. 61977029).
Abstract: Generating realistic synthetic video from text is a highly challenging task due to the multitude of issues involved, including digit deformation, noise interference between frames, blurred output, and the need for temporal coherence across frames. In this paper, we propose a novel approach for generating coherent videos of moving digits from textual input using a Deep Deconvolutional Generative Adversarial Network (DD-GAN). The DD-GAN comprises a Deep Deconvolutional Neural Network (DDNN) as the Generator (G) and a modified Deep Convolutional Neural Network (DCNN) as the Discriminator (D) to ensure temporal coherence between adjacent frames. The approach involves several steps. First, the input text is fed into a Long Short-Term Memory (LSTM) based text encoder and then smoothed using Conditioning Augmentation (CA) to enhance the effectiveness of the Generator. Next, the DDNN generates video frames from the enhanced text embedding and random noise, while the modified DCNN acts as the Discriminator, distinguishing between generated and real videos. We evaluate the quality of the generated videos using standard metrics such as Inception Score (IS), Fréchet Inception Distance (FID), Fréchet Inception Distance for video (FID2vid), and the Generative Adversarial Metric (GAM), along with a human study based on realism, coherence, and relevance. Experiments on Single-Digit Bouncing MNIST GIFs (SBMG), Two-Digit Bouncing MNIST GIFs (TBMG), and a custom dataset of essential mathematics videos with related text demonstrate significant improvements in both the metrics and the human study, confirming the effectiveness of DD-GAN. We also took on the challenge of generating preschool math videos from text, handling complex structures, digits, and symbols, with successful results. Overall, the proposed approach shows promising results for generating coherent videos from textual input.
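The Conditioning Augmentation step mentioned above can be sketched in a few lines. The idea (as in StackGAN-style CA) is to sample the conditioning vector from a Gaussian centered on the encoder's output rather than using the embedding directly; the function signature and values here are illustrative, not DD-GAN's actual implementation.

```python
import random

def conditioning_augmentation(mu, sigma, rng=None):
    """Sample a smoothed conditioning vector c = mu + sigma * eps,
    eps ~ N(0, I).  Training the generator on samples drawn around the
    encoder output, instead of on the fixed embedding, smooths the
    conditioning manifold and adds robustness to small text variations."""
    rng = rng or random.Random()
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]
```

With sigma set to zero the sample collapses back to the raw text embedding, which makes the smoothing effect of nonzero sigma easy to see.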
Abstract: Sarcasm detection in text data is an increasingly vital area of research due to the prevalence of sarcastic content in online communication. This study addresses the challenges of small datasets and class imbalance in sarcasm detection by employing comprehensive data pre-processing and Generative Adversarial Network (GAN) based augmentation on diverse datasets, including iSarcasm, SemEval-18, and Ghosh. The research offers a novel pipeline for augmenting sarcasm data with a Reverse Generative Adversarial Network (RGAN). The proposed RGAN method works by inverting labels between original and synthetic data during the training process. This label inversion provides feedback to the generator for producing high-quality data that closely resembles the original distribution. Notably, the proposed RGAN model performs on par with a standard GAN, showcasing its efficacy in augmenting text data. The exploration of various datasets highlights the nuanced impact of augmentation on model performance, with cautionary insights into maintaining a delicate balance between synthetic and original data. The methodological framework encompasses comprehensive data pre-processing and GAN-based augmentation, with a meticulous comparison against Natural Language Processing Augmentation (NLPAug) as an alternative technique. Overall, the F1-score of the proposed technique outperforms that of synonym-replacement augmentation using NLPAug: the increase in F1-score with RGAN ranged from 0.066% to 1.054%, and the standard GAN yielded a 2.88% increase. The proposed RGAN model outperformed NLPAug and demonstrated performance comparable to a standard GAN, emphasizing its efficacy in text data augmentation.
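The label inversion at the heart of RGAN can be sketched as a batch-construction step. The abstract does not spell out the full training loop, so this only shows the part it does describe — swapping the real/synthetic labels fed to the discriminator — with a hypothetical helper and toy string samples standing in for encoded text.

```python
def discriminator_batch(real, synthetic, invert_labels=False):
    """Build a (samples, labels) batch for the discriminator.
    Standard GAN: real -> 1, synthetic -> 0.  With RGAN-style inversion
    the labels are swapped, so the discriminator's feedback pushes the
    generator toward output it cannot tell apart from the originals."""
    real_label, fake_label = (0, 1) if invert_labels else (1, 0)
    samples = list(real) + list(synthetic)
    labels = [real_label] * len(real) + [fake_label] * len(synthetic)
    return samples, labels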
Funding: Funded by the Public Welfare Technology Research Project of Zhejiang Province (Grant No. LGF21F020014) and the Opening Project of the Key Laboratory of Public Security Information Application Based on Big-Data Architecture, Ministry of Public Security, Zhejiang Police College (Grant No. 2021DSJSYS002).
Abstract: Ceramic tiles are among the most indispensable materials for interior decoration, but because of their natural textures, ceramic patterns cannot match design requirements in terms of diversity and interactivity. In this paper, we propose a sketch-based method for generating diverse ceramic tile images from hand-drawn sketches using a Generative Adversarial Network (GAN). The generated tile images can be tailored to the user's specific needs for tile textures. The proposed method consists of four steps. First, a dataset of ceramic tile images with diverse distributions is created and a GAN is pre-trained on it. Second, for each ceramic tile image in the dataset, the corresponding sketch image is generated, and the mapping between the images is trained with a sketch extraction network that uses ResNet blocks and skip connections to improve the quality of the generated sketches. Third, the sketch style is redefined according to the characteristics of the ceramic tile images, and double cross-domain adversarial loss functions are employed to guide the tile generation network toward the sketch style and to speed up training. Finally, we apply latent-space perturbation and interpolation to further enrich the output texture styles, realizing the concept of "one style with multiple faces." We train the proposed generation network on a dataset of 2583 ceramic tile images. To measure generative diversity and quality, we use the Fréchet Inception Distance (FID) and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). The experimental results show that the proposed model greatly enhances the generated ceramic tile images, achieving an FID of 32.47 and a BRISQUE of 28.44.
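FID, used above to score generative quality, is the Fréchet distance between two Gaussians fitted to feature embeddings of real and generated images. The univariate special case captures the formula and is easy to verify by hand; real FID applies the multivariate form (with full covariance matrices) to Inception-network features, not raw samples.

```python
from statistics import mean, pstdev

def frechet_distance_1d(xs, ys):
    """Fréchet distance between two univariate Gaussians fitted to the
    samples: d^2 = (mu_x - mu_y)^2 + (sigma_x - sigma_y)^2.
    Lower is better; identical distributions give 0."""
    mu_x, mu_y = mean(xs), mean(ys)
    sd_x, sd_y = pstdev(xs), pstdev(ys)
    return (mu_x - mu_y) ** 2 + (sd_x - sd_y) ** 2
```

This makes clear why FID rewards matching both the center and the spread of the real distribution, rather than only per-sample fidelity.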
Abstract: To address the problem that most existing synthetic aperture radar (SAR) image generation methods cannot simultaneously produce ship images and their detection labels, a position-information-based conditional generative adversarial network (PCGAN) is constructed for SAR ship image generation and target detection. First, ship position information is introduced as a constraint that restricts where ships appear in the generated images and simultaneously serves as the detection label of each ship image. Next, the Wasserstein distance is introduced to stabilize PCGAN training. Finally, the generated SAR ship images and their corresponding detection labels are used for end-to-end training of a YOLOv3 network, realizing collaborative learning of ship data augmentation and target detection and yielding diverse data better matched to practical detection applications. Experimental results on the HRSID (high resolution SAR image dataset) show that PCGAN generates clear, robust SAR ship data and improves ship detection accuracy by up to 1.01%, validating the effectiveness of the proposed method.
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 62276274), the Shaanxi Natural Science Foundation (Grant No. 2023-JC-YB-528), and the Chinese Aeronautical Establishment (Grant No. 201851U8012).
Abstract: The automatic stealth of military time-sensitive targets plays a crucial role in maintaining national military security and mastering battlefield dynamics. We propose a novel Military Time-sensitive Targets Stealth Network via Real-time Mask Generation (MTTSNet). To our knowledge, this is the first technology to automatically remove military targets from videos in real time. The critical steps of MTTSNet are as follows. First, we designed a real-time mask generation network based on an encoder-decoder framework, combined with a domain expansion structure, to effectively extract mask images. Specifically, the ASPP structure in the encoder achieves advanced semantic feature fusion, and the decoder stacks high-dimensional information with low-dimensional information to obtain an effective mask layer. The domain expansion module then guides the adaptive expansion of the mask images. Second, a contextual adversarial generation network based on gated convolution was constructed to restore the background at the masked positions in the original image. The whole method works in an end-to-end manner. We also constructed a dedicated semantic segmentation dataset for military time-sensitive targets, called the Military Time-sensitive Target Masking Dataset (MTMD). Experiments on MTMD demonstrate that the method creates masks that completely occlude the target and that targets can be hidden in real time using these masks. We demonstrate the concealment performance of the proposed method by comparing it against a number of well-known, highly optimized baselines.
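The gated convolution used for background restoration can be illustrated in one dimension. Each output combines a feature response with a learned per-position gate, which is what lets inpainting networks treat masked and valid pixels differently; the kernels here are made-up numbers, and the real network learns 2-D kernels over image features.

```python
import math

def gated_conv_1d(xs, feature_kernel, gate_kernel):
    """Gated convolution on a 1-D signal: each output position is
    tanh(feature response) * sigmoid(gate response).  The gate decides,
    per position, how much of the restored content passes through."""
    k = len(feature_kernel)
    out = []
    for i in range(len(xs) - k + 1):
        window = xs[i:i + k]
        feat = sum(w * x for w, x in zip(feature_kernel, window))
        gate = sum(w * x for w, x in zip(gate_kernel, window))
        out.append(math.tanh(feat) * (1.0 / (1.0 + math.exp(-gate))))
    return out
```

Compared with a vanilla convolution, the sigmoid gate can suppress positions inside the mask early in restoration and open up as the fill-in becomes plausible.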
Funding: Supported in part by the Korea Research Institute for Defense Technology Planning and Advancement (KRIT), funded by the Korean Government's Defense Acquisition Program Administration (DAPA), under Grant KRIT-CT-21-037; in part by the Ministry of Education, Republic of Korea; and in part by the National Research Foundation of Korea under Grant RS-2023-00211871.
Abstract: In the rapidly evolving field of cybersecurity, providing realistic exercise scenarios that accurately mimic real-world threats has become increasingly critical, and traditional methods often fall short in capturing the dynamic, complex nature of modern cyber threats. To address this gap, we propose a comprehensive framework for creating authentic network environments tailored to cybersecurity exercise systems. Our framework leverages advanced simulation techniques to generate scenarios that mirror the network conditions professionals actually face. The cornerstone of our approach is a conditional tabular generative adversarial network (CTGAN), a tool that synthesizes realistic network traffic by learning from real data patterns. This allows us to handle technical components and sensitive information with high fidelity, ensuring that the synthetic data maintains statistical characteristics similar to those observed in real network environments. By meticulously analyzing data collected from various network layers and translating it into structured tabular formats, our framework can generate network traffic that closely resembles actual scenarios. An integral part of the process is deploying this synthetic data within a simulated network environment, structured on software-defined networking (SDN) principles, to test and refine the traffic patterns. This simulation not only enables a direct comparison between synthetic and real traffic but also lets us identify discrepancies and refine the accuracy of our simulations. Our initial findings indicate an error rate of approximately 29.28% between the synthetic and real traffic data, highlighting areas for further improvement and adjustment. By providing a diverse array of network scenarios, our framework aims to enhance the exercise systems used by cybersecurity professionals, improving their ability to respond to actual cyber threats while keeping the exercises cost-effective and efficient.
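The abstract reports a 29.28% error rate between synthetic and real traffic without defining the formula, so the sketch below is only one plausible way such a figure could be computed: a mean absolute percentage error over per-feature traffic statistics (packet counts, byte rates, and so on). Both the metric choice and the feature values are assumptions.

```python
def mean_abs_percentage_error(real_stats, synthetic_stats):
    """Average relative deviation (in percent) of synthetic feature
    statistics from the real ones.  Zero-valued real features are
    skipped to avoid division by zero."""
    errors = [
        abs(s - r) / abs(r) * 100.0
        for r, s in zip(real_stats, synthetic_stats)
        if r != 0
    ]
    return sum(errors) / len(errors)
```

A single aggregate percentage like this is convenient for tracking refinement iterations, though it hides which features drive the discrepancy.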
Funding: Funded by the National Natural Science Foundation of China (Nos. 62275216 and 61775181), the Natural Science Basic Research Program of Shaanxi Province, Major Basic Research Special Project (Nos. S2018-ZC-TD-0061 and TZ0393), and the Special Project for the Development of National Key Scientific Instruments and Equipment (No. 51927804).
Abstract: Deep learning can greatly advance super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network that integrates DenseNet in the downsampling component. The proposed method has several notable properties: the network model is trained on several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy modalities, such as confocal and wide-field imaging; and the model generalizes to improve the resolution of biological structures outside the training datasets. In experiments, the method improved the resolution of caveolin-coated pits (CCPs) from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
Funding: Supported in part by the Science and Technology Development Fund, Macao S.A.R. (FDCT), under Grant 0028/2023/RIA1; in part by the Leading Talents in Gusu Innovation and Entrepreneurship Grant ZXL2023170; in part by the TCL Science and Technology Innovation Fund under Grant D5140240118; and in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021A1515110079.
Abstract: Generative adversarial networks (GANs), with their game-theoretic training, have been widely applied to image generation. However, the adversarial interplay of generator and discriminator may reduce the robustness of the resulting GANs when generating images across varying scenes. Strengthening the relation of hierarchical information within the generation network and enlarging the differences between network architectures can bring more structural information to bear and improve the generation effect. In this paper, we propose an enhanced GAN with an improved generator for image generation (EIGGAN). EIGGAN applies spatial attention in the generator to extract salient information and enhance the truthfulness of the generated images. Taking the relation of context into account, parallel residual operations are fused into the generation network to extract more structural information from the different layers. Finally, a mixed loss function trades off speed against accuracy to generate more realistic images. Experimental results show that the proposed method is superior to popular methods such as Wasserstein GAN with gradient penalty (WGAN-GP) on many indexes, including Fréchet Inception Distance, Learned Perceptual Image Patch Similarity, Multi-Scale Structural Similarity Index Measure, Kernel Inception Distance, Number of Statistically-Different Bins, and Inception Score, as well as in visual comparisons of the generated images.
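The spatial attention applied in the generator can be sketched generically: pool the channels at each spatial position into a saliency map and use it to reweight the features. EIGGAN's exact attention block is not specified in the abstract, so this is only the generic mechanism, in plain Python over nested lists rather than tensors.

```python
import math

def spatial_attention(feature_maps):
    """Minimal spatial attention: average the channels at each spatial
    position, squash with a sigmoid to get a saliency map in (0, 1),
    then reweight every channel by that map so salient positions are
    emphasized.  feature_maps is a list of channel x row x col values."""
    channels = len(feature_maps)
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    att = [[1.0 / (1.0 + math.exp(-sum(feature_maps[c][i][j]
                                       for c in range(channels)) / channels))
            for j in range(cols)] for i in range(rows)]
    return [[[feature_maps[c][i][j] * att[i][j] for j in range(cols)]
             for i in range(rows)] for c in range(channels)]
```

Real implementations compute the pooled map with learned convolutions and operate on GPU tensors, but the reweighting structure is the same.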