Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely the GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
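The abstract names the model only at a high level, so the following is a minimal PyTorch sketch of the core idea: a GRU that emits an entire flow curve for a given processing condition. The condition encoding, layer sizes, and strain discretization are assumptions, not the paper's tuned values.

```python
import torch
import torch.nn as nn

class FlowCurveGRU(nn.Module):
    """Maps a processing-condition vector (e.g., annealing temperature and
    time plus an encoded loading direction) to a full stress sequence over
    discretized strain steps. Layer sizes are illustrative only."""
    def __init__(self, cond_dim=3, hidden_dim=64, n_steps=100):
        super().__init__()
        self.n_steps = n_steps
        self.gru = nn.GRU(input_size=cond_dim, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)                 # stress at each strain step

    def forward(self, cond):                                 # cond: (batch, cond_dim)
        seq = cond.unsqueeze(1).repeat(1, self.n_steps, 1)   # repeat condition along the strain axis
        out, _ = self.gru(seq)                               # (batch, n_steps, hidden_dim)
        return self.head(out).squeeze(-1)                    # (batch, n_steps) flow curve

model = FlowCurveGRU()
curve = model(torch.tensor([[423.0, 2.0, 0.0]]))             # hypothetical condition encoding
```

Learning the entire curve as a sequence, rather than pointwise stress values, is what lets the recurrent state carry hardening behavior forward, which plausibly underlies the GRU's reported advantage in extrapolation.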
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. Noise addition is used for data augmentation: Gaussian noise with a level of 0.002 is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series into images. This enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied, showing excellent performance, and the detected anomalous areas are represented as heat maps. By applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, when processed as images rather than as time series, both the size of the data and the training time were significantly reduced. The proposed method can provide an important springboard for research in the field of anomaly detection using time-series data, and it helps keep the analysis of complex patterns in the data lightweight.
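As a concrete illustration of the two preprocessing steps the abstract specifies, here is a hedged numpy sketch: Gaussian-noise augmentation at the stated 0.002 level, and a textbook Markov Transition Field construction. The bin count is an assumption; the paper's choice may differ.

```python
import numpy as np

def augment_with_noise(x, level=0.002, rng=np.random.default_rng(0)):
    """Gaussian-noise augmentation; the 0.002 level follows the abstract."""
    return x + rng.normal(0.0, level, size=x.shape)

def markov_transition_field(x, n_bins=8):
    """Textbook MTF: quantile-bin the series, estimate the bin-to-bin
    transition matrix from consecutive points, then map every pair of
    time points (i, j) to the transition probability between their bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                          # bin index of each point
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):                    # count consecutive transitions
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-normalize
    return W[q[:, None], q[None, :]]                   # (len(x), len(x)) image

x = np.sin(np.linspace(0, 8 * np.pi, 256))             # stand-in sensor record
img = markov_transition_field(augment_with_noise(x))   # image fed to PatchCore
```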
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient real training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervised technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits exploring novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Damage to parcels reduces customer satisfaction with delivery services and increases return-logistics costs. This can be prevented by detecting and addressing the damage before the parcels reach the customer. Consequently, various studies have been conducted on deep learning techniques for detecting parcel damage. This study proposes a deep learning-based damage detection method for various types of parcels. The method is intended to be part of a parcel information-recognition system that identifies the volume and shipping information of parcels and determines whether they are damaged, for use in the actual parcel-transportation process. For this purpose, 1) image data were acquired in an environment simulating the actual parcel-transportation process, and 2) the training dataset was expanded based on StyleGAN3 with adaptive discriminator augmentation. Additionally, 3) a preliminary distinction was made between the appearance of parcels and their damage status to enhance the performance of the parcel damage detection model and analyze the causes of parcel damage. Finally, using the dataset constructed with the proposed method, a damage type detection model was trained, and its mean average precision was confirmed. This model can improve customer satisfaction and reduce return costs for parcel delivery companies.
Depth estimation is an important task in computer vision. Collecting data at scale for monocular depth estimation is challenging, as this task requires simultaneously capturing RGB images and depth information. Therefore, data augmentation is crucial for this task. Existing data augmentation methods often employ pixel-wise transformations, which may inadvertently disrupt edge features. In this paper, we propose a data augmentation method for monocular depth estimation, which we refer to as the Perpendicular-Cutdepth method. This method involves cutting real-world depth maps along perpendicular directions and pasting them onto input images, thereby diversifying the data without compromising edge features. To validate the effectiveness of the algorithm, we compared it against the current mainstream data augmentation algorithms on an existing convolutional neural network (CNN). Additionally, to verify the algorithm's applicability to Transformer networks, we designed a Transformer-based encoder-decoder network structure to assess the generalization of our proposed algorithm. Experimental results demonstrate that, in the field of monocular depth estimation, our proposed Perpendicular-Cutdepth method outperforms traditional data augmentation methods. On the indoor dataset NYU, our method increases accuracy from 0.900 to 0.907 and reduces the error rate from 0.357 to 0.351. On the outdoor dataset KITTI, our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.
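The following is a minimal numpy sketch of the cut-and-paste idea as we read it from the abstract: a strip of the ground-truth depth map, cut along a perpendicular (here vertical) direction, replaces the corresponding strip of the RGB input. The strip-width range and normalization are assumptions, not the authors' exact recipe.

```python
import numpy as np

def perpendicular_cutdepth(rgb, depth, rng=np.random.default_rng(0)):
    """Paste a vertical strip of the normalized depth map onto the RGB
    input. Because the strip spans a full image axis, object edges inside
    it are preserved rather than disturbed pixel-wise."""
    w = rgb.shape[1]
    width = int(rng.integers(w // 8, w // 4))      # strip width is a free choice here
    x0 = int(rng.integers(0, w - width))
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    out = rgb.copy()
    out[:, x0:x0 + width, :] = d[:, x0:x0 + width, None]  # broadcast depth to 3 channels
    return out

rgb = np.random.rand(480, 640, 3).astype(np.float32)      # stand-in NYU-sized frame
depth = np.random.rand(480, 640).astype(np.float32)
aug = perpendicular_cutdepth(rgb, depth)
```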
The limited amount of data in the healthcare domain and the necessity of training samples for increased performance of deep learning models are a recurrent challenge, especially in medical imaging. Newborn Solutions aims to enhance its non-invasive white blood cell counting device, Neosonics, by creating synthetic in vitro ultrasound images to facilitate a more efficient image generation process. This study addresses the data scarcity issue by designing and evaluating a continuous scalar conditional Generative Adversarial Network (GAN) to augment in vitro peritoneal dialysis ultrasound images, increasing both the volume and variability of training samples. The developed GAN architecture incorporates novel design features: varying kernel sizes in the generator's transposed convolutional layers and a latent intermediate space, projecting noise and condition values for enhanced image resolution and specificity. The experimental results show that the GAN successfully generated diverse images of high visual quality, closely resembling real ultrasound samples. While the visual results were promising, GAN-based data augmentation did not consistently improve the performance of an image regressor in distinguishing features specific to varied white blood cell concentrations. Ultimately, while this continuous scalar conditional GAN model made strides in generating realistic images, further work is needed to achieve consistent gains in regression tasks, aiming for robust model generalization.
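A hedged PyTorch sketch of the two architectural features the abstract names: a latent intermediate space that mixes projected noise with a projected continuous scalar condition, feeding transposed convolutions with varying kernel sizes. All class and dimension choices here are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ConditionalLatentHead(nn.Module):
    """Sketch of the 'latent intermediate space': noise and a scalar
    condition (e.g., a white-blood-cell concentration) are each projected,
    concatenated, and mixed before the transposed-convolution stack."""
    def __init__(self, z_dim=100, latent_dim=256):
        super().__init__()
        self.z_proj = nn.Linear(z_dim, latent_dim)
        self.c_proj = nn.Linear(1, latent_dim)        # continuous scalar condition
        self.mix = nn.Sequential(nn.Linear(2 * latent_dim, latent_dim), nn.ReLU())
        # Transposed convolutions with varying kernel sizes, per the abstract.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, kernel_size=4, stride=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=6, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=8, stride=2), nn.Tanh(),
        )

    def forward(self, z, c):                          # z: (B, z_dim), c: (B, 1)
        h = self.mix(torch.cat([self.z_proj(z), self.c_proj(c)], dim=1))
        return self.deconv(h[:, :, None, None])       # treat the latent as a 1x1 feature map

g = ConditionalLatentHead()
fake = g(torch.randn(2, 100), torch.tensor([[5.0], [20.0]]))  # two hypothetical concentrations
```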
A brain tumor is a lethal neurological disease that affects the normal functioning of the brain and can be fatal. In India, around 15 million cases are diagnosed yearly. To mitigate the seriousness of a tumor, it is essential to diagnose it at an early stage. However, the manual evaluation process using Magnetic Resonance Imaging (MRI) raises several concerns, notably inefficient and inaccurate brain tumor diagnoses. Likewise, the examination of brain tumors is intricate, as they vary widely in shape, size, appearance, and location. Therefore, a precise and expeditious prognosis of brain tumors is essential for planning an effective treatment. Several computational models have been adapted to diagnose tumors, but their accuracy needs to be tested. Considering the above, this work aims to identify the best classification system by comparing the prediction accuracy of AlexNet, ResNet50, and Inception V3. Data augmentation is performed on the database, which is then fed into the three convolutional neural network (CNN) models. A comparison is drawn between the three models based on accuracy and performance. An accuracy of 96.2% is obtained for AlexNet with augmentation, which performed better than ResNet50 and Inception V3 at the 120th epoch. With the suggested higher-accuracy model, brain tumor diagnosis on the available datasets is highly reliable.
It can be said that the automatic classification of musical genres plays a very important role in today's digital world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) data set. For Small FMA, it is more efficient to augment the data by generating an echo than by pitch shifting. The results show that the DenseNet121 model with data augmentation methods such as noise addition and echo generation achieves a classification accuracy of 98.97% on the Small FMA data set, with the data set downsampled to 16,000 Hz. The classification accuracy of this study outperforms the majority of previous results on the same Small FMA data set.
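A minimal numpy sketch of the two augmentations the abstract names, echo generation and noise addition. The delay, decay, and noise level are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

def add_echo(signal, sr=16000, delay_s=0.25, decay=0.5):
    """Echo augmentation: mix a delayed, attenuated copy back into the
    signal, i.e., y[n] = x[n] + decay * x[n - d]."""
    d = int(delay_s * sr)
    out = signal.copy()
    out[d:] += decay * signal[:-d]
    return out

def add_noise(signal, level=0.005, rng=np.random.default_rng(0)):
    """Additive Gaussian noise, the second augmentation named in the abstract."""
    return signal + rng.normal(0.0, level, size=signal.shape)

x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)   # 1 s of a 440 Hz tone at 16 kHz
x_aug = add_noise(add_echo(x))
```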
Object detection depends on various methods for duplicating the dataset without adding more images. Data augmentation is a popular method that assists deep neural networks in achieving better generalization performance and can be seen as a type of implicit regularization. This method is recommended when the amount of high-quality data is limited and gaining new examples is costly and time-consuming. In this paper, we trained YOLOv7 with a dataset that is part of the Open Images dataset, comprising 8,600 images with four classes (Car, Bus, Motorcycle, and Person). We used five different data augmentation techniques to duplicate and improve our dataset. The performance of the object detection algorithm when using the proposed augmented dataset, with combinations of two and three types of data augmentation, was compared with the results on the original data. The evaluation of the augmented data gives promising results for every object, and each kind of data augmentation yields a different improvement. The mAP@.5 of all classes was 76%, and the F1-score was 74%. The proposed method increased the mAP@.5 value by +13% and the F1-score by +10% for all objects.
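The abstract does not name its five augmentation techniques, so the following is one hedged, representative example for detection data: a horizontal flip. Unlike classification, the bounding boxes (YOLO-format here, an assumption) must be transformed along with the pixels.

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontal flip for detection data. boxes holds rows of
    (class, x_center, y_center, w, h) in normalized YOLO format;
    only x_center needs mirroring."""
    flipped = image[:, ::-1, :].copy()
    boxes = boxes.copy()
    boxes[:, 1] = 1.0 - boxes[:, 1]                 # mirror x_center; y, w, h unchanged
    return flipped, boxes

img = np.zeros((416, 416, 3), dtype=np.uint8)       # stand-in training image
labels = np.array([[0, 0.30, 0.50, 0.20, 0.40]])    # one hypothetical 'Car' box
img_f, labels_f = hflip_with_boxes(img, labels)
```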
Convolutional neural networks (CNNs) are well suited to bearing fault classification due to their ability to learn discriminative spectro-temporal patterns. However, gathering sufficient cases of faulty conditions in real-world engineering scenarios to train an intelligent diagnosis system is challenging. This paper proposes a fault diagnosis method combining several augmentation schemes to alleviate the problem of limited fault data. We begin by identifying relevant parameters that influence the construction of a spectrogram. We leverage the uncertainty principle of time-frequency signal processing, under which good time and frequency resolution cannot be achieved simultaneously. A key determinant of this trade-off is the choice and length of the window function used in implementing the short-time Fourier transform. The Gaussian, Kaiser, and rectangular windows are selected in the experimentation due to their diverse characteristics. The size of the overlap parameter also influences the outcome and resolution of the spectrogram. A 50% overlap is used in the original data transformation, and ±25% is used in implementing an effective augmentation policy, to which a two-stage regular CNN can be applied to achieve improved performance. The best model reaches an accuracy of 99.98% and a cross-domain accuracy of 92.54%. When combined with data augmentation, the proposed model yields cutting-edge results.
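A scipy sketch of the spectrogram augmentation policy described above: the three window families crossed with overlaps of 50% ± 25%, yielding multiple time-frequency views of the same signal. The Gaussian standard deviation, Kaiser beta, segment length, and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import stft

def spectrogram_views(x, fs=12000, nperseg=256):
    """Nine spectrogram variants per signal: {Gaussian, Kaiser, rectangular}
    windows x {25%, 50%, 75%} overlap, following the abstract's policy."""
    windows = [("gaussian", 32), ("kaiser", 8), "boxcar"]   # boxcar = rectangular
    overlaps = [0.25, 0.50, 0.75]                           # 50% +/- 25%
    views = []
    for win in windows:
        for ov in overlaps:
            _, _, Z = stft(x, fs=fs, window=win, nperseg=nperseg,
                           noverlap=int(ov * nperseg))
            views.append(np.abs(Z))                         # magnitude spectrogram
    return views

x = np.random.randn(12000)                                  # stand-in vibration record
specs = spectrogram_views(x)                                # 9 augmented views
```

Each view trades time resolution against frequency resolution differently, which is exactly the diversity the uncertainty principle makes unavoidable and the augmentation policy exploits.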
With the development of artificial intelligence-related technologies such as deep learning, various organizations, including the government, are making efforts to generate and manage big data for use in artificial intelligence. However, it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage. Many problems arise when introducing the technology in fields that lack the training data necessary to apply deep learning. Therefore, this study proposes a mixed contour data augmentation technique, which uses contour images, to solve the problem caused by a lack of data. ResNet, a well-known convolutional neural network (CNN) architecture, and CIFAR-10, a benchmark data set, are used for the experimental performance evaluation to prove the superiority of the proposed method. To show that a high performance improvement can be achieved even with a small training dataset, the ratio of the training dataset was divided into 70%, 50%, and 30% for comparative analysis. By applying the mixed contour data augmentation technique, a classification accuracy improvement of up to 4.64% was achieved, along with high accuracy even with a small amount of data. By proving its effectiveness on benchmark datasets, the mixed contour data augmentation technique is expected to be applicable in various fields.
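The abstract does not detail how contour images are mixed, so the following OpenCV sketch is only our reading of the idea: extract a contour image with an edge detector and blend it back into the original. The Canny thresholds and mixing weight are assumptions.

```python
import numpy as np
import cv2

def mixed_contour(image, alpha=0.5):
    """Blend an edge-detected contour image with the original sample.
    alpha controls the contour share; both it and the Canny thresholds
    are free choices in this sketch."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                       # binary contour map
    contour_rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(image, 1.0 - alpha, contour_rgb, alpha, 0)

img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)   # CIFAR-10-sized stand-in
aug = mixed_contour(img)
```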
Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output from the ATE model is used as the input to the APC model, which may result in error propagation among the different steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios, where the ATE task is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on the two subtasks simultaneously in a single model. This paper integrates the multi-task model, namely Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC), with the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints, and contextual embedding with AraBERT) to increase the diversity of the training dataset. This paper examines the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word embedding-based data augmentation method (FastText) on the combined dataset.
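A hedged sketch of the FastText word-embedding substitution idea using gensim: replace non-aspect tokens with their nearest embedding neighbor, subject to a similarity constraint, while leaving labeled aspect terms untouched so the ATE annotations stay valid. The similarity threshold and the vector file path are assumptions.

```python
from gensim.models.fasttext import load_facebook_vectors

def substitute(tokens, aspect_terms, wv, min_sim=0.7):
    """Embedding-substitution augmentation with a similarity constraint.
    Aspect terms are never replaced, so token-level labels remain aligned."""
    out = []
    for tok in tokens:
        if tok in aspect_terms:
            out.append(tok)                          # keep labeled aspects intact
            continue
        word, sim = wv.most_similar(tok, topn=1)[0]  # nearest FastText neighbor
        out.append(word if sim >= min_sim else tok)  # constrained substitution
    return out

# Usage with pretrained Arabic vectors (hypothetical local path to the
# cc.ar.300.bin release from fasttext.cc):
# wv = load_facebook_vectors("cc.ar.300.bin")
# augmented = substitute(tokens, aspect_terms, wv)
```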
In the machine learning (ML) paradigm, data augmentation serves as a regularization approach for creating ML models. Increasing the diversity of training samples increases generalization capability, which enhances the prediction performance of classifiers when tested on unseen examples. Deep learning (DL) models have many parameters and frequently overfit. To avoid overfitting, data augmentation plays a major role in the latest improvements in DL. Nevertheless, reliable data collection is a major limiting factor. Frequently, this problem is addressed by combining data augmentation, transfer learning, dropout, and batch normalization. In this paper, we introduce the application of data augmentation to image classification using Random Multi-model Deep Learning (RMDL), which combines multiple DL approaches to yield random models for classification. We present a methodology for using Generative Adversarial Networks (GANs) to generate images for data augmentation. Through experiments, we discover that samples generated by GANs, when fed into RMDL, improve both accuracy and model efficiency. Experiments on both the MNIST and CIFAR-10 datasets show that the error rate decreases with the proposed approach across different random models.
Convolutional neural networks (CNNs) are widely used to tackle complex tasks but are prone to overfitting if the datasets are noisy. Therefore, we propose a folding fan cropping and splicing (FFCS) regularization strategy to enhance the representation abilities of CNNs. In particular, we propose two different methods, considering the effect of different segmentation numbers on classification results: one is the random folding fan method, and the other is the fixed folding fan method. Experimental results showed that FFCS reduced classification errors by 0.88% on the CIFAR-10 dataset and by 1.86% on the ImageNet dataset. Moreover, FFCS consistently outperformed the Mixup and Random Erasing approaches on classification tasks. Therefore, FFCS effectively prevents overfitting and reduces the impact of background noise on classification tasks.
This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio device may frequently switch its target frequency as the electromagnetic environment changes. A previously trained prediction model often cannot maintain good performance when facing only a small amount of historical data for the new target frequency. Moreover, cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time available to recollect data for the new task frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GAN) and deep transfer learning. First, through similarity measurement, we pre-train a GAN model using the historical data of the frequency band that is most similar to the target band. Then, by feeding the small amount of target data into the pre-trained GAN for data augmentation, a temporal-spectral residual network is further trained via deep transfer learning on the generated high-similarity data. Finally, experimental results demonstrate the effectiveness of the proposed framework.
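A minimal PyTorch sketch of the transfer step only: a predictor pretrained on the most similar band is fine-tuned on the small real target set plus GAN-generated samples. The stand-in MLP, the frozen-layer choice, and all shapes are assumptions; the paper uses a temporal-spectral residual network.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 1))
# model.load_state_dict(torch.load("similar_band.pt"))  # hypothetical pretrained checkpoint

for p in model[0].parameters():          # freeze the earliest layer, retrain the rest
    p.requires_grad = False

opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
real = torch.randn(32, 64)               # small real target-band history windows
fake = torch.randn(128, 64)              # GAN-generated target-band windows
x, y = torch.cat([real, fake]), torch.randn(160, 1)

for _ in range(100):                     # brief fine-tuning loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```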
With the advent of deep learning, self-driving schemes based on deep learning are becoming more and more popular. Robust perception-action models should learn from data with different scenarios and real behaviors, while current end-to-end model learning is generally limited to training on massive data, innovation in deep network architecture, and learning an in-situ model in a simulation environment. Therefore, we introduce a new image style transfer method into data augmentation, improving the diversity of limited data by changing the texture, contrast ratio, and color of images, and thereby extending the data to scenarios the model has not observed before. Inspired by fast style transfer and neural algorithms of artistic style, we propose an arbitrary style generation network architecture comprising a style transfer network, a style learning network, a style loss network, and a multivariate Gaussian distribution function. A style embedding vector is randomly sampled from the multivariate Gaussian distribution and linearly interpolated with the embedding vector predicted from the input image by the style learning network; this provides a set of normalization constants for the style transfer network and finally realizes diversity of image style. To verify the effectiveness of the method, image classification and simulation experiments were performed separately. Finally, we built a small-sized smart car experimental platform and, for the first time, applied data augmentation based on image style transfer to an autonomous driving experiment. The experimental results show that: (1) the proposed scheme can improve the prediction accuracy of the end-to-end model and reduce the model's error accumulation; and (2) the image style transfer-based method provides a new scheme for data augmentation and offers a remedy for the high cost of the large amounts of labeled data on which many deep models rely.
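A numpy sketch of the style-sampling step described above: draw a style vector from a multivariate Gaussian, linearly interpolate it with the embedding predicted from the input image, and split the result into normalization constants. The mixing weight and the gamma/beta split are assumptions about how the constants are derived.

```python
import numpy as np

def sample_style_embedding(pred_embed, mu, cov, alpha=0.5, rng=np.random.default_rng(0)):
    """Draw a style vector from N(mu, cov) and blend it with the style
    learning network's prediction; the blended vector supplies the
    (scale, shift) normalization constants for the style transfer network."""
    z = rng.multivariate_normal(mu, cov)
    s = alpha * z + (1.0 - alpha) * pred_embed     # linear interpolation of embeddings
    d = s.shape[0] // 2
    return s[:d], s[d:]                            # (gammas, betas) for conditional norm

dim = 64
pred = np.random.randn(dim)                        # stand-in predicted style embedding
gammas, betas = sample_style_embedding(pred, np.zeros(dim), np.eye(dim))
```

Interpolating between a random style and the image's own predicted style is what lets a single network produce a continuum of plausible restylings rather than a fixed set.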
Artificial Intelligence (AI) has become a hotspot in the field of medical image analysis and provides rather promising solutions. Although some research has explored smart diagnosis for common diseases of the urinary system, some problems remain incompletely solved. A nine-layer Convolutional Neural Network (CNN) is proposed in this paper to classify renal Computed Tomography (CT) images. Four groups of comparative experiments show that the structure of this CNN is optimal and can achieve good performance, with an average accuracy of about 92.07 ± 1.67%. Although our renal CT dataset is not very large, we augment the training data with affine, translation, rotation, and scaling geometric transformations, as well as gamma and noise transformations in color space. Experimental results validate that data augmentation (DA) on the training data improves the average accuracy of our proposed CNN by about 0.85% compared with training without DA. The proposed algorithm offers a promising solution to help clinical doctors recognize abnormal images automatically, faster than manual judgment and more accurately than previous methods.
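A scipy/numpy sketch of the two augmentation families the abstract lists, applied to a 2-D CT slice: geometric (rotation, translation, scaling) and color-space (gamma, Gaussian noise). The parameter ranges are assumptions, not the paper's values.

```python
import numpy as np
from scipy import ndimage

def augment_ct(img, rng=np.random.default_rng(0)):
    """Geometric then color-space augmentation of a [0, 1] float slice."""
    out = ndimage.rotate(img, angle=rng.uniform(-15, 15), reshape=False, mode="nearest")
    out = ndimage.shift(out, shift=rng.uniform(-5, 5, size=2), mode="nearest")
    s = rng.uniform(0.9, 1.1)                              # scale about the image center
    center = (np.array(out.shape) - 1) / 2
    out = ndimage.affine_transform(out, np.eye(2) / s,
                                   offset=center - center / s, mode="nearest")
    out = np.clip(out, 0.0, 1.0) ** rng.uniform(0.7, 1.4)  # gamma transform
    out = out + rng.normal(0.0, 0.01, size=out.shape)      # additive noise
    return np.clip(out, 0.0, 1.0)

slice_ = np.random.rand(128, 128).astype(np.float32)       # stand-in renal CT slice
aug = augment_ct(slice_)
```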
Deep Learning (DL) techniques, as a subfield of data science, are getting overwhelming attention mainly because of their ability to understand the underlying patterns of data when making classifications. These techniques require a considerable amount of data to train DL models efficiently. Generally, when the data size is larger, DL models perform better. However, a considerable amount of data is not available in every domain, such as healthcare. In healthcare, it is nearly impossible to obtain a substantial amount of data to solve medical problems using Artificial Intelligence, mainly due to ethical issues and patient privacy. To address this small-dataset problem, different data augmentation techniques are used to increase the size of the training set. However, these techniques only change the shape of the image, so the classification model's accuracy does not increase. Generative Adversarial Networks (GANs) are very powerful techniques for augmenting training data, as new samples are created; this helps classification models increase their accuracy. In this paper, we investigate augmentation techniques for healthcare image classification. The objective of this research is to develop a novel augmentation technique that can increase the size of the training set, enabling deep learning techniques to achieve higher accuracy. We compared the performance of image classifiers using standard augmentation techniques and GANs. Our results demonstrate that GANs increase the training data, and the classifier ultimately achieves an accuracy of 90%, compared with up to 70% for standard data augmentation techniques. Other advanced CNN models were also tested, demonstrating that deeper architectures can achieve more than 98% accuracy in classifying Oral Squamous Cell Carcinoma.
An important issue for deep learning models is the acquisition of training data. Without abundant data from a real production environment for training, deep learning models would not be as widely used as they are today. However, the cost of obtaining abundant real-world data is high, especially for underwater environments. It is more straightforward to simulate data that is close to that from the real environment. In this paper, a simple and easy symmetric learning data augmentation model (SLDAM) is proposed for underwater target radiated-noise data expansion and generation. The SLDAM, taking the optimal classifier of an initial dataset as the discriminator, makes use of the structure of the classifier to construct a symmetric generator based on adversarial generation. It generates data similar to the initial dataset that can be used to supplement the training sets. The model takes both feature loss and sample loss functions into consideration during training and is able to reduce the dependence of the generation and expansion on the feature set. We verified that the SLDAM can perform data expansion with low computational complexity. Our results showed that the SLDAM is able to generate new data without compromising data recognition accuracy, making it practical for application in a production environment.
Generative adversarial networks (GANs) have considerable potential to alleviate challenges linked to data scarcity. Recent research has demonstrated the good performance of this method for data augmentation, because GANs synthesize semantically meaningful data from a standard signal distribution. The goal of this study was to solve the overfitting problem caused by training convolutional networks on a small dataset. In this context, we propose a data augmentation method based on an evolutionary generative adversarial network for cardiac magnetic resonance images to extend the training data. In our evolutionary GAN structure, the most optimal generator is chosen from many generator mutations, considering the quality and diversity of the generated images simultaneously. Also, to expand the distribution of the whole training set, we linearly interpolate feature vectors to synthesize new training samples together with correspondingly interpolated labels. This approach makes the discrete sample space continuous and improves the smoothness between domains. The data augmentation experiments show that the visual quality of the augmented cardiac magnetic resonance images is improved by our proposed method. In addition, the effectiveness of our proposed method is verified by classification experiments. The influence of the proportion of synthesized samples on the classification results of cardiac magnetic resonance images is also explored.
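The interpolation step described here is mixup-style; a minimal numpy sketch follows. The Beta-distributed mixing coefficient is a common choice and an assumption on our part.

```python
import numpy as np

def interpolate_pairs(x, y, alpha=0.3, rng=np.random.default_rng(0)):
    """Synthesize new samples as convex combinations of pairs of feature
    vectors, with labels interpolated by the same coefficient, making the
    discrete sample space continuous as the abstract describes."""
    lam = rng.beta(alpha, alpha, size=(len(x), 1))
    idx = rng.permutation(len(x))                    # random pairing of samples
    x_new = lam * x + (1 - lam) * x[idx]
    y_new = lam * y + (1 - lam) * y[idx]
    return x_new, y_new

feats = np.random.randn(16, 128)                     # stand-in CMR feature vectors
labels = np.eye(3)[np.random.randint(0, 3, 16)]      # one-hot class labels
f_aug, l_aug = interpolate_pairs(feats, labels)
```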
基金Korea Institute of Energy Technology Evaluation and Planning(KETEP)grant funded by the Korea government(Grant No.20214000000140,Graduate School of Convergence for Clean Energy Integrated Power Generation)Korea Basic Science Institute(National Research Facilities and Equipment Center)grant funded by the Ministry of Education(2021R1A6C101A449)the National Research Foundation of Korea grant funded by the Ministry of Science and ICT(2021R1A2C1095139),Republic of Korea。
文摘Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition.This characteristic results in a diverse range of flow curves that vary with a deformation condition.This study proposes a novel approach for accurately predicting an anisotropic deformation behavior of wrought Mg alloys using machine learning(ML)with data augmentation.The developed model combines four key strategies from data science:learning the entire flow curves,generative adversarial networks(GAN),algorithm-driven hyperparameter tuning,and gated recurrent unit(GRU)architecture.The proposed model,namely GAN-aided GRU,was extensively evaluated for various predictive scenarios,such as interpolation,extrapolation,and a limited dataset size.The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions.The GAN-aided GRU results were superior to those of previous ML models and constitutive equations.The superior performance was attributed to hyperparameter optimization,GAN-based data augmentation,and the inherent predictivity of the GRU for extrapolation.As a first attempt to employ ML techniques other than artificial neural networks,this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
基金This research was financially supported by the Ministry of Trade,Industry,and Energy(MOTIE),Korea,under the“Project for Research and Development with Middle Markets Enterprises and DNA(Data,Network,AI)Universities”(AI-based Safety Assessment and Management System for Concrete Structures)(ReferenceNumber P0024559)supervised by theKorea Institute for Advancement of Technology(KIAT).
文摘Time-series data provide important information in many fields,and their processing and analysis have been the focus of much research.However,detecting anomalies is very difficult due to data imbalance,temporal dependence,and noise.Therefore,methodologies for data augmentation and conversion of time series data into images for analysis have been studied.This paper proposes a fault detection model that uses time series data augmentation and transformation to address the problems of data imbalance,temporal dependence,and robustness to noise.The method of data augmentation is set as the addition of noise.It involves adding Gaussian noise,with the noise level set to 0.002,to maximize the generalization performance of the model.In addition,we use the Markov Transition Field(MTF)method to effectively visualize the dynamic transitions of the data while converting the time series data into images.It enables the identification of patterns in time series data and assists in capturing the sequential dependencies of the data.For anomaly detection,the PatchCore model is applied to show excellent performance,and the detected anomaly areas are represented as heat maps.It allows for the detection of anomalies,and by applying an anomaly map to the original image,it is possible to capture the areas where anomalies occur.The performance evaluation shows that both F1-score and Accuracy are high when time series data is converted to images.Additionally,when processed as images rather than as time series data,there was a significant reduction in both the size of the data and the training time.The proposed method can provide an important springboard for research in the field of anomaly detection using time series data.Besides,it helps solve problems such as analyzing complex patterns in data lightweight.
基金Project supported by the National Key Research and Development Program of China(Grant No.2022YFB2803900)the National Natural Science Foundation of China(Grant Nos.61974075 and 61704121)+2 种基金the Natural Science Foundation of Tianjin Municipality(Grant Nos.22JCZDJC00460 and 19JCQNJC00700)Tianjin Municipal Education Commission(Grant No.2019KJ028)Fundamental Research Funds for the Central Universities(Grant No.22JCZDJC00460).
文摘Mechanically cleaved two-dimensional materials are random in size and thickness.Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production.Deep learning algorithms have been adopted as an alternative,nevertheless a major challenge is a lack of sufficient actual training images.Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset.DeepLabv3Plus network is trained with the synthetic images which reduces overfitting and improves recognition accuracy to over 90%.A semi-supervisory technique for labeling images is introduced to reduce manual efforts.The sharper edges recognized by this method facilitate material stacking with precise edge alignment,which benefits exploring novel properties of layered-material devices that crucially depend on the interlayer twist-angle.This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
基金supported by a Korea Agency for Infrastructure Technology Advancement(KAIA)grant funded by the Ministry of Land,Infrastructure,and Transport(Grant 1615013176)(https://www.kaia.re.kr/eng/main.do,accessed on 01/06/2024)supported by a Korea Evaluation Institute of Industrial Technology(KEIT)grant funded by the Korean Government(MOTIE)(141518499)(https://www.keit.re.kr/index.es?sid=a2,accessed on 01/06/2024).
文摘Damage to parcels reduces customer satisfactionwith delivery services and increases return-logistics costs.This can be prevented by detecting and addressing the damage before the parcels reach the customer.Consequently,various studies have been conducted on deep learning techniques related to the detection of parcel damage.This study proposes a deep learning-based damage detectionmethod for various types of parcels.Themethod is intended to be part of a parcel information-recognition systemthat identifies the volume and shipping information of parcels,and determines whether they are damaged;this method is intended for use in the actual parcel-transportation process.For this purpose,1)the study acquired image data in an environment simulating the actual parcel-transportation process,and 2)the training dataset was expanded based on StyleGAN3 with adaptive discriminator augmentation.Additionally,3)a preliminary distinction was made between the appearance of parcels and their damage status to enhance the performance of the parcel damage detection model and analyze the causes of parcel damage.Finally,using the dataset constructed based on the proposed method,a damage type detection model was trained,and its mean average precision was confirmed.This model can improve customer satisfaction and reduce return costs for parcel delivery companies.
基金the Grant of Program for Scientific ResearchInnovation Team in Colleges and Universities of Anhui Province(2022AH010095)The Grant ofScientific Research and Talent Development Foundation of the Hefei University(No.21-22RC15)+2 种基金The Key Research Plan of Anhui Province(No.2022k07020011)The Grant of Anhui Provincial940 CMC,2024,vol.79,no.1Natural Science Foundation,No.2308085MF213The Open Fund of Information Materials andIntelligent Sensing Laboratory of Anhui Province IMIS202205,as well as the AI General ComputingPlatform of Hefei University.
文摘Depth estimation is an important task in computer vision.Collecting data at scale for monocular depth estimation is challenging,as this task requires simultaneously capturing RGB images and depth information.Therefore,data augmentation is crucial for this task.Existing data augmentationmethods often employ pixel-wise transformations,whichmay inadvertently disrupt edge features.In this paper,we propose a data augmentationmethod formonocular depth estimation,which we refer to as the Perpendicular-Cutdepth method.This method involves cutting realworld depth maps along perpendicular directions and pasting them onto input images,thereby diversifying the data without compromising edge features.To validate the effectiveness of the algorithm,we compared it with existing convolutional neural network(CNN)against the current mainstream data augmentation algorithms.Additionally,to verify the algorithm’s applicability to Transformer networks,we designed an encoder-decoder network structure based on Transformer to assess the generalization of our proposed algorithm.Experimental results demonstrate that,in the field of monocular depth estimation,our proposed method,Perpendicular-Cutdepth,outperforms traditional data augmentationmethods.On the indoor dataset NYU,our method increases accuracy from0.900 to 0.907 and reduces the error rate from0.357 to 0.351.On the outdoor dataset KITTI,our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.
文摘The limited amount of data in the healthcare domain and the necessity of training samples for increased performance of deep learning models is a recurrent challenge,especially in medical imaging.Newborn Solutions aims to enhance its non-invasive white blood cell counting device,Neosonics,by creating synthetic in vitro ultrasound images to facilitate a more efficient image generation process.This study addresses the data scarcity issue by designing and evaluating a continuous scalar conditional Generative Adversarial Network(GAN)to augment in vitro peritoneal dialysis ultrasound images,increasing both the volume and variability of training samples.The developed GAN architecture incorporates novel design features:varying kernel sizes in the generator’s transposed convolutional layers and a latent intermediate space,projecting noise and condition values for enhanced image resolution and specificity.The experimental results show that the GAN successfully generated diverse images of high visual quality,closely resembling real ultrasound samples.While visual results were promising,the use of GAN-based data augmentation did not consistently improve the performance of an image regressor in distinguishing features specific to varied white blood cell concentrations.Ultimately,while this continuous scalar conditional GAN model made strides in generating realistic images,further work is needed to achieve consistent gains in regression tasks,aiming for robust model generalization.
基金Ahmed Alhussen would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No.R-2022-####.
文摘A brain tumor is a lethal neurological disease that affects the average performance of the brain and can be fatal.In India,around 15 million cases are diagnosed yearly.To mitigate the seriousness of the tumor it is essential to diagnose at the beginning.Notwithstanding,the manual evaluation process utilizing Magnetic Resonance Imaging(MRI)causes a few worries,remarkably inefficient and inaccurate brain tumor diagnoses.Similarly,the examination process of brain tumors is intricate as they display high unbalance in nature like shape,size,appearance,and location.Therefore,a precise and expeditious prognosis of brain tumors is essential for implementing the of an implicit treatment.Several computer models adapted to diagnose the tumor,but the accuracy of the model needs to be tested.Considering all the above mentioned things,this work aims to identify the best classification system by considering the prediction accuracy out of Alex-Net,ResNet 50,and Inception V3.Data augmentation is performed on the database and fed into the three convolutions neural network(CNN)models.A comparison line is drawn between the three models based on accuracy and performance.An accuracy of 96.2%is obtained for AlexNet with augmentation and performed better than ResNet 50 and Inception V3 for the 120th epoch.With the suggested model with higher accuracy,it is highly reliable if brain tumors are diagnosed with available datasets.
基金The authors received the research fun T2022-CN-006 for this study.
文摘It can be said that the automatic classification of musical genres plays a very important role in the current digital technology world in which the creation,distribution,and enjoyment of musical works have undergone huge changes.As the number ofmusic products increases daily and themusic genres are extremely rich,storing,classifying,and searching these works manually becomes difficult,if not impossible.Automatic classification ofmusical genres will contribute to making this possible.The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using Small Free Music Archive(FMA)data set.For Small FMA,it is more efficient to augment the data by generating an echo rather than pitch shifting.The research results show that the DenseNet121 model and data augmentation methods,such as noise addition and echo generation,have a classification accuracy of 98.97%for the Small FMA data set,while this data set lowered the sampling frequency to 16000 Hz.The classification accuracy of this study outperforms that of the majority of the previous results on the same Small FMA data set.
基金the United States Air Force Office of Scientific Research(AFOSR)contract FA9550-22-1-0268 awarded to KHA,https://www.afrl.af.mil/AFOSR/.The contract is entitled:“Investigating Improving Safety of Autonomous Exploring Intelligent Agents with Human-in-the-Loop Reinforcement Learning,”and in part by Jackson State University.
文摘The object detection technique depends on various methods for duplicating the dataset without adding more images.Data augmentation is a popularmethod that assists deep neural networks in achieving better generalization performance and can be seen as a type of implicit regularization.Thismethod is recommended in the casewhere the amount of high-quality data is limited,and gaining new examples is costly and time-consuming.In this paper,we trained YOLOv7 with a dataset that is part of the Open Images dataset that has 8,600 images with four classes(Car,Bus,Motorcycle,and Person).We used five different data augmentations techniques for duplicates and improvement of our dataset.The performance of the object detection algorithm was compared when using the proposed augmented dataset with a combination of two and three types of data augmentation with the result of the original data.The evaluation result for the augmented data gives a promising result for every object,and every kind of data augmentation gives a different improvement.The mAP@.5 of all classes was 76%,and F1-score was 74%.The proposed method increased the mAP@.5 value by+13%and F1-score by+10%for all objects.
基金supported by the National Natural Science Foundation of China(42027805)the National Aeronautical Fund(ASFC-20172080005)。
文摘Convolutional neural networks(CNNs)are well suited to bearing fault classification due to their ability to learn discriminative spectro-temporal patterns.However,gathering sufficient cases of faulty conditions in real-world engineering scenarios to train an intelligent diagnosis system is challenging.This paper proposes a fault diagnosis method combining several augmentation schemes to alleviate the problem of limited fault data.We begin by identifying relevant parameters that influence the construction of a spectrogram.We leverage the uncertainty principle in processing time-frequency domain signals,making it impossible to simultaneously achieve good time and frequency resolutions.A key determinant of this phenomenon is the window function's choice and length used in implementing the shorttime Fourier transform.The Gaussian,Kaiser,and rectangular windows are selected in the experimentation due to their diverse characteristics.The overlap parameter's size also influences the outcome and resolution of the spectrogram.A 50%overlap is used in the original data transformation,and±25%is used in implementing an effective augmentation policy to which two-stage regular CNN can be applied to achieve improved performance.The best model reaches an accuracy of 99.98%and a cross-domain accuracy of 92.54%.When combined with data augmentation,the proposed model yields cutting-edge results.
文摘With the development of artificial intelligence-related technologies such as deep learning,various organizations,including the government,are making various efforts to generate and manage big data for use in artificial intelligence.However,it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage.There are many problems in introducing technology in fields that do not have enough training data necessary to apply deep learning technology.Therefore,this study proposes a mixed contour data augmentation technique,which is a data augmentation technique using contour images,to solve a problem caused by a lack of data.ResNet,a famous convolutional neural network(CNN)architecture,and CIFAR-10,a benchmark data set,are used for experimental performance evaluation to prove the superiority of the proposed method.And to prove that high performance improvement can be achieved even with a small training dataset,the ratio of the training dataset was divided into 70%,50%,and 30%for comparative analysis.As a result of applying the mixed contour data augmentation technique,it was possible to achieve a classification accuracy improvement of up to 4.64%and high accuracy even with a small amount of data set.In addition,it is expected that the mixed contour data augmentation technique can be applied in various fields by proving the excellence of the proposed data augmentation technique using benchmark datasets.
文摘Aspect-based sentiment analysis(ABSA)is a fine-grained process.Its fundamental subtasks are aspect termextraction(ATE)and aspect polarity classification(APC),and these subtasks are dependent and closely related.However,most existing works on Arabic ABSA content separately address them,assume that aspect terms are preidentified,or use a pipeline model.Pipeline solutions design different models for each task,and the output from the ATE model is used as the input to the APC model,which may result in error propagation among different steps because APC is affected by ATE error.These methods are impractical for real-world scenarios where the ATE task is the base task for APC,and its result impacts the accuracy of APC.Thus,in this study,we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on two subtasks simultaneously in a singlemodel.This paper integrates themulti-task model,namely Local Cotext Foucse-Aspect Term Extraction and Polarity classification(LCF-ATEPC)and Arabic Bidirectional Encoder Representation from Transformers(AraBERT)as a shred layer for Arabic contextual text representation.The LCF-ATEPC model is based on a multi-head selfattention and local context focus mechanism(LCF)to capture the interactive information between an aspect and its context.Moreover,data augmentation techniques are proposed based on state-of-the-art augmentation techniques(word embedding substitution with constraints and contextual embedding(AraBERT))to increase the diversity of the training dataset.This paper examined the effect of data augmentation on the multi-task model for Arabic ABSA.Extensive experiments were conducted on the original and combined datasets(merging the original and augmented datasets).Experimental results demonstrate that the proposed Multi-task model outperformed existing APC techniques.Superior results were obtained by AraBERT and LCF-ATEPC with fusion layer(AR-LCF-ATEPC-Fusion)and the proposed data augmentation word embedding-based method(FastText)on the combined dataset.
基金The researchers would like to thank the Deanship of Scientific Research,Qassim University for funding the publication of this project.
文摘In the machine learning(ML)paradigm,data augmentation serves as a regularization approach for creating ML models.The increase in the diversification of training samples increases the generalization capabilities,which enhances the prediction performance of classifiers when tested on unseen examples.Deep learning(DL)models have a lot of parameters,and they frequently overfit.Effectively,to avoid overfitting,data plays a major role to augment the latest improvements in DL.Nevertheless,reliable data collection is a major limiting factor.Frequently,this problem is undertaken by combining augmentation of data,transfer learning,dropout,and methods of normalization in batches.In this paper,we introduce the application of data augmentation in the field of image classification using Random Multi-model Deep Learning(RMDL)which uses the association approaches of multi-DL to yield random models for classification.We present a methodology for using Generative Adversarial Networks(GANs)to generate images for data augmenting.Through experiments,we discover that samples generated by GANs when fed into RMDL improve both accuracy and model efficiency.Experimenting across both MNIST and CIAFAR-10 datasets show that,error rate with proposed approach has been decreased with different random models.
文摘Convolutional neural networks(CNNs)are widely used to tackling complex tasks,which are prone to overfitting if the datasets are noisy.Therefore,we propose folding fan cropping and splicing(FFCS)regularization strategy to enhance representation abilities of CNNs.In particular,we propose two different methods considering the effect of different segmentation numbers on classification results.One is the random folding fan method,and the other is the fixed folding fan method.Experimental results showed that FFCS reduced the classification errors both with the value of 0.88%on CIFAR-10 dataset and 1.86%on ImageNet dataset.Moreover,FFCS consistently outperformed Mixup and Random Erasing approaches on classification tasks.Therefore,FFCS effectively prevents overfitting and reduces the impact of background noises on classification tasks.
基金This work was supported by the Science and Technology Innovation 2030-Key Project of“New Generation Artificial Intelligence”of China under Grant 2018AAA0102303the Natural Science Foundation for Distinguished Young Scholars of Jiangsu Province(No.BK20190030)the National Natural Science Foundation of China(No.61631020,No.61871398,No.61931011 and No.U20B2038).
文摘This paper investigates the problem of data scarcity in spectrum prediction.A cognitive radio equipment may frequently switch the target frequency as the electromagnetic environment changes.The previously trained model for prediction often cannot maintain a good performance when facing small amount of historical data of the new target frequency.Moreover,the cognitive radio equipment usually implements the dynamic spectrum access in real time which means the time to recollect the data of the new task frequency band and retrain the model is very limited.To address the above issues,we develop a crossband data augmentation framework for spectrum prediction by leveraging the recent advances of generative adversarial network(GAN)and deep transfer learning.Firstly,through the similarity measurement,we pre-train a GAN model using the historical data of the frequency band that is the most similar to the target frequency band.Then,through the data augmentation by feeding the small amount of the target data into the pre-trained GAN,temporal-spectral residual network is further trained using deep transfer learning and the generated data with high similarity from GAN.Finally,experiment results demonstrate the effectiveness of the proposed framework.
Funding: This work was supported by the National Natural Science Foundation of China (51965008); the Science and Technology Projects of Guizhou ([2018]2168); and the Excellent Young Researcher Project of Guizhou ([2017]5630).
Abstract: With the advent of deep learning, self-driving schemes based on deep learning are becoming increasingly popular. Robust perception-action models should learn from data covering diverse scenarios and real behaviors, whereas current end-to-end model learning is generally limited to training on massive data, innovations in deep network architecture, and in-situ model learning in simulation environments. We therefore introduce a new image style transfer method for data augmentation that improves the diversity of limited data by changing the texture, contrast, and color of images, thereby extending the model to previously unobserved scenarios. Inspired by fast style transfer and neural algorithms of artistic style, we propose an arbitrary style generation network architecture comprising a style transfer network, a style learning network, a style loss network, and a multivariate Gaussian distribution. A style embedding vector is randomly sampled from the multivariate Gaussian distribution and linearly interpolated with the embedding vector predicted from the input image by the style learning network; the result provides a set of normalization constants for the style transfer network and ultimately diversifies the image style. To verify the effectiveness of the method, image classification and simulation experiments were performed separately. Finally, we built a small smart-car experimental platform and, for the first time, applied style-transfer-based data augmentation to an autonomous driving experiment. The experimental results show that: (1) the proposed scheme improves the prediction accuracy of the end-to-end model and reduces the model's error accumulation; (2) the style-transfer-based method provides a new scheme for data augmentation and offers a remedy for the high cost incurred when deep models rely heavily on large amounts of labeled data.
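The sampling-and-interpolation step can be sketched in a few lines of PyTorch. The Cholesky parameterisation of the Gaussian, the interpolation weight alpha, and the to_gamma/to_beta projection layers are hypothetical choices; the abstract only states that the blended embedding supplies normalization constants, and conditional instance normalization is one standard way to realize that:

import torch

def mixed_style_embedding(pred_embed, mean, cov_chol, alpha=0.5):
    # Blend a randomly sampled style vector with the embedding the
    # style learning network predicts from the input image.
    z = mean + cov_chol @ torch.randn_like(mean)   # z ~ N(mean, cov)
    return alpha * pred_embed + (1 - alpha) * z

def conditional_instance_norm(x, embed, to_gamma, to_beta, eps=1e-5):
    # Use the blended embedding as per-channel normalization constants.
    # x: (N, C, H, W); to_gamma/to_beta: hypothetical linear layers
    # mapping the embedding to C scales and shifts.
    mu = x.mean(dim=(2, 3), keepdim=True)
    var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
    gamma = to_gamma(embed)[:, :, None, None]
    beta = to_beta(embed)[:, :, None, None]
    return gamma * (x - mu) / torch.sqrt(var + eps) + beta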
Funding: This study was supported by the National Educational Science Plan Foundation in the 13th Five-Year Plan (DIA170375), China; the Guangxi Key Laboratory of Trusted Software (kx201901); and a British Heart Foundation Accelerator Award, UK.
Abstract: Artificial Intelligence (AI) has become a hotspot in medical image analysis and offers promising solutions. Although smart diagnosis of common urinary-system diseases has been explored, some problems remain unsolved. A nine-layer Convolutional Neural Network (CNN) is proposed in this paper to classify renal Computed Tomography (CT) images. Four groups of comparative experiments show that the structure of this CNN is optimal, achieving good performance with an average accuracy of about 92.07 ± 1.67%. Although our renal CT dataset is not large, we augment the training data with geometric transformations (affine, translation, rotation, and scaling) and color-space transformations (gamma adjustment and noise). Experimental results validate that data augmentation (DA) on the training data improves the performance of the proposed CNN, raising average accuracy by about 0.85% compared with training without DA. The proposed algorithm offers a promising solution to help clinicians recognize abnormal images faster than manual judgment and more accurately than previous methods.
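A torchvision version of this recipe might look as follows, assuming a recent torchvision that applies these transforms to float tensors in [0, 1]; the parameter ranges and noise level are illustrative assumptions rather than the values tuned in the study, which the abstract does not give:

import torch
from torchvision import transforms as T
from torchvision.transforms import functional as TF

def random_gamma(img, lo=0.7, hi=1.5):
    # Colour-space gamma jitter with a randomly drawn exponent.
    return TF.adjust_gamma(img, float(torch.empty(1).uniform_(lo, hi)))

# Geometric (affine/translate/rotate/scale) plus colour-space (gamma,
# additive noise) augmentation, mirroring the transformations listed above.
augment = T.Compose([
    T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    T.Lambda(random_gamma),
    T.Lambda(lambda x: (x + 0.01 * torch.randn_like(x)).clamp(0.0, 1.0)),
])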
Funding: Supported by Taif University Researchers Supporting Project No. TURSP-2020/254, Taif University, Taif, Saudi Arabia.
Abstract: Deep Learning (DL) techniques, as a subfield of data science, are receiving overwhelming attention, mainly because of their ability to learn the underlying patterns of data for classification. These techniques require a considerable amount of data to train DL models efficiently; generally, the larger the dataset, the better the models perform. However, a considerable amount of data is not available in every domain. In healthcare in particular, it is impossible to amass substantial data for solving medical problems with Artificial Intelligence, mainly due to ethical issues and patient privacy. To address small datasets, various data augmentation techniques are used to increase the size of the training set; however, such techniques only transform existing images and therefore do little to improve a classifier's accuracy. Generative Adversarial Networks (GANs) are powerful techniques for augmenting training data because genuinely new samples are created, which helps classification models increase their accuracy. In this paper, we investigate augmentation techniques for healthcare image classification. The objective of this research is to develop a novel augmentation technique that increases the size of the training set so that deep learning techniques can achieve higher accuracy. We compare the performance of image classifiers using standard augmentation techniques and GANs. Our results demonstrate that with GAN-augmented training data the classifier achieves an accuracy of 90%, compared with up to 70% using standard data augmentation. Other advanced CNN models were also tested, demonstrating that deeper architectures can achieve more than 98% accuracy in classifying Oral Squamous Cell Carcinoma.
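The comparison reported above reduces to a simple protocol: train the same classifier once on classically augmented data and once on GAN-augmented data, then score both on a held-out set. The sketch below captures that protocol; all callables are hypothetical placeholders, since the abstract does not fix the classifier or the GAN:

def compare_augmentation(train_fn, eval_fn, x, y, strategies):
    # Train one classifier per augmentation strategy and collect
    # held-out accuracy. `strategies` maps a name (e.g. 'classic',
    # 'gan') to a callable (x, y) -> (x_aug, y_aug).
    results = {}
    for name, augment in strategies.items():
        x_aug, y_aug = augment(x, y)      # enlarge the training set
        model = train_fn(x_aug, y_aug)    # identical training recipe
        results[name] = eval_fn(model)    # e.g. held-out accuracy
    return results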
Funding: This work was funded by the National Natural Science Foundation of China under Grants No. 61772152 and No. 61502037; the Basic Research Project (No. JCKY2016206B001, JCKY2014206C002, and JCKY2017604C010); and the Technical Foundation Project (No. JSQB2017206C002).
Abstract: An important issue for deep learning models is the acquisition of training data. Without abundant data from real production environments for training, deep learning models would not be as widely used as they are today. However, the cost of obtaining abundant real-world data is high, especially for underwater environments, so it is more straightforward to simulate data that is close to the real environment. In this paper, a simple symmetric learning data augmentation model (SLDAM) is proposed for expanding and generating underwater target radiated-noise data. SLDAM takes the optimal classifier of an initial dataset as the discriminator and exploits the structure of that classifier to construct a symmetric generator based on adversarial generation. It generates data similar to the initial dataset that can be used to supplement the training set. The model incorporates both a feature loss and a sample loss during training and is able to reduce the dependence of generation and expansion on the feature set. We verified that SLDAM can expand data with low computational complexity. Our results showed that SLDAM generates new data without compromising recognition accuracy, making it suitable for practical application in production environments.
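The abstract names two training terms, a sample loss and a feature loss; one plausible PyTorch reading is sketched below. The mean-squared formulations and the disc_features hook (intermediate activations of the classifier-turned-discriminator) are assumptions, as SLDAM's exact loss definitions are not given here:

import torch.nn.functional as F

def sldam_losses(x_fake, x_real, disc_features):
    # Sketch of SLDAM's two terms: a sample loss pulling generated
    # data toward real samples and a feature loss matching intermediate
    # discriminator/classifier features.
    sample_loss = F.mse_loss(x_fake, x_real)
    f_fake = disc_features(x_fake)
    f_real = disc_features(x_real).detach()   # fixed target features
    feature_loss = F.mse_loss(f_fake, f_real)
    return sample_loss, feature_loss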
Funding: This work received funding in part from the Sichuan Science and Technology Program (http://kjt.sc.gov.cn/) under Grant 2019ZDZX0005 and the Chinese Scholarship Council (https://www.csc.edu.cn/) under Grant 201908515022.
Abstract: Generative adversarial networks (GANs) have considerable potential to alleviate challenges linked to data scarcity. Recent research has demonstrated the good performance of this method for data augmentation, because GANs synthesize semantically meaningful data from a standard signal distribution. The goal of this study was to solve the overfitting problem caused by training convolutional networks on a small dataset. In this context, we propose a data augmentation method based on an evolutionary generative adversarial network for cardiac magnetic resonance images to extend the training data. In our evolutionary GAN structure, the optimal generator is selected from many generator mutations by jointly considering the quality and diversity of the generated images. Also, to expand the distribution of the whole training set, we linearly interpolate feature vectors to synthesize new training samples, together with correspondingly interpolated labels. This approach makes the discrete sample space continuous and improves the smoothness between domains. Data augmentation experiments show that the proposed method improves the visual quality of the augmented cardiac magnetic resonance images, and classification experiments further verify its effectiveness. The influence of the proportion of synthesized samples on the classification results of cardiac magnetic resonance images is also explored.
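The interpolation step is essentially a mixup-style operation on feature vectors and labels; a minimal sketch is given below. Drawing the mixing weight from a uniform distribution is an assumption, since the abstract does not specify the sampling scheme:

import numpy as np

def interpolate_pair(x1, y1, x2, y2, lam=None, rng=None):
    # Linearly interpolate two training samples (or their feature
    # vectors) and their one-hot labels with the same weight, turning
    # the discrete sample space into a continuous one.
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.uniform() if lam is None else lam
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2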