Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, termed GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
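The abstract does not detail the network itself, but the core idea of a GRU that maps a deformation condition to an entire flow curve can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input features (strain steps plus two hypothetical condition features such as an annealing temperature and a loading-direction code) and all sizes are assumptions.

```python
# Minimal sketch of a GRU flow-curve regressor: per-step strain plus static
# condition features in, per-step stress out. All dimensions are illustrative.
import torch
import torch.nn as nn

class FlowCurveGRU(nn.Module):
    def __init__(self, n_condition_features: int, hidden_size: int = 64):
        super().__init__()
        # input at each step: current strain value + static condition features
        self.gru = nn.GRU(1 + n_condition_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # stress prediction per step

    def forward(self, strain: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # strain: (batch, steps, 1); condition: (batch, n_condition_features)
        cond = condition.unsqueeze(1).expand(-1, strain.size(1), -1)
        h, _ = self.gru(torch.cat([strain, cond], dim=-1))
        return self.head(h).squeeze(-1)  # (batch, steps) predicted stress

model = FlowCurveGRU(n_condition_features=2)
strain = torch.linspace(0, 0.15, 100).view(1, 100, 1)   # toy strain axis
condition = torch.tensor([[400.0, 0.0]])                 # hypothetical temperature, direction code
stress = model(strain, condition)                        # (1, 100)
```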
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The data augmentation method is the addition of noise: Gaussian noise, with the noise level set to 0.002, is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time-series data into images. This enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, and the detected anomaly areas are represented as heat maps. By applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, when processed as images rather than as time series, both the size of the data and the training time were significantly reduced. The proposed method can provide an important springboard for research in anomaly detection using time-series data and helps keep the analysis of complex patterns in the data lightweight.
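The two preprocessing ideas named in this abstract can be illustrated directly. The sketch below is not the paper's code: it adds Gaussian noise with the reported standard deviation of 0.002 and builds a basic Markov Transition Field from scratch; the quantile-binning scheme and the bin count are assumptions for illustration only.

```python
# Minimal sketch: Gaussian-noise augmentation plus a basic Markov Transition Field
# that maps a 1-D series to a 2-D image of bin-to-bin transition probabilities.
import numpy as np

def add_gaussian_noise(x: np.ndarray, sigma: float = 0.002) -> np.ndarray:
    return x + np.random.normal(0.0, sigma, size=x.shape)

def markov_transition_field(x: np.ndarray, n_bins: int = 8) -> np.ndarray:
    # assign each point to a quantile bin (0 .. n_bins-1)
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)
    # estimate the Markov transition matrix between consecutive bins
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalise, avoid /0
    # MTF[i, j] = probability of moving from bin(x_i) to bin(x_j)
    return W[np.ix_(bins, bins)]

series = np.sin(np.linspace(0, 6 * np.pi, 128))
image = markov_transition_field(add_gaussian_noise(series))  # (128, 128) image
```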
Depth estimation is an important task in computer vision. Collecting data at scale for monocular depth estimation is challenging, as this task requires simultaneously capturing RGB images and depth information. Therefore, data augmentation is crucial for this task. Existing data augmentation methods often employ pixel-wise transformations, which may inadvertently disrupt edge features. In this paper, we propose a data augmentation method for monocular depth estimation, which we refer to as the Perpendicular-Cutdepth method. This method involves cutting real-world depth maps along perpendicular directions and pasting them onto input images, thereby diversifying the data without compromising edge features. To validate the effectiveness of the algorithm, we compared it against the current mainstream data augmentation algorithms on an existing convolutional neural network (CNN). Additionally, to verify the algorithm's applicability to Transformer networks, we designed a Transformer-based encoder-decoder network to assess the generalization of our proposed algorithm. Experimental results demonstrate that, in the field of monocular depth estimation, our proposed Perpendicular-Cutdepth method outperforms traditional data augmentation methods. On the indoor dataset NYU, our method increases accuracy from 0.900 to 0.907 and reduces the error rate from 0.357 to 0.351. On the outdoor dataset KITTI, our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.
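The abstract only outlines the cut-and-paste idea, so the following is a rough sketch of one possible reading of it, not the paper's actual policy: a strip of the ground-truth depth map, cut along the vertical (perpendicular) direction, is pasted onto the RGB input. The strip-width range and placement are arbitrary illustrative choices.

```python
# Rough sketch of a vertical-strip cut-and-paste augmentation on an RGB/depth pair.
# Strip width, placement, and the depth normalisation are assumptions.
import numpy as np

def perpendicular_cutdepth(rgb: np.ndarray, depth: np.ndarray, rng=np.random) -> np.ndarray:
    h, w, _ = rgb.shape
    strip_w = rng.randint(w // 8, w // 4)       # hypothetical strip-width range
    x0 = rng.randint(0, w - strip_w)
    out = rgb.copy()
    # normalise the depth strip to [0, 255] and broadcast it across the RGB channels
    strip = depth[:, x0:x0 + strip_w].astype(np.float32)
    strip = 255 * (strip - strip.min()) / (strip.max() - strip.min() + 1e-6)
    out[:, x0:x0 + strip_w, :] = strip[..., None].astype(rgb.dtype)
    return out

rgb = np.zeros((480, 640, 3), dtype=np.uint8)              # stand-in RGB frame
depth = np.random.rand(480, 640).astype(np.float32)        # stand-in depth map
augmented = perpendicular_cutdepth(rgb, depth)
```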
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is a lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional-materials images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits exploring novel properties of layered-material devices that depend crucially on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract. These data are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
A brain tumor is a lethal neurological disease that affects the normal functioning of the brain and can be fatal. In India, around 15 million cases are diagnosed yearly. To mitigate the seriousness of a tumor, it is essential to diagnose it early. However, the manual evaluation process using Magnetic Resonance Imaging (MRI) raises several concerns, most notably inefficient and inaccurate brain tumor diagnoses. Similarly, the examination of brain tumors is intricate, as they are highly variable in shape, size, appearance, and location. Therefore, a precise and expeditious prognosis of brain tumors is essential for implementing appropriate treatment. Several computer models have been adapted to diagnose tumors, but their accuracy needs to be tested. Considering all of the above, this work aims to identify the best classification system by comparing the prediction accuracy of AlexNet, ResNet 50, and Inception V3. Data augmentation is performed on the database and fed into the three convolutional neural network (CNN) models. A comparison is drawn between the three models based on accuracy and performance. An accuracy of 96.2% is obtained for AlexNet with augmentation, which performed better than ResNet 50 and Inception V3 at the 120th epoch. With the suggested model's higher accuracy, brain tumor diagnosis on the available datasets is highly reliable.
Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output from the ATE model is used as the input to the APC model, which may result in error propagation among different steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios where the ATE task is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on the two subtasks simultaneously in a single model. This paper integrates the multi-task model, namely Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC), and the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding (AraBERT)) to increase the diversity of the training dataset. This paper examines the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word embedding-based data augmentation method (FastText) on the combined dataset.
Convolutional neural networks (CNNs) are well suited to bearing fault classification due to their ability to learn discriminative spectro-temporal patterns. However, gathering sufficient cases of faulty conditions in real-world engineering scenarios to train an intelligent diagnosis system is challenging. This paper proposes a fault diagnosis method combining several augmentation schemes to alleviate the problem of limited fault data. We begin by identifying relevant parameters that influence the construction of a spectrogram. We leverage the uncertainty principle in processing time-frequency domain signals, which makes it impossible to simultaneously achieve good time and frequency resolutions. A key determinant of this phenomenon is the choice and length of the window function used in implementing the short-time Fourier transform. The Gaussian, Kaiser, and rectangular windows are selected in the experimentation due to their diverse characteristics. The size of the overlap parameter also influences the outcome and resolution of the spectrogram. A 50% overlap is used in the original data transformation, and ±25% is used in implementing an effective augmentation policy to which a two-stage regular CNN can be applied to achieve improved performance. The best model reaches an accuracy of 99.98% and a cross-domain accuracy of 92.54%. When combined with data augmentation, the proposed model yields cutting-edge results.
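The spectrogram-level augmentation described here can be sketched with standard STFT tooling. The following is a minimal illustration under assumptions (window lengths, the Gaussian standard deviation, and the Kaiser beta are illustrative, not the paper's values): one vibration segment is transformed with Gaussian, Kaiser, and rectangular windows, and the overlap is perturbed by ±25% around the 50% baseline.

```python
# Minimal sketch: nine spectrogram views of one signal, varying the STFT window
# function and the overlap (50% baseline, ±25%). Parameter values are assumptions.
import numpy as np
from scipy.signal import stft

def spectrograms(x: np.ndarray, fs: float, nperseg: int = 256):
    windows = [("gaussian", ("gaussian", nperseg / 8)),  # std is an assumed value
               ("kaiser", ("kaiser", 14)),               # beta is an assumed value
               ("rectangular", "boxcar")]
    out = {}
    for name, win in windows:
        for overlap in (0.25, 0.50, 0.75):               # 50% baseline, ±25%
            _, _, Z = stft(x, fs=fs, window=win, nperseg=nperseg,
                           noverlap=int(overlap * nperseg))
            out[(name, overlap)] = np.abs(Z)             # magnitude spectrogram
    return out

signal = np.random.randn(12000)          # stand-in for a vibration segment
specs = spectrograms(signal, fs=12000)   # 9 augmented views of one segment
```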
In the machine learning (ML) paradigm, data augmentation serves as a regularization approach for creating ML models. Increasing the diversity of training samples increases the generalization capability, which enhances the prediction performance of classifiers when tested on unseen examples. Deep learning (DL) models have many parameters and frequently overfit. To avoid overfitting, data play a major role in driving the latest improvements in DL. Nevertheless, reliable data collection is a major limiting factor. Frequently, this problem is addressed by combining data augmentation, transfer learning, dropout, and batch normalization methods. In this paper, we introduce the application of data augmentation to image classification using Random Multi-model Deep Learning (RMDL), which combines multiple DL approaches to yield random models for classification. We present a methodology for using Generative Adversarial Networks (GANs) to generate images for data augmentation. Through experiments, we discover that samples generated by GANs, when fed into RMDL, improve both accuracy and model efficiency. Experiments across both the MNIST and CIFAR-10 datasets show that the error rate of the proposed approach decreases across different random models.
The automatic classification of musical genres plays a very important role in today's digital technology world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) dataset. For Small FMA, it is more efficient to augment the data by generating an echo than by pitch shifting. The results show that the DenseNet121 model and data augmentation methods such as noise addition and echo generation achieve a classification accuracy of 98.97% on the Small FMA dataset when its sampling frequency is lowered to 16,000 Hz. The classification accuracy of this study outperforms the majority of previous results on the same Small FMA dataset.
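The echo-generation and noise-addition augmentations named here are simple waveform operations. The sketch below is an illustration only, not the paper's implementation: an echo is a delayed, attenuated copy of the signal mixed back in, and the delay, decay, and noise-scale values are assumptions.

```python
# Minimal sketch of echo and noise augmentation on a raw waveform at 16 kHz.
# Delay, decay, and noise scale are illustrative assumptions.
import numpy as np

def add_echo(wave: np.ndarray, sr: int = 16000, delay_s: float = 0.25,
             decay: float = 0.4) -> np.ndarray:
    d = int(delay_s * sr)
    out = wave.astype(np.float32).copy()
    out[d:] += decay * wave[:-d]          # mix in the delayed, attenuated copy
    return out / np.max(np.abs(out))      # renormalise to avoid clipping

def add_noise(wave: np.ndarray, scale: float = 0.005) -> np.ndarray:
    return wave + scale * np.random.randn(len(wave))

clip = np.random.randn(16000 * 3).astype(np.float32)  # stand-in 3-second clip at 16 kHz
augmented = add_noise(add_echo(clip))
```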
With the development of artificial intelligence-related technologies such as deep learning, various organizations, including governments, are making efforts to generate and manage big data for use in artificial intelligence. However, it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage. Many problems arise when introducing the technology in fields that lack the training data needed to apply deep learning. Therefore, this study proposes a mixed contour data augmentation technique, a data augmentation technique using contour images, to solve the problem caused by a lack of data. ResNet, a well-known convolutional neural network (CNN) architecture, and CIFAR-10, a benchmark dataset, are used for experimental performance evaluation to prove the superiority of the proposed method. To show that a high performance improvement can be achieved even with a small training dataset, the training-dataset ratio was set to 70%, 50%, and 30% for comparative analysis. By applying the mixed contour data augmentation technique, a classification accuracy improvement of up to 4.64% was achieved, along with high accuracy even with a small dataset. In addition, by demonstrating the excellence of the proposed technique on benchmark datasets, we expect that the mixed contour data augmentation technique can be applied in various fields.
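The abstract does not specify how the contour images are mixed with the originals, so the following is a rough sketch of one plausible reading, not the authors' method: an edge (contour) map is extracted from each sample and blended back into it. The Canny thresholds and the mixing weight are assumptions.

```python
# Rough sketch: extract a contour map with Canny and blend it into the original image.
# Thresholds and blending weight are illustrative assumptions.
import cv2
import numpy as np

def mixed_contour(image: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                 # single-channel contour map
    edges3 = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)  # back to 3 channels
    return cv2.addWeighted(image, 1.0 - alpha, edges3, alpha, 0)

img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)  # stand-in for a CIFAR-10 image
aug = mixed_contour(img)
```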
Object detection relies on various methods for expanding the dataset without adding more images. Data augmentation is a popular method that helps deep neural networks achieve better generalization performance and can be seen as a type of implicit regularization. This method is recommended when the amount of high-quality data is limited and gaining new examples is costly and time-consuming. In this paper, we trained YOLOv7 with a dataset that is part of the Open Images dataset and contains 8,600 images with four classes (Car, Bus, Motorcycle, and Person). We used five different data augmentation techniques to duplicate and improve our dataset. The performance of the object detection algorithm on the proposed augmented dataset, using combinations of two and three types of data augmentation, was compared with the results on the original data. The evaluation of the augmented data gives a promising result for every object, and each kind of data augmentation gives a different improvement. The mAP@.5 across all classes was 76%, and the F1-score was 74%. The proposed method increased the mAP@.5 value by +13% and the F1-score by +10% for all objects.
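Because detection labels must track the image geometry, augmentation for a detector is usually applied in a box-aware pipeline. The sketch below illustrates this with five stand-in transforms; the abstract does not name the five techniques actually used, so flip, brightness/contrast, rotation, blur, and HSV shift are assumptions, as are all parameter values.

```python
# Minimal sketch of a bounding-box-aware augmentation pipeline for YOLO-format labels.
# The five transforms and their parameters are illustrative stand-ins.
import numpy as np
import albumentations as A

transform = A.Compose(
    [A.HorizontalFlip(p=0.5),
     A.RandomBrightnessContrast(p=0.5),
     A.Rotate(limit=10, p=0.5),
     A.Blur(p=0.3),
     A.HueSaturationValue(p=0.3)],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = (np.random.rand(640, 640, 3) * 255).astype(np.uint8)
bboxes = [(0.5, 0.5, 0.2, 0.3)]          # one box in normalised YOLO format
out = transform(image=image, bboxes=bboxes, class_labels=["car"])
aug_image, aug_boxes = out["image"], out["bboxes"]   # boxes follow the geometry
```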
Convolutional neural networks (CNNs) are widely used to tackle complex tasks but are prone to overfitting if the datasets are noisy. Therefore, we propose a folding fan cropping and splicing (FFCS) regularization strategy to enhance the representation abilities of CNNs. In particular, we propose two different methods that consider the effect of different segmentation numbers on classification results: a random folding fan method and a fixed folding fan method. Experimental results showed that FFCS reduced the classification error by 0.88% on the CIFAR-10 dataset and by 1.86% on the ImageNet dataset. Moreover, FFCS consistently outperformed the Mixup and Random Erasing approaches on classification tasks. Therefore, FFCS effectively prevents overfitting and reduces the impact of background noise on classification tasks.
Encrypted traffic classification has become a hot issue in network security research. The class imbalance of traffic samples often causes the performance of machine learning-based classifiers to deteriorate. Although the Generative Adversarial Network (GAN) method can generate new samples by learning the feature distribution of the original samples, it is confronted with the problems of unstable training and mode collapse. To this end, a novel data augmentation approach called Graph CWGAN-GP is proposed in this paper. The traffic data are first converted into grayscale images as the input to the proposed model. Then, the minority-class data are augmented with our proposed model, which is built by introducing conditional constraints and a new distance metric into a typical GAN. Finally, a classical deep learning model is adopted as a classifier to classify datasets augmented by the Conditional GAN (CGAN), Wasserstein GAN with Gradient Penalty (WGAN-GP), and Graph CWGAN-GP, respectively. Compared with the state-of-the-art GAN methods, the Graph CWGAN-GP not only controls the modes of the generated data but also overcomes the problem of unstable training and generates more realistic and diverse samples. The experimental results show that the classification precision, recall, and F1-score of the minority class in the balanced dataset augmented in this paper improve by more than 2.37%, 3.39%, and 4.57%, respectively.
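The WGAN-GP component that Graph CWGAN-GP builds on addresses unstable training by penalising the critic's gradient norm on interpolates between real and generated samples. The sketch below shows only that standard gradient-penalty term on image-shaped inputs such as the grayscale traffic images; the conditional constraints and the new distance metric described above are not reproduced, and the toy critic and shapes are assumptions.

```python
# Minimal sketch of the standard WGAN-GP gradient penalty on image-shaped inputs.
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp)
    grads, = torch.autograd.grad(outputs=score.sum(), inputs=interp, create_graph=True)
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()   # penalise deviation from norm 1

critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 1))  # toy critic
real = torch.rand(8, 1, 28, 28)   # stand-in grayscale traffic images
fake = torch.rand(8, 1, 28, 28)
gp = gradient_penalty(critic, real, fake)
# full critic loss in WGAN-GP form (lambda = 10 is the usual setting):
# d_loss = critic(fake).mean() - critic(real).mean() + 10 * gp
```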
With the advent of deep learning, self-driving schemes based on deep learning are becoming more and more popular. Robust perception-action models should learn from data with different scenarios and real behaviors, while current end-to-end model learning is generally limited to training on massive data, innovation in deep network architecture, and learning the in-situ model in a simulation environment. Therefore, we introduce a new image style transfer method into data augmentation and improve the diversity of limited data by changing the texture, contrast ratio, and color of the image, which is then extended to scenarios that the model has not observed before. Inspired by fast style transfer and artistic-style neural algorithms, we propose an arbitrary style generation network architecture, including a style transfer network, a style learning network, a style loss network, and a multivariate Gaussian distribution function. The style embedding vector is randomly sampled from the multivariate Gaussian distribution and linearly interpolated with the embedding vector predicted from the input image by the style learning network, which provides a set of normalization constants for the style transfer network and finally realizes the diversity of image styles. To verify the effectiveness of the method, image classification and simulation experiments were performed separately. Finally, we built a small-sized smart car experiment platform and applied the data augmentation technology based on image style transfer to an automatic driving experiment for the first time. The experimental results show that: (1) the proposed scheme can improve the prediction accuracy of the end-to-end model and reduce the model's error accumulation; and (2) the method based on image style transfer provides a new scheme for data augmentation and also offers a solution to the high cost of many deep models that rely heavily on large amounts of labeled data.
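The style-sampling step described above reduces to drawing a style embedding from a multivariate Gaussian and linearly interpolating it with the embedding predicted by the style learning network. The sketch below shows only that step under assumptions: the embedding dimension, the mixing weight, and the zero-mean identity-covariance Gaussian are illustrative, and the surrounding networks are omitted.

```python
# Minimal sketch of the style-embedding sampling and linear interpolation step.
# Dimensions, mixing weight, and Gaussian parameters are assumptions.
import torch

def sample_style(pred_embedding: torch.Tensor, mean: torch.Tensor,
                 cov: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    z = dist.sample((pred_embedding.size(0),))        # randomly sampled style vector
    return alpha * pred_embedding + (1 - alpha) * z   # linear interpolation

dim = 100
pred = torch.randn(4, dim)   # embeddings predicted from 4 input images (stand-in)
style = sample_style(pred, torch.zeros(dim), torch.eye(dim))
# `style` would then supply the normalisation constants of the style transfer network.
```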
This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio device may frequently switch its target frequency as the electromagnetic environment changes. The previously trained prediction model often cannot maintain good performance when facing a small amount of historical data for the new target frequency. Moreover, the cognitive radio device usually implements dynamic spectrum access in real time, which means the time available to recollect data for the new task frequency band and retrain the model is very limited. To address the above issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GAN) and deep transfer learning. First, through similarity measurement, we pre-train a GAN model using the historical data of the frequency band that is most similar to the target frequency band. Then, after data augmentation by feeding the small amount of target data into the pre-trained GAN, a temporal-spectral residual network is further trained using deep transfer learning and the generated high-similarity data from the GAN. Finally, experimental results demonstrate the effectiveness of the proposed framework.
The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing reliability-weight and actual-virtual-weight functions and developing a double-weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
Intelligent identification of sandstone slice images using deep learning technology is the development trend of mineral identification, and accurate mineral particle segmentation is the most critical step for intelligent identification. A typical identification model requires many training samples to learn as many distinguishable features as possible. However, limited by the difficulty of data acquisition, the high cost of labeling, and privacy protection, the number of samples is sparse and cannot meet the training requirements of deep learning image identification models. To increase the number of samples and improve the training of deep learning models, this paper proposes a tight sandstone image data augmentation method that combines the advantages of data deformation and data oversampling, taking the Putaohua reservoir in the Sanzhao Sag of the Songliao Basin as the target area. First, the Style Generative Adversarial Network (StyleGAN) is improved to generate high-resolution tight sandstone images to improve data diversity. Second, we improve the Automatic Data Augmentation (AutoAugment) algorithm to search for the optimal augmentation strategy to expand the data scale. Finally, we design comparison experiments to demonstrate that this method has obvious advantages in the quality of generated images and in improving the identification performance of deep learning models in real application scenarios.
Sarcasm detection in text data is an increasingly vital area of research due to the prevalence of sarcastic content in online communication. This study addresses the challenges associated with small datasets and class imbalance in sarcasm detection by employing comprehensive data pre-processing and Generative Adversarial Network (GAN)-based augmentation on diverse datasets, including iSarcasm, SemEval-18, and Ghosh. This research offers a novel pipeline for augmenting sarcasm data with a Reverse Generative Adversarial Network (RGAN). The proposed RGAN method works by inverting labels between original and synthetic data during the training process. This inversion of labels provides feedback to the generator for generating high-quality data that closely resembles the original distribution. Notably, the proposed RGAN model exhibits performance on par with a standard GAN, showcasing its robust efficacy in augmenting text data. The exploration of various datasets highlights the nuanced impact of augmentation on model performance, with cautionary insights into maintaining a delicate balance between synthetic and original data. The methodological framework encompasses comprehensive data pre-processing and GAN-based augmentation, with a meticulous comparison against Natural Language Processing Augmentation (NLPAug) as an alternative augmentation technique. Overall, the F1-score of our proposed technique outperforms that of the synonym replacement augmentation technique using NLPAug. The increase in F1-score in experiments using RGAN ranged from 0.066% to 1.054%, and the use of a standard GAN resulted in a 2.88% increase in F1-score. The proposed RGAN model outperformed the NLPAug method and demonstrated performance comparable to a standard GAN, emphasizing its efficacy in text data augmentation.
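The label-inversion idea can be illustrated on a generic discriminator update. The sketch below is a simplified stand-in, not the paper's text-generation setup: real samples are labelled as fake and generated samples as real, the reverse of the standard GAN assignment, and the toy discriminator and feature dimensions are assumptions.

```python
# Minimal sketch of a discriminator step with inverted (reversed) labels.
import torch
import torch.nn.functional as F

def discriminator_step_rgan(disc, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # a standard GAN would use ones for real and zeros for fake; the labels are swapped here
    loss_real = F.binary_cross_entropy_with_logits(disc(real), torch.zeros(real.size(0), 1))
    loss_fake = F.binary_cross_entropy_with_logits(disc(fake), torch.ones(fake.size(0), 1))
    return loss_real + loss_fake

disc = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
real_feats = torch.randn(8, 128)   # stand-in sentence embeddings of original data
fake_feats = torch.randn(8, 128)   # stand-in embeddings of generated (synthetic) data
loss = discriminator_step_rgan(disc, real_feats, fake_feats)
```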
Women from middle age to old age are the most frequently screened positive for breast cancer, which can lead to death. Over the past decades, the overall survival rate in breast cancer has improved due to advancements in early-stage diagnosis and tailored therapy. Today, hospitals bring high awareness and early detection technologies for breast cancer, which increases the survival rate of women. Although traditional breast cancer treatment takes a long time, early detection techniques require an automated system. This research provides a new methodology for classifying breast cancer from ultrasound pictures that uses deep learning and the fusion of the best characteristics. Initially, after successful training of convolutional neural network (CNN) algorithms, data augmentation is used to enhance the representation of the feature dataset. Then, BreastNet18 with a fine-tuned VGG-16 model is used for pre-training on the augmented dataset. For feature classification, the Entropy-controlled Whale Optimization Algorithm (EWOA) is used. The features optimized using the EWOA were utilized to fuse and optimize the data. Trained classifiers are used to identify the breast cancer pictures. Using a novel probability-based serial technique, the best-chosen characteristics are fused and categorized by machine learning techniques. The main objective of this research is to increase tumor prediction accuracy to save human lives. Testing was performed using the Breast Ultrasound Images (BUSI) dataset. The proposed method improves accuracy compared with existing methods.
基金Korea Institute of Energy Technology Evaluation and Planning(KETEP)grant funded by the Korea government(Grant No.20214000000140,Graduate School of Convergence for Clean Energy Integrated Power Generation)Korea Basic Science Institute(National Research Facilities and Equipment Center)grant funded by the Ministry of Education(2021R1A6C101A449)the National Research Foundation of Korea grant funded by the Ministry of Science and ICT(2021R1A2C1095139),Republic of Korea。
文摘Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition.This characteristic results in a diverse range of flow curves that vary with a deformation condition.This study proposes a novel approach for accurately predicting an anisotropic deformation behavior of wrought Mg alloys using machine learning(ML)with data augmentation.The developed model combines four key strategies from data science:learning the entire flow curves,generative adversarial networks(GAN),algorithm-driven hyperparameter tuning,and gated recurrent unit(GRU)architecture.The proposed model,namely GAN-aided GRU,was extensively evaluated for various predictive scenarios,such as interpolation,extrapolation,and a limited dataset size.The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions.The GAN-aided GRU results were superior to those of previous ML models and constitutive equations.The superior performance was attributed to hyperparameter optimization,GAN-based data augmentation,and the inherent predictivity of the GRU for extrapolation.As a first attempt to employ ML techniques other than artificial neural networks,this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
基金This research was financially supported by the Ministry of Trade,Industry,and Energy(MOTIE),Korea,under the“Project for Research and Development with Middle Markets Enterprises and DNA(Data,Network,AI)Universities”(AI-based Safety Assessment and Management System for Concrete Structures)(ReferenceNumber P0024559)supervised by theKorea Institute for Advancement of Technology(KIAT).
文摘Time-series data provide important information in many fields,and their processing and analysis have been the focus of much research.However,detecting anomalies is very difficult due to data imbalance,temporal dependence,and noise.Therefore,methodologies for data augmentation and conversion of time series data into images for analysis have been studied.This paper proposes a fault detection model that uses time series data augmentation and transformation to address the problems of data imbalance,temporal dependence,and robustness to noise.The method of data augmentation is set as the addition of noise.It involves adding Gaussian noise,with the noise level set to 0.002,to maximize the generalization performance of the model.In addition,we use the Markov Transition Field(MTF)method to effectively visualize the dynamic transitions of the data while converting the time series data into images.It enables the identification of patterns in time series data and assists in capturing the sequential dependencies of the data.For anomaly detection,the PatchCore model is applied to show excellent performance,and the detected anomaly areas are represented as heat maps.It allows for the detection of anomalies,and by applying an anomaly map to the original image,it is possible to capture the areas where anomalies occur.The performance evaluation shows that both F1-score and Accuracy are high when time series data is converted to images.Additionally,when processed as images rather than as time series data,there was a significant reduction in both the size of the data and the training time.The proposed method can provide an important springboard for research in the field of anomaly detection using time series data.Besides,it helps solve problems such as analyzing complex patterns in data lightweight.
基金the Grant of Program for Scientific ResearchInnovation Team in Colleges and Universities of Anhui Province(2022AH010095)The Grant ofScientific Research and Talent Development Foundation of the Hefei University(No.21-22RC15)+2 种基金The Key Research Plan of Anhui Province(No.2022k07020011)The Grant of Anhui Provincial940 CMC,2024,vol.79,no.1Natural Science Foundation,No.2308085MF213The Open Fund of Information Materials andIntelligent Sensing Laboratory of Anhui Province IMIS202205,as well as the AI General ComputingPlatform of Hefei University.
文摘Depth estimation is an important task in computer vision.Collecting data at scale for monocular depth estimation is challenging,as this task requires simultaneously capturing RGB images and depth information.Therefore,data augmentation is crucial for this task.Existing data augmentationmethods often employ pixel-wise transformations,whichmay inadvertently disrupt edge features.In this paper,we propose a data augmentationmethod formonocular depth estimation,which we refer to as the Perpendicular-Cutdepth method.This method involves cutting realworld depth maps along perpendicular directions and pasting them onto input images,thereby diversifying the data without compromising edge features.To validate the effectiveness of the algorithm,we compared it with existing convolutional neural network(CNN)against the current mainstream data augmentation algorithms.Additionally,to verify the algorithm’s applicability to Transformer networks,we designed an encoder-decoder network structure based on Transformer to assess the generalization of our proposed algorithm.Experimental results demonstrate that,in the field of monocular depth estimation,our proposed method,Perpendicular-Cutdepth,outperforms traditional data augmentationmethods.On the indoor dataset NYU,our method increases accuracy from0.900 to 0.907 and reduces the error rate from0.357 to 0.351.On the outdoor dataset KITTI,our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.
基金Project supported by the National Key Research and Development Program of China(Grant No.2022YFB2803900)the National Natural Science Foundation of China(Grant Nos.61974075 and 61704121)+2 种基金the Natural Science Foundation of Tianjin Municipality(Grant Nos.22JCZDJC00460 and 19JCQNJC00700)Tianjin Municipal Education Commission(Grant No.2019KJ028)Fundamental Research Funds for the Central Universities(Grant No.22JCZDJC00460).
文摘Mechanically cleaved two-dimensional materials are random in size and thickness.Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production.Deep learning algorithms have been adopted as an alternative,nevertheless a major challenge is a lack of sufficient actual training images.Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset.DeepLabv3Plus network is trained with the synthetic images which reduces overfitting and improves recognition accuracy to over 90%.A semi-supervisory technique for labeling images is introduced to reduce manual efforts.The sharper edges recognized by this method facilitate material stacking with precise edge alignment,which benefits exploring novel properties of layered-material devices that crucially depend on the interlayer twist-angle.This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
文摘Background A task assigned to space exploration satellites involves detecting the physical environment within a certain space.However,space detection data are complex and abstract.These data are not conducive for researchers'visual perceptions of the evolution and interaction of events in the space environment.Methods A time-series dynamic data sampling method for large-scale space was proposed for sample detection data in space and time,and the corresponding relationships between data location features and other attribute features were established.A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data.The visualization process is optimized for rendering by merging materials,reducing the number of patches,and performing other operations.Results The results of sampling,feature extraction,and uniform visualization of the detection data of complex types,long duration spans,and uneven spatial distributions were obtained.The real-time visualization of large-scale spatial structures using augmented reality devices,particularly low-performance devices,was also investigated.Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space,express the structure and changes in the spatial environment using augmented reality,and assist in intuitively discovering spatial environmental events and evolutionary rules.
基金Ahmed Alhussen would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No.R-2022-####.
文摘A brain tumor is a lethal neurological disease that affects the average performance of the brain and can be fatal.In India,around 15 million cases are diagnosed yearly.To mitigate the seriousness of the tumor it is essential to diagnose at the beginning.Notwithstanding,the manual evaluation process utilizing Magnetic Resonance Imaging(MRI)causes a few worries,remarkably inefficient and inaccurate brain tumor diagnoses.Similarly,the examination process of brain tumors is intricate as they display high unbalance in nature like shape,size,appearance,and location.Therefore,a precise and expeditious prognosis of brain tumors is essential for implementing the of an implicit treatment.Several computer models adapted to diagnose the tumor,but the accuracy of the model needs to be tested.Considering all the above mentioned things,this work aims to identify the best classification system by considering the prediction accuracy out of Alex-Net,ResNet 50,and Inception V3.Data augmentation is performed on the database and fed into the three convolutions neural network(CNN)models.A comparison line is drawn between the three models based on accuracy and performance.An accuracy of 96.2%is obtained for AlexNet with augmentation and performed better than ResNet 50 and Inception V3 for the 120th epoch.With the suggested model with higher accuracy,it is highly reliable if brain tumors are diagnosed with available datasets.
文摘Aspect-based sentiment analysis(ABSA)is a fine-grained process.Its fundamental subtasks are aspect termextraction(ATE)and aspect polarity classification(APC),and these subtasks are dependent and closely related.However,most existing works on Arabic ABSA content separately address them,assume that aspect terms are preidentified,or use a pipeline model.Pipeline solutions design different models for each task,and the output from the ATE model is used as the input to the APC model,which may result in error propagation among different steps because APC is affected by ATE error.These methods are impractical for real-world scenarios where the ATE task is the base task for APC,and its result impacts the accuracy of APC.Thus,in this study,we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on two subtasks simultaneously in a singlemodel.This paper integrates themulti-task model,namely Local Cotext Foucse-Aspect Term Extraction and Polarity classification(LCF-ATEPC)and Arabic Bidirectional Encoder Representation from Transformers(AraBERT)as a shred layer for Arabic contextual text representation.The LCF-ATEPC model is based on a multi-head selfattention and local context focus mechanism(LCF)to capture the interactive information between an aspect and its context.Moreover,data augmentation techniques are proposed based on state-of-the-art augmentation techniques(word embedding substitution with constraints and contextual embedding(AraBERT))to increase the diversity of the training dataset.This paper examined the effect of data augmentation on the multi-task model for Arabic ABSA.Extensive experiments were conducted on the original and combined datasets(merging the original and augmented datasets).Experimental results demonstrate that the proposed Multi-task model outperformed existing APC techniques.Superior results were obtained by AraBERT and LCF-ATEPC with fusion layer(AR-LCF-ATEPC-Fusion)and the proposed data augmentation word embedding-based method(FastText)on the combined dataset.
基金supported by the National Natural Science Foundation of China(42027805)the National Aeronautical Fund(ASFC-20172080005)。
文摘Convolutional neural networks(CNNs)are well suited to bearing fault classification due to their ability to learn discriminative spectro-temporal patterns.However,gathering sufficient cases of faulty conditions in real-world engineering scenarios to train an intelligent diagnosis system is challenging.This paper proposes a fault diagnosis method combining several augmentation schemes to alleviate the problem of limited fault data.We begin by identifying relevant parameters that influence the construction of a spectrogram.We leverage the uncertainty principle in processing time-frequency domain signals,making it impossible to simultaneously achieve good time and frequency resolutions.A key determinant of this phenomenon is the window function's choice and length used in implementing the shorttime Fourier transform.The Gaussian,Kaiser,and rectangular windows are selected in the experimentation due to their diverse characteristics.The overlap parameter's size also influences the outcome and resolution of the spectrogram.A 50%overlap is used in the original data transformation,and±25%is used in implementing an effective augmentation policy to which two-stage regular CNN can be applied to achieve improved performance.The best model reaches an accuracy of 99.98%and a cross-domain accuracy of 92.54%.When combined with data augmentation,the proposed model yields cutting-edge results.
基金The researchers would like to thank the Deanship of Scientific Research,Qassim University for funding the publication of this project.
文摘In the machine learning(ML)paradigm,data augmentation serves as a regularization approach for creating ML models.The increase in the diversification of training samples increases the generalization capabilities,which enhances the prediction performance of classifiers when tested on unseen examples.Deep learning(DL)models have a lot of parameters,and they frequently overfit.Effectively,to avoid overfitting,data plays a major role to augment the latest improvements in DL.Nevertheless,reliable data collection is a major limiting factor.Frequently,this problem is undertaken by combining augmentation of data,transfer learning,dropout,and methods of normalization in batches.In this paper,we introduce the application of data augmentation in the field of image classification using Random Multi-model Deep Learning(RMDL)which uses the association approaches of multi-DL to yield random models for classification.We present a methodology for using Generative Adversarial Networks(GANs)to generate images for data augmenting.Through experiments,we discover that samples generated by GANs when fed into RMDL improve both accuracy and model efficiency.Experimenting across both MNIST and CIAFAR-10 datasets show that,error rate with proposed approach has been decreased with different random models.
基金The authors received the research fun T2022-CN-006 for this study.
文摘It can be said that the automatic classification of musical genres plays a very important role in the current digital technology world in which the creation,distribution,and enjoyment of musical works have undergone huge changes.As the number ofmusic products increases daily and themusic genres are extremely rich,storing,classifying,and searching these works manually becomes difficult,if not impossible.Automatic classification ofmusical genres will contribute to making this possible.The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using Small Free Music Archive(FMA)data set.For Small FMA,it is more efficient to augment the data by generating an echo rather than pitch shifting.The research results show that the DenseNet121 model and data augmentation methods,such as noise addition and echo generation,have a classification accuracy of 98.97%for the Small FMA data set,while this data set lowered the sampling frequency to 16000 Hz.The classification accuracy of this study outperforms that of the majority of the previous results on the same Small FMA data set.
文摘With the development of artificial intelligence-related technologies such as deep learning,various organizations,including the government,are making various efforts to generate and manage big data for use in artificial intelligence.However,it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage.There are many problems in introducing technology in fields that do not have enough training data necessary to apply deep learning technology.Therefore,this study proposes a mixed contour data augmentation technique,which is a data augmentation technique using contour images,to solve a problem caused by a lack of data.ResNet,a famous convolutional neural network(CNN)architecture,and CIFAR-10,a benchmark data set,are used for experimental performance evaluation to prove the superiority of the proposed method.And to prove that high performance improvement can be achieved even with a small training dataset,the ratio of the training dataset was divided into 70%,50%,and 30%for comparative analysis.As a result of applying the mixed contour data augmentation technique,it was possible to achieve a classification accuracy improvement of up to 4.64%and high accuracy even with a small amount of data set.In addition,it is expected that the mixed contour data augmentation technique can be applied in various fields by proving the excellence of the proposed data augmentation technique using benchmark datasets.
基金the United States Air Force Office of Scientific Research(AFOSR)contract FA9550-22-1-0268 awarded to KHA,https://www.afrl.af.mil/AFOSR/.The contract is entitled:“Investigating Improving Safety of Autonomous Exploring Intelligent Agents with Human-in-the-Loop Reinforcement Learning,”and in part by Jackson State University.
文摘The object detection technique depends on various methods for duplicating the dataset without adding more images.Data augmentation is a popularmethod that assists deep neural networks in achieving better generalization performance and can be seen as a type of implicit regularization.Thismethod is recommended in the casewhere the amount of high-quality data is limited,and gaining new examples is costly and time-consuming.In this paper,we trained YOLOv7 with a dataset that is part of the Open Images dataset that has 8,600 images with four classes(Car,Bus,Motorcycle,and Person).We used five different data augmentations techniques for duplicates and improvement of our dataset.The performance of the object detection algorithm was compared when using the proposed augmented dataset with a combination of two and three types of data augmentation with the result of the original data.The evaluation result for the augmented data gives a promising result for every object,and every kind of data augmentation gives a different improvement.The mAP@.5 of all classes was 76%,and F1-score was 74%.The proposed method increased the mAP@.5 value by+13%and F1-score by+10%for all objects.
文摘Convolutional neural networks(CNNs)are widely used to tackling complex tasks,which are prone to overfitting if the datasets are noisy.Therefore,we propose folding fan cropping and splicing(FFCS)regularization strategy to enhance representation abilities of CNNs.In particular,we propose two different methods considering the effect of different segmentation numbers on classification results.One is the random folding fan method,and the other is the fixed folding fan method.Experimental results showed that FFCS reduced the classification errors both with the value of 0.88%on CIFAR-10 dataset and 1.86%on ImageNet dataset.Moreover,FFCS consistently outperformed Mixup and Random Erasing approaches on classification tasks.Therefore,FFCS effectively prevents overfitting and reduces the impact of background noises on classification tasks.
基金supported by the National Natural Science Foundation of China (Grants Nos.61931004,62072250)the Talent Launch Fund of Nanjing University of Information Science and Technology (2020r061).
文摘Encrypted traffic classification has become a hot issue in network security research.The class imbalance problem of traffic samples often causes the deterioration of Machine Learning based classifier performance.Although the Generative Adversarial Network(GAN)method can generate new samples by learning the feature distribution of the original samples,it is confronted with the problems of unstable training andmode collapse.To this end,a novel data augmenting approach called Graph CWGAN-GP is proposed in this paper.The traffic data is first converted into grayscale images as the input for the proposed model.Then,the minority class data is augmented with our proposed model,which is built by introducing conditional constraints and a new distance metric in typical GAN.Finally,the classical deep learning model is adopted as a classifier to classify datasets augmented by the Condition GAN(CGAN),Wasserstein GAN-Gradient Penalty(WGAN-GP)and Graph CWGAN-GP,respectively.Compared with the state-of-the-art GAN methods,the Graph CWGAN-GP cannot only control the modes of the data to be generated,but also overcome the problem of unstable training and generate more realistic and diverse samples.The experimental results show that the classification precision,recall and F1-Score of theminority class in the balanced dataset augmented in this paper have improved by more than 2.37%,3.39% and 4.57%,respectively.
基金the National Natural Science Foundation of China(51965008)Science and Technology projects of Guizhou[2018]2168Excellent Young Researcher Project of Guizhou[2017]5630.
文摘With the advent of deep learning,self-driving schemes based on deep learning are becoming more and more popular.Robust perception-action models should learn from data with different scenarios and real behaviors,while current end-to-end model learning is generally limited to training of massive data,innovation of deep network architecture,and learning in-situ model in a simulation environment.Therefore,we introduce a new image style transfer method into data augmentation,and improve the diversity of limited data by changing the texture,contrast ratio and color of the image,and then it is extended to the scenarios that the model has been unobserved before.Inspired by rapid style transfer and artistic style neural algorithms,we propose an arbitrary style generation network architecture,including style transfer network,style learning network,style loss network and multivariate Gaussian distribution function.The style embedding vector is randomly sampled from the multivariate Gaussian distribution and linearly interpolated with the embedded vector predicted by the input image on the style learning network,which provides a set of normalization constants for the style transfer network,and finally realizes the diversity of the image style.In order to verify the effectiveness of the method,image classification and simulation experiments were performed separately.Finally,we built a small-sized smart car experiment platform,and apply the data augmentation technology based on image style transfer drive to the experiment of automatic driving for the first time.The experimental results show that:(1)The proposed scheme can improve the prediction accuracy of the end-to-end model and reduce the model’s error accumulation;(2)the method based on image style transfer provides a new scheme for data augmentation technology,and also provides a solution for the high cost that many deep models rely heavily on a large number of label data.
Funding: This work was supported by the Science and Technology Innovation 2030 Key Project of "New Generation Artificial Intelligence" of China under Grant 2018AAA0102303, the Natural Science Foundation for Distinguished Young Scholars of Jiangsu Province (No. BK20190030), and the National Natural Science Foundation of China (No. 61631020, No. 61871398, No. 61931011, and No. U20B2038).
Abstract: This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio device may frequently switch its target frequency as the electromagnetic environment changes. A previously trained prediction model often cannot maintain good performance when only a small amount of historical data is available for the new target frequency. Moreover, cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time available to recollect data for the new frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GAN) and deep transfer learning. First, through a similarity measurement, we pre-train a GAN model using the historical data of the frequency band most similar to the target band. Then, by feeding the small amount of target data into the pre-trained GAN for data augmentation, a temporal-spectral residual network is further trained with deep transfer learning on the generated high-similarity data. Finally, experimental results demonstrate the effectiveness of the proposed framework.
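The band-selection step can be illustrated with a simple similarity ranking. The sketch below is only an assumption about how such a measurement might look (Pearson correlation over aligned histories); the paper does not specify this exact metric or these names.

```python
import numpy as np

def most_similar_band(target_history, band_histories):
    """Rank candidate bands by Pearson correlation with the short target
    history and return the key of the best match for GAN pre-training."""
    target = np.asarray(target_history, dtype=float)
    n = len(target)

    def corr(history):
        aligned = np.asarray(history, dtype=float)[:n]   # assumes history >= n samples
        return np.corrcoef(target, aligned)[0, 1]

    return max(band_histories, key=lambda band: corr(band_histories[band]))
```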
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) (92167106, 61833014) and the Key Research and Development Program of Zhejiang Province (2022C01206).
Abstract: The curse of dimensionality refers to the problems of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. The process of data augmentation modeling is then analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing reliability weight and actual-virtual weight functions, and developing a double weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
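To make the double-weighting idea concrete, the sketch below combines a reliability weight with an actual-versus-virtual weight into a single sample weight and fits a weighted least-squares model by row scaling. This is a simplified stand-in for illustration only, not the paper's double weighted partial least squares; the weight functions and the discount value are assumptions.

```python
import numpy as np

def fit_double_weighted_linear(X, y, reliability_weight, is_virtual, virtual_discount=0.5):
    """Combine reliability and actual-virtual weights, then solve a
    weighted least-squares problem by scaling rows with sqrt(weight)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    actual_virtual_weight = np.where(is_virtual, virtual_discount, 1.0)  # down-weight virtual samples
    w = np.asarray(reliability_weight, dtype=float) * actual_virtual_weight
    scale = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(X * scale, y[:, None] * scale, rcond=None)
    return coef.ravel()
```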
Funding: This research was funded by the National Natural Science Foundation of China (Project No. 42172161), the Heilongjiang Provincial Natural Science Foundation of China (Project No. LH2020F003), the Heilongjiang Provincial Department of Education Project of China (Project No. UNPYSCT-2020144), and the Northeast Petroleum University Guided Innovation Fund (2021YDL-12).
Abstract: Intelligent identification of sandstone thin-section images using deep learning is the development trend in mineral identification, and accurate mineral particle segmentation is the most critical step. A typical identification model requires many training samples to learn as many distinguishable features as possible. However, the difficulty of data acquisition, the high cost of labeling, and privacy protection limit the number of samples, which cannot meet the training requirements of deep learning image identification models. To increase the number of samples and improve model training, this paper proposes a tight sandstone image data augmentation method that combines the advantages of data deformation and data oversampling, taking the Putaohua reservoir in the Sanzhao Sag of the Songliao Basin as the target area. First, the Style Generative Adversarial Network (StyleGAN) is improved to generate high-resolution tight sandstone images and improve data diversity. Second, the Automatic Data Augmentation (AutoAugment) algorithm is improved to search for the optimal augmentation strategy and expand the data scale. Finally, comparison experiments demonstrate that this method has obvious advantages in the quality of generated images and in improving the identification performance of deep learning models in real application scenarios.
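The combination of a searched augmentation policy with GAN-generated samples can be sketched as below. The transform values are placeholders standing in for whatever the improved AutoAugment search would return, and the function names are assumptions; this is not the paper's pipeline.

```python
from torchvision import transforms

# Placeholder policy: these magnitudes are illustrative, not the searched optimum.
augment_policy = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
])

def build_training_set(real_images, stylegan_images, copies_per_real=2):
    """Augmented copies of real thin-section images plus StyleGAN-generated ones."""
    augmented = [augment_policy(img) for img in real_images
                 for _ in range(copies_per_real)]
    return augmented + list(stylegan_images)
```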
Abstract: Sarcasm detection in text data is an increasingly vital area of research due to the prevalence of sarcastic content in online communication. This study addresses the challenges of small datasets and class imbalance in sarcasm detection by employing comprehensive data pre-processing and Generative Adversarial Network (GAN) based augmentation on diverse datasets, including iSarcasm, SemEval-18, and Ghosh. The research offers a novel pipeline for augmenting sarcasm data with a Reverse Generative Adversarial Network (RGAN). The proposed RGAN method works by inverting labels between original and synthetic data during training; this label inversion provides feedback to the generator for producing high-quality data that closely resembles the original distribution. Notably, the proposed RGAN model performs on par with a standard GAN, showcasing its efficacy in augmenting text data. The exploration of various datasets highlights the nuanced impact of augmentation on model performance, with cautionary insights into maintaining a delicate balance between synthetic and original data. The methodological framework encompasses comprehensive data pre-processing and GAN-based augmentation, with a careful comparison against Natural Language Processing Augmentation (NLPAug) as an alternative augmentation technique. Overall, the F1-score of the proposed technique outperforms that of the synonym-replacement augmentation in NLPAug. The increase in F1-score in experiments using RGAN ranged from 0.066% to 1.054%, and the use of a standard GAN resulted in a 2.88% increase in F1-score. The proposed RGAN model outperformed the NLPAug method and demonstrated performance comparable to a standard GAN, emphasizing its efficacy in text data augmentation.
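The label-inversion idea can be expressed compactly. The sketch below is a minimal PyTorch illustration under an assumed `discriminator` interface; how RGAN schedules or mixes the inverted targets in full training is not specified here.

```python
import torch
import torch.nn.functional as F

def discriminator_loss_with_inverted_labels(discriminator, real_batch, synthetic_batch):
    """Label inversion: real samples receive the 'fake' target (0) and
    synthetic samples the 'real' target (1) when the loss is computed."""
    real_scores = discriminator(real_batch)
    fake_scores = discriminator(synthetic_batch)
    loss_real = F.binary_cross_entropy_with_logits(real_scores, torch.zeros_like(real_scores))
    loss_fake = F.binary_cross_entropy_with_logits(fake_scores, torch.ones_like(fake_scores))
    return loss_real + loss_fake
```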
Abstract: Breast cancer, which is most often detected in women from middle age onward, remains a leading cause of death. Over the past decades, the overall survival rate in breast cancer has improved due to advancements in early-stage diagnosis and tailored therapy. Today, hospitals promote awareness and deploy early detection technologies for breast cancer, which increases the survival rate of women. Because traditional breast cancer diagnosis takes a long time, early detection calls for an automated system. This research provides a new methodology for classifying breast cancer from ultrasound images using deep learning and a fusion of the best features. Initially, after successful training of Convolutional Neural Network (CNN) algorithms, data augmentation is used to enhance the representation of the feature dataset. BreastNet18 with a fine-tuned VGG-16 model is then used for pre-training on the augmented dataset. For feature optimization, the Entropy-controlled Whale Optimization Algorithm (EWOA) is used, and the features optimized by the EWOA are fused. Trained classifiers then identify the breast cancer images: the best-chosen features are fused by a novel probability-based serial technique and categorized with machine learning classifiers. The main objective of the research is to increase tumor prediction accuracy and thereby save lives. Testing was performed on the Breast Ultrasound Images (BUSI) dataset, and the proposed method improves accuracy compared with existing methods.
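As a rough illustration of the augmentation-plus-fine-tuning stage, the sketch below loads an ImageNet-pretrained VGG-16 and replaces its final layer. It is a generic sketch, not the BreastNet18 pipeline; the transform choices and class count are assumptions.

```python
import torch.nn as nn
from torchvision import models, transforms

# Simple augmentation applied to ultrasound images before training (illustrative values).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

def build_vgg16_classifier(num_classes=3):
    """Load ImageNet-pretrained VGG-16 (torchvision >= 0.13) and replace the
    final fully connected layer for the assumed number of classes."""
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model
```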