The joint European Space Agency and Chinese Academy of Sciences Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission will explore the global dynamics of the magnetosphere under varying solar wind and interplanetary magnetic field conditions, and simultaneously monitor the auroral response of the Northern Hemisphere ionosphere. Combining these large-scale responses with medium- and fine-scale measurements at a variety of cadences by additional ground-based and space-based instruments will enable a much greater scientific impact beyond the original goals of the SMILE mission. Here, we describe current community efforts to prepare for SMILE, and the benefits and context that various experiments which have explicitly expressed support for SMILE can offer. A dedicated group of international scientists representing many different experiment types and geographical locations, the Ground-based and Additional Science Working Group, is facilitating these efforts. Preparations include constructing an online SMILE Data Fusion Facility, discussing particular or special modes for experiments such as coherent and incoherent scatter radar, and considering particular observing strategies and spacecraft conjunctions. We anticipate growing interest and community engagement with the SMILE mission, and we welcome novel ideas and insights from the solar-terrestrial community.
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely the GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability in estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
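The GRU at the core of the GAN-aided GRU model can be illustrated with a minimal single-cell forward pass unrolled along a toy strain path. This NumPy sketch is illustrative only: the paper's actual architecture, hyperparameters, and GAN components are not reproduced, and the layer sizes and weight initialization here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: x is the input feature, h_prev the previous hidden state."""
    z = sigmoid(x @ Wz + h_prev @ Uz)               # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh)   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # new hidden state

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8                                  # 1 input feature (strain), 8 hidden units
params = [rng.normal(scale=0.1, size=s) for s in
          [(n_in, n_hid), (n_hid, n_hid)] * 3]      # Wz, Uz, Wr, Ur, Wh, Uh

# Unroll over a toy strain path, as a flow-curve predictor would
h = np.zeros((1, n_hid))
for strain in np.linspace(0.0, 0.2, 50).reshape(-1, 1, 1):
    h = gru_cell(strain, h, *params)
print(h.shape)  # (1, 8)
```

In practice a learned output layer would map each hidden state to a flow stress, and the gates are what give the GRU its ability to carry history along the curve, which is relevant to the extrapolation behavior noted above.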
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The chosen augmentation method is the addition of Gaussian noise, with the noise level set to 0.002 to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series into images. This enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, with the detected anomaly areas represented as heat maps. By overlaying the anomaly map on the original image, it is possible to localize the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, processing the data as images rather than as raw time series significantly reduced both the data size and the training time. The proposed method can provide an important springboard for research in the field of anomaly detection using time-series data, and it helps solve problems such as the lightweight analysis of complex patterns in data.
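The two preprocessing steps described above, adding low-level Gaussian noise and converting the series into a Markov Transition Field image, can be sketched in plain NumPy. This is a simplified illustration (equal-width amplitude bins, no blurring or downsampling); the bin count and variable names are assumptions, not the paper's code.

```python
import numpy as np

def augment_with_noise(x, level=0.002, seed=0):
    """Add zero-mean Gaussian noise at the stated level (sigma = 0.002)."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, level, size=x.shape)

def markov_transition_field(x, n_bins=8):
    """MTF image M[i, j] = W[q(x_i), q(x_j)], where W is the Markov
    transition matrix over amplitude bins q."""
    # Quantize amplitudes into equal-width bins
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    q = np.digitize(x, edges)                         # bin index of each point
    # First-order transition counts between consecutive time steps
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalize (avoid /0)
    # Spread transition probabilities over all pairs of time steps
    return W[np.ix_(q, q)]

t = np.linspace(0, 4 * np.pi, 128)
series = augment_with_noise(np.sin(t))
mtf = markov_transition_field(series)
print(mtf.shape)  # (128, 128)
```

The resulting n×n image is what an image-based detector such as PatchCore would then consume.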
Cloud base height (CBH) is a crucial parameter for cloud radiative effect estimates, climate change simulations, and aviation guidance. However, due to the limited information on cloud vertical structures included in passive satellite radiometer observations, few operational satellite CBH products are currently available. This study presents a new method for retrieving CBH from satellite radiometers. The method first uses the combined measurements of satellite radiometers and ground-based cloud radars to develop a lookup table (LUT) of effective cloud water content (ECWC), representing the vertically varying cloud water content. This LUT allows for the conversion of cloud water path to cloud geometric thickness (CGT), enabling the estimation of CBH as the difference between cloud top height and CGT. Detailed comparative analyses of CBH estimates from the state-of-the-art ECWC LUT are conducted against four ground-based millimeter-wave cloud radar (MMCR) measurements, and results show that the mean bias (correlation coefficient) is 0.18±1.79 km (0.73), which is lower (higher) than the 0.23±2.11 km (0.67) derived from the combined measurements of satellite radiometers and satellite radar-lidar (i.e., CloudSat and CALIPSO). Furthermore, the percentage of CBH biases within 250 m increases by 5% to 10%, varying by location. This indicates that the CBH estimates from our algorithm are more consistent with ground-based MMCR measurements. Therefore, this algorithm shows great potential for further improving CBH retrievals as ground-based MMCRs are increasingly included in global surface meteorological observing networks, and the improved CBH retrievals will contribute to better cloud radiative effect estimates.
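The retrieval chain described above amounts to CBH = CTH − CGT, with CGT = CWP / ECWC and the ECWC taken from the LUT. A toy sketch follows; the LUT values here are entirely hypothetical (the real LUT is built from combined radiometer-radar statistics), and units are noted in the comments.

```python
import numpy as np

# Hypothetical ECWC lookup: ECWC (g m^-3) indexed by cloud water path (g m^-2)
lut_cwp = np.array([50.0, 200.0, 500.0, 1000.0])   # cloud water path, g m^-2
lut_ecwc = np.array([0.05, 0.10, 0.20, 0.30])      # effective cloud water content, g m^-3

def cloud_base_height(cth_km, cwp):
    """CBH = CTH - CGT, with CGT = CWP / ECWC(CWP) from the LUT."""
    ecwc = np.interp(cwp, lut_cwp, lut_ecwc)       # g m^-3
    cgt_km = (cwp / ecwc) / 1000.0                  # geometric thickness, km
    return max(cth_km - cgt_km, 0.0)                # clamp at the surface

print(cloud_base_height(cth_km=8.0, cwp=200.0))    # 8.0 - (200/0.10)/1000 = 6.0 km
```

The real algorithm stratifies the LUT by cloud regime and location; this sketch only shows the arithmetic of the final conversion step.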
Depth estimation is an important task in computer vision. Collecting data at scale for monocular depth estimation is challenging, as this task requires simultaneously capturing RGB images and depth information. Therefore, data augmentation is crucial for this task. Existing data augmentation methods often employ pixel-wise transformations, which may inadvertently disrupt edge features. In this paper, we propose a data augmentation method for monocular depth estimation, which we refer to as the Perpendicular-Cutdepth method. This method involves cutting real-world depth maps along perpendicular directions and pasting them onto input images, thereby diversifying the data without compromising edge features. To validate the effectiveness of the algorithm, we compared it with the current mainstream data augmentation algorithms on an existing convolutional neural network (CNN). Additionally, to verify the algorithm's applicability to Transformer networks, we designed an encoder-decoder network based on a Transformer to assess the generalization of our proposed algorithm. Experimental results demonstrate that, in the field of monocular depth estimation, our proposed Perpendicular-Cutdepth method outperforms traditional data augmentation methods. On the indoor dataset NYU, our method increases accuracy from 0.900 to 0.907 and reduces the error rate from 0.357 to 0.351. On the outdoor dataset KITTI, our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.
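The cut-and-paste idea can be sketched as pasting a strip taken along a perpendicular (here, vertical) direction of the depth map onto the corresponding region of the input image, so depth edges inside the strip survive intact. This NumPy sketch is our reading of the method, not the authors' code; the strip position, width, and normalization are arbitrary choices.

```python
import numpy as np

def perpendicular_cutdepth(image, depth, x0=16, width=8):
    """Paste a full-height vertical strip of the depth map into the RGB image.

    image: (H, W, 3) float RGB in [0, 1]; depth: (H, W) float depth map.
    Returns the augmented image; edges inside the strip are preserved.
    """
    aug = image.copy()
    strip = depth[:, x0:x0 + width]                        # vertical strip
    strip = (strip - strip.min()) / (np.ptp(strip) + 1e-8) # normalize to [0, 1]
    aug[:, x0:x0 + width, :] = strip[..., None]            # broadcast to 3 channels
    return aug

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
dep = rng.random((64, 64))
out = perpendicular_cutdepth(img, dep)
print(out.shape)  # (64, 64, 3)
```

In training, the strip location would be randomized per sample, and the depth target would stay aligned with the pasted region.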
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervised technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Damage to parcels reduces customer satisfaction with delivery services and increases return-logistics costs. This can be prevented by detecting and addressing the damage before the parcels reach the customer. Consequently, various studies have been conducted on deep learning techniques related to the detection of parcel damage. This study proposes a deep learning-based damage detection method for various types of parcels. The method is intended to be part of a parcel information-recognition system that identifies the volume and shipping information of parcels and determines whether they are damaged; it is intended for use in the actual parcel-transportation process. For this purpose, 1) the study acquired image data in an environment simulating the actual parcel-transportation process, and 2) the training dataset was expanded based on StyleGAN3 with adaptive discriminator augmentation. Additionally, 3) a preliminary distinction was made between the appearance of parcels and their damage status to enhance the performance of the parcel damage detection model and to analyze the causes of parcel damage. Finally, using the dataset constructed with the proposed method, a damage type detection model was trained, and its mean average precision was confirmed. This model can improve customer satisfaction and reduce return costs for parcel delivery companies.
BACKGROUND With the development of society and the improvement of material living standards, there is an increasingly strong demand for appearance and physical beauty in social life, marriage, and other aspects. An increasing number of people have improved their appearance and physical shape through aesthetic plastic surgery. The female breast plays a significant role in physical beauty, and droopy or atrophied breasts can frequently lead to psychological inferiority and a lack of confidence in women. This, in turn, can affect their mental health and quality of life. AIM To analyze preoperative and postoperative self-image pressure-level changes in autologous fat breast augmentation patients and their impact on social adaptability. METHODS Using a random sampling method, we selected 160 patients who underwent autologous fat breast augmentation at the First Affiliated Hospital of Xinxiang Medical University from January 2020 to December 2022. The general information, self-image pressure level, and social adaptability of the patients were investigated using a basic information survey, a body image self-assessment scale, and a social adaptability scale. The self-image pressure-level changes and their effects on the social adaptability of patients before and after autologous fat breast augmentation were analyzed. RESULTS We collected 142 valid questionnaires. The single-factor analysis showed no statistically significant difference in the self-image pressure level and social adaptability score of patients of different ages, marital statuses, and monthly incomes. However, there were significant differences in social adaptability among patients with different education levels and employment statuses. The correlation analysis revealed a significant correlation between the self-image pressure level and the social adaptability score before and after surgery. The multiple-factor analysis showed that the degree of concern caused by appearance in self-image pressure, the degree of possible behavioral intervention, the related distress caused by body image, and the influence of body image on social life all influenced the social adaptability of autologous fat breast augmentation patients. CONCLUSION The self-image pressure on autologous fat breast augmentation patients is inversely proportional to their social adaptability.
BACKGROUND Transcranial direct current stimulation (tDCS) is proven to be safe in treating various neurological conditions in children and adolescents. It is also an effective method in the treatment of obsessive-compulsive disorder (OCD) in adults. AIM To assess the safety and efficacy of tDCS as an add-on therapy in drug-naive adolescents with OCD. METHODS We studied drug-naïve adolescents with OCD, using the Children's Yale-Brown Obsessive-Compulsive Scale (CY-BOCS) to assess their condition. Both the active and sham groups were given fluoxetine, and we applied the cathode and anode over the supplementary motor area and deltoid for 20 min in 10 sessions. Reassessment occurred at 2, 6, and 12 wk using the CY-BOCS. RESULTS Eighteen adolescents completed the study (10 in the active group, 8 in the sham group). CY-BOCS scores from baseline to 12 wk reduced significantly in both groups, but the change from baseline to 2 wk was significant in the active group only. The mean change at 2 wk was larger in the active group (11.8±7.77 vs 5.25±2.22, P=0.056). Adverse effects between the groups were comparable. CONCLUSION tDCS is safe and well tolerated for the treatment of OCD in adolescents. However, further studies with a larger sample population are needed to confirm the effectiveness of tDCS as early augmentation in OCD in this population.
Background: Although clozapine is an effective option for treatment-resistant schizophrenia (TRS), 1/3 to 1/2 of TRS patients still do not respond to clozapine. The main purpose of this randomized, double-blind, placebo-controlled trial was to explore the efficacy of amisulpride augmentation on the psychopathological symptoms and cognitive function of clozapine-resistant treatment-refractory schizophrenia (CTRS) patients. Methods: A total of 80 patients were recruited and randomly assigned to receive initial clozapine plus amisulpride (amisulpride group) or clozapine plus placebo (placebo group). The Positive and Negative Syndrome Scale (PANSS), Scale for the Assessment of Negative Symptoms (SANS), Clinical Global Impression (CGI) scale, Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), Treatment Emergent Symptom Scale (TESS), laboratory measurements, and electrocardiograms (ECG) were administered at baseline, week 6, and week 12. Results: Compared with the placebo group, the amisulpride group had a lower PANSS total score, positive subscore, and general psychopathology subscore at week 6 and week 12 (Bonferroni-corrected P<0.01). Furthermore, compared with the placebo group, the amisulpride group showed an improved RBANS language score at week 12 (Bonferroni-corrected P<0.001). The amisulpride group had a higher treatment response rate (P=0.04) and lower CGI severity and CGI efficacy scores at week 6 and week 12 than the placebo group (Bonferroni-corrected P<0.05). There were no differences between the groups in body mass index (BMI), corrected QT (QTc) intervals, or laboratory measurements. This study demonstrates that amisulpride augmentation therapy can safely improve the psychiatric symptoms and cognitive performance of CTRS patients.
Measurements of carbon dioxide (CO_(2)), methane (CH_(4)), and carbon monoxide (CO) are of great importance in the Qinghai-Tibetan region, as it is the highest and largest plateau in the world, affecting global weather and climate systems. In this study, for the first time, we present CO_(2), CH_(4), and CO column measurements carried out by a Bruker EM27/SUN Fourier-transform infrared spectrometer (FTIR) at Golmud (94.91°E, 36.42°N, 2808 m) in August 2021. The means and standard deviations of the column-averaged dry-air mixing ratios of CO_(2), CH_(4), and CO (XCO_(2), XCH_(4), and XCO) are 409.3±0.4 ppm, 1905.5±19.4 ppb, and 103.1±7.7 ppb, respectively. The differences between the FTIR and co-located TROPOMI/S5P satellite measurements at Golmud are 0.68±0.64% (13.1±12.2 ppb) for XCH_(4) and 9.81±3.48% (–10.7±3.8 ppb) for XCO, which are within their retrieval uncertainties. High correlations for both XCH_(4) and XCO are observed between the FTIR and S5P satellite measurements. Using the FLEXPART model and satellite measurements, we find that the enhanced CH_(4) and CO columns at Golmud are affected by anthropogenic emissions transported from North India. This study provides insight into the variations of the CO_(2), CH_(4), and CO columns over the Qinghai-Tibetan Plateau.
The automatic classification of musical genres plays a very important role in today's digital technology world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) data set. For Small FMA, it is more efficient to augment the data by generating an echo rather than by pitch shifting. The research results show that the DenseNet121 model combined with data augmentation methods such as noise addition and echo generation achieves a classification accuracy of 98.97% on the Small FMA data set when its sampling frequency is lowered to 16000 Hz. The classification accuracy of this study outperforms the majority of previous results on the same Small FMA data set.
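Echo generation, reported above as more effective than pitch shifting for Small FMA, amounts to adding a delayed, attenuated copy of the waveform: y[n] = x[n] + a·x[n−d]. A minimal sketch at the 16 kHz sampling rate mentioned above; the delay and gain values are illustrative, not from the paper.

```python
import numpy as np

def add_echo(x, sr=16000, delay_s=0.25, gain=0.4):
    """Return x plus an attenuated copy delayed by delay_s seconds."""
    d = int(sr * delay_s)                    # delay in samples
    y = x.copy()
    y[d:] += gain * x[:-d]                   # y[n] = x[n] + gain * x[n - d]
    return y

sr = 16000
t = np.arange(sr) / sr                       # 1 s of audio at 16 kHz
tone = np.sin(2 * np.pi * 440 * t)           # 440 Hz test tone
echoed = add_echo(tone, sr)
print(echoed.shape)  # (16000,)
```

Unlike pitch shifting, this transformation leaves the spectral content of the original signal in place and only adds a time-shifted copy, which may be why it perturbs genre-relevant features less.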
A brain tumor is a lethal neurological disease that affects the normal performance of the brain and can be fatal. In India, around 15 million cases are diagnosed yearly. To mitigate the seriousness of a tumor, it is essential to diagnose it at an early stage. However, the manual evaluation process using Magnetic Resonance Imaging (MRI) raises several concerns, notably inefficient and inaccurate brain tumor diagnoses. Likewise, the examination of brain tumors is intricate, as they vary widely in shape, size, appearance, and location. Therefore, a precise and expeditious prognosis of brain tumors is essential for planning effective treatment. Several computer models have been adapted to diagnose tumors, but their accuracy needs to be tested. Considering all of the above, this work aims to identify the best classification system by comparing the prediction accuracy of AlexNet, ResNet 50, and Inception V3. Data augmentation is performed on the database, and the data are fed into the three convolutional neural network (CNN) models. A comparison is drawn between the three models based on accuracy and performance. An accuracy of 96.2% is obtained for AlexNet with augmentation, which performs better than ResNet 50 and Inception V3 at the 120th epoch. With the suggested higher-accuracy model, brain tumor diagnosis on the available datasets is highly reliable.
Offline signature verification (OfSV) is essential in preventing the falsification of documents. Deep learning (DL) based OfSV systems require a high number of signature images to attain acceptable performance. However, in a real-world scenario, only a limited number of signature samples are available to train these models. Several researchers have proposed models to augment new signature images by applying various transformations. Others have used human neuromotor and cognitive-inspired augmentation models to address the demand for more signature samples. Hence, augmenting a sufficient number of signatures with variations is still a challenging task. This study proposes OffSig-SinGAN: a deep learning-based image augmentation model to address the limited-signature problem in offline signature verification. The proposed model is capable of augmenting better-quality signatures with diversity from a single signature image only. It is empirically evaluated on a widely used public dataset, GPDSsyntheticSignature. The quality of augmented signature images is assessed using four metrics: pixel-by-pixel difference, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Fréchet inception distance (FID). Furthermore, various experiments were organised to evaluate the proposed image augmentation model's performance on selected DL-based OfSV systems and to prove whether it helped to improve the verification accuracy rate. Experimental results showed that the proposed augmentation model performed better on the GPDSsyntheticSignature dataset than other augmentation methods. The improved verification accuracy rate of the selected DL-based OfSV systems proved the effectiveness of the proposed augmentation model.
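Of the four image-quality metrics listed, PSNR is the simplest to state: PSNR = 10·log10(MAX² / MSE). A minimal NumPy implementation for 8-bit images (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((32, 32), 100, dtype=np.uint8)
b = np.full((32, 32), 116, dtype=np.uint8)   # constant offset of 16 -> MSE = 256
print(round(psnr(a, b), 2))                  # 10*log10(255^2/256) ≈ 24.05
```

SSIM and FID are substantially more involved (local structure statistics and Inception-feature distributions, respectively) and are usually taken from a library rather than reimplemented.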
Object detection depends on various methods for enlarging the dataset without adding more images. Data augmentation is a popular method that assists deep neural networks in achieving better generalization performance and can be seen as a type of implicit regularization. This method is recommended when the amount of high-quality data is limited and gaining new examples is costly and time-consuming. In this paper, we trained YOLOv7 with a dataset that is part of the Open Images dataset, containing 8,600 images with four classes (Car, Bus, Motorcycle, and Person). We used five different data augmentation techniques to duplicate and improve our dataset. The performance of the object detection algorithm was compared when using the proposed augmented dataset, with combinations of two and three types of data augmentation, against the results on the original data. The evaluation of the augmented data gives a promising result for every object, and each kind of data augmentation yields a different improvement. The mAP@.5 of all classes was 76%, and the F1-score was 74%. The proposed method increased the mAP@.5 value by +13% and the F1-score by +10% for all objects.
Near-Earth asteroid collisions could cause catastrophic disasters for humanity and the Earth, so it is crucial to monitor asteroids. Ground-based synthetic aperture radar (SAR) is an observation technique for high-resolution imaging of asteroids. Ground-based SAR requires a long integration time to achieve a large synthetic aperture, and the echo signal is seriously affected by the temporally and spatially varying troposphere. Traditional tropospheric models that assume the troposphere is frozen in space and time are ineffective. To cope with this, this paper models and analyzes the impacts of the temporal-spatial variant troposphere on ground-based SAR imaging of asteroids. For the background troposphere, a temporal-spatial variant ray tracing method is proposed to trace the 4D (3D spatial + temporal) refractive index network provided by the numerical weather model and calculate the error of the background troposphere. For tropospheric turbulence, the Andrew power spectral model is used in conjunction with multi-phase-screen theory, and time-varying errors are obtained by tracking the changing position of the pierce point on the phase screen. Through simulation, the impact of temporal-spatial variant tropospheric errors on image quality is analyzed; the results show that the X-band echo signal is seriously affected by the troposphere and must be compensated.
Diabetic retinopathy is a disease that arises from the abnormal growth of blood vessels, causing spots in the vision and vision loss. Various techniques with different methods and parameters are applied to identify the disease at an early stage. Machine learning (ML) techniques are used for analyzing the images and finding the location of the disease. A restriction of ML is the dataset size used for model evaluation. This problem has been overcome by using an augmentation method that generates larger datasets with multidimensional features. Existing models use only one augmentation technique, which produces limited dataset features and also lacks association between those data during DR detection, so multilevel augmentation is proposed for analysis. The proposed method performs in two phases, namely an integrated augmentation model and dataset correlation (i.e., relationships). It eliminates the overfitting problem by considering only the relevant dataset. This method is used to solve the diabetic retinopathy problem with thin-vessel identification using the UNET model. UNET-based image segmentation achieves 98.3% accuracy with a high detection rate when compared to RV-GAN and different UNET models.
Recent state-of-the-art semi-supervised learning (SSL) methods usually use data augmentations as core components. Such methods, however, are limited to simple transformations, such as augmentations under the instance's naive representations or under the instance's semantic representations. To tackle this problem, we offer a unique insight into data augmentations and propose a novel data-augmentation-based semi-supervised learning method, called Attentive Neighborhood Feature Augmentation (ANFA). The motivation of our method lies in the observation that the relationship between a given feature and its neighborhood may contribute to constructing more reliable transformations for the data, further helping the classifier to distinguish ambiguous features from low-density regions. Specifically, we first project the labeled and unlabeled data points into an embedding space and then construct a neighbor graph that serves as a similarity measure based on the similar representations in the embedding space. Then, we employ an attention mechanism to transform the target features into augmented ones based on the neighbor graph. Finally, we formulate a novel semi-supervised loss by encouraging the predictions of the interpolations of augmented features to be consistent with the corresponding interpolations of the predictions of the target features. We carried out experiments on the SVHN and CIFAR-10 benchmark datasets, and the experimental results demonstrate that our method outperforms the state-of-the-art methods when the number of labeled examples is limited.
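The core transformation, attention over an embedding-space neighbor graph producing augmented features, can be sketched as a softmax-weighted mix of each embedding with its nearest neighbors. This is a schematic reading of ANFA, not the authors' implementation; the neighbor count, temperature, and similarity choice are arbitrary assumptions.

```python
import numpy as np

def neighbor_feature_augment(Z, k=3, temperature=0.1):
    """Augment each embedding as an attention-weighted mix of its k neighbors."""
    # Cosine similarity matrix over the embedding space
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T
    np.fill_diagonal(sim, -np.inf)                 # exclude self from the graph
    # Keep only the k most similar neighbors per point
    idx = np.argsort(sim, axis=1)[:, -k:]
    out = np.empty_like(Z)
    for i in range(Z.shape[0]):
        logits = sim[i, idx[i]] / temperature
        attn = np.exp(logits - logits.max())
        attn /= attn.sum()                         # softmax attention weights
        out[i] = attn @ Z[idx[i]]                  # weighted neighbor mix
    return out

rng = np.random.default_rng(0)
Z = rng.normal(size=(10, 4))                       # toy embeddings
aug = neighbor_feature_augment(Z)
print(aug.shape)  # (10, 4)
```

In the full method, consistency between interpolations of these augmented features and interpolations of the model's predictions supplies the unsupervised loss term.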
Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output of the ATE model is used as the input to the APC model, which may result in error propagation across the steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios, where the ATE task is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on the two subtasks simultaneously. This paper integrates the multi-task model, namely Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC), and the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding (AraBERT)) to increase the diversity of the training dataset. This paper examined the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained with AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word embedding-based data augmentation method (FastText) on the combined dataset.
Convolutional neural networks (CNNs) are well suited to bearing fault classification due to their ability to learn discriminative spectro-temporal patterns. However, gathering sufficient cases of faulty conditions in real-world engineering scenarios to train an intelligent diagnosis system is challenging. This paper proposes a fault diagnosis method combining several augmentation schemes to alleviate the problem of limited fault data. We begin by identifying relevant parameters that influence the construction of a spectrogram. We leverage the uncertainty principle of time-frequency signal processing, under which good time and frequency resolutions cannot be achieved simultaneously. A key determinant of this trade-off is the choice and length of the window function used in implementing the short-time Fourier transform. The Gaussian, Kaiser, and rectangular windows are selected in the experimentation due to their diverse characteristics. The size of the overlap parameter also influences the outcome and resolution of the spectrogram. A 50% overlap is used in the original data transformation, and ±25% is used in implementing an effective augmentation policy to which a two-stage regular CNN can be applied to achieve improved performance. The best model reaches an accuracy of 99.98% and a cross-domain accuracy of 92.54%. When combined with data augmentation, the proposed model yields cutting-edge results.
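The abstract above describes augmenting spectrograms by varying the STFT window type (Gaussian, Kaiser, rectangular) and the overlap around a 50% baseline. A minimal sketch of that idea, assuming `scipy` is available; the parameter values (window std, Kaiser beta, segment length) are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.signal import stft

def spectrogram_variants(x, fs, nperseg=256):
    """Generate augmented spectrograms from one vibration signal by
    varying the STFT window type and the overlap (50% baseline +/-25%)."""
    windows = [("gaussian", nperseg / 8), ("kaiser", 14.0), "boxcar"]
    overlaps = [int(nperseg * r) for r in (0.25, 0.50, 0.75)]
    specs = []
    for win in windows:
        for noverlap in overlaps:
            f, t, Z = stft(x, fs=fs, window=win,
                           nperseg=nperseg, noverlap=noverlap)
            specs.append(np.abs(Z))  # magnitude spectrogram
    return specs

# one synthetic vibration-like signal: a 50 Hz tone plus noise
rng = np.random.default_rng(0)
fs = 12_000
x = np.sin(2 * np.pi * 50 * np.arange(fs) / fs) + 0.1 * rng.standard_normal(fs)
specs = spectrogram_variants(x, fs)
print(len(specs))  # 3 windows x 3 overlaps = 9 augmented views
```

Each of the nine views trades time against frequency resolution differently, which is exactly what makes the set useful as an augmentation policy.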
Funding: supported by Royal Society grant DHFR1211068; funded by UKSA; STFC; STFC grant ST/M001083/1; funded by STFC grant ST/W00089X/1; supported by NERC grant NE/W003309/1 (E3d); funded by NERC grant NE/V000748/1; support from NERC grants NE/V015133/1 and NE/R016038/1 (BAS magnetometers), and grants NE/R01700X/1 and NE/R015848/1 (EISCAT); supported by NERC grant NE/T000937/1; NSFC grants 42174208 and 41821003; supported by the Research Council of Norway grant 223252; PRODEX arrangement 4000123238 from the European Space Agency; support of the AUTUMN East-West magnetometer network by the Canadian Space Agency; supported by NASA's Heliophysics U.S. Participating Investigator Program; support from grant NSF AGS 2027210; supported by grant Dnr: 2020-00106 from the Swedish National Space Agency; supported by the German Research Foundation (DFG) under number KR 4375/2-1 within SPP "Dynamic Earth".
Abstract: The joint European Space Agency and Chinese Academy of Sciences Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission will explore global dynamics of the magnetosphere under varying solar wind and interplanetary magnetic field conditions, and simultaneously monitor the auroral response of the Northern Hemisphere ionosphere. Combining these large-scale responses with medium- and fine-scale measurements at a variety of cadences by additional ground-based and space-based instruments will enable a much greater scientific impact beyond the original goals of the SMILE mission. Here, we describe current community efforts to prepare for SMILE, and the benefits and context various experiments that have explicitly expressed support for SMILE can offer. A dedicated group of international scientists representing many different experiment types and geographical locations, the Ground-based and Additional Science Working Group, is facilitating these efforts. Preparations include constructing an online SMILE Data Fusion Facility, the discussion of particular or special modes for experiments such as coherent and incoherent scatter radar, and the consideration of particular observing strategies and spacecraft conjunctions. We anticipate growing interest and community engagement with the SMILE mission, and we welcome novel ideas and insights from the solar-terrestrial community.
Funding: Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (Grant No. 20214000000140, Graduate School of Convergence for Clean Energy Integrated Power Generation); Korea Basic Science Institute (National Research Facilities and Equipment Center) grant funded by the Ministry of Education (2021R1A6C101A449); the National Research Foundation of Korea grant funded by the Ministry of Science and ICT (2021R1A2C1095139), Republic of Korea.
Abstract: Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
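The GRU architecture named above processes a flow curve point by point through gated recurrent updates. A minimal single-cell sketch in NumPy of the standard GRU equations (update gate, reset gate, candidate state); the weights here are random placeholders, not the paper's trained model:

```python
import numpy as np

def gru_cell(x, h, W, U, b):
    """One GRU step. W: (3, hidden, input), U: (3, hidden, hidden),
    b: (3, hidden); rows are update gate, reset gate, candidate."""
    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])              # update gate
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])              # reset gate
    h_tilde = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state
    return (1 - z) * h + z * h_tilde                     # gated blend

# run a normalized flow curve (stress over strain steps) through the cell
rng = np.random.default_rng(42)
n_in, n_hid = 1, 8
W = rng.standard_normal((3, n_hid, n_in)) * 0.1
U = rng.standard_normal((3, n_hid, n_hid)) * 0.1
b = np.zeros((3, n_hid))
h = np.zeros(n_hid)
for stress in np.linspace(0.0, 1.0, 20):
    h = gru_cell(np.array([stress]), h, W, U, b)
print(h.shape)  # (8,)
```

The gating is what gives the GRU its relative strength at extrapolation that the abstract credits: the hidden state is a convex combination of its previous value and a bounded candidate, so it evolves smoothly along the curve.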
Funding: This research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the "Project for Research and Development with Middle Markets Enterprises and DNA (Data, Network, AI) Universities" (AI-based Safety Assessment and Management System for Concrete Structures) (Reference Number P0024559), supervised by the Korea Institute for Advancement of Technology (KIAT).
Abstract: Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The data augmentation method chosen is the addition of noise: Gaussian noise, with the noise level set to 0.002, is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time-series data into images. It enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, and the detected anomaly areas are represented as heat maps. By applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both the F1-score and accuracy are high when time-series data are converted to images. Additionally, when processed as images rather than as time series, there was a significant reduction in both the size of the data and the training time. The proposed method can provide an important springboard for research in the field of anomaly detection using time-series data, and it helps keep the analysis of complex patterns in the data lightweight.
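The two preprocessing steps named above (Gaussian noise at level 0.002, then a Markov Transition Field image) can be sketched as follows. This is an illustrative NumPy implementation of the standard MTF construction (quantile binning, first-order transition matrix, spread over all time pairs), not the paper's exact code:

```python
import numpy as np

def add_gaussian_noise(x, level=0.002, rng=None):
    """Additive Gaussian noise augmentation at the paper's stated level."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, level, size=x.shape)

def markov_transition_field(x, n_bins=8):
    """Markov Transition Field: quantile-bin the series, estimate the
    first-order transition matrix, then spread it over all time pairs."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                 # bin index per time step
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):           # count observed transitions
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalize
    return W[q[:, None], q[None, :]]          # (T, T) image

t = np.linspace(0, 4 * np.pi, 128)
x = add_gaussian_noise(np.sin(t))
mtf = markov_transition_field(x)
print(mtf.shape)  # (128, 128)
```

The resulting (T, T) array is the image handed to the image-domain anomaly detector; each pixel (i, j) holds the learned probability of moving from the bin of x_i to the bin of x_j.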
Funding: funded by the National Natural Science Foundation of China (Grant Nos. 42305150 and 42325501); the China Postdoctoral Science Foundation (Grant No. 2023M741774).
Abstract: Cloud base height (CBH) is a crucial parameter for cloud radiative effect estimates, climate change simulations, and aviation guidance. However, due to the limited information on cloud vertical structures included in passive satellite radiometer observations, few operational satellite CBH products are currently available. This study presents a new method for retrieving CBH from satellite radiometers. The method first uses the combined measurements of satellite radiometers and ground-based cloud radars to develop a lookup table (LUT) of effective cloud water content (ECWC), representing the vertically varying cloud water content. This LUT allows for the conversion of cloud water path to cloud geometric thickness (CGT), enabling the estimation of CBH as the difference between cloud top height and CGT. Detailed comparative analyses of CBH estimates from the state-of-the-art ECWC LUT are conducted against four ground-based millimeter-wave cloud radar (MMCR) measurements, and results show that the mean bias (correlation coefficient) is 0.18±1.79 km (0.73), which is lower (higher) than the 0.23±2.11 km (0.67) derived from the combined measurements of satellite radiometers and satellite radar-lidar (i.e., CloudSat and CALIPSO). Furthermore, the percentages of the CBH biases within 250 m increase by 5% to 10%, varying by location. This indicates that the CBH estimates from our algorithm are more consistent with ground-based MMCR measurements. Therefore, this algorithm shows great potential for further improvement of CBH retrievals as ground-based MMCRs are increasingly included in global surface meteorological observing networks, and the improved CBH retrievals will contribute to better cloud radiative effect estimates.
Funding: the Grant of Program for Scientific Research Innovation Team in Colleges and Universities of Anhui Province (2022AH010095); the Grant of Scientific Research and Talent Development Foundation of the Hefei University (No. 21-22RC15); the Key Research Plan of Anhui Province (No. 2022k07020011); the Grant of Anhui Provincial Natural Science Foundation, No. 2308085MF213; the Open Fund of Information Materials and Intelligent Sensing Laboratory of Anhui Province, IMIS202205; as well as the AI General Computing Platform of Hefei University.
Abstract: Depth estimation is an important task in computer vision. Collecting data at scale for monocular depth estimation is challenging, as this task requires simultaneously capturing RGB images and depth information. Therefore, data augmentation is crucial for this task. Existing data augmentation methods often employ pixel-wise transformations, which may inadvertently disrupt edge features. In this paper, we propose a data augmentation method for monocular depth estimation, which we refer to as the Perpendicular-Cutdepth method. This method involves cutting real-world depth maps along perpendicular directions and pasting them onto input images, thereby diversifying the data without compromising edge features. To validate the effectiveness of the algorithm, we compared it against current mainstream data augmentation algorithms using an existing convolutional neural network (CNN). Additionally, to verify the algorithm's applicability to Transformer networks, we designed an encoder-decoder network structure based on a Transformer to assess the generalization of our proposed algorithm. Experimental results demonstrate that, in the field of monocular depth estimation, our proposed Perpendicular-Cutdepth method outperforms traditional data augmentation methods. On the indoor dataset NYU, our method increases accuracy from 0.900 to 0.907 and reduces the error rate from 0.357 to 0.351. On the outdoor dataset KITTI, our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.
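The cut-and-paste idea described above can be sketched roughly as follows: copy a perpendicular (vertical) strip of the depth map onto the RGB input, so vertical edge structure is preserved rather than disturbed pixel-wise. This is a loose illustration of the concept only; the function name, strip-width range, and channel broadcasting are hypothetical choices, not the authors' implementation:

```python
import numpy as np

def perpendicular_cutdepth(image, depth, rng=None):
    """Paste a randomly chosen vertical strip of the depth map onto the
    RGB input (rough sketch of the cut-and-paste augmentation idea)."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = image.shape
    strip_w = int(rng.integers(w // 8, w // 4))   # hypothetical strip width
    x0 = int(rng.integers(0, w - strip_w))
    out = image.copy()
    # broadcast the single-channel depth across RGB inside the strip
    out[:, x0:x0 + strip_w, :] = depth[:, x0:x0 + strip_w, None]
    return out

img = np.random.default_rng(1).random((64, 64, 3))
dep = np.random.default_rng(2).random((64, 64))
aug = perpendicular_cutdepth(img, dep)
print(aug.shape)  # (64, 64, 3)
```

Because only whole columns are replaced, edges running along the perpendicular direction survive intact, which is the property the abstract emphasizes.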
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2022YFB2803900); the National Natural Science Foundation of China (Grant Nos. 61974075 and 61704121); the Natural Science Foundation of Tianjin Municipality (Grant Nos. 22JCZDJC00460 and 19JCQNJC00700); Tianjin Municipal Education Commission (Grant No. 2019KJ028); Fundamental Research Funds for the Central Universities (Grant No. 22JCZDJC00460).
Abstract: Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional-material images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual efforts. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits exploring novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Funding: supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant 1615013176) (https://www.kaia.re.kr/eng/main.do, accessed on 01/06/2024); supported by a Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korean Government (MOTIE) (141518499) (https://www.keit.re.kr/index.es?sid=a2, accessed on 01/06/2024).
Abstract: Damage to parcels reduces customer satisfaction with delivery services and increases return-logistics costs. This can be prevented by detecting and addressing the damage before the parcels reach the customer. Consequently, various studies have been conducted on deep learning techniques related to the detection of parcel damage. This study proposes a deep learning-based damage detection method for various types of parcels. The method is intended to be part of a parcel information-recognition system that identifies the volume and shipping information of parcels and determines whether they are damaged; this method is intended for use in the actual parcel-transportation process. For this purpose, 1) the study acquired image data in an environment simulating the actual parcel-transportation process, and 2) the training dataset was expanded based on StyleGAN3 with adaptive discriminator augmentation. Additionally, 3) a preliminary distinction was made between the appearance of parcels and their damage status to enhance the performance of the parcel damage detection model and analyze the causes of parcel damage. Finally, using the dataset constructed based on the proposed method, a damage type detection model was trained, and its mean average precision was confirmed. This model can improve customer satisfaction and reduce return costs for parcel delivery companies.
Abstract: BACKGROUND With the development of society and the improvement of material living standards, there is an increasingly strong demand for appearance and physical beauty in social life, marriage, and other aspects. An increasing number of people have improved their appearance and physical shape through aesthetic plastic surgery. The female breast plays a significant role in physical beauty, and droopy or atrophied breasts can frequently lead to psychological inferiority and a lack of confidence in women. This, in turn, can affect their mental health and quality of life. AIM To analyze preoperative and postoperative self-image pressure-level changes in autologous fat breast augmentation patients and their impact on social adaptability. METHODS We selected 160 patients who underwent autologous fat breast augmentation at the First Affiliated Hospital of Xinxiang Medical University from January 2020 to December 2022 using a random sampling method. The general information, self-image pressure level, and social adaptability of the patients were investigated using a basic information survey, a body image self-assessment scale, and a social adaptability scale. The self-image pressure-level changes and their effects on the social adaptability of patients before and after autologous fat breast augmentation were analyzed. RESULTS We collected 142 valid questionnaires. The single-factor analysis showed no statistically significant difference in the self-image pressure level and social adaptability score of patients of different ages, marital statuses, and monthly incomes. However, there were significant differences in social adaptability among patients with different education levels and employment statuses. The correlation analysis revealed a significant correlation between the self-image pressure level and social adaptability score before and after surgery. Multiple-factor analysis showed that the degree of concern caused by appearance in self-image pressure, the degree of possible behavioral intervention, the related distress caused by body image, and the influence of body image on social life influenced the social adaptability of autologous fat breast augmentation patients. CONCLUSION The self-image pressure on autologous fat breast augmentation patients is inversely proportional to their social adaptability.
Abstract: BACKGROUND Transcranial direct current stimulation (tDCS) is proven to be safe in treating various neurological conditions in children and adolescents. It is also an effective method in the treatment of OCD in adults. AIM To assess the safety and efficacy of tDCS as an add-on therapy in drug-naive adolescents with OCD. METHODS We studied drug-naïve adolescents with OCD, using the Children's Yale-Brown Obsessive-Compulsive Scale (CY-BOCS) to assess their condition. Both active and sham groups were given fluoxetine, and we applied the cathode and anode over the supplementary motor area and deltoid for 20 min in 10 sessions. Reassessment occurred at 2, 6, and 12 wk using CY-BOCS. RESULTS Eighteen adolescents completed the study (10 active, 8 sham). CY-BOCS scores from baseline to 12 wk reduced significantly in both groups, but the change from baseline to 2 wk was significant in the active group only. The mean change at 2 wk was greater in the active group (11.8±7.77 vs 5.25±2.22, P=0.056). Adverse effects between the groups were comparable. CONCLUSION tDCS is safe and well tolerated for the treatment of OCD in adolescents. However, there is a need for further studies with a larger sample population to confirm the effectiveness of tDCS as early augmentation in OCD in this population.
Funding: supported by the National Natural Science Foundation of China (81401127); the Clinical Research Project of Shanghai Municipal Health Commission (20204Y0173); the Open Project Program of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (VRLAB2022 B02); the Shanghai Key Laboratory of Psychotic Disorders Open Grant (21-K03); the Scientific Research Project of Traditional Chinese Medicine of Guangdong (20192070); the Guangzhou Municipal Key Discipline in Medicine (2021–2023); the Science and Technology Plan Project of Guangdong Province (2019B030316001).
Abstract: Background: Although clozapine is an effective option for treatment-resistant schizophrenia (TRS), 1/3 to 1/2 of TRS patients still do not respond to clozapine. The main purpose of this randomized, double-blind, placebo-controlled trial was to explore the efficacy of amisulpride augmentation on the psychopathological symptoms and cognitive function of clozapine-resistant treatment-refractory schizophrenia (CTRS) patients. Methods: A total of 80 patients were recruited and randomly assigned to receive initial clozapine plus amisulpride (amisulpride group) or clozapine plus placebo (placebo group). The Positive and Negative Syndrome Scale (PANSS), Scale for the Assessment of Negative Symptoms (SANS), Clinical Global Impression (CGI) scale, Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), Treatment Emergent Symptom Scale (TESS), laboratory measurements, and electrocardiograms (ECG) were performed at baseline, week 6, and week 12. Results: Compared with the placebo group, the amisulpride group had a lower PANSS total score, positive subscore, and general psychopathology subscore at week 6 and week 12 (P_Bonferroni<0.01). Furthermore, compared with the placebo group, the amisulpride group showed an improved RBANS language score at week 12 (P_Bonferroni<0.001). The amisulpride group had a higher treatment response rate (P=0.04) and lower CGI severity and CGI efficacy scores at week 6 and week 12 than the placebo group (P_Bonferroni<0.05). There were no differences between the groups in body mass index (BMI), corrected QT (QTc) intervals, or laboratory measurements. This study demonstrates that amisulpride augmentation therapy can safely improve the psychiatric symptoms and cognitive performance of CTRS patients.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42205140 and 41975035); the National Key Research and Development Program of China (2021YFB3901000).
Abstract: Measurements of carbon dioxide (CO_2), methane (CH_4), and carbon monoxide (CO) are of great importance in the Qinghai-Tibetan region, as it is the highest and largest plateau in the world, affecting global weather and climate systems. In this study, for the first time, we present CO_2, CH_4, and CO column measurements carried out by a Bruker EM27/SUN Fourier-transform infrared spectrometer (FTIR) at Golmud (36.42°N, 94.91°E, 2808 m) in August 2021. The mean and standard deviation of the column-averaged dry-air mixing ratios of CO_2, CH_4, and CO (XCO_2, XCH_4, and XCO) are 409.3±0.4 ppm, 1905.5±19.4 ppb, and 103.1±7.7 ppb, respectively. The differences between the FTIR and co-located TROPOMI/S5P satellite measurements at Golmud are 0.68±0.64% (13.1±12.2 ppb) for XCH_4 and 9.81±3.48% (–10.7±3.8 ppb) for XCO, which are within their retrieval uncertainties. High correlations for both XCH_4 and XCO are observed between the FTIR and S5P satellite measurements. Using the FLEXPART model and satellite measurements, we find that enhanced CH_4 and CO columns at Golmud are affected by anthropogenic emissions transported from North India. This study provides an insight into the variations of the CO_2, CH_4, and CO columns in the Qinghai-Tibetan Plateau.
Funding: The authors received the research fund T2022-CN-006 for this study.
Abstract: It can be said that the automatic classification of musical genres plays a very important role in today's digital technology world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) dataset. For Small FMA, it is more efficient to augment the data by generating an echo rather than pitch shifting. The research results show that the DenseNet121 model and data augmentation methods such as noise addition and echo generation achieve a classification accuracy of 98.97% on the Small FMA dataset when the sampling frequency is lowered to 16000 Hz. The classification accuracy of this study outperforms the majority of previous results on the same Small FMA dataset.
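Echo generation, the augmentation the abstract finds most effective, amounts to mixing a delayed, attenuated copy of the waveform back into itself. A minimal sketch, assuming a 16 kHz sampling rate as in the paper; the delay and decay values are illustrative, not the authors' settings:

```python
import numpy as np

def add_echo(y, sr, delay_s=0.25, decay=0.5):
    """Mix a delayed, attenuated copy of the signal back into itself."""
    d = int(sr * delay_s)          # delay in samples
    out = np.copy(y)
    out[d:] += decay * y[:-d]      # add the echo after the delay
    return out

sr = 16_000                        # the paper downsamples audio to 16 kHz
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of a 440 Hz tone
echoed = add_echo(y, sr)
print(echoed.shape)  # (16000,)
```

The first `delay_s` seconds are unchanged; everything after carries the superposed echo, giving the classifier a timbrally varied copy of each training clip.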
Funding: Ahmed Alhussen would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2022-####.
Abstract: A brain tumor is a lethal neurological disease that affects the normal functioning of the brain and can be fatal. In India, around 15 million cases are diagnosed yearly. To mitigate the seriousness of the tumor, it is essential to diagnose it at an early stage. However, the manual evaluation process using Magnetic Resonance Imaging (MRI) raises several concerns, notably inefficient and inaccurate brain tumor diagnoses. Similarly, the examination of brain tumors is intricate, as they display high variability in shape, size, appearance, and location. Therefore, a precise and expeditious prognosis of brain tumors is essential for implementing an appropriate treatment. Several computer models have been adapted to diagnose the tumor, but their accuracy needs to be tested. Considering all of the above, this work aims to identify the best classification system by comparing the prediction accuracy of AlexNet, ResNet 50, and Inception V3. Data augmentation is performed on the database and fed into the three convolutional neural network (CNN) models. A comparison is drawn between the three models based on accuracy and performance. An accuracy of 96.2% is obtained for AlexNet with augmentation, which performed better than ResNet 50 and Inception V3 at the 120th epoch. With the suggested model's higher accuracy, brain tumor diagnosis with the available datasets is highly reliable.
Abstract: Offline signature verification (OfSV) is essential in preventing the falsification of documents. Deep learning (DL) based OfSV systems require a high number of signature images to attain acceptable performance. However, a limited number of signature samples are available to train these models in a real-world scenario. Several researchers have proposed models to augment new signature images by applying various transformations. Others, on the other hand, have used human neuromotor and cognitive-inspired augmentation models to address the demand for more signature samples. Hence, augmenting a sufficient number of signatures with variations is still a challenging task. This study proposed OffSig-SinGAN: a deep learning-based image augmentation model to address the limited-number-of-signatures problem in offline signature verification. The proposed model is capable of augmenting better quality signatures with diversity from a single signature image only. It is empirically evaluated on a widely used public dataset, GPDSsyntheticSignature. The quality of augmented signature images is assessed using four metrics: pixel-by-pixel difference, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Fréchet inception distance (FID). Furthermore, various experiments were organised to evaluate the proposed image augmentation model's performance on selected DL-based OfSV systems and to prove whether it helped to improve the verification accuracy rate. Experiment results showed that the proposed augmentation model performed better on the GPDSsyntheticSignature dataset than other augmentation methods. The improved verification accuracy rate of the selected DL-based OfSV system proved the effectiveness of the proposed augmentation model.
Funding: the United States Air Force Office of Scientific Research (AFOSR) contract FA9550-22-1-0268 awarded to KHA, https://www.afrl.af.mil/AFOSR/. The contract is entitled: "Investigating Improving Safety of Autonomous Exploring Intelligent Agents with Human-in-the-Loop Reinforcement Learning," and in part by Jackson State University.
Abstract: The object detection technique depends on various methods for duplicating the dataset without adding more images. Data augmentation is a popular method that assists deep neural networks in achieving better generalization performance and can be seen as a type of implicit regularization. This method is recommended in the case where the amount of high-quality data is limited and gaining new examples is costly and time-consuming. In this paper, we trained YOLOv7 with a dataset that is part of the Open Images dataset and has 8,600 images with four classes (Car, Bus, Motorcycle, and Person). We used five different data augmentation techniques to duplicate and improve our dataset. The performance of the object detection algorithm was compared when using the proposed augmented dataset with combinations of two and three types of data augmentation against the results on the original data. The evaluation shows a promising result for every object, and each kind of data augmentation gives a different improvement. The mAP@.5 of all classes was 76%, and the F1-score was 74%. The proposed method increased the mAP@.5 value by +13% and the F1-score by +10% for all objects.
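Combining two or three augmentation types, as evaluated above, is typically implemented as a pipeline of composable image transforms. A minimal image-level sketch (the abstract does not name its five techniques, so the flip/brightness/noise operations here are generic examples; for detection, bounding-box coordinates would also need transforming alongside the pixels):

```python
import numpy as np

def hflip(img):
    """Horizontal flip (box x-coordinates would need mirroring too)."""
    return img[:, ::-1]

def brightness(img, delta=0.2):
    """Additive brightness shift, clipped to the valid [0, 1] range."""
    return np.clip(img + delta, 0.0, 1.0)

def gauss_noise(img, sigma=0.05, rng=None):
    """Additive Gaussian pixel noise, clipped to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0, sigma, img.shape), 0.0, 1.0)

def augment_pipeline(img, ops):
    """Apply a chosen combination of augmentations in sequence."""
    for op in ops:
        img = op(img)
    return img

img = np.random.default_rng(3).random((32, 32, 3))
pair = augment_pipeline(img, [hflip, brightness])               # 2 combined
triple = augment_pipeline(img, [hflip, brightness, gauss_noise])  # 3 combined
print(pair.shape, triple.shape)  # (32, 32, 3) (32, 32, 3)
```

Each combination yields a distinct duplicated copy of the training image, which is how the dataset is expanded without collecting new photographs.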
Funding: supported in part by the National Natural Science Foundation of China (Nos. 62101039 and 62201051); in part by the Shandong Excellent Young Scientists Fund Program (Overseas); in part by the China Postdoctoral Science Foundation (No. 2022M720443).
Abstract: Near-Earth asteroid collisions could cause catastrophic disasters for humanity and the Earth, so it is crucial to monitor asteroids. Ground-based synthetic aperture radar (SAR) is an observation technique for high-resolution imaging of asteroids. Ground-based SAR requires a long integration time to achieve a large synthetic aperture, and the echo signal is seriously affected by the temporally and spatially varying troposphere. Traditional spatiotemporally frozen tropospheric models are ineffective. To cope with this, this paper models and analyzes the impacts of the temporal-spatial variant troposphere on ground-based SAR imaging of asteroids. For the background troposphere, a temporal-spatial variant ray tracing method is proposed to trace the 4D (3D spatial + temporal) refractive index network provided by the numerical weather model and calculate the error of the background troposphere. For tropospheric turbulence, the Andrew power spectral model is used in conjunction with multi-phase-screen theory, and varying errors are obtained by tracking the changing position of the pierce point on the phase screen. Through simulation, the impact of temporal-spatial variant tropospheric errors on image quality is analyzed, and the results show that the X-band echo signal is seriously affected by the troposphere and must be compensated.
Abstract: Diabetic retinopathy is a disease caused by abnormal growth of blood vessels, which produces spots in the vision and vision loss. Various techniques are applied to identify the disease at an early stage with different methods and parameters. Machine learning (ML) techniques are used for analyzing the images and finding the location of the disease. A limitation of ML is the dataset size used for model evaluation. This problem has been overcome by using an augmentation method that generates larger datasets with multidimensional features. Existing models use only one augmentation technique, which produces limited dataset features and also lacks association among those data during DR detection, so multilevel augmentation is proposed for analysis. The proposed method performs in two phases, namely an integrated augmentation model and dataset correlation (i.e., relationships). It eliminates the overfitting problem by considering the relevant dataset. This method is used for solving the diabetic retinopathy problem with thin-vessel identification using the UNET model. UNET-based image segmentation achieves 98.3% accuracy with a high detection rate when compared to RV-GAN and different UNET models.
Funding: supported by the National Natural Science Foundation of China (Nos. 62072127, 62002076, and 61906049); the Natural Science Foundation of Guangdong Province (Nos. 2023A1515011774 and 2020A1515010423); Project 6142111180404 supported by CNKLSTISS; the Science and Technology Program of Guangzhou, China (No. 202002030131); the Guangdong Basic and Applied Basic Research Fund Joint Fund Youth Fund (No. 2019A1515110213); the Open Fund Project of the Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University) (No. MJUKF-IPIC202101); the Scientific Research Project for Guangzhou University (No. RP2022003).
Abstract: Recent state-of-the-art semi-supervised learning (SSL) methods usually use data augmentations as core components. Such methods, however, are limited to simple transformations, such as augmentations under an instance's naive representations or under its semantic representations. To tackle this problem, we offer a unique insight into data augmentation and propose a novel data-augmentation-based semi-supervised learning method called Attentive Neighborhood Feature Augmentation (ANFA). The motivation for our method lies in the observation that the relationship between a given feature and its neighborhood may contribute to constructing more reliable transformations for the data, further helping the classifier distinguish ambiguous features in low-density regions. Specifically, we first project the labeled and unlabeled data points into an embedding space and then construct a neighbor graph that serves as a similarity measure based on the representations in the embedding space. Then, we employ an attention mechanism to transform the target features into augmented ones based on the neighbor graph. Finally, we formulate a novel semi-supervised loss by encouraging the predictions for interpolations of augmented features to be consistent with the corresponding interpolations of the predictions for the target features. We carried out experiments on the SVHN and CIFAR-10 benchmark datasets, and the results demonstrate that our method outperforms state-of-the-art methods when the number of labeled examples is limited.
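The neighbor-graph-plus-attention step can be sketched in a few lines: build a cosine-similarity graph over embedded features, keep each point's k nearest neighbors, and replace the feature with an attention-weighted mixture of those neighbors. This is a generic illustration of the mechanism, not the ANFA implementation; the value of k and the softmax temperature are assumptions.

```python
import numpy as np

# Attention-based neighborhood feature augmentation sketch:
# cosine-similarity neighbor graph + softmax attention over the
# k nearest neighbors of each embedded feature.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighborhood_augment(feats, k=3, temperature=0.1):
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T                     # cosine similarity graph
    np.fill_diagonal(sim, -np.inf)              # exclude self-edges
    idx = np.argsort(-sim, axis=1)[:, :k]       # k nearest neighbors
    attn = softmax(np.take_along_axis(sim, idx, axis=1) / temperature)
    neigh = feats[idx]                          # (n, k, d) neighbor features
    return (attn[..., None] * neigh).sum(axis=1)  # attention-weighted mix

rng = np.random.default_rng(1)
features = rng.normal(size=(10, 8))
augmented = neighborhood_augment(features)
print(augmented.shape)                          # one augmented vector each
```

Because the augmented feature is a convex combination of real neighbors, it stays on (or near) the data manifold, which is what makes such transformations "more reliable" than arbitrary perturbations.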
Abstract: Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output of the ATE model is used as the input to the APC model, which may result in error propagation between steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios where ATE is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on the two subtasks simultaneously. This paper integrates the multi-task model, namely Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC), with the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding (AraBERT)) to increase the diversity of the training dataset. This paper examined the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word-embedding-based data augmentation method (FastText) on the combined dataset.
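The "word embedding substitution with constraints" idea can be illustrated compactly: replace eligible tokens with their nearest neighbor in embedding space, but protect aspect terms from substitution so the ATE labels stay valid. The toy embedding table below is purely illustrative (the paper uses FastText vectors for Arabic); the vocabulary, vector size, and protected set are assumptions.

```python
import numpy as np

# Word-embedding substitution augmentation sketch with a constraint:
# eligible words are replaced by their nearest embedding neighbor,
# but aspect terms (the protected set) are never touched.

rng = np.random.default_rng(2)
vocab = ["good", "great", "bad", "poor", "food", "service"]
emb = {w: rng.normal(size=16) for w in vocab}
# Make synonym pairs close in the toy space so substitution is sensible.
emb["great"] = emb["good"] + 0.05 * rng.normal(size=16)
emb["poor"] = emb["bad"] + 0.05 * rng.normal(size=16)

def nearest(word):
    """Most cosine-similar vocabulary word other than `word` itself."""
    v = emb[word] / np.linalg.norm(emb[word])
    best, best_sim = word, -2.0
    for w, u in emb.items():
        if w == word:
            continue
        sim = float(v @ (u / np.linalg.norm(u)))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

def augment(tokens, protected):
    return [t if t in protected or t not in emb else nearest(t)
            for t in tokens]

sentence = ["the", "food", "was", "good"]
augmented = augment(sentence, protected={"food"})
print(augmented)  # aspect term "food" preserved, sentiment word swapped
```

Keeping aspect terms fixed is the constraint that lets the augmented sentence reuse the original ATE/APC annotations unchanged.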
Funding: supported by the National Natural Science Foundation of China (42027805) and the National Aeronautical Fund (ASFC-20172080005).
Abstract: Convolutional neural networks (CNNs) are well suited to bearing fault classification because of their ability to learn discriminative spectro-temporal patterns. However, gathering sufficient cases of faulty conditions in real-world engineering scenarios to train an intelligent diagnosis system is challenging. This paper proposes a fault diagnosis method combining several augmentation schemes to alleviate the problem of limited fault data. We begin by identifying the relevant parameters that influence the construction of a spectrogram. We leverage the uncertainty principle in time-frequency signal processing, which makes it impossible to achieve good time and frequency resolution simultaneously. A key determinant of this trade-off is the choice and length of the window function used in the short-time Fourier transform. The Gaussian, Kaiser, and rectangular windows are selected for the experiments because of their diverse characteristics. The overlap parameter also influences the outcome and resolution of the spectrogram. A 50% overlap is used in the original data transformation, and ±25% is used to implement an effective augmentation policy, to which a two-stage regular CNN can be applied to achieve improved performance. The best model reaches an accuracy of 99.98% and a cross-domain accuracy of 92.54%. When combined with data augmentation, the proposed model yields cutting-edge results.
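The augmentation policy the abstract describes varies two STFT parameters: the window function (Gaussian, Kaiser, rectangular) and the overlap (50% ± 25%). A minimal numpy-only sketch of that policy on a synthetic tone; the sampling rate, segment length, and window parameters are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

# Spectrogram-level augmentation sketch: compute magnitude spectrograms of
# the same signal under several window/overlap choices. Different windows
# and overlaps trade time resolution against frequency resolution, giving
# the CNN multiple views of one vibration record.

def stft_mag(x, window, nperseg, noverlap):
    step = nperseg - noverlap
    frames = [x[i:i + nperseg] * window
              for i in range(0, len(x) - nperseg + 1, step)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

fs = 12_000                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 500 * t)              # stand-in vibration signal

nperseg = 256
n = np.arange(nperseg)
windows = {
    "gaussian": np.exp(-0.5 * ((n - nperseg / 2) / 32) ** 2),
    "kaiser": np.kaiser(nperseg, 8.0),
    "rectangular": np.ones(nperseg),
}
overlaps = [nperseg // 4, nperseg // 2, 3 * nperseg // 4]  # 25/50/75 %

spectrograms = [stft_mag(x, w, nperseg, nov)
                for w in windows.values() for nov in overlaps]
print(len(spectrograms))  # 3 windows x 3 overlaps = 9 views per signal
```

Every spectrogram shares the same frequency-bin count (`nperseg // 2 + 1`), while the number of time frames grows with overlap; resizing or cropping to a fixed shape before feeding the CNN would be the remaining preprocessing step.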