To address the weak prediction stability and generalization ability of neural network models in yarn quality prediction with small samples, a prediction model based on the AdaBoost algorithm (AdaBoost model) was established. A prediction model based on linear regression (LR model) and one based on a multi-layer perceptron neural network (MLP model) were established for comparison. Prediction experiments for yarn evenness and yarn strength were carried out. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of the models, and K-fold cross-validation was used to evaluate their generalization ability. In the prediction experiments, the determination coefficient of the AdaBoost model's yarn evenness predictions is 76% and 87% higher than those of the LR model and the MLP model, respectively, while its determination coefficient for yarn strength is slightly higher than those of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset does, the AdaBoost model adapts best to nonlinear data among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and in a series of prediction experiments at eight different training set sizes. This demonstrates that the AdaBoost model offers not only good prediction accuracy but also good prediction stability and generalization ability for small samples.
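A minimal sketch of the three-model comparison described above, assuming scikit-learn and a hypothetical small dataset standing in for the cotton indicators and yarn targets (the paper's real data are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                   # stand-in cotton fiber indicators
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=60)   # stand-in nonlinear yarn target

models = {
    "AdaBoost": AdaBoostRegressor(n_estimators=100, random_state=0),
    "LR": LinearRegression(),
    "MLP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    # determination coefficient (R^2) under K-fold cross-validation
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f} (+/- {r2.std():.3f})")
```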
Accurate prediction of the internal corrosion rates of oil and gas pipelines could be an effective way to prevent pipeline leaks. In this study, a framework for predicting corrosion rates from a small sample of laboratory metal corrosion data was developed, offering a new perspective on pipeline corrosion prediction when real samples are insufficient. The approach employs the bagging algorithm to construct a strong learner by integrating several KNN learners. A total of 99 data points were collected and split into training and test sets at a 9:1 ratio. The training set was used to obtain the best hyperparameters by 10-fold cross-validation and grid search, and the test set was used to assess model performance. The results show that the Mean Absolute Error (MAE) of this framework is 28.06% of that of the traditional model, and that it outperforms other ensemble methods. The proposed framework is therefore suitable for metal corrosion prediction under small-sample conditions.
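A minimal sketch of the described pipeline, assuming scikit-learn ≥ 1.2: bag several KNN learners, tune hyperparameters by 10-fold cross-validation with grid search on a 9:1 split, and report MAE on the held-out test set. X and y are synthetic stand-ins for the 99 laboratory corrosion records:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(99, 5))                         # stand-in corrosion features
y = X @ rng.uniform(size=5) + 0.05 * rng.normal(size=99)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
grid = GridSearchCV(
    BaggingRegressor(estimator=KNeighborsRegressor(), random_state=0),
    param_grid={"n_estimators": [10, 30, 50],
                "estimator__n_neighbors": [3, 5, 7]},
    cv=10, scoring="neg_mean_absolute_error",
)
grid.fit(X_tr, y_tr)                                  # 10-fold CV + grid search
print("best params:", grid.best_params_)
print("test MAE:", mean_absolute_error(y_te, grid.predict(X_te)))
```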
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because determining the probability distribution of small samples is difficult and complex, traditional probability theory cannot readily be used to process such samples or to assess their degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of these two quantities, with their characteristics and algorithms, are introduced. Related problems, including the algorithm for the estimated value, the standard deviation, and the acceptance and rejection criteria for samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weighting algorithm and to compare different samples. Several examples demonstrate the application of the proposed approach and show that it is feasible and effective while placing no demands on the probability distribution of small samples.
Object detection models based on convolutional neural networks (CNN) have achieved state-of-the-art performance, but they rely heavily on large-scale training samples. This makes them inadequate for specific applications, such as the detection of military objects, where large numbers of samples are hard to obtain. To solve this problem, this paper proposes Gabor-CNN for object detection from a small number of samples. First, a feature extraction convolution kernel library composed of multi-shape Gabor and color Gabor kernels is constructed, and the optimal Gabor convolution kernel group is obtained through training and screening; convolving it with the input image yields strongly discriminative auxiliary feature information about objects. Then, the k-means clustering algorithm is adopted to construct anchor boxes of several different sizes, which improves the quality of the region proposals. We call this region proposal process the Gabor-assisted Region Proposal Network (Gabor-assisted RPN). Finally, the Deeply-Utilized Feature Pyramid Network (DU-FPN) method is proposed to strengthen the feature expression of objects in the image: a bottom-up and a top-down feature pyramid are constructed in ResNet-50, and object feature information is deeply exploited through the transverse connection and integration of features at various scales. Experimental results show that the proposed method outperforms state-of-the-art comparison models on datasets with small samples in terms of accuracy and recall, and thus has strong application prospects.
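A minimal sketch of the anchor-box step: cluster ground-truth box sizes with k-means to derive anchors. Plain Euclidean k-means is used here as a simplification (detector pipelines often cluster on an IoU distance instead), and the boxes array is a hypothetical stand-in for the paper's training annotations:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
boxes = rng.uniform(10, 200, size=(500, 2))   # stand-in (w, h) of labeled objects

kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(boxes)
# sort anchors by area so small, medium, and large scales are grouped
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print("anchor (w, h) sizes, small to large:\n", anchors.round(1))
```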
The effectiveness of data-driven intelligent fault diagnosis of mechanical equipment is usually premised on big data and class balance. However, owing to limitations of the working environment, operating conditions, and equipment status, the fault data collected from mechanical equipment are often scarce and imbalanced relative to normal samples. To resolve this dilemma in the fault diagnosis of practical mechanical equipment, an auxiliary generative mutual adversarial network (AGMAN) is proposed. First, the generator, combined with an auto-encoder (AE), constructs a decoder reconstruction feature loss that helps it map accurately from the noise distribution to the real data distribution, generating high-quality fake samples that supplement the imbalanced dataset and improve the accuracy of small-sample, class-imbalanced fault diagnosis. Second, the discriminator adopts a structure with unshared dual discriminators. Mutual adversarial training between the dual discriminators is realized by giving them completely opposite scoring criteria for real and fake samples, which improves the quality and diversity of generated samples and avoids mode collapse. Finally, the auxiliary generator and the dual discriminators are updated alternately: the auxiliary generator learns to generate fake samples that deceive both discriminators simultaneously, while neither discriminator can assign correct scores to real and fake samples under its own criterion, thereby reaching a Nash equilibrium. Verification on three different test-bed datasets shows that the proposed method generates high-quality fake samples and greatly improves the accuracy of class-imbalanced fault diagnosis with small samples; especially under extreme imbalance, supplementing fake samples with this method substantially improves the fault diagnosis accuracy of DCNN and SAE. The proposed method thus provides an effective solution for small-sample, class-imbalanced fault diagnosis.
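A minimal PyTorch sketch of the dual-discriminator idea only, under simplifying assumptions: toy MLP networks, random tensors standing in for fault features, and none of AGMAN's auto-encoder or reconstruction loss. It shows the opposite scoring criteria and a generator that must deceive both discriminators at once:

```python
import torch
from torch import nn

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

G, D1, D2 = mlp(8, 16), mlp(16, 1), mlp(16, 1)   # generator, unshared dual discriminators
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)
real = torch.randn(32, 16)                       # stand-in for real fault features

for step in range(200):
    fake = G(torch.randn(32, 8)).detach()
    # opposite scoring criteria: D1 targets real=1/fake=0, D2 targets real=0/fake=1
    d_loss = (bce(D1(real), ones) + bce(D1(fake), zeros)
              + bce(D2(real), zeros) + bce(D2(fake), ones))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # the generator tries to deceive both discriminators under their own criteria
    fake = G(torch.randn(32, 8))
    g_loss = bce(D1(fake), ones) + bce(D2(fake), zeros)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print("final d_loss, g_loss:", float(d_loss), float(g_loss))
```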
In shooting tests of two-dimensional trajectory correction fuzes with fixed canards, the longitudinal dispersion of the projectile is so large that it sometimes exceeds the correction ability of the fuze actuator. The impact point then easily deviates from the target, and the correction result cannot be readily evaluated; yet shooting tests are too costly to conduct in large numbers for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the positions of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on Kullback-Leibler divergence, which to some extent prevents the real data from being "submerged" by the simulation data and achieves fusion Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods. It is thus better suited to reflecting the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms; in addition, this study provides a knowledge base for further comprehensive studies.
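A minimal sketch of the Bootstrap-expansion ingredient alone, assuming hypothetical 2-D impact-point coordinates; the paper's compatibility test and KL-weighted fusion with simulation priors are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
impacts = rng.normal([0.0, 0.0], [5.0, 8.0], size=(8, 2))  # 8 real impact points

B = 2000
# resample the small sample with replacement B times and record each mean
boot_means = np.array([impacts[rng.integers(0, len(impacts), len(impacts))].mean(axis=0)
                       for _ in range(B)])
center = boot_means.mean(axis=0)        # dispersion-center estimate
se = boot_means.std(axis=0, ddof=1)     # its bootstrap standard error
print("estimated dispersion center:", center.round(2), "SE:", se.round(2))
```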
Tea plants are susceptible to diseases during their growth, and these diseases seriously affect the yield and quality of tea. Effective disease prevention and control requires accurate disease identification. With the development of artificial intelligence and computer vision, automatic recognition of plant diseases from image features has become feasible. Because the support vector machine (SVM) is well suited to high-dimensional, high-noise, small-sample learning, this paper uses SVM learning to segment the disease spots of diseased tea plants. An improved Conditional Deep Convolutional Generative Adversarial Network with Gradient Penalty (C-DCGAN-GP) was then used to augment the segmented tea spot images. Finally, the Visual Geometry Group 16 (VGG16) deep learning classification network was trained on the expanded tea lesion images to recognize tea diseases.
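A minimal sketch of SVM-based lesion segmentation under simplifying assumptions: pixels are classified by raw color features, and the synthetic "healthy" and "lesion" color samples stand in for manually annotated training pixels:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
healthy = rng.normal([40, 120, 50], 15, size=(200, 3))   # stand-in RGB of leaf
lesion = rng.normal([120, 90, 40], 15, size=(200, 3))    # stand-in RGB of spots
X = np.vstack([healthy, lesion])
y = np.array([0] * 200 + [1] * 200)

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
image = rng.normal([40, 120, 50], 30, size=(64, 64, 3))  # stand-in leaf image
mask = svm.predict(image.reshape(-1, 3)).reshape(64, 64) # 1 marks lesion pixels
print("lesion fraction:", mask.mean().round(3))
```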
In data-driven fault diagnosis for turbo-generator sets, fault samples are usually expensive to obtain and inevitably noisy, both of which degrade the identification performance of diagnosis models. To address these issues, this paper proposes a fault diagnosis model for turbo-generator sets based on the Weighted Extension Neural Network (W-ENN). W-ENN is a novel neural network with three types of connection weights and an improved correlation function. The performance of the proposed model is validated against models based on the Extension Neural Network (ENN), Support Vector Machine (SVM), Relevance Vector Machine (RVM), and Extreme Learning Machine (ELM). The results indicate that, on noisy small-sample sets, the proposed model is superior to the others, achieving higher identification accuracy with fewer samples and strong noise tolerance. The findings may serve as a powerful fault diagnosis model for turbo-generator sets on noisy small-sample sets.
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and its variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately on small samples but inaccurately on very small samples (10 samples); that LSMEQFM and LSMEQFMCC can be applied successfully to very small samples; that, by accounting for the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and that the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC estimates the confidence interval of the quantile function well, with the LSMEQFMCC-based version being the most stable and accurate on very small samples (10 samples).
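A minimal sketch of the sample probability weighted moments (PWMs) on which these quantile methods are built, using the standard Hosking unbiased estimator of beta_r = E[X F(X)^r]; the Gumbel sample is a hypothetical stand-in for the paper's data:

```python
import numpy as np
from math import comb

def sample_pwm(x, r):
    """Unbiased estimator b_r of beta_r = E[X F(X)^r] from an i.i.d. sample."""
    x = np.sort(np.asarray(x, dtype=float))   # ascending order statistics
    n = len(x)
    # weight of the i-th order statistic: C(i-1, r) / C(n-1, r)
    w = np.array([comb(i - 1, r) for i in range(1, n + 1)]) / comb(n - 1, r)
    return np.mean(w * x)

rng = np.random.default_rng(5)
x = rng.gumbel(size=10)                       # a "very small sample" of 10 points
print([round(sample_pwm(x, r), 4) for r in range(3)])
```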
An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is challenging because of input uncertainty, and only small samples can be used owing to the high cost of experimental measurements; model validation, however, gives decision makers more confidence while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined with a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem posed by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation to build the probability model from the calibration data. Model assessment is carried out with the validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model, with results consistent with those obtained by other researchers. The results demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to model validation with small samples.
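A minimal sketch of bandwidth selection for KDE by cross-validated likelihood, assuming scikit-learn; the small synthetic sample stands in for the calibration data, and leave-one-out scoring is one common choice for such small samples:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV, LeaveOneOut

rng = np.random.default_rng(6)
samples = rng.normal(10.0, 2.0, size=25).reshape(-1, 1)   # small calibration set

grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.1, 3.0, 30)},
                    cv=LeaveOneOut())       # maximize held-out log-likelihood
grid.fit(samples)
kde = grid.best_estimator_
print("optimum bandwidth:", round(kde.bandwidth, 3))
print("log-density at 10.0:", kde.score_samples([[10.0]])[0].round(3))
```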
The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n0 (number of zeros) and n1 (number of ones) as well as the number of runs. Fifty-nine samples are created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities of the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in calculating the exact two-tailed probability, whereas the new proposal and the SPSS procedure are more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
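A minimal sketch of an exact two-tailed probability from the classical distribution of the number of runs R given n0 zeros and n1 ones. The tail convention used here (summing all P(R=k) no larger than P(R=r)) is an assumption for illustration; the programs compared in the paper differ precisely in such conventions:

```python
from math import comb

def runs_pmf(r, n0, n1):
    """Exact P(R = r) for the number of runs in a binary sequence."""
    total = comb(n0 + n1, n0)
    if r % 2 == 0:
        k = r // 2
        return 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1) / total
    k = (r - 1) // 2
    return (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
            + comb(n0 - 1, k) * comb(n1 - 1, k - 1)) / total

def exact_two_tailed(r, n0, n1):
    probs = [runs_pmf(k, n0, n1) for k in range(2, n0 + n1 + 1)]
    p_obs = runs_pmf(r, n0, n1)
    return sum(p for p in probs if p <= p_obs + 1e-12)

print(exact_two_tailed(r=3, n0=5, n1=5))   # e.g., 10 data points, 3 runs
```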
Detecting small-sample targets against a complex background is always difficult in remote sensing image classification. We propose a new small-sample target detection method combining local features and a convolutional neural network (LF-CNN), with the aim of detecting small numbers of unevenly distributed ground-object targets in remote sensing images. The k-nearest neighbor method is used to construct the local neighborhood of each point, and the local neighborhood features are extracted one by one from the convolution layer. All local features are aggregated by max pooling to obtain a global feature representation. The classification probability of each category is then calculated using the scaled exponential linear units (SELU) function and the fully connected layer. The experimental results show that the proposed LF-CNN method achieves high accuracy in target detection and classification of hyperspectral remote sensing data under small-sample conditions. Despite drawbacks in time and complexity, the LF-CNN integrates the local features of ground-object samples more effectively than traditional target detection methods and improves the accuracy of target identification and detection in small samples of remote sensing images.
Reliability assessment of the braking system in a high-speed train under small sample size and zero-failure data is very important for safe operation. Traditional reliability assessment methods perform well only with large sample sizes and complete failure data, and exhibit large deviations under small sample sizes and zero-failure data. To improve on this, a new Bayesian method is proposed. Based on the characteristics of the solenoid valve in the braking system of a high-speed train, the modified Weibull distribution is selected to describe the failure rate over the entire lifetime. Under the assumption of a binomial distribution for the failure probability at each censored time, a concave method is employed to obtain the relationships between the accumulated failure probabilities. A numerical simulation is performed to compare the results of the proposed method with those obtained from maximum likelihood estimation, and to illustrate that the proposed Bayesian model achieves better accuracy for the expectation value when the sample size is less than 12. Finally, the robustness of the model is demonstrated by obtaining the reliability indicators for a numerical case involving the solenoid valve of the braking system, which shows that the changes in reliability and failure rate across different hyperparameters are small. The method avoids being misled by subjective information and improves the accuracy of reliability assessment under small sample sizes and zero-failure data.
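A minimal sketch of the binomial/Bayesian ingredient only: with zero failures observed among n valves surviving to a censored time, a Beta(a, b) prior on the failure probability updates to Beta(a, b + n). The paper's concave method and modified Weibull distribution layer on top of this and are not reproduced; a, b, and n below are hypothetical:

```python
from scipy import stats

a, b = 1.0, 1.0          # prior Beta hyperparameters (uniform prior)
n = 10                   # valves surviving to the censored time, zero failures

# binomial likelihood with 0 failures: posterior is Beta(a + 0, b + n)
posterior = stats.beta(a, b + n)
print("posterior mean failure probability:", round(posterior.mean(), 4))
print("95% upper credible bound:", round(posterior.ppf(0.95), 4))
```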
Abundant test data are required to assess weapon performance. When weapon test data are insufficient, Bayesian analysis under small-sample circumstances should be considered, with the test data supplemented by simulations. Several Bayesian approaches are discussed and some of their limitations are identified. After analyzing these limitations, an improvement is put forward, and the improved approach is applied to the performance assessment of a new weapon.
Hydraulic pumps are reliable, long-life hydraulic components, so their reliability evaluation is characterized by long test periods, high cost, and high power loss. Based on the principles of energy saving and power recovery, a small-sample hydraulic pump reliability test rig is built, the service life of the hydraulic pump is predicted, and the sampling period of the reliability test is then optimized. Considering the performance degradation mechanism of the hydraulic pump, the feature information of the degradation distribution of the pump's volumetric efficiency during the test is collected; an optimal degradation path model of the feature information is selected on the basis of fitting accuracy, and pseudo life data are obtained. A period-constrained optimization search strategy for the small-sample reliability test of the hydraulic pump is then constructed to solve the optimization problem of the test sampling period and the tightening end threshold, and the accuracy of the minimum sampling period is verified by a non-parametric hypothesis test. Simulation results show that the approach has instructive significance and reference value for hydraulic pump reliability life evaluation and for the research and design of such tests.
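A minimal sketch of the pseudo-life idea: fit candidate degradation-path models to volumetric-efficiency measurements, keep the best-fitting one, and extrapolate to the failure threshold. The measurements, candidate models, and 0.92 threshold below are hypothetical stand-ins:

```python
import numpy as np

t = np.array([0, 200, 400, 600, 800, 1000], dtype=float)    # test hours
eta = np.array([0.960, 0.955, 0.949, 0.941, 0.934, 0.925])  # volumetric efficiency
threshold = 0.92                                            # failure threshold

# candidate paths: linear and quadratic in time, compared by residual error
fits = {deg: np.polyfit(t, eta, deg) for deg in (1, 2)}
sse = {deg: np.sum((np.polyval(c, t) - eta) ** 2) for deg, c in fits.items()}
best = min(sse, key=sse.get)
coeffs = fits[best]

# pseudo life = first positive time the fitted path crosses the threshold
shifted = coeffs.copy()
shifted[-1] -= threshold
roots = np.roots(shifted)
pseudo_life = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(f"best degree: {best}, pseudo life = {pseudo_life:.0f} h")
```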
The samples in fatigue life tests of aeroengine components usually number fewer than 5, so their evaluation belongs to small sample analysis. The Weibull distribution is known to describe life data accurately, and the Weibayes method (developed from the Bayesian method) expands on the experiential data in the small sample analysis of fatigue life in aeroengines. Based on Weibull analysis, a program was developed to improve the efficiency of reliability analysis for aeroengine components. The program has complete functions and offers highly accurate results. A particular turbine disk's low cycle fatigue life was evaluated by this program. From the results, two conclusions were drawn: (a) the program can be used for engineering applications, and (b) although a lack of former test data lowered the validity of the evaluation results, the Weibayes method ensured that the results of the small sample analysis did not deviate from the truth.
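A minimal sketch of the standard Weibayes estimate: with the shape parameter beta assumed from experience, the characteristic life is eta = (sum of t_i^beta / r)^(1/beta), with r set to 1 when no failures have occurred. The beta value and test times below are hypothetical stand-ins:

```python
import numpy as np

beta = 3.0                                   # assumed shape from prior experience
t = np.array([1200., 1500., 1100., 1300.])   # cycles accumulated by each disk
r = 1                                        # observed failures (1 if none, per Weibayes)

eta = (np.sum(t ** beta) / r) ** (1.0 / beta)          # characteristic life
reliability = lambda x: np.exp(-(x / eta) ** beta)     # Weibull reliability
print(f"characteristic life eta = {eta:.0f} cycles")
print(f"reliability at 1000 cycles = {reliability(1000):.3f}")
```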
In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay is proposed for heterogeneous multi-agent systems. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and other techniques are employed to derive necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results.
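A minimal sketch of sampled-data consensus on a fixed undirected graph, under simplifying assumptions: identical first-order agents updated with sampling period h, and no sampling delay or heterogeneous dynamics, which are the paper's actual subject:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],                 # adjacency of a 4-agent ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
h = 0.1                                     # sampling period

x = np.array([1.0, -2.0, 4.0, 0.5])         # initial agent states
for _ in range(200):
    x = x - h * L @ x                       # x_i(k+1) = x_i(k) - h * sum_j a_ij (x_i - x_j)
print("states after 200 samples:", x.round(4))  # all near the initial average 0.875
```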
Because sandstone layers in thin interbedded sections are difficult to identify, conventional model-driven seismic inversion and data-driven seismic prediction methods have low precision in predicting them. To solve this problem, a model-data-driven seismic AVO (amplitude variation with offset) inversion method based on a space-variant objective function has been developed. In this method, the zero-delay cross-correlation function and the F-norm are used to establish the objective function. Based on inverse distance weighting theory, the objective function is varied according to the location of the target CDP (common depth point), changing the constraint weights that the training samples, the initial low-frequency models, and the seismic data exert on the inversion. The proposed method can thus obtain high-resolution, high-accuracy velocity and density from the inversion of small-sample data and is suitable for identifying thin interbedded sand bodies. Tests on thin interbedded geological models show that the method has high inversion accuracy and resolution for small-sample data and can identify sandstone and mudstone layers about one-thirtieth of the dominant wavelength thick. Tests on field data from the Lishui sag show that the inversion results have small relative errors with respect to well-log data and can identify thin interbedded sandstone layers about one-fifteenth of the dominant wavelength thick from small-sample data.
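A minimal sketch of the inverse-distance-weighting ingredient: constraint weights for training-sample locations decay with distance from the target CDP. The power p and CDP positions are hypothetical stand-ins:

```python
import numpy as np

def idw_weights(target, samples, p=2.0):
    """Normalized inverse-distance weights w_i = d_i^-p / sum_j d_j^-p."""
    d = np.abs(np.asarray(samples, dtype=float) - target)
    w = 1.0 / np.maximum(d, 1e-6) ** p      # guard against zero distance
    return w / w.sum()

sample_cdps = [100, 140, 220, 300]          # CDPs holding training samples
print(idw_weights(target=150, samples=sample_cdps).round(3))
```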
The aim of this paper is to present a newly developed approach for reliability-based design optimization. It is based on a double-loop framework in which the outer loop covers the optimization part of the reliability-based optimization process and the reliability constraints are calculated in the inner loop. The innovation of the suggested approach lies in a newly developed optimization strategy based on multilevel simulation using an advanced Latin Hypercube Sampling technique. This method, called Aimed multilevel sampling, is designed for optimizing problems in which only a limited number of simulations can be performed because of enormous computational demands.
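A minimal sketch of Latin Hypercube Sampling with SciPy, the space-filling design the multilevel strategy draws on; the number of design variables, bounds, and sample count are hypothetical stand-ins:

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)    # 3 design variables
unit = sampler.random(n=20)                  # 20 stratified points in [0, 1)^3
lower = np.array([0.0, 10.0, -1.0])
upper = np.array([1.0, 50.0, 1.0])
design = qmc.scale(unit, lower, upper)       # map to the design-variable bounds
print(design[:3].round(3))
```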
We propose meshfree-based physics-informed neural networks for solving the unsteady Oseen equations. First, based on the ideas of meshfree methods and small-sample learning, we randomly select only a small number of spatiotemporal points to train the neural network instead of forming a mesh. Specifically, we optimize the neural network by minimizing a loss function that enforces the differential operators, the initial condition, and the boundary condition. We then prove the convergence of the loss function and of the neural network. In addition, the feasibility and effectiveness of the method are verified by numerical experiments, and the theoretical derivation is verified by the relative error between the neural network solution and the analytical solution.
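A minimal PINN sketch in PyTorch for a 1-D unsteady advection-diffusion equation u_t + a*u_x - nu*u_xx = 0, standing in for the Oseen system to show the loss structure: randomly sampled spatiotemporal points with no mesh, and a loss combining the PDE residual with initial and boundary conditions. The network size, coefficients, and conditions are illustrative assumptions:

```python
import torch

torch.manual_seed(0)
a, nu = 1.0, 0.1
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def grad(u, var):
    return torch.autograd.grad(u, var, torch.ones_like(u), create_graph=True)[0]

for step in range(2000):
    # small random batch of interior points (the "small sample", no mesh)
    x = torch.rand(64, 1, requires_grad=True)
    t = torch.rand(64, 1, requires_grad=True)
    u = net(torch.cat([x, t], dim=1))
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    pde = (u_t + a * u_x - nu * u_xx).pow(2).mean()      # PDE residual loss

    x0 = torch.rand(64, 1)                               # initial condition u(x,0)=sin(pi x)
    ic = (net(torch.cat([x0, torch.zeros_like(x0)], 1))
          - torch.sin(torch.pi * x0)).pow(2).mean()
    tb = torch.rand(64, 1)                               # boundary condition u=0 at x=0,1
    bc = (net(torch.cat([torch.zeros_like(tb), tb], 1)).pow(2).mean()
          + net(torch.cat([torch.ones_like(tb), tb], 1)).pow(2).mean())

    loss = pde + ic + bc
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```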