Journal Articles
48 articles found
1. Research on aiming methods for small sample size shooting tests of two-dimensional trajectory correction fuse
Authors: Chen Liang, Qiang Shen, Zilong Deng, Hongyun Li, Wenyang Pu, Lingyun Tian, Ziyang Lin. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 506-517 (12 pages).
The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, shooting tests are too costly to conduct in large numbers for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves a fused Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
Keywords: two-dimensional trajectory correction fuse; small sample size test; compatibility test; KL divergence; fusion Bayesian estimation
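As a rough, self-contained illustration of two steps named above (Bootstrap expansion of sparse impact points and KL-based down-weighting of a simulation prior), the sketch below assumes diagonal Gaussian dispersions and an exp(-KL) weighting rule; the impact-point values are invented, and the paper's compatibility-test iteration is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: a handful of live-fire impact points and a simulation ensemble.
real = rng.normal([0.0, 0.0], [8.0, 5.0], size=(6, 2))
sim = rng.normal([1.0, -0.5], [8.0, 5.0], size=(500, 2))

# Bootstrap the small real sample to stabilise the dispersion-center estimate.
boot = np.array([real[rng.integers(0, len(real), len(real))].mean(axis=0)
                 for _ in range(2000)])
mu_real = boot.mean(axis=0)            # bootstrap-stabilised dispersion center
var_real = real.var(axis=0, ddof=1)    # dispersion of the real impact points

# KL divergence between diagonal Gaussians gauges how far the simulation
# prior sits from the real-data dispersion.
mu_sim, var_sim = sim.mean(axis=0), sim.var(axis=0)
kl = 0.5 * np.sum(np.log(var_sim / var_real)
                  + (var_real + (mu_real - mu_sim) ** 2) / var_sim - 1)

# Down-weight the prior as KL grows, so simulation cannot "submerge" the data.
w_prior = np.exp(-kl)
center = (w_prior * mu_sim + mu_real) / (w_prior + 1)
print(kl, w_prior, center)
```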
2. Design and Research on Identification of Typical Tea Plant Diseases Using Small Sample Learning
Authors: Jian Yang. Journal of Electronic Research and Application, 2024, Issue 5, pp. 21-25 (5 pages).
Tea plants are susceptible to diseases during their growth, and these diseases seriously affect the yield and quality of tea. Effective prevention and control requires accurate identification of the diseases. With the development of artificial intelligence and computer vision, automatic recognition of plant diseases using image features has become feasible. As the support vector machine (SVM) is suitable for high-dimension, high-noise, and small sample learning, this paper uses the SVM learning method to segment the disease spots of diseased tea plants. An improved Conditional Deep Convolutional Generative Adversarial Network with Gradient Penalty (C-DCGAN-GP) was used to augment the segmented tea plant spot images. Finally, the Visual Geometry Group 16 (VGG16) deep learning classification network was trained on the expanded tea lesion images to recognize tea diseases.
Keywords: small sample learning; tea plant disease; VGG16; deep learning
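A minimal pixel-level SVM segmentation sketch in the spirit of the abstract; the RGB features, labels, and hyperparameters are invented for illustration, and the C-DCGAN-GP augmentation and VGG16 classification stages are not shown.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical pixel-level training data: RGB values labelled lesion / healthy.
X = np.array([[120, 80, 40], [130, 90, 50], [40, 140, 60], [50, 150, 70]], dtype=float)
y = np.array([1, 1, 0, 0])  # 1 = lesion spot, 0 = healthy leaf

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X / 255.0, y)

# Segment a toy "image": every pixel is classified independently.
image = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3))
mask = clf.predict(image.reshape(-1, 3) / 255.0).reshape(4, 4)
print(mask)
```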
3. Yarn Quality Prediction for Small Samples Based on AdaBoost Algorithm (cited 1)
Authors: 刘智玉, 陈南梁, 汪军. Journal of Donghua University (English Edition) (CAS), 2023, Issue 3, pp. 261-266 (6 pages).
In order to solve the weak prediction stability and generalization ability of neural network models in yarn quality prediction for small samples, a prediction model based on the AdaBoost algorithm (AdaBoost model) was established. A prediction model based on linear regression (LR model) and one based on a multi-layer perceptron neural network (MLP model) were established for comparison. Prediction experiments on yarn evenness and yarn strength were implemented. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of these models, and K-fold cross-validation was used to evaluate their generalization ability. In the experiments, the determination coefficient of the yarn evenness prediction of the AdaBoost model is 76% and 87% higher than that of the LR model and the MLP model, respectively. The determination coefficient of the yarn strength prediction of the AdaBoost model is slightly higher than those of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset does, the AdaBoost model has the best adaptability to the nonlinear dataset among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and in the prediction experiments at eight different training-set sample sizes. It is proved that the AdaBoost model not only has good prediction accuracy but also has good prediction stability and generalization ability for small samples.
Keywords: yarn quality prediction; AdaBoost algorithm; small sample; generalization ability
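A hedged sketch of the comparison protocol above: an AdaBoost regressor against linear regression under K-fold cross-validation, on an invented nonlinear dataset standing in for the cotton-to-yarn data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Hypothetical small sample: cotton fibre properties -> yarn evenness (CV%).
X = rng.normal(size=(40, 5))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=40)  # nonlinear target

for name, model in [("AdaBoost", AdaBoostRegressor(n_estimators=100, random_state=0)),
                    ("LR", LinearRegression())]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")  # K-fold generalisation check
    print(name, r2.mean())
```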
4. Metal Corrosion Rate Prediction of Small Samples Using an Ensemble Technique
Authors: Yang Yang, Pengfei Zheng, Fanru Zeng, Peng Xin, Guoxi He, Kexi Liao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 1, pp. 267-291 (25 pages).
Accurate prediction of the internal corrosion rates of oil and gas pipelines could be an effective way to prevent pipeline leaks. In this study, a framework for predicting corrosion rates from a small sample of laboratory metal corrosion data was developed, providing a new perspective on pipeline corrosion prediction when real samples are insufficient. The approach employs the bagging algorithm to construct a strong learner by integrating several KNN learners. A total of 99 data points were collected and split into training and test sets at a 9:1 ratio. The training set was used to obtain the best hyperparameters by 10-fold cross-validation and grid search, and the test set was used to determine the performance of the model. The results show that the Mean Absolute Error (MAE) of this framework is 28.06% of that of the traditional model, and it outperforms other ensemble methods. Therefore, the proposed framework is suitable for metal corrosion prediction under small sample conditions.
Keywords: oil pipeline; bagging; KNN; ensemble learning; small sample size
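The bagged-KNN pipeline with a 9:1 split, grid search, and 10-fold cross-validation can be sketched directly with scikit-learn (version 1.2 or later is assumed for the `estimator` parameter name); the synthetic features below merely stand in for the 99 laboratory corrosion records.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.uniform(size=(99, 6))                                   # placeholder lab records
y = 0.5 * X[:, 0] + X[:, 1] ** 2 + 0.05 * rng.normal(size=99)   # corrosion-rate stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)  # 9:1

grid = GridSearchCV(
    BaggingRegressor(estimator=KNeighborsRegressor(), random_state=0),
    {"n_estimators": [10, 30, 50], "estimator__n_neighbors": [3, 5, 7]},
    cv=10, scoring="neg_mean_absolute_error")   # 10-fold CV as in the abstract
grid.fit(X_tr, y_tr)
print(grid.best_params_, mean_absolute_error(y_te, grid.predict(X_te)))
```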
5. Data processing of small samples based on grey distance information approach (cited 14)
Authors: Ke Hongfa, Chen Yongguang, Liu Yi (1. Coll. of Electronic Science and Engineering, National Univ. of Defense Technology, Changsha 410073, P. R. China; 2. Unit 63880, Luoyang 471003, P. R. China). Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2007, Issue 2, pp. 281-289 (9 pages).
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, it is difficult to use traditional probability theory to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, a grey distance information approach, based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples, is proposed in this article. The definitions of the grey distance information quantity of a sample and the average grey distance information quantity of the samples, with their characteristics and algorithms, are introduced. The related problems, including the algorithm of the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare different samples. Several examples are given to demonstrate the application of the proposed approach. The examples show that the proposed approach, which makes no demand on the probability distribution of small samples, is feasible and effective.
Keywords: data processing; grey theory; norm theory; small samples; uncertainty assessment; grey distance measure; information whitening ratio
6. Reliability Assessment for the Solenoid Valve of a High-Speed Train Braking System under Small Sample Size (cited 9)
Authors: Jian-Wei Yang, Jin-Hai Wang, Qiang Huang, Ming Zhou. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2018, Issue 3, pp. 189-199 (11 pages).
Reliability assessment of the braking system in a high-speed train under small sample size and zero-failure data is very important for safe operation. Traditional reliability assessment methods perform well only under conditions of large sample size and complete failure data, and show large deviations under small sample size and zero-failure data. To address this problem, a new Bayesian method is proposed. Based on the characteristics of the solenoid valve in the braking system of a high-speed train, a modified Weibull distribution is selected to describe the failure rate over the entire lifetime. Based on the assumption of a binomial distribution for the failure probability at censored time, a concave method is employed to obtain the relationships between accumulated failure probabilities. A numerical simulation is performed to compare the results of the proposed method with those obtained from maximum likelihood estimation, and to illustrate that the proposed Bayesian model exhibits better accuracy for the expectation value when the sample size is less than 12. Finally, the robustness of the model is demonstrated by obtaining the reliability indicators for a numerical case involving the solenoid valve of the braking system, which shows that the change in the reliability and failure rate among different hyperparameters is small. The method avoids being misled by subjective information and improves the accuracy of reliability assessment under conditions of small sample size and zero-failure data.
Keywords: zero-failure data; modified Weibull distribution; small sample size; Bayesian method
7. Small sample Bayesian analyses in assessment of weapon performance (cited 6)
Authors: Li Qingmin, Wang Hongwei, Liu Jun. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2007, Issue 3, pp. 545-550 (6 pages).
Abundant test data are required in the assessment of weapon performance. When weapon test data are insufficient, Bayesian analyses for small sample circumstances should be considered, with the test data supplemented by simulations. Several Bayesian approaches are discussed and some limitations are found. An improvement is put forward after the limitations of the available Bayesian approaches are analyzed, and the improved approach is applied to the assessment of the performance of a new weapon.
Keywords: Bayesian approach; small sample; confidence
8. Gabor-CNN for object detection based on small samples (cited 4)
Authors: Xiao-dong Hu, Xin-qing Wang, Fan-jie Meng, Xia Hua, Yu-ji Yan, Yu-yang Li, Jing Huang, Xun-lin Jiang. Defence Technology (SCIE, EI, CAS, CSCD), 2020, Issue 6, pp. 1116-1129 (14 pages).
Object detection models based on convolutional neural networks (CNN) have achieved state-of-the-art performance by relying heavily on large-scale training samples. They are insufficient in specific applications, such as the detection of military objects, where a large number of samples is hard to obtain. To solve this problem, this paper proposes Gabor-CNN for object detection based on a small number of samples. First, a feature extraction convolution kernel library composed of multi-shape Gabor and color Gabor kernels is constructed, and the optimal Gabor convolution kernel group is obtained by training and screening; convolving this group with the input image yields feature information with a strong auxiliary function. Then, the k-means clustering algorithm is adopted to construct several anchor boxes of different sizes, which improves the quality of the region proposals. We call this region proposal process the Gabor-assisted Region Proposal Network (Gabor-assisted RPN). Finally, the Deeply-Utilized Feature Pyramid Network (DU-FPN) method is proposed to strengthen the feature expression of objects in the image: bottom-up and top-down feature pyramids are constructed in ResNet-50, and feature information of objects is deeply utilized through the transverse connection and integration of features at various scales. Experimental results show that the proposed method achieves better accuracy and recall than state-of-the-art contrast models on datasets with small samples, and thus has a strong application prospect.
Keywords: deep learning; convolutional neural network; small samples; Gabor convolution kernel; feature pyramid
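A small multi-shape Gabor kernel bank, the first ingredient above, sketched with OpenCV; the kernel size, orientations, and wavelengths are illustrative assumptions, and the colour-Gabor variant and the kernel screening step are omitted.

```python
import cv2
import numpy as np

# Four orientations x two wavelengths of Gabor kernels (parameters invented).
kernels = [cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=t, lambd=l,
                              gamma=0.5, psi=0)
           for t in np.arange(0, np.pi, np.pi / 4)
           for l in (8.0, 12.0)]

# Convolve a toy grayscale image with each kernel to get auxiliary feature maps.
img = np.random.default_rng(4).integers(0, 256, (64, 64)).astype(np.float32)
responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]
print(len(responses), responses[0].shape)
```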
9. Life prediction and test period optimization research based on small sample reliability test of hydraulic pumps (cited 5)
Authors: 郭锐, Ning Chao, Zhao Jingyi, Wang Ping, Shi Yu, Zhou Jinsheng, Luo Jing. High Technology Letters (EI, CAS), 2017, Issue 1, pp. 63-70 (8 pages).
Hydraulic pumps are reliable, long-life hydraulic components, so their reliability evaluation involves a long test period, high cost, high power loss, and so on. Based on the principle of energy saving and power recovery, a small sample hydraulic pump reliability test rig is built, the service life of the hydraulic pump is predicted, and the sampling period of the reliability test is optimized. Considering the performance degradation mechanism of the hydraulic pump, the degradation distribution of the pump's volumetric efficiency during the test is collected, an optimal degradation path model is selected on the basis of fitting accuracy, and pseudo life data are obtained. A period-constrained optimization search strategy for the small sample reliability test of hydraulic pumps is then constructed to solve the optimization problem of the test sampling period and the tightening end threshold, and the accuracy of the minimum sampling period is verified by a non-parametric hypothesis test. Simulation results show that the approach has instructional significance and reference value for hydraulic pump reliability life evaluation and for the research and design of such tests.
Keywords: hydraulic pump; small sample test; volumetric efficiency; degradation path model; life span; period optimization
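A minimal sketch of the pseudo-life idea described above: fit a degradation path to volumetric-efficiency readings and extrapolate to a failure threshold. The readings, the linear path model, and the 88% threshold are all invented for illustration; the paper selects the best-fitting path model by accuracy.

```python
import numpy as np

# Illustrative volumetric-efficiency readings (%) for one pump over test time (h).
t = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
eta = np.array([94.0, 93.4, 92.9, 92.1, 91.6, 90.8])

# Fit a linear degradation path eta(t) = b*t + a (simplest stand-in model).
b, a = np.polyfit(t, eta, 1)

threshold = 88.0                      # assumed failure threshold for efficiency
pseudo_life = (threshold - a) / b     # time at which the fitted path hits it
print(pseudo_life)
```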
10. Application of Small Sample Analysis in Life Estimation of Aeroengine Components (cited 4)
Authors: 聂挺. Journal of Southwest Jiaotong University (English Edition), 2010, Issue 4, pp. 285-288 (4 pages).
The samples in fatigue life tests for aeroengine components usually number fewer than 5, so their evaluation belongs to small sample analysis. The Weibull distribution is known to describe life data accurately, and the Weibayes method (developed from the Bayesian method) expands on experiential data in the small sample analysis of fatigue life in aeroengines. Based on Weibull analysis, a program was developed to improve the efficiency of reliability analysis for aeroengine components. This program has complete functions and offers highly accurate results. A particular turbine disk's low cycle fatigue life was evaluated with this program. From the results, the following conclusions were drawn: (a) the program can be used for engineering applications, and (b) while a lack of former test data lowers the validity of evaluation results, the Weibayes method ensures the results of small sample analysis do not deviate from the truth.
Keywords: aeroengine; life; small sample; Weibull distribution
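A minimal Weibayes-style sketch for zero-failure data, under the standard assumptions that the Weibull shape parameter is known from prior experience and that a lower confidence bound on the characteristic life follows from the zero-failure likelihood; the cycle counts, shape value, and confidence level are invented.

```python
import numpy as np

beta = 3.0                                      # assumed Weibull shape (from experience)
t = np.array([1200.0, 1500.0, 900.0, 1100.0])   # accumulated cycles, no failures observed
conf = 0.90

# P(zero failures) = exp(-sum((t_i/eta)^beta)) >= 1 - conf
# rearranges to a lower confidence bound on the characteristic life eta.
eta_lower = (t ** beta).sum() ** (1 / beta) / (-np.log(1 - conf)) ** (1 / beta)
print(eta_lower)
```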
11. Model-data-driven seismic inversion method based on small sample data
Authors: LIU Jinshui, SUN Yuhang, LIU Yang. Petroleum Exploration and Development (CSCD), 2022, Issue 5, pp. 1046-1055 (10 pages).
As sandstone layers in thin interbedded sections are difficult to identify, conventional model-driven seismic inversion and data-driven seismic prediction methods have low precision in predicting them. To solve this problem, a model-data-driven seismic AVO (amplitude variation with offset) inversion method based on a space-variant objective function has been worked out. In this method, the zero-delay cross-correlation function and the F norm are used to establish the objective function. Based on inverse distance weighting theory, the change of the objective function is controlled according to the location of the target CDP (common depth point), changing the constraint weights of training samples, initial low-frequency models, and seismic data on the inversion. Hence, the proposed method obtains high-resolution, high-accuracy velocity and density from the inversion of small sample data and is suitable for identifying thin interbedded sand bodies. Tests with thin interbedded geological models show that the proposed method has high inversion accuracy and resolution for small sample data, and can identify sandstone and mudstone layers about one-thirtieth of the dominant wavelength thick. Tests on field data from the Lishui sag show that the inversion results have a small relative error with respect to well-log data, and can identify thin interbedded sandstone layers about one-fifteenth of the dominant wavelength thick with small sample data.
Keywords: small sample data; space-variant objective function; model-data-driven; neural network; seismic AVO inversion; thin interbedded sandstone identification; Paleocene; Lishui sag
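The inverse-distance-weighting idea behind the space-variant constraint can be shown in a few lines; in the sketch below, the CDP positions, the power-2 exponent, and the normalisation are illustrative assumptions, not the paper's actual objective function.

```python
import numpy as np

# Weights controlling how strongly each training CDP constrains the inversion
# at a target CDP: closer training locations get larger weights.
train_cdp = np.array([100.0, 250.0, 400.0])
target_cdp = 180.0
w = 1.0 / (np.abs(train_cdp - target_cdp) + 1e-6) ** 2   # power-2 IDW
w /= w.sum()                                             # normalise to sum to 1
print(w)
```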
12. LF-CNN: Deep Learning-Guided Small Sample Target Detection for Remote Sensing Classification
Authors: Chengfan Li, Lan Liu, Junjuan Zhao, Xuefeng Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 4, pp. 429-444 (16 pages).
Target detection of small samples with a complex background is always difficult in the classification of remote sensing images. We propose a new small sample target detection method combining local features and a convolutional neural network (LF-CNN), with the aim of detecting small numbers of unevenly distributed ground object targets in remote sensing images. The k-nearest neighbor method is used to construct the local neighborhood of each point, and the local neighborhoods of the features are extracted one by one from the convolution layer. All the local features are aggregated by maximum pooling to obtain a global feature representation. The classification probability of each category is then calculated using the scaled exponential linear units (SELU) function and the fully connected layer. The experimental results show that the proposed LF-CNN method achieves high accuracy in the detection and classification of hyperspectral remote sensing data under small sample conditions. Despite drawbacks in time and complexity, the proposed LF-CNN method can more effectively integrate the local features of ground object samples and improve the accuracy of target identification and detection in small samples of remote sensing images than traditional target detection methods.
Keywords: small samples; local features; convolutional neural network (CNN); k-nearest neighbor (KNN); target detection
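A toy sketch of the local-neighborhood-plus-max-pooling aggregation described above, using scikit-learn's NearestNeighbors; the random feature matrix, k = 5, and the pooling axes are illustrative assumptions rather than the LF-CNN architecture itself.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
points = rng.normal(size=(100, 8))             # per-point spectral features (invented)

# Local neighbourhood of each point via k-NN, then max-pool the stacked
# neighbour features into one global descriptor.
k = 5
nbrs = NearestNeighbors(n_neighbors=k).fit(points)
_, idx = nbrs.kneighbors(points)
local = points[idx]                            # shape (100, k, 8): local feature stacks
global_feat = local.max(axis=(0, 1))           # max pooling to a global representation
print(global_feat.shape)
```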
13. Reliability-Based Optimization: Small Sample Optimization Strategy
Authors: Drahomir Novak, Ondrej Slowik, Maosen Cao. Journal of Computer and Communications, 2014, Issue 11, pp. 31-37 (7 pages).
The aim of this paper is to present a newly developed approach for reliability-based design optimization. It is based on a double-loop framework in which the outer loop covers the optimization part of the process and the reliability constraints are calculated in the inner loop. The innovation of the suggested approach lies in the application of a newly developed optimization strategy based on multilevel simulation using an advanced Latin Hypercube Sampling technique. This method, called Aimed Multilevel Sampling, is designed for optimizing problems where only a limited number of simulations can be performed due to enormous computational demands.
Keywords: optimization; reliability assessment; aimed multilevel sampling; Monte Carlo; Latin hypercube sampling; probability of failure; reliability-based design optimization; small sample analysis
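Latin Hypercube Sampling itself is standard and can be sketched with SciPy (version 1.7 or later is assumed for the qmc module); the dimension count and bounds below are invented, and the paper's Aimed Multilevel Sampling refinement is not reproduced.

```python
from scipy.stats import qmc

# An 8-run Latin Hypercube design over 3 design variables.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=8)                 # stratified points in [0, 1)^3

# Scale the unit-cube design to assumed physical bounds for the variables.
design = qmc.scale(unit, l_bounds=[0.1, 200.0, 1.0], u_bounds=[0.5, 400.0, 5.0])
print(design)
```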
14. Auxiliary generative mutual adversarial networks for class-imbalanced fault diagnosis under small samples
Authors: Ranran LI, Shunming LI, Kun XU, Mengjie ZENG, Xianglian LI, Jianfeng GU, Yong CHEN. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, Issue 9, pp. 464-478 (15 pages).
The effectiveness of data-driven intelligent fault diagnosis for mechanical equipment is often premised on big data and class balance. However, due to limitations of the working environment, operating conditions, and equipment status, the fault data collected from mechanical equipment are often few in number and imbalanced relative to normal samples. To resolve this dilemma in the fault diagnosis of practical mechanical equipment, an auxiliary generative mutual adversarial network (AGMAN) is proposed. First, the generator, combined with an auto-encoder (AE), constructs a decoder reconstruction feature loss that helps it complete an accurate mapping between the noise distribution and the real data distribution, generating high-quality fake samples that supplement the imbalanced dataset and improve the accuracy of small sample class-imbalanced fault diagnosis. Second, the discriminator uses a structure with unshared dual discriminators. Mutual adversarial training between the dual discriminators is realized by setting scoring criteria under which the two discriminators are completely opposite for real and fake samples, thus improving the quality and diversity of generated samples and avoiding mode collapse. Finally, the auxiliary generator and the dual discriminators are updated alternately: the auxiliary generator can generate fake samples that deceive both discriminators at the same time, while the dual discriminators cannot give correct scores to the real and fake samples according to their respective scoring criteria, thereby achieving a Nash equilibrium. Verification on three different test-bed datasets shows that the proposed method can generate high-quality fake samples and greatly improves the accuracy of class-imbalanced fault diagnosis under small samples; especially when the data are extremely imbalanced, supplementing fake samples with this method yields relatively large improvements in the fault diagnosis accuracy of DCNN and SAE. The proposed method thus provides an effective solution for small sample class-imbalanced fault diagnosis.
Keywords: adversarial networks; auto-encoder; class-imbalanced; fault detection; small samples
15. Calculation of Two-Tailed Exact Probability in the Wald-Wolfowitz One-Sample Runs Test
Authors: José Moral De La Rubia. Journal of Data Analysis and Information Processing, 2024, Issue 1, pp. 89-114 (26 pages).
The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n0 (number of zeros) and n1 (number of ones), as well as the number of runs. Fifty-nine samples are created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities from the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in calculating the exact two-tailed probability, whereas the new proposal and the SPSS procedure are deemed more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
Keywords: randomness; nonparametric test; exact probability; small samples; quantiles
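The exact distribution of the run count for a 0/1 sequence is classical, so a self-contained sketch is possible; the two-tailed rule below (summing all outcomes no more probable than the observed one) is just one of the conventions the paper compares, not its specific proposal.

```python
from math import comb

def runs_pmf(r, n0, n1):
    """Exact P(R = r) for the number of runs in a 0/1 sequence (n0 zeros, n1 ones)."""
    total = comb(n0 + n1, n0)
    if r % 2 == 0:
        k = r // 2
        return 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1) / total
    k = (r - 1) // 2
    return (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
            + comb(n0 - 1, k) * comb(n1 - 1, k - 1)) / total

def exact_two_tailed(r_obs, n0, n1):
    # Point-probability convention: sum every outcome no more probable
    # than the observed run count (impossible counts contribute zero).
    p_obs = runs_pmf(r_obs, n0, n1)
    return sum(p for r in range(2, n0 + n1 + 1)
               if (p := runs_pmf(r, n0, n1)) <= p_obs + 1e-12)

print(exact_two_tailed(3, n0=5, n1=5))
```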
16. A fault diagnosis model based on weighted extension neural network for turbo-generator sets on small samples with noise (cited 11)
Authors: Tichun WANG, Jiayun WANG, Yong WU, Xin SHENG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2020, Issue 10, pp. 2757-2769 (13 pages).
In data-driven fault diagnosis for turbo-generator sets, fault samples are usually expensive to obtain and inevitably noisy, both of which lead to unsatisfactory identification performance of diagnosis models. To address these issues, this paper proposes a fault diagnosis model for turbo-generator sets based on a Weighted Extension Neural Network (W-ENN). W-ENN is a novel neural network with three types of connection weights and an improved correlation function. The performance of the proposed model is validated against models based on the Extension Neural Network (ENN), Support Vector Machine (SVM), Relevance Vector Machine (RVM), and Extreme Learning Machine (ELM). The results indicate that, on noisy small sample sets, the proposed model is superior to the other models in terms of higher identification accuracy with fewer samples and strong noise tolerance. The findings of this study may serve as a powerful fault diagnosis model for turbo-generator sets on noisy small sample sets.
Keywords: fault diagnosis; samples with noise; small sample learning; turbo-generator sets; weighted extension neural network
17. Scatter factor confidence interval estimate of least square maximum entropy quantile function for small samples (cited 3)
Authors: Wu Fuxian, Wen Weidong. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2016, Issue 5, pp. 1285-1293 (9 pages).
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and its variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately on small samples but inaccurately on very small samples (10 samples); that LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; that, with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and that the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy, with the LSMEQFMCC-based method being the most stable and accurate on very small samples (10 samples).
Keywords: confidence intervals; maximum entropy; quantile function; reliability; scatter factor; small samples
18. Progressive prediction method for failure data with small sample size (cited 2)
Authors: WANG Zhi-hua, FU Hui-min, LIU Cheng-rui. Journal of Aerospace Power (航空动力学报) (EI, CAS, CSCD, PKU Core), 2011, Issue 9, pp. 2049-2053 (5 pages).
The small sample prediction problem, which commonly exists in reliability analysis, is discussed using the progressive prediction method. The modeling and estimation procedure, as well as the forecast and confidence limit formulas of the progressive auto-regressive (PAR) method, are discussed in detail. The PAR model not only inherits the simple linear features of the auto-regressive (AR) model but is also applicable to nonlinear systems. An application is illustrated for predicting future fatigue failures of tantalum electrolytic capacitors. Forecasting results of the PAR model are compared with those of the auto-regressive moving average (ARMA) model, and the PAR method performs well, showing promise for future applications.
Keywords: failure data; forecast; system reliability; small sample; progressive prediction; nonlinear system
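The paper's PAR model refits progressively as new observations arrive; as a hedged baseline only, the sketch below fits a plain AR model with statsmodels to an invented degradation series and forecasts three steps ahead.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Hypothetical degradation indicator for capacitors over successive inspections.
series = np.array([1.02, 1.05, 1.09, 1.15, 1.22, 1.31, 1.41, 1.54])

# Plain AR(2) with a constant-plus-trend term; the progressive refitting step
# of the PAR method is not reproduced here.
model = AutoReg(series, lags=2, trend="ct").fit()
forecast = model.predict(start=len(series), end=len(series) + 2)
print(forecast)
```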
19. Static Frame Model Validation with Small Samples Solution Using Improved Kernel Density Estimation and Confidence Level Method (cited 5)
Authors: ZHANG Baoqiang, CHEN Guoping, GUO Qintao. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2012, Issue 6, pp. 879-886 (8 pages).
An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is challenging because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements; model validation, however, provides more confidence for decision makers while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined with a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation to build the probability model from the calibration data. Model assessment is performed with validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model; the results are consistent with those obtained by other researchers and demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to model validation with small samples.
Keywords: model validation; small samples; uncertainty analysis; kernel density estimation; confidence level; prediction
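A minimal sketch of cross-validated bandwidth selection for KDE on a small calibration sample, in the spirit of the optimum-bandwidth step above; the data, grid, and fold count are invented, and this is not the paper's improved KDE procedure.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(5)
calib = rng.normal(loc=3.2, scale=0.4, size=12).reshape(-1, 1)  # small calibration set

# Pick the bandwidth by cross-validated log-likelihood instead of a rule of thumb.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.05, 1.0, 20)},
                    cv=4)  # few folds because the sample is small
grid.fit(calib)

kde = grid.best_estimator_
print(grid.best_params_, np.exp(kde.score_samples([[3.0]])))  # density at x = 3.0
```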
20. Monitoring model for predicting maize grain moisture at the filling stage using NIRS and a small sample size (cited 1)
Authors: Xue Wang, Tiemin Ma, Tao Yang, Ping Song, Zhengguang Chen, Huan Xie. International Journal of Agricultural and Biological Engineering (SCIE, EI, CAS), 2019, Issue 2, pp. 132-140 (9 pages).
The change in maize moisture content during different growth stages is an important indicator of the growth status of maize. In particular, the moisture content during the grain-filling stage reflects grain quality and maturity, and can also serve as an important indicator for breeding and seed selection. At present, the drying method is usually used to calculate the moisture content and the dehydration rate at the grain-filling stage, but it requires a large sample size and a long test time. To monitor the change in moisture content at the maize grain-filling stage with a small sample set, a Bootstrap re-sampling, sample set partitioning based on joint x-y distances, partial least squares (Bootstrap-SPXY-PLS) moisture content monitoring model using near-infrared spectroscopy was applied to sample sizes of 10, 20, and 50. To improve prediction accuracy, the optimal number of model factors was determined, and a comprehensive evaluation threshold RVP, combining the coefficient of determination (R^2), the root mean square error of cross-validation (RMSECV), and the root mean square error of prediction (RMSEP), was proposed for sub-model screening. The model performed well in predicting the moisture content of maize grain at the filling stage from small sample sets: for sample sizes of 20 and 50, the R^2 values were greater than 0.99, and the average deviations between the predicted and reference values for the three sample sizes were 0.1078%, 0.057%, and 0.0918%, respectively. The model is therefore effective for monitoring moisture content at the grain-filling stage with a small sample size, and the method is also suitable for the quantitative analysis of different concentrations using near-infrared spectroscopy and small sample sets.
Keywords: moisture content monitoring; maize; growth stage; near-infrared spectroscopy (NIRS); small sample set; model screening; optimal factor number; Bootstrap-SPXY-PLS
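SPXY (sample set partitioning based on joint x-y distances) is the one well-documented ingredient here; below is a compact sketch of it, assuming Euclidean x-distances and absolute y-distances, with random stand-in data. The Bootstrap re-sampling and PLS stages are omitted.

```python
import numpy as np

def spxy_split(X, y, n_train):
    """Minimal SPXY: Kennard-Stone selection on joint, normalised x-y distances."""
    dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    dy = np.abs(y[:, None] - y[None, :])
    d = dx / dx.max() + dy / dy.max()          # joint distance matrix
    i, j = np.unravel_index(np.argmax(d), d.shape)
    train = [i, j]                             # start from the two farthest samples
    rest = [k for k in range(len(X)) if k not in train]
    while len(train) < n_train:
        # Pick the sample farthest from its nearest already-selected neighbour.
        k = max(rest, key=lambda r: d[r, train].min())
        train.append(k)
        rest.remove(k)
    return np.array(train), np.array(rest)

rng = np.random.default_rng(6)
X, y = rng.normal(size=(20, 4)), rng.normal(size=20)  # stand-in spectra and moisture
tr, te = spxy_split(X, y, 14)
print(tr, te)
```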