The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is too high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on Kullback-Leibler divergence, which to some extent prevents the real data from being "submerged" by the simulation data and achieves a fused Bayesian estimate of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
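As an illustration of the fusion idea described above, the following sketch combines a bootstrap-expanded small sample with a simulation prior whose weight decays with the Gaussian Kullback-Leibler divergence. All numbers, and the exponential weighting rule itself, are illustrative assumptions rather than the paper's algorithm.

```python
# Illustrative sketch (not the paper's exact method): bootstrap expansion of a
# small impact-point sample, a Gaussian KL divergence between field data and the
# simulation prior, and a KL-weighted Bayesian estimate of the dispersion center.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=5.0, scale=12.0, size=8)      # small real-fire sample (hypothetical)
sim_mean, sim_var = 0.0, 10.0**2                    # prior from simulation (hypothetical)

boot_means = np.array([rng.choice(real, size=real.size, replace=True).mean()
                       for _ in range(2000)])        # Bootstrap expansion of the data
data_mean, data_var = boot_means.mean(), real.var(ddof=1)

# KL divergence between two univariate Gaussians, KL(data || simulation)
kl = (np.log(np.sqrt(sim_var / data_var))
      + (data_var + (data_mean - sim_mean) ** 2) / (2 * sim_var) - 0.5)

w = np.exp(-kl)                                      # down-weight a disagreeing prior
# Precision-weighted fusion: the prior precision is scaled by w so the simulation
# cannot "submerge" the real data when the two distributions disagree.
prior_prec = w / sim_var
data_prec = real.size / data_var
fused_center = (prior_prec * sim_mean + data_prec * data_mean) / (prior_prec + data_prec)
print(f"KL={kl:.3f}, weight={w:.3f}, fused center={fused_center:.2f} m")
```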
Sample size determination typically relies on a power analysis based on a frequentist conditional approach. The latter can be seen as a particular case of the two-priors approach, which allows one to build four distinct power functions to select the optimal sample size. We revise this approach when the focus is on testing a single binomial proportion. We consider exact methods and introduce a conservative criterion to account for the typical non-monotonic behavior of the power functions when dealing with discrete data. The main purpose of this paper is to present a Shiny App providing a user-friendly, interactive tool to apply these criteria. The app also provides specific tools to elicit the analysis and design prior distributions, which are the core of the two-priors approach.
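The conservative criterion for discrete data can be made concrete with a short sketch: exact power of a one-sided binomial test, accepted only if the target power holds over a window of sample sizes rather than at a single crossing point of the sawtooth power curve. The window size and the 0.5-vs-0.7 hypotheses are assumptions.

```python
# A minimal sketch of the frequentist conditional case: exact power of a one-sided
# binomial test of H0: p = p0 against p1 > p0, with a conservative scan that
# respects the non-monotonic (sawtooth) power of discrete data.
from scipy.stats import binom

def exact_power(n, p0=0.5, p1=0.7, alpha=0.05):
    # Smallest critical value c with P(X >= c | p0) <= alpha.
    c = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)
    return binom.sf(c - 1, n, p1)          # power = P(X >= c | p1)

# Conservative criterion: require the target power at n and at every larger n in
# a window, not just at the first n that crosses the threshold.
target, window = 0.80, 10
n = 10
while not all(exact_power(m) >= target for m in range(n, n + window)):
    n += 1
print(n, exact_power(n))
```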
Reliability assessment of the braking system in a high-speed train under small sample size and zero-failure data is very important for safe operation. Traditional reliability assessment methods perform well only under conditions of large sample size and complete failure data, which leads to large deviations under conditions of small sample size and zero-failure data. To address this problem, a new Bayesian method is proposed. Based on the characteristics of the solenoid valve in the braking system of a high-speed train, the modified Weibull distribution is selected to describe the failure rate over the entire lifetime. Based on the assumption of a binomial distribution for the failure probability at the censored time, a concave method is employed to obtain the relationships between accumulated failure probabilities. A numerical simulation is performed to compare the results of the proposed method with those obtained from maximum likelihood estimation, and to illustrate that the proposed Bayesian model exhibits better accuracy for the expectation value when the sample size is less than 12. Finally, the robustness of the model is demonstrated by obtaining the reliability indicators for a numerical case involving the solenoid valve of the braking system, which shows that the change in the reliability and failure rate among the different hyperparameters is small. The method avoids being misled by subjective information and improves the accuracy of reliability assessment under conditions of small sample size and zero-failure data.
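One building block of such a method can be sketched as a conjugate update: with zero failures among n units at a censored time, a binomial likelihood and a Beta prior give a closed-form posterior for the failure probability. The hyperparameters below are hypothetical, and the sketch omits the modified Weibull and concave-method machinery.

```python
# A simplified sketch of one building block: with zero failures in n units at a
# censored time t, a binomial likelihood with a Beta(a, b) prior gives a Beta
# posterior for the failure probability p(t). The a, b values are hypothetical
# hyperparameters, not the paper's concave-method estimates.
from scipy.stats import beta

n_units, failures = 12, 0                 # zero-failure data at censored time t
a, b = 0.5, 4.0                           # hypothetical prior hyperparameters
post = beta(a + failures, b + n_units - failures)
print("posterior mean p(t):", post.mean())
print("posterior 95% upper bound:", post.ppf(0.95))
```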
The development of a core collection could enhance the utilization of germplasm collections in crop improvement programs and simplify their management. Selection of an appropriate sampling strategy is an important prerequisite for constructing a core collection of appropriate size that adequately represents the genetic spectrum and maximally captures the genetic diversity of the available crop collection. The present study constructed nested core collections to determine the appropriate sample size to represent the genetic diversity of a rice landrace collection, based on 15 quantitative traits and 34 qualitative traits of 2,262 rice accessions. The results showed that nested core collections of 50-225 accessions, corresponding to sampling rates of 2.2%-9.9%, were sufficient to maintain the maximum genetic diversity of the initial collection. Of these, 150 accessions (6.6%) could capture the maximal genetic diversity of the initial collection. Three data types, i.e., qualitative traits (QT1), quantitative traits (QT2), and integrated qualitative and quantitative traits (QTT), were compared for their efficiency in constructing core collections based on the weighted pair-group average method combined with stepwise clustering and preferred sampling on adjusted Euclidean distances. Each combining scheme constructed eight rice core collections (225, 200, 175, 150, 125, 100, 75 and 50 accessions). The results showed that the QTT data were the best for constructing a core collection, as indicated by the genetic diversity of the core collections. A core collection constructed only on the information of QT1 could not represent the initial collection effectively; the integrated QTT data should be used to construct a productive core collection.
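A minimal sketch of the construction principle (not the authors' exact pipeline): cluster standardized trait data with average-linkage hierarchical clustering on Euclidean distances as a stand-in for the weighted pair-group average method, then keep one accession per cluster. The synthetic data simply mirror the 2,262 x (15+34) layout.

```python
# Hedged sketch of a stepwise-clustering core collection: standardize traits,
# cluster on Euclidean distance, sample one accession per cluster. Synthetic data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
traits = rng.normal(size=(2262, 49))               # 15 quantitative + 34 qualitative (coded)
z = (traits - traits.mean(0)) / traits.std(0)      # adjust scale before Euclidean distance

core_size = 150                                    # the 6.6% sampling rate found optimal
tree = linkage(z, method="average", metric="euclidean")
labels = fcluster(tree, t=core_size, criterion="maxclust")
core = [int(rng.choice(np.flatnonzero(labels == g))) for g in np.unique(labels)]
print(len(core), "accessions in the core collection")
```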
In order to investigate the effect of sample size on the dynamic torsional behaviour of the 2A12 aluminium alloy, torsional split Hopkinson bar tests are conducted on this alloy with different sample dimensions. It is found that the tested yield strength increases with decreasing gauge length and thickness. However, the sample inner/outer diameter has little effect on the dynamic torsional behaviour. Based on the finite element method, the stress states in the alloy with different sample sizes are analysed. Due to the effect of the stress concentration zone (SCZ), a shorter sample has a higher yield stress. Furthermore, the stress is distributed more uniformly in a thinner sample, which leads to a higher tested yield stress. Based on the experimental and simulation analysis, some suggestions on choosing the sample size are given as well.
This study used the Ecopath model of Jiaozhou Bay as an example to evaluate the effect of the stomach sample size of three fish species on the projections of this model. The derived ecosystem indices were classified into three categories: (1) direct indices, like the trophic level of a species, influenced by stomach sample size directly; (2) indirect indices, like the ecotrophic efficiency (EE) of invertebrates, influenced by the multiple prey-predator relationships; and (3) systemic indices, like total system throughput (TST), describing the status of the whole ecosystem. The influences of different stomach sample sizes on these indices were evaluated. The results suggest that the systemic indices of the ecosystem model were robust to stomach sample size, whereas species-specific indices showed low accuracy and precision when stomach samples were insufficient. The indices became more uncertain when the stomach sample sizes varied for more species. This study enhances the understanding of how the quality of diet composition data influences ecosystem modeling outputs. The results can also guide the design of stomach content analysis for developing ecosystem models.
Sample size re-estimation is essential in oncology studies. However, the use of blinded sample size reassessment for survival data has rarely been reported. Based on the density function of the exponential distribution, an expectation-maximization (EM) algorithm for the hazard ratio was derived, and several simulation studies were used to verify its applications. The method showed obvious variation in the hazard ratio estimates and overestimation for relatively small hazard ratios. Our studies showed that the stability of the EM estimation results directly correlated with the sample size, the convergence of the EM algorithm was impacted by the initial values, and a balanced design produced the best estimates. No reliable blinded sample size re-estimation inference could be made in our studies, but the results provide useful information to steer practitioners in this field from repeating the same endeavor.
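The core of the approach can be sketched as follows: blinded pooled survival times form a two-component exponential mixture, and EM recovers the two rates, and hence the hazard ratio, without unblinding. Censoring and mixing-weight estimation are omitted; sample sizes and rates are hypothetical.

```python
# A hedged sketch of the idea: pooled (blinded) exponential survival times are a
# 50:50 two-component exponential mixture; EM recovers the two rates and hence the
# hazard ratio without unblinding. Censoring is omitted to keep the sketch short.
import numpy as np

rng = np.random.default_rng(2)
t = np.concatenate([rng.exponential(1 / 0.10, 150),    # control arm, rate 0.10
                    rng.exponential(1 / 0.05, 150)])   # treatment arm, rate 0.05
rng.shuffle(t)                                          # blinding

lam = np.array([0.08, 0.06])                            # initial values matter (see text)
for _ in range(500):
    # E-step: responsibility of each component for each time, equal mixing weights
    dens = lam[None, :] * np.exp(-np.outer(t, lam))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood update of each exponential rate
    lam = r.sum(axis=0) / (r * t[:, None]).sum(axis=0)

print("estimated hazard ratio:", lam.min() / lam.max())
```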
Precise and accurate knowledge of genetic parameters is a prerequisite for making efficient selection strategies in breeding programs. A number of heritability estimates for important economic traits in many marine mollusks are available in the literature; however, very little research has evaluated the accuracy of genetic parameters estimated with different family structures. Thus, in the present study, the effect of parent sample size on the precision of genetic parameter estimates for four growth traits in the clam M. meretrix under factorial designs was analyzed through restricted maximum likelihood (REML) and Bayesian inference. The results showed that the average estimated heritabilities of growth traits obtained from REML were 0.23-0.32 for 9 and 16 full-sib families and 0.19-0.22 for 25 full-sib families. When using Bayesian inference, the average estimated heritabilities were 0.11-0.12 for 9 and 16 full-sib families and 0.13-0.16 for 25 full-sib families. Compared with REML, Bayesian inference gave lower heritabilities, but they still remained at a medium level. When the number of parents increased from 6 to 10, the estimated heritabilities were closer to 0.20 in REML and 0.12 in Bayesian inference. Genetic correlations among traits were positive and high and did not differ significantly among design sizes. The accuracies of estimated breeding values from the 9 and 16 families were less precise than those from 25 families. Our results provide a basic genetic evaluation for growth traits and should be useful for the design and operation of a practical selective breeding program in the clam M. meretrix.
BACKGROUND Approximately 25% of randomised controlled trials (RCTs) on interventions for inflammatory bowel disease (IBD) have no power calculation. AIM To systematically review RCTs reporting interventions for the management of IBD and to produce data for minimum sample sizes that would achieve appropriate power using the actual clinical data. METHODS We included RCTs retrieved from the Cochrane IBD specialised trial register and CENTRAL investigating any form of therapy for either induction or maintenance of remission against control, placebo, or no intervention in IBD patients of any age. The relevant data were extracted, and the studies were grouped according to the intervention used. We recalculated the sample size and the achieved difference, as well as the minimum sample sizes needed in the future. RESULTS A total of 105 trials were included. There was a large discrepancy between the estimated figure for the minimal clinically important difference used for the calculations (15% group differences observed vs 30% used for calculation), explaining substantial actual sample size deficits. The minimum sample sizes indicated for future trials based on the 25 years of trial data were calculated and grouped by intervention. CONCLUSION A third of intervention studies in IBD within the last 25 years are underpowered, with large variations in the calculation of sample sizes. The authors present a sample size estimate resource constructed on the published evidence base for future researchers and key stakeholders within the IBD trial field.
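The recalculations rest on the standard two-proportion sample size formula; the sketch below shows how strongly the per-arm n depends on whether a 30% or a 15% group difference is assumed, which is the discrepancy the review identifies. The 40% baseline remission rate is a hypothetical value.

```python
# A standard two-proportion sample size calculation of the kind the review
# re-applies: per-arm n for a two-sided comparison at level alpha with given power.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(n_per_arm(0.40, 0.70))   # 30% difference assumed at design stage: ~42 per arm
print(n_per_arm(0.40, 0.55))   # 15% difference actually observed: ~173 per arm
```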
To clarify the most appropriate sample size for obtaining phenotypic data for a single line, we investigated the main-effect QTL (M-QTL) of a quantitative trait, plant height (ph), in a recombinant inbred line (RIL) population of rice (derived from the cross between Xieqingzao B and Zhonghui 9308) using five individual plants in 2006 and 2009. Twenty-six ph phenotypic datasets from the completely random combinations of 2, 3, 4, and 5 plants in a single line, and five ph phenotypic datasets from five individual plants, were used to detect the QTLs. Fifteen M-QTLs were detected by 1 to 31 datasets. Of these, qph7a was detected repeatedly by all 31 ph datasets in 2006 and explained 11.67% to 23.93% of the phenotypic variation; qph3 was detected repeatedly by all 31 datasets and explained 5.21% to 7.93% and 11.51% to 24.46% of the phenotypic variance in 2006 and 2009, respectively. The results indicate that the M-QTL for a quantitative trait could be detected repeatedly from the phenotypic values of 5 individual plants and the 26 sets of completely random combinations of phenotypic data within a single line in an RIL population under different environments. The sample size for a single line of the RIL population did not affect the efficiency of identifying stably expressed M-QTLs.
After finishing 102 replicate constant-amplitude crack initiation and growth tests on Ly12-CZ aluminum alloy plate, a statistical investigation of the fatigue crack initiation and growth process is conducted in this paper. From the post-mortem fractographic examination by scanning electron microscopy (SEM), some qualitative observations of the spatial correlation among fatigue striations are developed to reveal the statistical nature of the material's intrinsic inhomogeneity during the crack growth process. From the test data, an engineering division between crack initiation and growth is defined as the upper limit of the small crack. The distributions of the crack initiation life N_i and growth life N, and the statistical characteristics of the crack growth rate da/dN, are also investigated. It is hoped that the work will provide a solid test basis for the study of probabilistic fatigue, probabilistic fracture mechanics, fatigue reliability and its engineering applications.
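The distribution-fitting step can be illustrated briefly: fit candidate lifetime models to replicate initiation lives and compare their log-likelihoods. The synthetic lognormal sample below merely stands in for the 102 replicate tests.

```python
# A small sketch of the distribution-fitting step: fit candidate lifetime models
# to replicate crack-initiation lives and compare by log-likelihood. Data are
# synthetic stand-ins for the 102 replicate tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
Ni = rng.lognormal(mean=11.0, sigma=0.25, size=102)   # hypothetical cycles to initiation

for dist in (stats.lognorm, stats.weibull_min):
    params = dist.fit(Ni, floc=0)                     # location fixed at zero
    ll = dist.logpdf(Ni, *params).sum()
    print(dist.name, "log-likelihood:", round(ll, 1))
```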
Sample size can be a key design feature that not only affects the probability of a trial's success but also determines the duration and feasibility of a trial. If an investigational drug is expected to be effective and to address unmet medical needs of an orphan disease, where the accrual period may require many years with a large sample size to detect a minimal clinically relevant treatment effect, a minimum sample size may be set to maintain nominal power. In limited situations such as this, there may be a need for flexibility in the initial and final sample sizes; thus, it is useful to consider the utility of adaptive sample size designs that use sample size re-estimation or group sequential design. In this paper, we propose a new adaptive performance measure to assess the utility of an adaptive sample size design in a trial simulation. Considering that previously proposed sample size re-estimation methods do not take into account errors in estimation based on interim results, we propose Bayesian sample size re-estimation criteria that take into account prior information on the treatment effect, and we assess their operating characteristics in a simulation study. We also present a review example of sample size re-estimation based mainly on a published paper and a review report from the Pharmaceuticals and Medical Devices Agency (PMDA).
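A hedged sketch of the criterion's flavor for a normal endpoint: update a prior on the effect size with the interim estimate, then average the classical power curve over the posterior ("expected power") before choosing the final n. Priors, interim values and the 0.80 target are assumptions, not the paper's settings.

```python
# Hedged sketch of Bayesian re-estimation for a normal endpoint: combine the
# interim estimate with a prior on the treatment effect, then choose the final
# per-arm n so the posterior-averaged (expected) power reaches the target.
import numpy as np
from scipy.stats import norm

sigma = 1.0                                   # known SD, for simplicity
prior_mu, prior_sd = 0.30, 0.15               # prior on the effect size (assumed)
n_int, delta_hat = 50, 0.22                   # interim per-arm n and estimate (assumed)

# Conjugate normal update: posterior for the effect given the interim estimate.
se2 = 2 * sigma**2 / n_int
post_var = 1 / (1 / prior_sd**2 + 1 / se2)
post_mu = post_var * (prior_mu / prior_sd**2 + delta_hat / se2)

def expected_power(n, draws=100_000, alpha=0.05, rng=np.random.default_rng(3)):
    # Average the classical power curve over the posterior instead of plugging
    # in the point estimate -- this is what absorbs the interim estimation error.
    delta = rng.normal(post_mu, post_var**0.5, draws)
    z = delta / (2 * sigma**2 / n) ** 0.5 - norm.ppf(1 - alpha / 2)
    return norm.cdf(z).mean()

n = n_int
while expected_power(n) < 0.80 and n < 2000:
    n += 10
print("re-estimated per-arm sample size:", n)
```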
Knowledge of the spatial distribution and sampling size optimization of soil copper (Cu) could lay a solid foundation for environmental quality surveys of agricultural soils at the county scale. In this investigation, the cokriging method was used to interpolate the Cu concentration in cropland soil in Shuangliu County, Sichuan Province, China. From the original 623 physicochemically measured soil samples, subsets of 560, 498, and 432 samples were randomly selected as the target variable, with the soil organic matter (SOM) of the whole original sample set as the auxiliary variable. Interpolation results using cokriging under different sampling numbers were evaluated for their applicability in estimating the spatial distribution of soil Cu at the county scale. The results showed that the root mean square error (RMSE) produced by cokriging decreased by 0.9% to 7.77%, and the correlation coefficient between the predicted and measured values increased by 1.76% to 9.76%, in comparison with ordinary kriging under the corresponding sample sizes. The prediction accuracy using cokriging was still higher than that of the original 623 data points using ordinary kriging even when the sample size was reduced by 10%, and their interpolation maps were in close agreement. Therefore, cokriging proved to be a more accurate and economic method which could provide more information and benefit for studies on the spatial distribution of soil pollutants at the county scale.
Random sample partition (RSP) is a newly developed big data representation and management model to deal with big data approximate computation problems. Academic research and practical applications have confirmed that RSP is an efficient solution for big data processing and analysis. However, a challenge in implementing RSP is determining an appropriate sample size for RSP data blocks. While a large sample size increases the burden of big data computation, a small size will lead to insufficient distribution information for RSP data blocks. To address this problem, this paper presents a novel density estimation-based method (DEM) to determine the optimal sample size for RSP data blocks. First, a theoretical sample size is calculated based on the multivariate Dvoretzky-Kiefer-Wolfowitz (DKW) inequality by using the fixed-point iteration (FPI) method. Second, a practical sample size is determined by minimizing the validation error of a kernel density estimator (KDE) constructed on RSP data blocks for an increasing sample size. Finally, a series of experiments are conducted to validate the feasibility, rationality, and effectiveness of DEM. Experimental results show that (1) the iteration function of the FPI method is convergent for calculating the theoretical sample size from the multivariate DKW inequality; (2) the KDE constructed on RSP data blocks with the sample size determined by DEM can yield a good approximation of the probability density function (p.d.f.); and (3) DEM provides more accurate sample sizes than existing sample size determination methods from the perspective of p.d.f. estimation. This demonstrates that DEM is a viable approach to the sample size determination problem for big data RSP implementation.
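The FPI step can be sketched under an assumed bound of the form alpha = C(n) exp(-2 n eps^2); the front factor C(n) = d(n+1) is one published multivariate DKW variant, used here only to show why a fixed-point iteration, rather than a closed form, is needed.

```python
# Sketch of the fixed-point idea: a multivariate DKW-type bound has an n-dependent
# front factor, so the required n solves n = ln(C(n)/alpha) / (2 eps^2) and is
# found by fixed-point iteration. C(n) = d*(n+1) is assumed, not the paper's form.
import math

def theoretical_n(eps=0.05, alpha=0.05, d=10, iters=100):
    n = math.log(2 / alpha) / (2 * eps**2)        # univariate DKW as the starting point
    for _ in range(iters):
        n_new = math.log(d * (n + 1) / alpha) / (2 * eps**2)
        if abs(n_new - n) < 1e-9:
            break
        n = n_new                                  # the slowly-growing log makes this converge
    return math.ceil(n)

print(theoretical_n())
```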
Determining the required sample size is one of the critical pathways to reproducible, reliable and robust results in human-related studies. This paper aims to answer a fundamental but often overlooked question: what sample size is required in surveys of occupant responses to indoor environmental quality (IEQ)? Statistical models are introduced for determining the required sample size for the various types of data analysis methods commonly used in IEQ field studies. Monte Carlo simulations are performed to verify the statistical methods and to illustrate the impact of sample size on study accuracy and reliability. Several examples are presented to illustrate how to determine the values of the parameters in the statistical models based on previous similar research or existing databases. The required sample sizes for the "worst" and "optimal" cases in each condition are obtained by this method and the cited references. It is indicated that 385 is a "worst case" sample size adequate for a subgroup analysis, while if the researcher has an estimate of the study design and outcome, the "optimal case" sample size can potentially be reduced. When the required sample size is not achievable, the uncertainty in the result can be properly interpreted via a confidence interval. It is hoped that this paper will fill the gap between the statistical analysis of sample size and IEQ field research, and that it can provide a useful reference for researchers when planning their own studies.
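The "385" figure follows from the textbook proportion-estimation formula, reproduced below with a 95% confidence level, p = 0.5 (the variance-maximizing worst case) and a 5% margin of error.

```python
# Reproducing the "worst case" figure quoted in the abstract:
# n = z^2 * p * (1 - p) / e^2 with z = 1.96, p = 0.5, e = 0.05 gives n = 385.
import math
from scipy.stats import norm

z = norm.ppf(0.975)                 # 1.959... for 95% confidence
p, e = 0.5, 0.05
n = math.ceil(z**2 * p * (1 - p) / e**2)
print(n)                            # 385
```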
This paper shows that for DEM simulations of triaxial tests using samples with a grading that is representative of a real soil, the sample size significantly influences the observed material response. Four DEM samples with identical initial states were produced: three cylindrical samples bounded by rigid walls and one bounded by a cubical periodic cell. When subjected to triaxial loading, the samples with rigid boundaries were more dilative, stiffer and reached a higher peak stress ratio than the sample enclosed by periodic boundaries. For the rigid-wall samples, dilatancy increased and stiffness decreased with increasing sample size. The periodic sample was effectively homogeneous. The void ratio increased and the contact density decreased close to the rigid walls; this heterogeneity reduced with increasing sample size. The positions of the critical state lines (CSLs) of the overall response in e-log p' space were sensitive to the sample size, although no difference was observed between their slopes. The critical states of the interior regions of the rigid-wall-bounded samples approached that of the homogeneous periodic sample with increasing sample size. The ultimate strength of the material at the critical state is independent of sample size.
Length composition analysis can provide insights into the dynamics of a fish population. Accurate quantification of the size structure of a population is critical to understanding the status of a fishery and how the population responds to environmental stressors. A scientific observer program is a reliable way to provide such accurate information. However, 100% observer coverage is usually impossible for most fisheries because of logistic and financial constraints. Thus, there is a need to evaluate observer program performance, identify suitable sample sizes, and optimize the allocation of observation effort. The objective of this study is to evaluate the effects of sample size on the quality of length composition data and to identify an optimal coverage rate and observation ratio to improve observation efficiency, using an onboard observer data set from China's tuna longline fishery in the western and central Pacific Ocean. We found that the required sample size varies with fish species, the indices used to describe length composition, the acceptable accuracy of the estimates, and the allocation methods of sampling effort. Ignoring other information requirements, 1000 individuals would be sufficient for most species to reliably quantify length compositions, and a smaller sample size could generate reliable estimates of mean length. A coverage rate of 20% would be sufficient for most species, but a lower coverage rate (5% or 10%) could also be effective in meeting the accuracy and precision requirements for estimating length compositions. A nonrandom effort allocation among fishing baskets within a set could cause the length composition to be overestimated or underestimated for some species. The differences in effective sample sizes among species should be considered for a rational allocation of observation effort among species when there are different species management priorities.
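The evaluation logic can be mimicked with a small resampling experiment: subsample a length "census" at candidate coverage rates and measure the bias and variability of the resulting mean-length estimates. The gamma-distributed lengths are synthetic, not the observer data.

```python
# A minimal resampling sketch of the evaluation logic: subsample a "census" of
# fish lengths at several coverage rates and check how accurately each rate
# recovers the mean length. The length distribution is synthetic.
import numpy as np

rng = np.random.default_rng(4)
census = rng.gamma(shape=20, scale=5, size=50_000)      # hypothetical lengths, cm

for rate in (0.05, 0.10, 0.20):
    est = [rng.choice(census, size=int(rate * census.size), replace=False).mean()
           for _ in range(200)]
    bias = np.mean(est) - census.mean()
    cv = np.std(est) / census.mean()
    print(f"coverage {rate:.0%}: bias={bias:+.3f} cm, CV={cv:.4%}")
```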
In the October 2014 issue of JAMA, Dr. Hinman and colleagues published the study "Acupuncture for Chronic Knee Pain: A Randomized Clinical Trial," in which the authors concluded that "in patients older than 50 years with moderate or severe chronic knee pain, neither laser nor needle acupuncture conferred benefit over sham for pain or function. Our findings do not support acupuncture" [1].
The literature indicates that sample size, attribute variance and the within-sample choice distribution of alternatives are important considerations in the estimation of multinomial logit (MNL) models, but their impacts on estimation accuracy have not been systematically studied. Therefore, the objective of this paper is to provide an empirical examination of the above issues through a set of simulated discrete choice preference and rank-ordered preference datasets. In this paper, the utility coefficients, alternative specific constants (ASCs), and the mean and standard deviation of the four attributes for a set of seven hypothetical alternatives are specified a priori. Then, synthetic datasets with varying sample size, attribute variance and within-sample choice distribution are simulated. Based on these datasets, the utility coefficients and ASCs of the specified MNLs are re-estimated and compared with the original values specified a priori. It is found that (1) the estimation accuracy of the utility parameters increases as the sample size increases; (2) the utility coefficients can be re-estimated with reasonable accuracy, but the estimates of the ASCs are subject to much larger errors; (3) as the variances of the alternative attributes increase, the estimation accuracy improves significantly; and (4) as the distribution of chosen alternatives becomes more balanced within the sample datasets, the hit-ratio decreases. The results indicate that (a) under a setting similar to that presented in this paper, a large sample consisting of a few thousand observations (3000 - 4000) may be needed in order to provide reasonable estimates of the utility coefficients, and particularly of the ASCs; (b) a larger, but realistic, attribute space is preferred in stated preference survey design; and (c) choice datasets with an unbalanced "chosen" choice frequency distribution are preferred, in order to better capture the elasticity of the "perceived utility" associated with the alternatives' attributes.
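The simulation loop can be condensed into a short sketch: fix beta and the ASCs a priori, simulate MNL choices, and re-estimate by maximum likelihood. Three alternatives and two attributes stand in for the paper's seven and four; all parameter values are hypothetical.

```python
# A compact version of the experiment's loop: fix beta and ASCs a priori, simulate
# choices from an MNL, then re-estimate by maximum likelihood and compare.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
N, J, K = 4000, 3, 2                               # observations, alternatives, attributes
beta_true = np.array([-0.8, 0.5])
asc_true = np.array([0.0, 0.3, -0.2])              # first ASC normalized in estimation

X = rng.normal(size=(N, J, K))                     # attribute variance drives accuracy
V = X @ beta_true + asc_true
prob = np.exp(V) / np.exp(V).sum(1, keepdims=True)
y = (prob.cumsum(1) > rng.random((N, 1))).argmax(1)  # draw the chosen alternative

def negll(theta):
    beta, asc = theta[:K], np.r_[0.0, theta[K:]]   # ASC of alternative 0 fixed to 0
    U = X @ beta + asc
    U -= U.max(axis=1, keepdims=True)              # numerical stability
    return -(U[np.arange(N), y] - np.log(np.exp(U).sum(1))).sum()

fit = minimize(negll, np.zeros(K + J - 1), method="BFGS")
print("beta_hat:", fit.x[:K], " ASC_hat (alts 1,2 vs 0):", fit.x[K:])
```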
The internal length scale (ILS) is a dominant parameter in strain gradient plasticity (SGP) theories, which helps to successfully explain the size effect of metals at the microscale. However, the ILS is usually introduced into strain gradient frameworks for dimensional consistency and is model-dependent. Even now, its physical meaning, connection with the microstructure of the material, and dependence on the strain level have not been thoroughly elucidated. In the current work, Aifantis' SGP model is reformulated by incorporating a recently proposed power-law relation for the strain-dependent ILS. A further extension of Aifantis' SGP model to include the grain size effect is conducted according to the Hall-Petch formulation, and the predictions are then compared with torsion experiments on thin wires. It is revealed that the ILS depends on the sample size and grain size simultaneously; these dependencies are dominated by the dislocation spacing and can be well described through the strain hardening exponent. Furthermore, both the original and generalized Aifantis models provide larger estimated values for the ILS than Fleck-Hutchinson's theory.
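For orientation, one common statement of the model structure is reproduced below; signs and symbols vary across the literature, and both the dimensional identification of the ILS and the power-law exponent m are given only as assumptions to fix notation, not as the paper's exact relations.

```latex
% One common statement of Aifantis' gradient-modified flow stress, a dimensional
% identification of the ILS, and a power-law form for its strain dependence
% (all hedged: notation-fixing assumptions, not the paper's exact equations):
\sigma = \kappa(\varepsilon) - c\,\nabla^{2}\varepsilon, \qquad
\ell \sim \sqrt{c/\kappa(\varepsilon)}, \qquad
\ell(\varepsilon) = \ell_{0}\,\varepsilon^{\,m}
```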