As chaff centroid jamming introduces guidance error into an anti-ship missile's seeker and decreases its hit probability, a new quantitative analysis method and a mathematical model are proposed in this paper to evaluate the probability of successful jamming. Using this method, the optimal decision scheme for chaff centroid jamming in different threat situations can be found, and the success probability of that scheme can be calculated quantitatively. Thus, the operational rules of centroid jamming and the tactical approach for increasing the success probability can be determined.
Evaluating a climate model's fidelity (its ability to simulate the observed climate) is a critical step in establishing confidence in the model's suitability for future climate projections, and in tuning climate model parameters. Model developers use their judgement in determining which trade-offs between different aspects of model fidelity are acceptable. However, little is known about the degree of consensus in these evaluations, or whether experts use the same criteria when different scientific objectives are defined. Here, we report results from a broad community survey of expert assessments of the relative importance of different output variables when evaluating a global atmospheric model's mean climate. We find that experts adjust their ratings of variable importance in response to the scientific objective; for instance, scientists rate surface wind stress as significantly more important for Southern Ocean climate than for the water cycle in the Asian watershed. There is greater consensus on the importance of certain variables (e.g., shortwave cloud forcing) than of others (e.g., aerosol optical depth). We find few differences in expert consensus between respondents with more or less climate modeling experience, and no statistically significant differences between the responses of climate model developers and users. The concise variable lists and community ratings reported here provide baseline descriptive data on current expert understanding of certain aspects of model evaluation, and can serve as a starting point for further investigation and for developing more sophisticated evaluation and scoring criteria with respect to specific scientific objectives.
In traditional Bayesian software reliability models, it is assumed that all probabilities are precise. In practical applications, the parameters of the probability distributions are often uncertain because of strong dependence on subjective expert judgments made on sparse statistical data. In this paper, a quasi-Bayesian software reliability model is presented that uses interval-valued probabilities to quantify experts' prior beliefs about possible intervals of the parameters of the probability distributions. The model integrates expert judgments with statistical data to obtain more convincing assessments of software reliability from small samples. For several actual data sets, the presented model yields better predictions than the Jelinski-Moranda (JM) model fitted by maximum likelihood (ML).
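The JM baseline mentioned above models the i-th inter-failure time as exponential with rate φ(N − i + 1), where N is the initial fault count and φ a per-fault hazard rate. As a minimal illustration of the ML fitting the abstract compares against (this is a sketch, not the authors' quasi-Bayesian code, and the data below are made up), the two score equations reduce to a one-dimensional root search in N:

```python
import math

def jm_mle(times, n_max=10000.0):
    """Maximum-likelihood fit of the Jelinski-Moranda model.

    times: observed inter-failure times t_1..t_n.
    Returns (N_hat, phi_hat). Setting dlogL/dphi = 0 gives
    phi = n / sum((N - i + 1) * t_i), so only N remains unknown;
    N is treated as continuous and found by bisection on the
    remaining score equation dlogL/dN = 0.
    """
    n = len(times)
    T = sum(times)

    def phi_of(N):
        # before the i-th failure (0-based i) there are N - i faults left
        return n / sum((N - i) * t for i, t in enumerate(times))

    def score(N):
        # dlogL/dN = sum 1/(N - i) - phi * sum(t_i)
        return sum(1.0 / (N - i) for i in range(n)) - phi_of(N) * T

    lo, hi = n - 1 + 1e-9, n_max
    if score(lo) * score(hi) > 0:      # no interior root: N_hat diverges
        return math.inf, phi_of(n_max)
    for _ in range(200):               # plain bisection
        mid = 0.5 * (lo + hi)
        if score(lo) * score(mid) <= 0:
            hi = mid
        else:
            lo = mid
    N_hat = 0.5 * (lo + hi)
    return N_hat, phi_of(N_hat)
```

A finite N_hat only exists when the data actually show reliability growth (inter-failure times trending upward); the divergent case is one reason small-sample ML fits of this model can be fragile, which is the situation the interval-valued quasi-Bayesian approach targets.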
Background: Diagnosis of heparin-induced thrombocytopenia (HIT) is challenging. This study aimed to compare the diagnostic performance of the HIT expert probability (HEP) and 4T scores, and to evaluate the inter-observer reliability of the 4T score in a clinical setting. Methods: This prospective study included HIT-suspected patients between 2016 and 2018. Three hematologists assessed the HEP and 4T scores. Correlations between scores and anti-platelet factor 4 (anti-PF4)/heparin antibodies were evaluated. Receiver operating characteristic curves and the area under the curve (AUC) were used to assess the predictive accuracy of the two scoring models. The intraclass correlation coefficient (ICC) was used to assess the inter-observer agreement on 4T scores between residents and hematologists. Results: Of the 89 subjects included, 22 (24.7%) were positive for anti-PF4/heparin antibody. The correlations between antibody titer and either score were similar (r = 0.392, P < 0.01 for the HEP score; r = 0.444, P < 0.01 for the 4T score). No significant difference in diagnostic performance was displayed between the two scores (AUC for the HEP score: 0.778 vs. AUC for the 4T score: 0.741, P = 0.357). Only 72 4T scores were collected from the residents, and a surprisingly low percentage of observers (43.1%) reported the four individual item scores that made up their 4T score. The AUC of the 4T score assessed by residents and hematologists was 0.657 (95% confidence interval [CI]: 0.536–0.765) and 0.780 (95% CI: 0.667–0.869, P < 0.05), respectively. The ICC of the 4T score between residents and hematologists was 0.49 (95% CI: 0.29–0.65, P < 0.01), demonstrating fair inter-observer agreement.
Conclusions: The HEP score does not display better performance for predicting HIT than the 4T score. Given the unsatisfactory completion rate, the inter-observer agreement on the 4T score in a tertiary hospital is only fair, underscoring the need for continuing education for physicians.
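The AUC comparison reported above can be illustrated with the empirical AUC, which equals the normalized Mann-Whitney U statistic: the probability that a randomly chosen antibody-positive patient receives a higher score than a randomly chosen antibody-negative one, with ties counted as half. A minimal sketch on made-up toy data (not the study's data):

```python
def auc(labels, scores):
    """Empirical AUC via the Mann-Whitney U statistic.

    labels: 1 for antibody-positive, 0 for antibody-negative.
    scores: the clinical score (e.g., 4T or HEP) for each patient.
    Returns the fraction of positive/negative pairs in which the
    positive case outscores the negative one (ties count 0.5).
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the score discriminates no better than chance; the study's resident-assessed 4T AUC of 0.657 versus the hematologists' 0.780 is a difference in exactly this pairwise-ranking sense.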
Funding: supported by the National High-Technology Research and Development Program of China (Grant Nos. 2006AA01Z187 and 2007AA040605).