BACKGROUND: Upper gastrointestinal (GI) bleeding is a life-threatening condition with high mortality rates.
AIM: To compare the performance of pre-endoscopic risk scores in predicting the following primary outcomes: in-hospital mortality, intervention (endoscopic or surgical) and length of admission (≥7 d).
METHODS: We performed a retrospective analysis of 363 patients presenting with upper GI bleeding from December 2020 to January 2021. We calculated and compared the areas under the receiver operating characteristic curves (AUROCs) of the Glasgow-Blatchford score (GBS); the pre-endoscopic Rockall score (PERS); the albumin, international normalized ratio, altered mental status, systolic blood pressure, age older than 65 (AIMS65) score; and the age, blood tests and comorbidities (ABC) score, including their optimal cut-offs, in variceal and non-variceal upper GI bleeding cohorts. We subsequently analyzed, using a binary logistic regression model, whether the addition of lactate increased score performance.
RESULTS: All scores had discriminative ability in predicting in-hospital mortality irrespective of study group. The AIMS65 score had the best performance in the variceal bleeding group (AUROC = 0.772; P < 0.001), and the ABC score (AUROC = 0.775; P < 0.001) in the non-variceal bleeding group. However, the ABC score, at a cut-off value of 5.5, was the best predictor (AUROC = 0.770; P = 0.001) of in-hospital mortality in both populations. The PERS score was a good predictor of endoscopic treatment (AUROC = 0.604; P = 0.046) in the variceal population, while the GBS score (AUROC = 0.722; P = 0.024) outperformed the other scores in predicting surgical intervention. Adding lactate to the AIMS65 score increased the odds of in-hospital mortality 5-fold (P < 0.05), and 12-fold when added to the GBS score (P < 0.003). No score proved to be a good predictor of length of admission.
CONCLUSION: The ABC score is the most accurate in predicting in-hospital mortality in both the mixed and the non-variceal bleeding populations. PERS and GBS should be used to determine the need for endoscopic and surgical intervention, respectively. Lactate can be used as an additional tool alongside risk scores for predicting in-hospital mortality.
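The AUROC comparisons above can be illustrated with a minimal, dependency-free sketch (not the authors' code): the AUROC of a risk score equals the probability that a randomly chosen patient who experienced the outcome received a higher score than a randomly chosen patient who did not, with ties counting one half.

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

For example, with scores [7, 3, 9, 5, 2, 8] and outcomes [1, 0, 1, 0, 0, 1], every positive case outranks every negative case and the AUROC is 1.0.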
This article proposes the maximum test for a sequence of quadratic-form score statistics in the logistic regression model, which can be applied in genetics and medicine. Theoretical properties of the maximum test are derived. Extensive simulation studies are conducted to verify the power robustness of the maximum test compared with two existing tests. We also apply the maximum test to a real dataset for association analysis of multiple gene variables.
In order to improve the fitting accuracy of college students' test scores, this paper proposes a two-component mixed generalized normal distribution, uses the maximum likelihood estimation method and the Expectation Conditional Maximization (ECM) algorithm to estimate parameters and conduct numerical simulation, and performs a fitting analysis on the test scores of Linear Algebra and Advanced Mathematics at F University. The empirical results show that the two-component mixed generalized normal distribution is better than the commonly used two-component mixed normal distribution in fitting college students' test data, and has good application value.
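As a simplified illustration of the estimation machinery (using plain Gaussian components rather than the paper's generalized normal distribution, and plain EM rather than ECM; the function names and initialisation are my own choices), a two-component mixture can be fitted like this:

```python
import math

def _norm_pdf(x, mu, s):
    """Gaussian density, used in the E-step below."""
    return math.exp(-((x - mu) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def em_two_normal(data, iters=200):
    """EM for a two-component Gaussian mixture, a simplified stand-in for
    the paper's generalized-normal components and ECM algorithm.
    Returns (weight1, mu1, sigma1, mu2, sigma2)."""
    # crude initialisation: split the sorted sample at the median
    xs = sorted(data)
    half = len(xs) // 2
    mu1, mu2 = sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * _norm_pdf(x, mu1, s1)
            p2 = (1 - w) * _norm_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and standard deviations
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-9
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-9
    return w, mu1, s1, mu2, s2
```

On clearly bimodal score data the two estimated means settle on the two clusters; a generalized-normal version would add a shape parameter to each component, which is what the ECM step in the paper handles.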
Little is known about how the assessment modality, i.e., computer-based (CB) and paper-based (PB) tests, affects language teachers' scorings, perceptions, and preferences and, therefore, the validity and fairness of classroom writing assessments. The present mixed-methods study used Shaw and Weir's (2007) sociocognitive writing test validation framework to examine the scoring and consequential validity evidence of CB and PB writing tests in EFL classroom assessment in higher education. Original handwritten and word-processed texts of 38 EFL university students were transcribed to their opposite format and assessed by three language lecturers (N = 456 texts, 152 per teacher) to examine the scoring validity of CB and PB tests. The teachers' perceptions of text quality and preferences for assessment modality accounted for the consequential validity evidence of both tests. Findings revealed that the assessment modality impacted teachers' scorings, perceptions, and preferences. The teachers awarded higher scores to original and transcribed handwritten texts, particularly for text organization and language use. The teachers' perceptions of text quality differed from their ratings, and physical, psychological, and experiential characteristics influenced their preferences for assessment modality. The results have implications for the validity and fairness of CB and PB writing tests and teachers' assessment practices.
Normality testing is a fundamental hypothesis test in the statistical analysis of key biological indicators of diabetes. If this assumption is violated, it may cause the test results to deviate from the true value, leading to incorrect inferences and conclusions and ultimately affecting the validity and accuracy of statistical inferences. Considering this, the study designs a unified analysis scheme for different data types based on parametric and non-parametric statistical test methods. The data were grouped according to sample type and divided into discrete and continuous data. To account for differences among subgroups, the conventional chi-squared test was used for discrete data. The normal distribution is the basis of many statistical methods; if the data do not follow a normal distribution, many statistical methods will fail or produce incorrect results. Therefore, before data analysis and modeling, the data were divided into normal and non-normal groups through normality testing. For normally distributed data, parametric statistical methods were used to judge the differences between groups. For non-normal data, non-parametric tests were employed to improve the accuracy of the analysis. Statistically significant indicators were retained according to the P-value of the statistical test or the corresponding statistics. These indicators were then combined with relevant medical background to further explore the etiology leading to the occurrence or transformation of diabetes status.
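The normality-then-branch scheme can be sketched with the Jarque-Bera statistic, one of several possible normality tests (the abstract does not say which test the study used, so this is an illustrative choice):

```python
import math

def jarque_bera(xs):
    """Jarque-Bera normality statistic from sample skewness and kurtosis.
    Under normality, JB is asymptotically chi-squared with 2 degrees of
    freedom, so JB > 5.99 rejects normality at the 5% level."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

def choose_branch(xs, crit=5.99):
    """Route a continuous indicator to a parametric or non-parametric
    branch, mirroring the scheme in the abstract: normal data go to the
    t-test/ANOVA family, non-normal data to rank-based tests."""
    return "parametric" if jarque_bera(xs) <= crit else "nonparametric"
```

A roughly symmetric sample is routed to the parametric branch, while a heavily skewed sample (e.g. mostly identical values with one extreme outlier) is routed to the rank-based branch.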
Safety-critical systems (SCS) have high demands for dependability, which requires plenty of resources to ensure that the system under test (SUT) satisfies the dependability requirement. In this paper, a new SCS rapid testing method is proposed to improve SCS adaptive dependability testing. The result of each test execution is saved in a calculation memory unit and evaluated by an algorithm model. The least quantity of scenario test cases for the next test execution is then calculated according to the promised confidence level of the SUT. Feedback data are passed to a weight controller as the guideline for further testing. Finally, a comprehensive experimental study demonstrates that this adaptive testing method works in practice. This rapid testing method, an adaptive control based on testing-result statistics, makes SCS dependability testing much more effective.
Cardiovascular disease (CVD) is the leading cause of morbidity and mortality among patients with diabetes mellitus, who have a risk of cardiovascular mortality two to four times that of people without diabetes. An individualised approach to cardiovascular risk estimation and management is needed. Over the past decades, many risk scores have been developed to predict CVD. However, few have been externally validated in a diabetic population, and limited studies have examined the impact of applying a prediction model in clinical practice. Currently, guidelines are focused on testing for CVD in symptomatic patients. Atypical symptoms or silent ischemia are more common in the diabetic population, and with additional markers of vascular disease such as erectile dysfunction and autonomic neuropathy, these guidelines can be difficult to interpret. We propose an algorithm incorporating cardiovascular risk scores in combination with typical and atypical signs and symptoms to alert clinicians to consider further investigation with provocative testing. The modalities for investigation of CVD are discussed.
In this paper, some test statistics of Kolmogorov type and Cramér-von Mises type based on the projection pursuit technique are proposed for testing the sphericity problem of a high-dimensional distribution. The limiting distributions of the test statistics are derived under the null hypothesis. The asymptotic properties of the Bootstrap approximation are investigated and the tail behaviors of the statistics are studied.
The traditional method for creating a gene score to predict a given outcome is to use the most statistically significant single nucleotide polymorphisms (SNPs) from all SNPs which were tested. There are several disadvantages of this approach, such as excluding SNPs that do not have strong single effects when tested on their own but do have strong joint effects when tested together with other SNPs. The interpretation of results from the traditional gene score may lack biological insight, since the functional unit of interest is often the gene, not the single SNP. In this paper we present a new gene scoring method, which overcomes these problems as it generates a gene score for each gene, and the total gene score for all the genes available. First, we calculate a gene score for each gene, and second, we test the association between this gene score and the outcome of interest (i.e. trait). Only the gene scores which are significantly associated with the outcome after multiple testing correction for the number of gene tests (not SNPs) are considered in the total gene score calculation. This method controls false positive results caused by multiple tests within genes and between genes separately, and has the advantage of identifying multi-locus genetic effects, compared with the Bonferroni correction, false discovery rate (FDR), and permutation tests for all SNPs. Another main feature of this method is that we select the SNPs which have different effects within a gene by using adjustment in multiple regressions, and then combine the information from the selected SNPs within a gene to create a gene score. A simulation study has been conducted to evaluate the finite-sample performance of the proposed method.
There are a few statistics for testing the homogeneity of odds ratios across strata. Asymptotic statistics lose their power in the "sparse-data" setting, and both asymptotic statistics and exact tests have low power when the sample sizes are small. We created a set of U statistics and compared them with some existing statistics in testing homogeneity of odds ratios under different data settings. We evaluated their performance in terms of empirical size and power via Monte Carlo simulations. Our results showed that two of the U statistics under study had higher power for testing homogeneity of odds ratios for 2 by 2 contingency tables. The application of the tests is illustrated with two real examples.
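A standard asymptotic test of this kind is Woolf's test (shown here as context for the problem, not as one of the authors' proposed U statistics), which weights the per-stratum log odds ratios by their inverse variances:

```python
import math

def woolf_homogeneity(tables):
    """Woolf's chi-squared statistic for homogeneity of odds ratios
    across strata. Each table is (a, b, c, d) for a 2x2 stratum; 0.5 is
    added to every cell (Haldane-Anscombe correction) to tolerate zeros.
    Compare the returned statistic against the chi-squared critical
    value with len(tables) - 1 degrees of freedom."""
    logs, weights = [], []
    for a, b, c, d in tables:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        logs.append(math.log(a * d / (b * c)))           # log odds ratio
        weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))    # inverse variance
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    return sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
```

Identical strata give a statistic near zero, while strata with opposite odds ratios give a large statistic; it is exactly this statistic's loss of power in sparse or small-sample settings that motivates the U-statistic alternatives.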
Choosing appropriate statistical tests is crucial, but deciding which test to use can be challenging. Different tests suit different types of data and research questions, so it is important to choose the right one: selecting an appropriate test leads to more accurate results, whereas invalid results and misleading conclusions may be drawn from a study if an incorrect statistical test is used. Because a wide variety of tests is available, it is essential to understand the nature of the data, the research question, and the assumptions of the tests before selecting one. This paper provides a step-by-step approach to selecting the right statistical test for any study, with an explanation of when each test is appropriate and relevant examples. Furthermore, this guide provides a comprehensive overview of the assumptions of each test and what to do if these assumptions are violated.
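The step-by-step selection logic can be mirrored in a toy decision helper (the test names are the usual textbook choices; the mapping is illustrative, not the paper's exhaustive guide):

```python
def select_test(outcome, groups=2, paired=False, normal=True):
    """Toy decision helper mirroring the step-by-step logic of such
    guides: outcome type first, then number of groups, pairing, and
    whether the normality assumption holds."""
    if outcome == "categorical":
        return "McNemar test" if paired else "Chi-squared test"
    # continuous outcome
    if groups == 2:
        if paired:
            return "Paired t-test" if normal else "Wilcoxon signed-rank test"
        return "Independent t-test" if normal else "Mann-Whitney U test"
    return "One-way ANOVA" if normal else "Kruskal-Wallis test"
```

For instance, a continuous, non-normal outcome in two independent groups maps to the Mann-Whitney U test, while three or more normally distributed groups map to one-way ANOVA.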
Two statistical validation methods were used to evaluate the confidence level of the Total Column Ozone (TCO) measurements recorded by satellite systems measuring simultaneously, one using the normal distribution and another using the Mann-Whitney test. First, the reliability of the TCO measurements was studied hemispherically. While similar coincidences and levels of significance > 0.05 were found with the two statistical tests, an enormous variability in the levels of significance throughout the year was also exposed. Then, using the same statistical comparison methods, a latitudinal study was carried out in order to elucidate the geographical distribution that gave rise to this variability. Our study reveals that between the TOMS and OMI measurements in 2005 there was only a coincidence in 50% of the latitudes, which explained the variability. This implies that for 2005, the TOMS measurements are not completely reliable, except between the -50° and -15° latitude band in the southern hemisphere and between +15° and +50° latitude band in the northern hemisphere. In the case of OMI-OMPS, we observe that between 2011 and 2016 the measurements of both satellite systems are reasonably similar, with a confidence level higher than 95%. However, in 2017 a band with a width of 20° latitude centered on the equator appeared, in which the significance levels were much less than 0.05, indicating that one of the measurement systems had begun to fail. In 2018, the fault was not only located at the equator, but was also replicated in various bands in the Southern Hemisphere. We interpret this as evidence of irreversible failure in one of the measurement systems.
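The Mann-Whitney comparison used for the TCO series is built on the U statistic, which can be computed from midranks without any distributional assumption. A dependency-free sketch (not the authors' code):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic computed via midranks, so tied values
    share the same average rank. Returns min(Ux, Uy); small values
    indicate well-separated samples."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    k = 0
    while k < len(combined):
        # find the run of tied values starting at position k
        j = k
        while j + 1 < len(combined) and combined[j + 1][0] == combined[k][0]:
            j += 1
        mid = (k + j) / 2.0 + 1.0          # average rank of the tied run
        for t in range(k, j + 1):
            ranks[combined[t][1]] = mid
        k = j + 1
    r_x = sum(ranks[i] for i in range(len(x)))
    u_x = r_x - len(x) * (len(x) + 1) / 2.0
    u_y = len(x) * len(y) - u_x
    return min(u_x, u_y)
```

Completely separated samples give U = 0, while perfectly interleaved samples give a U near the midpoint len(x)*len(y)/2; the significance levels quoted in the abstract come from comparing U against its null distribution.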
Objective: To improve the detection accuracy of fetal chromosomal aneuploidy by non-invasive prenatal testing (NIPT) using next-generation sequencing data of pregnant women's cell-free DNA. Methods: We proposed the multi-Z method, which uses 21 z-scores for each autosomal chromosome to detect aneuploidy of that chromosome, while the conventional NIPT method uses only one z-score. To do this, the mapped read numbers of a given chromosome were normalized by those of each of the other 21 autosomes. The average and standard deviation used for calculating the z-score of each sample were obtained from the normalized values between all autosomal chromosomes of control samples. In this way, multiple z-scores can be calculated against the 21 autosomal chromosomes other than the chromosome itself. Results: The multi-Z method showed 100% sensitivity and specificity for 187 samples sequenced to 3 M reads, while the conventional NIPT method showed 95.1% specificity. Similarly, for 216 samples sequenced to 1 M reads, the multi-Z method showed 100% sensitivity and 95.6% specificity, while the conventional NIPT method showed 75.1% specificity. Conclusion: The multi-Z method showed higher accuracy and more robust results than the conventional method, even at low-coverage reads.
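The normalization at the heart of the multi-Z idea can be sketched as follows. This is a hypothetical re-implementation from the abstract's description only; the data structures (count dictionaries keyed by chromosome) and the function name are assumptions, and a real pipeline would work from mapped-read counts per chromosome across many controls.

```python
import math

def multi_z(case_counts, control_counts_list, target):
    """For the target chromosome, compute one z-score per reference
    chromosome: the case's ratio count[target]/count[ref] is compared
    against the mean and standard deviation of the same ratio across
    control samples. Elevated z-scores across most references suggest
    aneuploidy of the target chromosome."""
    zs = {}
    for ref in case_counts:
        if ref == target:
            continue
        ratio = case_counts[target] / case_counts[ref]
        ctrl = [c[target] / c[ref] for c in control_counts_list]
        mean = sum(ctrl) / len(ctrl)
        sd = math.sqrt(sum((r - mean) ** 2 for r in ctrl) / (len(ctrl) - 1))
        zs[ref] = (ratio - mean) / sd
    return zs
```

A case sample with an over-represented target chromosome yields large positive z-scores against every reference, whereas a euploid case stays near zero, which is the redundancy the multi-Z method exploits.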
Funding: The work of Jiayan Zhu is partially supported by seeding project funding (2019ZZX026), scientific research project funding for talent recruitment, and start-up funding for scientific research of Hubei University of Chinese Medicine. The work of Zhengbang Li is partially supported by self-determined research funds of Central China Normal University from colleges' basic research of MOE (CCNU18QN031).
Funding for the diabetes normality-testing study: National Natural Science Foundation of China (No. 12271261); Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. SJCX230368).
Funding for the safety-critical system testing study: the National 863 Program under Grant No. 2006AA01Z173.