This study aimed to examine the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and different sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the Type I error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while the Type I error rate of the Savage test was observed to range from 0.016 to 0.041. Similar trends were observed for Platykurtic and Skewed distributions. In scenarios with different sample sizes, the Savage test generally exhibited lower Type I error rates. For large sample sizes, two main categories were again established: equal and different sample sizes. For large sample sizes, the Type I error rate of the Siegel-Tukey test ranged from 0.047 to 0.052, while the Type I error rate of the Savage test ranged from 0.043 to 0.051. In cases of equal sample sizes, both tests generally had lower error rates, with the Savage test providing more consistent results for large sample sizes. In conclusion, it was determined that the Savage test provides lower Type I error rates for small sample sizes and that both tests have similar error rates for large sample sizes. These findings suggest that the Savage test could be a more reliable option when analyzing variance differences.
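Although the simulations above were run in SAS, the Monte Carlo logic they describe is easy to sketch. The snippet below is a minimal illustration, not the authors' code: it estimates an empirical Type I error rate for a rank-based scale test under equal variances. Since scipy ships neither the Siegel-Tukey nor the Savage test, the related Ansari-Bradley scale test stands in, and the sample sizes, repetition count, and significance level are illustrative assumptions (the study used 20,000 repetitions per scenario).

```python
# Minimal Monte Carlo sketch of an empirical Type I error rate for a scale test.
# Ansari-Bradley stands in for Siegel-Tukey/Savage, which scipy does not provide.
import numpy as np
from scipy import stats

def empirical_type1_rate(n1, n2, reps=2_000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        # Both samples from the same distribution, so the null of equal scale is true.
        x = rng.normal(size=n1)
        y = rng.normal(size=n2)
        _, p = stats.ansari(x, y)
        rejections += p < alpha
    return rejections / reps

if __name__ == "__main__":
    print("equal sizes (10, 10):    ", empirical_type1_rate(10, 10))
    print("different sizes (10, 20):", empirical_type1_rate(10, 20))
```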
In this paper, we investigate the problem of approximating solutions of the equations of Lipschitzian ψ-strongly accretive operators and fixed points of Lipschitzian ψ-hemicontractive operators by Ishikawa type iterative sequences with errors. Our results unify, improve and extend the results obtained previously by several authors, including Li and Liu (Acta Math. Sinica 41 (4)(1998), 845-850) and Osilike (Nonlinear Anal. TMA, 36(1)(1999), 1-9), and also answer completely the open problems mentioned by Chidume (J. Math. Anal. Appl. 151 (2)(1990), 453-461).
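As a concrete, finite-dimensional illustration of the iteration scheme studied here (not the operator-theoretic Banach-space setting of the paper), the sketch below runs an Ishikawa-type iteration x_{n+1} = (1 − a_n)x_n + a_n T(y_n), y_n = (1 − b_n)x_n + b_n T(x_n) for a simple contractive map; the map, the constant step sequences, and the iteration count are illustrative assumptions, and the error terms of the scheme are omitted.

```python
import numpy as np

def ishikawa(T, x0, a, b, n_iter=100):
    """Ishikawa-type iteration (error terms omitted):
    y_n = (1 - b_n) x_n + b_n T(x_n),  x_{n+1} = (1 - a_n) x_n + a_n T(y_n)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        an, bn = a(n), b(n)
        y = (1 - bn) * x + bn * T(x)
        x = (1 - an) * x + an * T(y)
    return x

# Toy example: T is a contraction on R^2 whose unique fixed point is (2, -1).
p = np.array([2.0, -1.0])
T = lambda v: p + 0.5 * (v - p)
print(ishikawa(T, x0=[10.0, 10.0], a=lambda n: 0.5, b=lambda n: 0.5))  # approaches (2, -1)
```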
Using intermediate and advanced Chinese students as the research subjects, the HSK corpus and questionnaire surveys were used to explore the error types in the construction "X什么(Y)+都+Z". The types of errors can be divided into component errors, relationship errors between components, and collocation errors between components and sentences. The main reasons for the errors are the particularity of the construction, the immature development of construction teaching, the negative transfer of students' mother tongue, as well as generalization and avoidance of the target language.
The use of the Statistical Hypothesis Testing procedure to determine type I and type II errors was linked to the measurement of sensitivity and specificity in clinical trial tests and experimental pathogen detection techniques. A theoretical analysis of establishing these types of errors was made and compared to the determination of False Positive, False Negative, True Positive and True Negative. Experimental laboratory detection methods used to detect Cryptosporidium spp. were used to highlight the relationship between hypothesis testing, sensitivity, specificity and predictive values. The study finds that sensitivity and specificity for the two laboratory methods used for Cryptosporidium detection were low, hence lowering the probability of detecting a "false null hypothesis" for the presence of Cryptosporidium in the water samples using either the Microscopic or the PCR method. Nevertheless, both procedures for Cryptosporidium detection had higher "true negatives", increasing their probability of failing to reject a "true null hypothesis", with a specificity of 1.00 for both the Microscopic and PCR laboratory detection methods.
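The link drawn here between the 2x2 decision table of hypothesis testing and diagnostic accuracy can be made concrete with a short calculation; the counts below are purely illustrative, not the study's data.

```python
# Illustrative counts (not the study's data) for a 2x2 diagnostic table.
TP, FP, FN, TN = 45, 5, 15, 100

sensitivity = TP / (TP + FN)   # P(test+ | truly positive); a false negative parallels a Type II error
specificity = TN / (TN + FP)   # P(test- | truly negative); a false positive parallels a Type I error
ppv = TP / (TP + FP)           # positive predictive value
npv = TN / (TN + FN)           # negative predictive value

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```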
The aim of this paper is to present a generalization of the Shapiro-Wilk W-test or Shapiro-Francia W'-test for application to two or more variables. It consists of calculating all the unweighted linear combinations of the variables and their W- or W'-statistics with Royston's log-transformation and standardization, z_ln(1-W) or z_ln(1-W'). Because the probability of z_ln(1-W) or z_ln(1-W') is calculated for the right tail, negative values are truncated to 0 before taking their sum of squares. Independence in the sequence of these half-normally distributed values is required for the test statistic to follow a chi-square distribution. This assumption is checked using the robust Ljung-Box test. One degree of freedom is lost for each cancelled value. Having defined the new test with its two variants (Q-test and Q'-test), 50 random samples with 4 variables and 20 participants were generated, 20% following a multivariate normal distribution and 80% deviating from this distribution. The new test was compared with Mardia's, runs, and Royston's tests. Central tendency differences in type II error and statistical power were tested using Friedman's test and pairwise comparisons using Wilcoxon's test. Differences in the frequency of successes in statistical decision making were compared using Cochran's Q test and pairwise comparisons using McNemar's test. Sensitivity, specificity and efficiency proportions were compared using McNemar's Z test. The 50 generated samples were classified into five ordered categories of deviation from multivariate normality, the correlation between this variable and the p-value of each test was calculated using Spearman's coefficient, and these correlations were compared. Family-wise error rate corrections were applied. The new test and Royston's test were the best choices, with a very slight advantage of the Q-test over the Q'-test. Based on these promising results, further study and use of this new sensitive, specific and effective test are suggested.
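A hedged sketch of the Q-test idea follows; it is not the authors' exact procedure. "Unweighted linear combinations" is interpreted here as the sums over all non-empty subsets of the variables, and Royston's z_ln(1-W) is approximated by converting the Shapiro-Wilk p-value to an upper-tail normal quantile; the Ljung-Box independence check is omitted.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def q_test_sketch(X):
    """Sum of squared, right-truncated normalized Shapiro-Wilk statistics over all
    unweighted (all-ones) combinations of variables, referred to a chi-square."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    zs = []
    for r in range(1, k + 1):
        for cols in combinations(range(k), r):
            comb = X[:, cols].sum(axis=1)        # unweighted combination (coefficients all 1)
            _, p = stats.shapiro(comb)
            zs.append(stats.norm.isf(p))         # approximate Royston z from the p-value
    zs = np.array(zs)
    kept = zs > 0                                # negative values truncated to 0
    q = np.sum(zs[kept] ** 2)                    # sum of squares of the retained values
    df = int(kept.sum())                         # one degree of freedom lost per cancelled value
    return q, df, stats.chi2.sf(q, df)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(4), np.eye(4), size=20)   # 4 variables, 20 participants
print(q_test_sketch(X))
```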
Error correction, in recent times, is seen as one of the important teaching processes in L2 (second language) learning, because comprehensible input alone is insufficient for acquisition of language. However, few L2 teachers know much about error analysis and how to correct errors in the L2 classroom. Error correction is a very complicated and thorny issue in L2 teaching and learning. L2 teachers, therefore, need to be armed with ways in which errors can be treated to ensure maximum effect yet with less harm to learners. Identifying learners' errors is very important in L2 learning, but how to correct them to give the desired effect is equally important and very challenging for L2 teachers. It is therefore crucial to initiate a study in Ghana to find out how errors are corrected in the Ghanaian English language classroom. This case study used complete observation and semi-structured interviews as data collection strategies to identify the error correction strategies/types English teachers use in the Ghanaian JHS (Junior High School) classroom and how error correction/treatment can be improved to facilitate English language teaching and learning. The findings of the study showed that the explicit error correction technique was the most commonly used, followed by recast, elicitation, metalinguistic clues, clarification request, repetition, and cues. It was also found that the causes of the disparity in the use of the various error correction types were inadequate teacher preparation, incompetence in the English language, limited knowledge of error correction, the caliber of students, and insufficient teaching time. The study identified that the situation can be improved through effective teacher training, in-service training, learner involvement, and effective planning.
With recent advances in biotechnology, the genome-wide association study (GWAS) has been widely used to identify genetic variants that underlie human complex diseases and traits. In case-control GWAS, the typical statistical strategy is traditional logistic regression (LR) based on single-locus analysis. However, such a single-locus analysis leads to the well-known multiplicity problem, with a risk of inflating type I error and reducing power. Dimension reduction-based techniques, such as principal component-based logistic regression (PC-LR) and partial least squares-based logistic regression (PLS-LR), have recently gained much attention in the analysis of high dimensional genomic data. However, the performance of these methods is still not clear, especially in GWAS. We conducted simulations and a real data application to compare the type I error and power of PC-LR, PLS-LR and LR applicable to GWAS within a defined single nucleotide polymorphism (SNP) set region. We found that PC-LR and PLS-LR can reasonably control type I error under the null hypothesis. In contrast, LR, corrected by the Bonferroni method, was more conservative in all simulation settings. In particular, we found that PC-LR and PLS-LR had comparable power and both outperformed LR, especially when the causal SNP was in high linkage disequilibrium with the genotyped ones and had a small effect size in the simulation. Based on SNP set analysis, we applied all three methods to analyze non-small cell lung cancer GWAS data.
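A minimal sketch of the principal-component logistic regression (PC-LR) idea for a SNP set, using scikit-learn on simulated genotypes, is given below; the data, the number of components, the SNP-set size and the causal-SNP effect are illustrative assumptions, not the GWAS analysis reported here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 500, 30                                          # subjects, SNPs in the set (illustrative)
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # additive genotype codes 0/1/2
logit = -1.0 + 0.4 * G[:, 0]                            # one causal SNP with a modest effect (assumption)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))           # case-control status

# PC-LR: replace the SNP set by its leading principal components and fit a single
# logistic regression, instead of one test per SNP (the multiplicity problem).
pc_lr = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=1000))
pc_lr.fit(G, y)
print("in-sample accuracy:", pc_lr.score(G, y))
```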
We describe a patient with a Homo sapiens mutL homolog 1 (MLH1)-associated Lynch syndrome with previous diagnoses of two distinct primary cancers: a sigmoid colon cancer at the age of 39 years, and a right colon cancer at the age of 50 years. The mutation identified in his blood and buccal cells, c.1771delG, p.Asp591Ilefs*25, appears to be a de novo event, as it was not transmitted by either of his parents. This type of de novo event is rare in MLH1, as only three cases have been reported in the literature so far. Furthermore, the discordant results observed between replication error phenotyping and immunohistochemistry highlight the importance of the systematic use of both pre-screening tests in the molecular diagnosis of Lynch syndrome.
Heteroscedasticity and multicollinearity are serious problems when they exist in econometric data. These problems arise from violating the assumptions of equal variance of the error terms and of independence between the explanatory variables of the model. With these assumption violations, the Ordinary Least Squares (OLS) estimator will not give the best linear unbiased, efficient and consistent estimator. In practice, there are several structures of heteroscedasticity and several methods of heteroscedasticity detection. For better estimation results, the best heteroscedasticity detection method must be determined for any structure of heteroscedasticity in the presence of multicollinearity between the explanatory variables of the model. In this paper we examine the effects of multicollinearity on the Type I error rates of some methods of heteroscedasticity detection in the linear regression model, in order to determine the best method of heteroscedasticity detection to use when both problems exist in the model. Nine heteroscedasticity detection methods were considered with seven heteroscedasticity structures. The simulation study was done via a Monte Carlo experiment on a multiple linear regression model with 3 explanatory variables. This experiment was conducted 1000 times with linear model parameters of β0 = 4, β1 = 0.4, β2 = 1.5 and β3 = 3.6. Five (5) levels of multicollinearity were combined with seven (7) different sample sizes. The methods' performances were compared with the aid of a set confidence interval (C.I.) criterion. Results showed that whenever multicollinearity exists in the model with any form of heteroscedasticity structure, the Breusch-Godfrey (BG) test is the best method to determine the existence of heteroscedasticity at all chosen levels of significance.
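One cell of the kind of Monte Carlo experiment described above can be sketched as follows. The Breusch-Pagan test (statsmodels' het_breuschpagan) stands in for the detection method, since that is what the library exposes for heteroscedasticity; the β values come from the abstract, while the correlation level, error structure, sample size and significance level are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
beta = np.array([4.0, 0.4, 1.5, 3.6])       # beta0..beta3 as in the abstract
n, reps, rho, alpha = 50, 1000, 0.9, 0.05   # rho controls the multicollinearity level (assumption)
cov = rho * np.ones((3, 3)) + (1 - rho) * np.eye(3)

rejections = 0
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(3), cov, size=n)    # correlated regressors
    Xd = sm.add_constant(X)
    e = rng.normal(scale=1.0, size=n)                         # homoscedastic errors: the null is true
    y = Xd @ beta + e
    resid = y - Xd @ np.linalg.lstsq(Xd, y, rcond=None)[0]    # OLS residuals
    _, lm_pvalue, _, _ = het_breuschpagan(resid, Xd)
    rejections += lm_pvalue < alpha

print("empirical Type I error rate:", rejections / reps)
```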
The paper discusses the generalization of the constrained Bayesian method (CBM) for arbitrary loss functions and its application to testing directional hypotheses. The problem is stated in terms of false and true discovery rates. One more criterion for assessing the quality of directional hypothesis tests, the Type III error rate, is considered, and the relationship among the discovery rates and the Type III error rate in CBM is examined. The advantage of CBM in comparison with Bayes and frequentist methods is theoretically proved and demonstrated by an example.
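The Type III error (rejecting the null but declaring the wrong direction) can be illustrated with a small frequentist simulation; this is not the CBM procedure of the paper, and the effect size, sample size, test and significance level are illustrative assumptions.

```python
# Type III error rate for a simple directional rule: two-sided t-test, direction taken
# from the sample mean. A "Type III" event is a rejection with the wrong declared sign.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, n, reps, alpha = 0.1, 20, 20_000, 0.05   # small true positive effect (assumption)

rejections = type3 = 0
for _ in range(reps):
    x = rng.normal(loc=mu, size=n)
    t, p = stats.ttest_1samp(x, 0.0)
    if p < alpha:
        rejections += 1
        if t < 0:                  # rejected, but the declared direction is wrong
            type3 += 1

print("rejection rate:", rejections / reps)
print("Type III error rate:", type3 / reps)
```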
In this simulation study, five correlation coefficients, namely Pearson, Spearman, Kendall Tau, permutation-based, and Winsorized, were compared in terms of Type I error rate and power under different scenarios in which the underlying distributions of the variables of interest, the sample sizes and the correlation patterns were varied. Simulation results showed that the Type I error rate and power of the Pearson correlation coefficient were negatively affected by the distribution shapes, especially for small sample sizes, and this effect was even more pronounced for the Spearman Rank and Kendall Tau correlation coefficients when sample sizes were small. In general, the permutation-based and Winsorized correlation coefficients are more robust to distribution shapes and correlation patterns, regardless of sample size. In conclusion, when the assumptions of the Pearson correlation coefficient are not satisfied, the permutation-based and Winsorized correlation coefficients seem to be better alternatives.
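A brief sketch of computing the five coefficients on a single sample is shown below; the Winsorizing limits, permutation count, sample size and noise distribution are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 0.5 * x + rng.standard_t(df=3, size=30)   # heavy-tailed noise (illustrative)

pearson = stats.pearsonr(x, y)[0]
spearman = stats.spearmanr(x, y)[0]
kendall = stats.kendalltau(x, y)[0]

# Winsorized correlation: Pearson r on 10%-Winsorized copies of x and y (assumed limits).
wins = stats.pearsonr(np.asarray(winsorize(x, limits=[0.1, 0.1])),
                      np.asarray(winsorize(y, limits=[0.1, 0.1])))[0]

# Permutation-based p-value for Pearson r: shuffle y and compare |r| (2000 permutations assumed).
perm = np.array([stats.pearsonr(x, rng.permutation(y))[0] for _ in range(2000)])
perm_p = np.mean(np.abs(perm) >= abs(pearson))

print(f"Pearson={pearson:.3f} Spearman={spearman:.3f} Kendall={kendall:.3f} "
      f"Winsorized={wins:.3f} permutation p={perm_p:.3f}")
```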
Exclusive hypothesis testing is a new and special class of hypothesis testing. This kind of testing can be applied in survival analysis to understand the association between genomic information and clinical information about the survival time. Besides, it is well known that Cox's proportional hazards model is the most commonly used model for regression analysis of failure time. In this paper, the authors consider exclusive hypothesis testing for Cox's proportional hazards model with right-censored data. The authors propose comprehensive test statistics for making decisions, and show that the corresponding decision rule can control the asymptotic Type I errors and has good power in theory. The numerical studies indicate that the proposed approach works well in practical situations, and it is applied to a set of real data arising from the Rotterdam Breast Cancer Data study that motivated this work.
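The per-covariate Wald statistics of a fitted Cox proportional hazards model are the kind of building blocks such tests operate on; below is a minimal lifelines sketch on synthetic right-censored data. The column names, data-generating choices and censoring mechanism are hypothetical, and the paper's comprehensive test statistics are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
x1, x2 = rng.normal(size=n), rng.binomial(1, 0.5, size=n)
true_time = rng.exponential(scale=np.exp(-(0.8 * x1 + 0.3 * x2)))   # hazard rises with x1, x2
censor = rng.exponential(scale=1.5, size=n)
df = pd.DataFrame({
    "time": np.minimum(true_time, censor),          # right-censored observation time
    "event": (true_time <= censor).astype(int),     # 1 = event observed, 0 = censored
    "x1": x1, "x2": x2,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "z", "p"]])               # per-covariate Wald statistics
```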
We consider an Adaptive Edge Finite Element Method (AEFEM) for the 3D eddy currents equations with variable coefficients using a residual-type a posteriori error estimator. Both the components of the estimator and certain oscillation terms, due to the occurrence of the variable coefficients, have to be controlled properly within the adaptive loop which is taken care of by appropriate bulk criteria. Convergence of the AEFEM in terms of reductions of the energy norm of the discretization error and of the oscillations is shown. Numerical results are given to illustrate the performance of the AEFEM.
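The "bulk criteria" mentioned here refer to Dörfler-type marking: refine a smallest set of elements whose error indicators account for a fixed fraction of the total estimated error. A dimension-agnostic sketch of that marking step (not the 3D edge-element estimator itself) is given below; the threshold θ and the indicator values are illustrative.

```python
import numpy as np

def bulk_marking(indicators, theta=0.6):
    """Doerfler (bulk) marking: return indices of a minimal set M of elements with
    sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2, taking the largest indicators first."""
    eta2 = np.asarray(indicators, dtype=float) ** 2
    order = np.argsort(eta2)[::-1]                  # largest local error first
    cumulative = np.cumsum(eta2[order])
    n_marked = np.searchsorted(cumulative, theta * eta2.sum()) + 1
    return order[:n_marked]

# Illustrative per-element error indicators (e.g. from a residual-type estimator).
eta = np.array([0.02, 0.30, 0.05, 0.25, 0.01, 0.10])
print(bulk_marking(eta, theta=0.6))   # indices of the elements to refine
```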
In this paper, two formulation theorems of time-difference fidelity schemes for general quadratic and cubic physical conservation laws are respectively constructed and proved, with earlier major conserving time-discretized schemes given as special cases. These two theorems provide a new mathematical basis for solving basic formulation problems of more types of conservative time-discrete fidelity schemes, and even for formulating conservative temporal-spatial discrete fidelity schemes by combining existing instantly conserving space-discretized schemes. Besides, the two theorems can also solve two large categories of problems concerning linear and nonlinear computational instability. The traditional global spectral-vertical finite-difference semi-implicit model for the baroclinic primitive equations is currently used in many countries for operational weather forecasting and numerical simulations of the general circulation. The present work, based on Theorem 2 formulated in this paper, develops and realizes a high-order total energy conserving semi-implicit time-difference fidelity scheme for the global spectral-vertical finite-difference model of the baroclinic primitive equations. Prior to this, such a basic formulation problem had long remained unsolved, in both theory and practice. The total energy conserving semi-implicit scheme formulated here is applicable to long-term numerical integration of real data. An experiment of thirteen 30-day numerical integrations with FGGE data indicates that the new type of total energy conserving semi-implicit fidelity scheme can indeed correct the systematic deviation in energy and mass conservation of the traditional scheme. It should be particularly noted that, under the experimental conditions of the present work, the systematic errors induced by the violation of physical conservation laws in the time-discretization of traditional scheme designs (called type Z errors for short) can contribute up to one-third of the total systematic root-mean-square (RMS) error by the end of the second week of the integration and exceed one half of the total amount four weeks in. In contrast, by realizing a total energy conserving semi-implicit fidelity scheme and thereby eliminating the corresponding type Z errors, roughly one-fourth of the RMS errors in the traditional forecast cases can be reduced, on average, by the end of the second week of the integration, and more than one-third on average by four weeks. In addition, the experimental results also reveal that, in a sense, the effects of type Z errors are no less significant than those of the real topographic forcing of the model. The prospects of the new type of total energy conserving fidelity schemes are very encouraging.
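A toy illustration of why a conserving time discretization matters (not the spectral-vertical model of the paper): for the harmonic oscillator, whose energy is a quadratic invariant, the implicit midpoint rule conserves the discrete energy exactly while forward Euler drifts; the step size and integration length are illustrative.

```python
# Harmonic oscillator dx/dt = v, dv/dt = -x; the energy E = (x^2 + v^2)/2 is a quadratic invariant.
def forward_euler(x, v, dt):
    return x + dt * v, v - dt * x

def implicit_midpoint(x, v, dt):
    # Midpoint rule (x1, v1) = (x, v) + dt * f((x + x1)/2, (v + v1)/2); for this linear
    # system the implicit step can be solved in closed form.
    a = dt / 2
    denom = 1 + a * a
    x1 = ((1 - a * a) * x + dt * v) / denom
    v1 = ((1 - a * a) * v - dt * x) / denom
    return x1, v1

dt, steps = 0.1, 1000
xe, ve = 1.0, 0.0
xm, vm = 1.0, 0.0
for _ in range(steps):
    xe, ve = forward_euler(xe, ve, dt)
    xm, vm = implicit_midpoint(xm, vm, dt)

print("energy, forward Euler    :", 0.5 * (xe**2 + ve**2))   # drifts far above 0.5
print("energy, implicit midpoint:", 0.5 * (xm**2 + vm**2))   # stays at 0.5
```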
Bayesian adaptive randomization has attracted increasing attention in the literature and has been implemented in many phase II clinical trials. The doubly adaptive biased coin design (DBCD) is a superior choice among response-adaptive designs owing to its promising properties. In this paper, we propose a randomized design that combines Bayesian adaptive randomization with the doubly adaptive biased coin design. By selecting a fixed tuning parameter, the proposed randomization procedure can target an explicit allocation proportion and assign more patients to the better treatment simultaneously. Moreover, the proposed randomization is efficient at detecting treatment differences. We illustrate the proposed design through its applications to both discrete and continuous responses, and evaluate its operating features through simulation studies.
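In a DBCD, the next patient is assigned to treatment 1 with a probability that pulls the current allocation toward a target proportion; a common choice is the Hu-Zhang allocation function, sketched below with an illustrative tuning parameter γ. This is a generic DBCD step, not the paper's Bayesian-adaptive combination.

```python
def dbcd_probability(current_prop, target, gamma=2.0):
    """Hu-Zhang DBCD allocation function g(x, rho): probability of assigning the next
    patient to treatment 1, given current proportion x on treatment 1 and target rho."""
    x, rho = current_prop, target
    if x in (0.0, 1.0):                       # guard the boundary cases
        return 1.0 - x
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# Example: target 60% of patients on treatment 1, but only 45% allocated there so far.
print(dbcd_probability(current_prop=0.45, target=0.60))   # > 0.60, pushing toward the target
```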
Covariate-adaptive randomisation has a long history of applications in clinical trials. Shao, Yu, and Zhong [(2010). A theory for testing hypotheses under covariate-adaptive randomization. Biometrika, 97, 347–360] and Shao and Yu [(2013). Validity of tests under covariate-adaptive biased coin randomization and generalized linear models. Biometrics, 69, 960–969] showed that the simple t-test is conservative under covariate-adaptive biased coin (CABC) randomisation in terms of type I error, and proposed a valid test using the bootstrap. Under a general additive model with CABC randomisation, we construct a calibrated t-test that shares the same property as the bootstrap method in Shao et al. (2010) but does not require the large computation demanded by the bootstrap method. Some simulation results are presented to show the finite sample performance of the calibrated t-test.
To test variance homogeneity, various likelihood-ratio based tests, such as Bartlett's test, have been proposed. The null distributions of these tests were generally derived asymptotically or approximately. We re-examine the restrictive maximum likelihood ratio (RELR) statistic and suggest a Monte Carlo algorithm to compute its exact null distribution, and so its p-value. It is much easier to implement than most existing methods. Simulation studies indicate that the proposed procedure is also superior to its competitors in terms of type I error and power. We analyse an environmental dataset for an illustration.
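The Monte Carlo idea, computing a test's p-value from a simulated null distribution rather than an asymptotic approximation, can be sketched with Bartlett's statistic (which scipy provides; the RELR statistic of the paper is not reproduced). The group sizes, scales and replication count below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def monte_carlo_pvalue(samples, reps=5000, seed=0):
    """Monte Carlo p-value for Bartlett's statistic: simulate its null distribution by
    drawing homoscedastic normal groups with the same sizes as the observed data."""
    rng = np.random.default_rng(seed)
    observed, _ = stats.bartlett(*samples)
    sizes = [len(s) for s in samples]
    null_stats = np.empty(reps)
    for i in range(reps):
        sim = [rng.normal(size=n) for n in sizes]
        null_stats[i], _ = stats.bartlett(*sim)
    return observed, np.mean(null_stats >= observed)

rng = np.random.default_rng(1)
groups = [rng.normal(scale=1.0, size=8),
          rng.normal(scale=1.5, size=10),
          rng.normal(scale=1.0, size=12)]
stat, p = monte_carlo_pvalue(groups)
print(f"Bartlett statistic = {stat:.3f}, Monte Carlo p-value = {p:.3f}")
```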