Medical research data are often skewed and heteroscedastic. It has therefore become common practice to log-transform data in regression analysis in order to stabilize the variance. Regression analysis on log-transformed data estimates the relative effect, whereas it is often the absolute effect of a predictor that is of interest. We propose a maximum likelihood (ML)-based approach to estimate a linear regression model on log-normal, heteroscedastic data. The new method was evaluated with a large simulation study. Log-normal observations were generated according to the simulation models, and parameters were estimated using the new ML method, ordinary least-squares regression (LS), and weighted least-squares regression (WLS). All three methods produced unbiased estimates of parameters and expected response, and ML and WLS yielded smaller standard errors than LS. The approximate normality of the Wald statistic, used for tests of the ML estimates, produced the correct type I error risk in most situations. Only ML and WLS produced correct confidence intervals for the estimated expected value. ML had the highest power for tests regarding β1.
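The ML idea above can be sketched as follows. This is our own illustrative construction, not the paper's exact model: we assume Y | x is log-normal with the mean modeled on the original scale, E[Y | x] = b0 + b1·x, and a common log-scale sigma, so that mu(x) = log(b0 + b1·x) − sigma²/2. All variable names and parameter values are assumptions for the sketch.

```python
# Illustrative sketch only: assumes Y | x log-normal with an
# absolute-effect mean E[Y | x] = b0 + b1*x and common log-scale sigma.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 2000
x = rng.uniform(1.0, 5.0, n)
b0_true, b1_true, sigma_true = 2.0, 1.5, 0.4
mu = np.log(b0_true + b1_true * x) - sigma_true**2 / 2
y = rng.lognormal(mean=mu, sigma=sigma_true)

def negloglik(theta):
    """Log-normal negative log-likelihood, dropping terms constant in theta."""
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mean_y = b0 + b1 * x
    if np.any(mean_y <= 0):          # E[Y | x] must stay positive
        return np.inf
    m = np.log(mean_y) - sigma**2 / 2
    z = (np.log(y) - m) / sigma
    return np.sum(np.log(sigma) + 0.5 * z**2)

res = minimize(negloglik, x0=[1.0, 1.0, np.log(0.5)],
               method="Nelder-Mead", options={"maxiter": 5000})
b0_hat, b1_hat = res.x[0], res.x[1]
sigma_hat = np.exp(res.x[2])
```

Because the mean is parameterized on the original scale, b1_hat estimates the absolute effect of x directly, unlike a regression fitted to log(y).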
Compositional data, such as relative information, are a crucial aspect of machine learning and related fields. They are typically recorded as closed data, i.e., data that sum to a constant such as 100%. The statistical linear model is the most widely used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data are present. The linear regression model is a commonly used statistical modeling technique applied in many settings to find relationships between variables of interest. When estimating linear regression parameters, which are useful for tasks such as future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can make data recovery costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables or data. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize this expected log-likelihood. This study examined how well the EM algorithm performed on a simulated compositional dataset with missing observations, using both ordinary least-squares and robust least-squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
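The E/M alternation described above can be sketched with a toy example. This is our own construction under a bivariate normal working model; the paper applies EM to compositional data in Aitchison geometry, which this sketch does not reproduce, and it uses a simplified conditional-mean E-step (full EM would also carry the conditional variance into the covariance update).

```python
# Minimal EM-style imputation sketch under a bivariate normal working model.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.6], [0.6, 1.0]], n)
mask = rng.random(n) < 0.2           # drop ~20% of the second column
X_obs = X.copy()
X_obs[mask, 1] = np.nan

# Start from mean imputation, then alternate:
#   E-step: replace each missing x2 by its conditional mean given x1;
#   M-step: re-estimate the mean vector and covariance from completed data.
Z = X_obs.copy()
Z[mask, 1] = np.nanmean(X_obs[:, 1])
for _ in range(50):
    mu = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)
    Z[mask, 1] = mu[1] + S[0, 1] / S[0, 0] * (Z[mask, 0] - mu[0])

mu_hat = Z.mean(axis=0)
```

Unlike plain mean imputation, the conditional-mean update exploits the correlation between the columns, so the imputed values track x1 rather than collapsing to a single constant.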
One of the most powerful algorithms for obtaining maximum likelihood estimates in many incomplete-data problems is the EM algorithm. However, when the parameters satisfy a set of nonlinear restrictions, it is difficult to apply the EM algorithm directly. In this paper, we propose an asymptotic maximum likelihood estimation procedure under a set of nonlinear inequality restrictions on the parameters, in which the EM algorithm can be used. Essentially, this kind of estimation problem is a stochastic optimization problem in the M-step. We make use of methods from stochastic optimization to overcome the difficulty caused by the nonlinearity of the given constraints.
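A constrained M-step of this general flavor can be illustrated with a toy problem. This is our own construction, not the paper's procedure: up to constants, the expected complete-data Gaussian log-likelihood is maximized by minimizing the squared distance to the sample mean, here subject to the nonlinear inequality |mu|² ≤ 1.

```python
# Toy constrained M-step: minimize squared distance to the sample mean
# subject to a nonlinear inequality constraint, solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal([0.9, 0.9], 0.5, size=(200, 2))
xbar = data.mean(axis=0)             # unconstrained MLE, outside the disc

def obj(mu):
    """Expected complete-data negative log-likelihood, up to constants."""
    return np.sum((mu - xbar) ** 2)

cons = {"type": "ineq", "fun": lambda mu: 1.0 - np.sum(np.asarray(mu) ** 2)}
res = minimize(obj, x0=np.zeros(2), constraints=[cons], method="SLSQP")
mu_hat = res.x                        # projection of xbar onto the unit disc
```

Here the constraint is active, so the constrained maximizer lands on the boundary of the feasible set rather than at the unconstrained MLE.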
Logistic regression is usually used to model probabilities of categorical responses as functions of covariates. However, the link connecting the probabilities to the covariates is non-linear. We show in this paper that when the cross-classification of all the covariates and the dependent variable has no empty cells, the probabilities of responses can be expressed as linear functions of the covariates. We demonstrate this for both dichotomous and polytomous dependent variables.
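The claim can be checked directly in the simplest case, a single binary covariate with no empty cells. The example below is our own construction with made-up counts: the saturated logistic fit has a closed form, and its fitted probabilities are exactly the cell proportions, which are linear in x.

```python
# One binary covariate, no empty cells: saturated logistic fit vs.
# the same probabilities written as a linear function of x.
import numpy as np

# Cross-classification counts: rows x in {0, 1}, columns y in {0, 1}.
counts = np.array([[30, 10],    # x = 0: 10/40 successes
                   [12, 28]])   # x = 1: 28/40 successes
p_cell = counts[:, 1] / counts.sum(axis=1)      # observed P(y = 1 | x)

def logit(p):
    return np.log(p / (1 - p))

# Saturated logistic fit: logit(p) = a + b*x matches both cells exactly.
a = logit(p_cell[0])
b = logit(p_cell[1]) - logit(p_cell[0])
xs = np.array([0.0, 1.0])
p_logistic = 1.0 / (1.0 + np.exp(-(a + b * xs)))

# The same probabilities expressed as a linear function of x.
p_linear = p_cell[0] + (p_cell[1] - p_cell[0]) * xs
```

Both parameterizations reproduce the identical cell probabilities; the non-linearity of the logit link disappears because the saturated model has one free parameter per cell.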
For language identification in real-world conditions, differences in non-language factors such as channel type and conversation content cause a mismatch between test and training conditions, which degrades recognition performance. Taking a phone recognizer followed by vector space model (PRVSM) system as the language identification system, this paper introduces a joint adaptation algorithm to resolve the mismatch between test and training conditions. Three adaptation methods are studied for different stages of the system: 1) acoustic model adaptation based on constrained maximum likelihood linear regression (CMLLR); 2) phonotactic feature vector adaptation based on a global N-gram model; 3) support vector machine (SVM) adaptation in the VSM stage. After combining these adaptation techniques, the performance of the PRVSM system improves substantially: on the NIST LRE 2009 test set, for 30 s, 10 s, and 3 s test segments, the equal error rate (EER) of PRVSM systems built on different phone recognizers drops by a relative 18%-23%, 12%-20%, and 5%-9%, respectively.
This paper studies text-independent speaker recognition using maximum likelihood linear regression (MLLR) transform matrices, borrowed from speaker adaptation, as features. An MLLR supervector SVM (MLLRSV-SVM) speaker recognition algorithm based on a universal background model is introduced, and high-level phone clustering is applied on top of it to further improve recognition performance. After applying several channel compensation techniques, on the NIST SRE 2006 one-training-segment/one-test-segment same-channel and cross-channel databases, the MLLR-feature-based system performs close to the best competing systems and is strongly complementary to them; simple linear fusion greatly improves recognition performance.
Funding: Supported by the Teaching Reform Project of Zhengzhou University of Science and Technology (KFCZ201909), the National Foundation for Cultivating Scientific Research Projects of Zhengzhou Institute of Technology (GJJKTPY2018K4), the Henan Big Data Double Base of Zhengzhou Institute of Technology (20174101546503022265), and the Key Scientific Research Foundation of the Education Bureau of Henan Province (20B110020).