Funding: Supported by the National Natural Science Foundation of China (12061017, 12361055) and the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Abstract: The existing blockwise empirical likelihood (BEL) method blocks the observations or their analogues, which has proven useful in several dependent-data settings. In this paper, we introduce a new BEL (NBEL) method that instead blocks the scoring functions in high-dimensional cases. Using the NBEL method, we construct confidence regions for the parameters of spatial autoregressive models with spatial autoregressive disturbances (SARAR models) when the parameter dimension is high. The NBEL ratio statistics are shown to be asymptotically χ^(2)-type distributed, which yields NBEL-based confidence regions for the SARAR parameters. A simulation study compares the performance of the NBEL method with the usual EL method.
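As a concrete illustration of the empirical likelihood machinery underlying the NBEL approach, the sketch below computes the ordinary EL ratio statistic for a scalar mean via the Lagrange-multiplier dual (Owen-style); the blockwise construction on scoring functions and the SARAR structure from the paper are not reproduced, and the data and sample size are hypothetical.

```python
import numpy as np

def el_log_ratio(x, mu, iters=50):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen-style dual)."""
    z = x - mu                      # estimating-function values g_i(mu)
    lam = 0.0                       # Lagrange multiplier
    for _ in range(iters):          # Newton iterations on the dual problem
        w = 1.0 + lam * z
        grad = np.sum(z / w)
        hess = -np.sum((z / w) ** 2)
        lam -= grad / hess
    return 2.0 * np.sum(np.log(1.0 + lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)   # hypothetical sample
stat = el_log_ratio(x, mu=1.0)                 # compare with a chi2(1) quantile
print(stat)
```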
Funding: Supported by the Fund for Foreign Scholars in University Research and Teaching Programs (B18039).
Abstract: Beamspace super-resolution methods for elevation estimation in multipath environments have attracted significant attention, especially the beamspace maximum likelihood (BML) algorithm. However, the difference beam is rarely used in super-resolution methods, particularly for low-elevation estimation. The target spatial information carried by the difference beam differs from that carried by the sum beam, and using difference beams does not significantly increase the complexity of the system or the algorithms. This paper therefore applies the difference beam to the beamformer to improve the elevation-estimation performance of the BML algorithm, and the direction and number of beams can be adjusted according to practical needs. The theoretical root mean square error (RMSE) of the target elevation angle and the computational complexity of the proposed algorithms are analyzed. Finally, computer simulations and real-data processing results demonstrate the effectiveness of the proposed algorithms.
Funding: Supported by the National Natural Science Foundation of China (12061017, 12161009) and the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Abstract: In this paper, we study spatial cross-sectional data models in the form of the matrix exponential spatial specification (MESS), where MESS terms appear in both the dependent variable and the errors. Empirical likelihood (EL) ratio statistics are established for the parameters of the MESS model. The limiting distributions of the EL ratio statistics are shown to be chi-square, which is used to construct confidence regions for the model parameters. Simulation experiments compare the performance of confidence regions based on the EL method and on the normal approximation method.
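To make the MESS structure concrete, the sketch below simulates from a model of the form e^{αW} y = Xβ + u with e^{τM} u = ε, using scipy's matrix exponential; the weight matrices, parameter values, and sample size are hypothetical, and the EL inference from the paper is not implemented here.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 50
# hypothetical row-normalised contiguity weights (circular neighbours)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
M = W.copy()                               # same weights for the error process

alpha, tau, beta = 0.4, -0.3, np.array([1.0, 2.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n)

# MESS in both the outcome and the disturbance:
#   e^{alpha W} y = X beta + u,   e^{tau M} u = eps  =>  u = e^{-tau M} eps
u = expm(-tau * M) @ eps
y = expm(-alpha * W) @ (X @ beta + u)
print(y[:5])
```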
Funding: Supported in part by the National Key Research and Development Program of China (2019YFB1503700) and the Hunan Natural Science Foundation-Science and Education Joint Project (2019JJ70063).
Abstract: Noise from finite element simulation often causes the model to fall into a local optimum and to overfit during generator optimization. This paper therefore proposes a Gaussian process regression (GPR) model based on conditional likelihood lower bound search (CLLBS) for generator design optimization, which filters the noise in the data and searches for the global optimum. The efficiency optimization of a 15 kW permanent magnet synchronous motor is taken as an example. First, the method uses elementary effect analysis to select the sensitive variables and combines an evolutionary algorithm to design the Latin hypercube sampling plan; the generator-converter system is then simulated on a co-simulation platform to obtain data. A Gaussian process regression model combined with the conditional likelihood lower bound search is established, and a chi-square test is used to assess the global accuracy of the model. Second, once the model reaches the required accuracy, the Pareto front is obtained with the NSGA-II algorithm, treating the maximum output torque as a constraint. Last, the constrained optimization is transformed into an unconstrained problem by introducing a constrained expected improvement (CEI) optimization method based on the re-interpolation model, and the optimization results of the Gaussian process regression model are cross-validated. The above methods increase the generator efficiency by 0.76% and 0.5%, respectively, and the approach can be used for rapid modeling and multi-objective optimization of generator systems.
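As a minimal illustration of the regression component only, the sketch below fits a standard Gaussian process regression model with scikit-learn to noisy simulation-like data; the conditional likelihood lower bound search, co-simulation data, and NSGA-II/CEI optimization stages from the paper are not reproduced, and all data here are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(40, 2))          # two hypothetical design variables
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=40)  # noisy response

# The white-noise kernel term plays the role of filtering simulation noise
kernel = 1.0 * RBF(length_scale=[0.2, 0.2]) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gpr.predict([[0.3, 0.7]], return_std=True)
print(mean, std)   # predictive mean and uncertainty at a new design point
```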
Funding: Supported by the Guizhou Province Science and Technology Plan Project, No. ZK-2023-195, and the 2021 Health Commission of Guizhou Province Project, No. gzwkj2021-150.
Abstract: BACKGROUND: Major depressive disorder (MDD) in adolescents and young adults contributes significantly to global morbidity, with inconsistent findings on brain structural changes from structural magnetic resonance imaging studies. Activation likelihood estimation (ALE) offers a method to synthesize these diverse findings and identify consistent brain anomalies. METHODS: We performed a comprehensive literature search in the PubMed, Web of Science, Embase, and Chinese National Knowledge Infrastructure databases for neuroimaging studies on MDD among adolescents and young adults published up to November 19, 2023. Two independent researchers performed the study selection, quality assessment, and data extraction. The ALE technique was employed to synthesize findings on localized brain anomalies in MDD patients, supplemented by sensitivity analyses. RESULTS: Twenty-two studies, comprising fourteen diffusion tensor imaging (DTI) studies and eight voxel-based morphometry (VBM) studies and involving 451 MDD patients and 465 healthy controls (HCs) for DTI and 664 MDD patients and 946 HCs for VBM, were included. DTI-based ALE demonstrated significant reductions in fractional anisotropy (FA) values in the right caudate head, right insula, and right lentiform nucleus putamen in adolescents and young adults with MDD compared with HCs, with no regions exhibiting increased FA values. VBM-based ALE did not demonstrate significant alterations in gray matter volume. Sensitivity analyses highlighted consistent findings in the right caudate head (11 of 14 analyses), right insula (10 of 14 analyses), and right lentiform nucleus putamen (11 of 14 analyses). CONCLUSION: Structural alterations in the right caudate head, right insula, and right lentiform nucleus putamen in young MDD patients may contribute to the recurrent nature of the disorder, offering insights for targeted therapies.
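For readers unfamiliar with the ALE algorithm itself, the toy sketch below shows its core step on a 1-D grid: each reported focus is modeled as a Gaussian modeled-activation (MA) map, and per-study maps are combined through the probabilistic union 1 - prod(1 - MA); the coordinates, smoothing width, and grid are hypothetical, and the permutation-based significance testing used in real ALE meta-analyses is omitted.

```python
import numpy as np

grid = np.linspace(-60.0, 60.0, 121)          # toy 1-D coordinate axis (mm)

def ma_map(foci, sigma=8.0):
    """Modeled-activation map of one study: max over Gaussian kernels at its foci."""
    kernels = np.exp(-0.5 * ((grid[None, :] - np.asarray(foci)[:, None]) / sigma) ** 2)
    return kernels.max(axis=0)                 # values in [0, 1]

studies = [[-20.0, 15.0], [-18.0], [30.0, -22.0]]   # hypothetical reported foci
ma = np.array([ma_map(f) for f in studies])

ale = 1.0 - np.prod(1.0 - ma, axis=0)          # ALE: union of per-study MA maps
print(grid[np.argmax(ale)])                    # location of strongest convergence
```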
Funding: Supported by the 2024 Guizhou Provincial Health Commission Science and Technology Fund Project, No. gzwkj2024-475, and the 2022 Provincial Clinical Key Specialty Construction Project.
Abstract: BACKGROUND: Adolescent major depressive disorder (MDD) is a significant mental health concern that often leads to recurrent depression in adulthood. Resting-state functional magnetic resonance imaging (rs-fMRI) offers unique insights into the neural mechanisms underlying this condition. However, despite previous research, the specific vulnerable brain regions affected in adolescent MDD patients have not been fully elucidated. AIM: To identify consistent vulnerable brain regions in adolescent MDD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS: We performed a comprehensive literature search through July 12, 2023, for studies investigating brain functional changes in adolescent MDD patients. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF (fALFF) analyses, and compared the regions of aberrant spontaneous neural activity in adolescents with MDD vs healthy controls (HCs) using ALE. RESULTS: Ten studies (369 adolescent MDD patients and 313 HCs) were included. Combining the ReHo and ALFF/fALFF data, activity in the right cuneus and left precuneus was lower in adolescent MDD patients than in HCs (voxel size: 648 mm³, P < 0.05), and no brain region exhibited increased activity. Based on the ALFF data alone, we found decreased activity in the right cuneus and left precuneus in adolescent MDD patients (voxel size: 736 mm³, P < 0.05), again with no regions exhibiting increased activity. CONCLUSION: Through ALE meta-analysis, we consistently identified the right cuneus and left precuneus as vulnerable brain regions in adolescent MDD patients, increasing our understanding of the neuropathology of affected adolescents.
Abstract: Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed, but the problem of over-dispersion remains unresolved, especially in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in count data assumed to follow the negative binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data; the new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, hence its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested using simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The algorithm works best for large datasets, though it also performs well for small and medium-sized datasets with little to no error in the location of change points; it correctly detects changes when they are present and reports none when no change exists. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis showed that the test is most powerful when the simulated change point lies midway through the sample, as opposed to near the periphery, and more powerful when the change is located three-quarters of the way through the sample than when it is closer (one quarter of the way) to the first observation.
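A minimal sketch of the single change point step is given below: negative binomial log-likelihoods are maximized on each candidate split of a hypothetical series, and the likelihood ratio statistic is maximized over split positions; the recursive binary segmentation, critical-value calibration, and multiple change point bookkeeping from the paper are not included.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def nb_loglik(x):
    """Maximized negative binomial log-likelihood of a segment (r > 0, 0 < p < 1)."""
    def nll(theta):
        r, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        return -nbinom.logpmf(x, r, p).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return -res.fun

def lr_scan(x, min_seg=5):
    """Likelihood ratio statistic maximized over candidate single change points."""
    full = nb_loglik(x)
    best_tau, best_stat = None, -np.inf
    for tau in range(min_seg, len(x) - min_seg):
        stat = 2.0 * (nb_loglik(x[:tau]) + nb_loglik(x[tau:]) - full)
        if stat > best_stat:
            best_tau, best_stat = tau, stat
    return best_tau, best_stat

rng = np.random.default_rng(3)
x = np.concatenate([rng.negative_binomial(5, 0.5, 60),    # mean 5 before the change
                    rng.negative_binomial(5, 0.25, 60)])  # mean 15 after the change
print(lr_scan(x))
```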
Funding: The Michigan Department of Transportation (MDOT) provided financial support for this study (report no. SPR1723).
Abstract: The calibration of transfer functions is essential for accurate pavement performance predictions in the Pavement ME design. Several studies have used the least squares approach to calibrate these transfer functions. Least squares is a widely used, simple approach that rests on certain assumptions, and the literature shows that these assumptions may not hold for non-normal distributions. This study introduces a new methodology for calibrating the transverse cracking and international roughness index (IRI) models in rigid pavements using maximum likelihood estimation (MLE). Synthetic data for transverse cracking, with and without variability, are generated to illustrate the applicability of MLE using different known probability distributions (exponential, gamma, log-normal, and negative binomial). The approach uses measured data from the Michigan Department of Transportation's (MDOT) pavement management system (PMS) database for 70 jointed plain concrete pavement (JPCP) sections to calibrate and validate the transfer functions. The MLE approach is combined with resampling techniques to improve the robustness of the calibration coefficients. The results show that the MLE transverse cracking model using the gamma distribution consistently outperforms least squares for both synthetic and observed data. For observed data, MLE parameter estimates produced lower SSE and bias than least squares (e.g., for the transverse cracking model, the SSE values are 3.98 vs. 4.02, and the bias values are 0.00 and -0.41). Although the negative binomial distribution is the most suitable fit for the IRI model under MLE, the least squares results are slightly better than MLE; the bias values are -0.312 and 0.000 for the MLE and least squares methods, respectively. Overall, the findings indicate that MLE is a robust calibration method, especially for non-normally distributed data such as transverse cracking.
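The sketch below contrasts the two estimation ideas on a toy calibration problem: a hypothetical one-parameter transfer function is fitted to synthetic cracking data by ordinary least squares and by maximum likelihood under a gamma distribution whose mean follows the transfer function; the actual Pavement ME transfer functions and MDOT data are not used.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

rng = np.random.default_rng(4)
age = rng.uniform(2.0, 25.0, size=70)                  # hypothetical section ages
true_c, shape = 0.15, 2.0
mean = true_c * age ** 1.5                             # toy transfer function E[y] = c * age^1.5
y = gamma.rvs(a=shape, scale=mean / shape, random_state=rng)   # skewed "measured" cracking

def sse(c):                                            # least squares objective
    return np.sum((y - c * age ** 1.5) ** 2)

def negloglik(c):                                      # gamma MLE objective (shape treated as known)
    mu = c * age ** 1.5
    return -gamma.logpdf(y, a=shape, scale=mu / shape).sum()

c_ls = minimize_scalar(sse, bounds=(1e-4, 1.0), method="bounded").x
c_mle = minimize_scalar(negloglik, bounds=(1e-4, 1.0), method="bounded").x
print(c_ls, c_mle)
```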
Abstract: In longitudinal data analysis, our primary interest is in estimating the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging, particularly because of correlated responses. Marginal models, such as generalized estimating equations (GEEs), have received much attention; they rely on assumptions about the first two moments of the data and a working correlation structure, and confidence regions and hypothesis tests are constructed from asymptotic normality. This approach is sensitive to misspecification of the variance function and the working correlation structure, which may yield inefficient and inconsistent estimates and lead to wrong conclusions. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm based on EL principles for estimating the regression parameters and constructing their confidence regions, and we apply the proposed method to two case examples.
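For comparison with the EL procedure discussed above, the sketch below fits a standard marginal (GEE) model with an exchangeable working correlation in statsmodels on simulated longitudinal data; the dataset, cluster structure, and covariates are hypothetical, and the paper's EL-based confidence regions are not implemented here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_subj, n_obs = 50, 4
subj = np.repeat(np.arange(n_subj), n_obs)                  # hypothetical subject ids
x = rng.normal(size=n_subj * n_obs)
b = np.repeat(rng.normal(scale=0.5, size=n_subj), n_obs)    # subject effect inducing correlation
y = 1.0 + 2.0 * x + b + rng.normal(scale=1.0, size=n_subj * n_obs)

df = pd.DataFrame({"y": y, "x": x, "subj": subj})
X = sm.add_constant(df[["x"]])

# GEE with Gaussian family and an exchangeable working correlation structure
model = sm.GEE(df["y"], X, groups=df["subj"],
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```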
Funding: The National Natural Science Foundation of China (No. 11171065) and the Natural Science Foundation of Jiangsu Province (No. BK2011058).
Abstract: To detect whether data conform to a given model, the data must be diagnosed statistically. The diagnostic problem in generalized nonlinear models based on maximum Lq-likelihood estimation is considered. Three diagnostic statistics are used to detect whether outliers exist in the data set. Simulation results show that when the sample size is small, the values of the diagnostic statistics based on maximum Lq-likelihood estimation are greater than those based on maximum likelihood estimation. As the sample size increases, the difference between the values of the diagnostic statistics under the two estimation methods diminishes gradually. This means that outliers can be distinguished more easily through the maximum Lq-likelihood method than through maximum likelihood estimation.
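As a minimal sketch of maximum Lq-likelihood estimation itself, the code below estimates the mean and standard deviation of a contaminated normal sample by maximizing the sum of Lq(f(x_i; θ)), where Lq(u) = (u^(1-q) - 1)/(1 - q) and ordinary MLE is recovered as q → 1; the distortion parameter q and the data are illustrative only and do not reproduce the paper's diagnostic statistics.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def Lq(u, q):
    """Lq transform of a density value; reduces to log(u) as q -> 1."""
    return (u ** (1.0 - q) - 1.0) / (1.0 - q)

def mlqe(x, q):
    def objective(theta):
        mu, log_sigma = theta
        f = norm.pdf(x, loc=mu, scale=np.exp(log_sigma))
        return -np.sum(Lq(f, q))
    res = minimize(objective, x0=[np.median(x), np.log(x.std())], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])  # 5% outliers
print(mlqe(x, q=0.9))      # q < 1 downweights low-density (outlying) observations
print(x.mean(), x.std())   # moment estimates distorted by the outliers
```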
Funding: The Undergraduate Education Highland Construction Project of Shanghai, the Key Course Construction of the Shanghai Education Committee (No. 20075302), and the Key Technology R&D Program of Shanghai Municipality (No. 08160510600).
Abstract: To obtain life information for the vacuum fluorescent display (VFD) in a short time, a constant stress accelerated life test (CSALT) model is established with elevated filament temperature, and four constant stress tests are conducted. The Weibull function is applied to describe the life distribution of the VFD, and maximum likelihood estimation (MLE) with its iterative flow chart is used to calculate the shape and scale parameters. Furthermore, the accelerated life equation is determined by the least squares method, the Kolmogorov-Smirnov test is performed to verify whether the VFD life follows the Weibull distribution, and self-developed software is employed to predict the average life and the reliable life. Statistical data analysis shows that the test plans are feasible and versatile, that the VFD life follows the Weibull distribution, and that the VFD acceleration model satisfies the linear Arrhenius equation. The proposed method and the estimated life information of the VFD provide useful guidance to its manufacturers and customers.
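A minimal sketch of the two estimation steps is shown below: Weibull shape and scale parameters are obtained by maximum likelihood at each stress level with scipy, and an Arrhenius-type line ln(scale) versus 1/T is then fitted by least squares; the temperatures, failure times, and number of stress levels are hypothetical and do not come from the paper's tests.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
temps_K = np.array([650.0, 700.0, 750.0, 800.0])        # hypothetical filament temperatures
true_scales = 1e5 * np.exp(-0.01 * (temps_K - 650.0))   # shorter life at higher stress

shapes, scales = [], []
for eta in true_scales:
    t = weibull_min.rvs(c=2.0, scale=eta, size=30, random_state=rng)  # simulated failure times
    c_hat, _, eta_hat = weibull_min.fit(t, floc=0)       # MLE with location fixed at zero
    shapes.append(c_hat)
    scales.append(eta_hat)

# Arrhenius-type model: ln(eta) = a + b / T, fitted by least squares
b, a = np.polyfit(1.0 / temps_K, np.log(scales), 1)
print(shapes)
print(a, b)
```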
Funding: The National Natural Science Foundation of China (No. 61105048, 60972165), the Doctoral Fund of the Ministry of Education of China (No. 20110092120034), the Natural Science Foundation of Jiangsu Province (No. BK2010240), the Technology Foundation for Selected Overseas Chinese Scholars, Ministry of Human Resources and Social Security of China (No. 6722000008), and the Open Fund of the Jiangsu Province Key Laboratory for Remote Measuring and Control (No. YCCK201005).
Abstract: An improved Gaussian mixture model (GMM) based clustering method is proposed for the difficult case where the true distribution of the data departs from the assumed GMM. First, an improved model selection criterion, the completed likelihood minimum message length criterion, is derived; it measures both the goodness of fit of the candidate GMM to the data and the goodness of partition of the data. Second, using the proposed criterion as the clustering objective function, an improved expectation-maximization (EM) algorithm is developed that avoids the poor local optima of the standard EM algorithm when estimating the model parameters. The experimental results demonstrate that the proposed method rectifies the over-fitting tendency of representative GMM-based clustering approaches and robustly provides more accurate clustering results.
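As a baseline illustration of GMM-based clustering with a model selection criterion, the sketch below fits mixtures of several sizes with scikit-learn's EM implementation and picks the number of components by BIC; BIC stands in for the completed likelihood minimum message length criterion proposed in the paper, which is not implemented here, and the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
X = np.vstack([rng.normal([0, 0], 0.7, size=(150, 2)),
               rng.normal([4, 1], 0.7, size=(150, 2)),
               rng.normal([1, 4], 0.7, size=(150, 2))])   # three synthetic clusters

# Fit GMMs of increasing size by EM and score each with BIC
models = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
          for k in range(1, 7)]
bics = [m.bic(X) for m in models]

best = models[int(np.argmin(bics))]
print(best.n_components, best.predict(X[:5]))
```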
Funding: Supported by the National Natural Science Foundation of China (10761011) and the Mathematical Tianyuan Fund of the National Natural Science Foundation of China (10626048).
Abstract: Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimation (MQLE) in QLNM is obtained. In an important case, this rate is O(n^(-1/2)(log log n)^(1/2)), which is exactly the rate given by the law of the iterated logarithm (LIL) for partial sums of i.i.d. variables and therefore cannot be improved.
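For context on why this rate is sharp, recall the Hartman-Wintner law of the iterated logarithm for partial sums S_n of i.i.d. variables with mean zero and finite variance σ², which pins the almost-sure fluctuation of the sample mean at exactly the order n^(-1/2)(log log n)^(1/2); this is a standard fact stated here for orientation, not a result taken from the paper.

```latex
\limsup_{n\to\infty} \frac{S_n}{\sqrt{2\sigma^2 n \log\log n}} = 1 \quad \text{a.s.},
\qquad\text{hence}\qquad
\frac{S_n}{n} = O\!\left(n^{-1/2}(\log\log n)^{1/2}\right) \ \text{a.s.}
```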
Abstract: Using maximum likelihood classification, several landscape indexes were adopted to evaluate the landscape structure of the irrigated area of Hongsibao Town, and the landscape pattern and its dynamic change in 1989, 1999, 2003, and 2008 were analyzed on the basis of landscape patches, landscape types, and transfer matrices. The results show that the landscape pattern of the irrigated area of Hongsibao Town changed markedly from 1989 to 2008: patch number, fragmentation, and dominance increased, evenness decreased, and landscape shapes became more regular. The primary landscape type was grassland in 1989 and sand in 2008, a change directly influenced by human activities.
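The sketch below shows the core of a maximum likelihood classifier as commonly used on remote-sensing imagery: each land-cover class is modeled by a multivariate Gaussian estimated from training pixels, and each new pixel is assigned to the class with the highest likelihood; the band values, class names, and training sizes are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)

# Hypothetical training pixels (two spectral bands) for two land-cover classes
train = {
    "grassland": rng.normal([0.30, 0.60], 0.05, size=(200, 2)),
    "sand":      rng.normal([0.55, 0.40], 0.05, size=(200, 2)),
}

# Fit a multivariate Gaussian (mean and covariance) per class
models = {c: multivariate_normal(mean=p.mean(axis=0), cov=np.cov(p, rowvar=False))
          for c, p in train.items()}

def classify(pixel):
    """Assign the pixel to the class with the highest Gaussian likelihood."""
    return max(models, key=lambda c: models[c].logpdf(pixel))

print(classify([0.32, 0.58]), classify([0.53, 0.41]))
```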
Abstract: Fisher [1] proposed a simple method for combining p-values from independent investigations without using detailed information from the original data. In recent years, likelihood-based asymptotic methods have been developed that produce highly accurate p-values. These likelihood-based methods generally require the likelihood function and the standardized maximum likelihood estimate departure calculated on the canonical parameter scale. In this paper, a method is proposed to obtain a p-value by combining the likelihood functions and the standardized maximum likelihood estimate departures of independent investigations for testing a scalar parameter of interest. Examples illustrate the application of the proposed method, and simulation studies compare its accuracy with that of Fisher's method.
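For reference, Fisher's combination rule is easy to state and compute: under the null hypothesis, -2 Σ log p_i follows a chi-square distribution with 2k degrees of freedom. The sketch below evaluates it directly and via scipy's built-in routine on hypothetical p-values; the paper's likelihood-based combination method is not implemented here.

```python
import numpy as np
from scipy import stats

p = np.array([0.08, 0.15, 0.03, 0.20])            # hypothetical p-values from k studies

# Fisher's statistic: -2 * sum(log p) ~ chi2 with 2k df under H0
t = -2.0 * np.sum(np.log(p))
p_combined = stats.chi2.sf(t, df=2 * len(p))

# Same computation via scipy's built-in combiner
res = stats.combine_pvalues(p, method="fisher")
print(t, p_combined, res)
```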
Abstract: The wireless communication environment in a coal mine has unique characteristics: heavy noise and strong multipath interference. Orthogonal frequency division multiplexing (OFDM) communication underground is sensitive to the frequency selectivity of the multipath fading channel, and in traditional receivers decoding is separated from channel estimation. To increase accuracy and reliability, this paper proposes a new iterative channel estimation algorithm that combines log-likelihood ratio (LLR) decoding with maximum likelihood (ML) channel estimation. Without estimating the channel noise power, it exchanges information between the ML channel estimator and the LLR decoder using the decoder's feedback. Decoding is fast, and a satisfactory result is obtained after a few iterations. Simulation results for the shortwave broadband channel in a coal mine show that the system error rate essentially converges after two iterations.
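As a minimal building block of the ML channel estimation step, the sketch below estimates OFDM subcarrier channel coefficients from known pilot symbols under additive Gaussian noise, where the ML estimate coincides with the per-subcarrier least-squares estimate H = Y/X; the iterative exchange with the LLR decoder described in the paper is not reproduced, and the channel, pilots, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(10)
n_sub = 64                                              # OFDM subcarriers
pilot_idx = np.arange(0, n_sub, 8)                      # hypothetical comb-type pilot positions

# Hypothetical frequency-selective channel: FFT of a short random impulse response
h_time = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H = np.fft.fft(h_time, n_sub)

X = np.ones(n_sub, dtype=complex)                       # unit-power pilot/data symbols
noise = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) * 0.05
Y = H * X + noise                                       # received frequency-domain symbols

# ML (= least squares) channel estimate at the pilots, then linear interpolation elsewhere
H_pilot = Y[pilot_idx] / X[pilot_idx]
H_hat = np.interp(np.arange(n_sub), pilot_idx, H_pilot.real) \
        + 1j * np.interp(np.arange(n_sub), pilot_idx, H_pilot.imag)
print(np.mean(np.abs(H_hat - H) ** 2))                  # channel estimation MSE
```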
Funding: Project (No. BFGEN.100B) supported by the Meat and Livestock Ltd., Australia (MLA).
Abstract: WOMBAT is a software package for quantitative genetic analyses of continuous traits fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models is accommodated, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models, and reduced-rank estimation. WOMBAT employs up-to-date numerical and computational methods; together with efficient compilers, this yields fast executable programs suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual, and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/-kmeyer/wombat.html
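WOMBAT itself is driven by parameter files rather than a Python API, so the sketch below only illustrates the underlying idea, REML estimation of variance components in a linear mixed model, using statsmodels on simulated data with a random group effect; the syntax, data, and model here are not WOMBAT's and are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_groups, n_per = 40, 10                                 # hypothetical sires and records per sire
group = np.repeat(np.arange(n_groups), n_per)
u = np.repeat(rng.normal(scale=0.7, size=n_groups), n_per)   # random group effects
x = rng.normal(size=n_groups * n_per)
y = 2.0 + 0.5 * x + u + rng.normal(scale=1.0, size=n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "group": group})

# Linear mixed model with a random intercept per group, estimated by REML
model = sm.MixedLM.from_formula("y ~ x", groups="group", data=df)
result = model.fit(reml=True)
print(result.cov_re)        # estimated between-group variance component
print(result.scale)         # residual variance
```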