Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed by researchers, but the problem of over-dispersion remains unresolved, especially in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the Negative Binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data; the new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, which is its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested on simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The algorithm was found to work best for large datasets, though it also performs well for small and medium-sized datasets, with little to no error in the location of change points. It correctly detects changes when they are present and reports no change when none actually exists. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting.
Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located midway through the sample data, as opposed to in the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample data than when the change point is closer (one quarter of the way) to the first observation.
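To illustrate the core step, here is a hypothetical sketch (not the paper's implementation) of a single binary-segmentation pass for a Negative Binomial change point, assuming for simplicity a known dispersion `r` and a change in the mean only; the actual algorithm also estimates the dispersion, uses the developed critical values, and recurses on the resulting sub-segments.

```python
# Hypothetical sketch: one binary-segmentation step for a Negative Binomial
# change point, with dispersion r assumed known (an assumption for brevity).
import numpy as np
from scipy import stats

def nb_loglik(x, r):
    # MLE of p for fixed r: mean = r(1-p)/p  =>  p = r / (r + mean)
    p = r / (r + x.mean())
    return stats.nbinom.logpmf(x, r, p).sum()

def best_split(x, r=5.0, min_seg=5):
    """Return (index, LRT statistic) of the most likely single change point."""
    null = nb_loglik(x, r)
    candidates = []
    for k in range(min_seg, len(x) - min_seg):
        alt = nb_loglik(x[:k], r) + nb_loglik(x[k:], r)
        candidates.append((k, 2.0 * (alt - null)))
    return max(candidates, key=lambda t: t[1])

rng = np.random.default_rng(0)
x = np.concatenate([rng.negative_binomial(5, 0.5, 100),   # mean 5
                    rng.negative_binomial(5, 0.2, 100)])  # mean 20
k, lr = best_split(x)   # split should land near index 100
```

In the full algorithm, `lr` would be compared against a developed critical value, and the two sub-segments would be segmented again recursively.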
This paper studies the direction detection problem for range-spread targets against a background of subspace interference plus Gaussian clutter. The clutter is zero-mean Gaussian with an unknown covariance matrix that has a persymmetric structure; the target and the interference are described by a persymmetric target subspace and a persymmetric interference subspace, respectively. Exploiting this persymmetry, one-step and two-step range-spread target direction detectors are designed according to the one-step and two-step design methods of the Generalized Likelihood Ratio Test (GLRT) criterion. Theoretical derivation proves that both detectors maintain a constant false alarm rate with respect to the unknown clutter covariance matrix. Compared with existing detectors under the same background, particularly when secondary data are limited, the two proposed detectors exhibit superior detection performance.
In this paper, asymptotic expansions of the distribution of the likelihood ratio statistic for testing sphericity in a growth curve model are derived in the null and nonnull cases when the alternatives are close to the null hypothesis. These expansions are given in series form of beta distributions.
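For context, the classical fixed-sample sphericity LRT that such expansions refine can be sketched as follows; the statistic and degrees of freedom below are the standard large-sample chi-square ones, not the paper's growth curve expansions.

```python
# Minimal sketch of the classical sphericity LRT (H0: covariance = sigma^2 I)
# for an ordinary multivariate normal sample, using the standard chi-square
# large-sample approximation (an assumption here, not the paper's expansion).
import numpy as np
from scipy import stats

def sphericity_lrt(X):
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # MLE of the covariance matrix
    # -2 log Lambda = n * [p log(tr S / p) - log |S|], >= 0 by AM-GM
    stat = n * (p * np.log(np.trace(S) / p) - np.log(np.linalg.det(S)))
    df = p * (p + 1) // 2 - 1
    return stat, stats.chi2.sf(stat, df)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                # spherical data: H0 is true
stat, pval = sphericity_lrt(X)
```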
In this paper, the nonnull moments and the distributions of the likelihood ratio criterion for testing the equality of diagonal blocks with blockwise independence have been derived under certain alternatives.
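As a reference point, a minimal sketch of the classical LR criterion for blockwise independence of two diagonal blocks (one ingredient of the hypothesis treated here) is shown below, under normality and using the standard asymptotic chi-square approximation rather than the paper's exact nonnull distributions.

```python
# Minimal sketch: LR criterion V = |S| / (|S11| |S22|) for independence of
# two diagonal blocks of a normal covariance matrix; the chi-square limit
# used here is the textbook approximation, not the paper's exact result.
import numpy as np
from scipy import stats

def block_independence_lrt(X, p1):
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)
    V = np.linalg.det(S) / (np.linalg.det(S[:p1, :p1]) *
                            np.linalg.det(S[p1:, p1:]))
    stat = -n * np.log(V)        # >= 0 since |S| <= |S11| |S22|
    df = p1 * (p - p1)
    return stat, stats.chi2.sf(stat, df)

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))    # independent blocks: H0 is true
stat, pval = block_independence_lrt(X, p1=2)
```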
A novel technique is proposed to improve the performance of voice activity detection (VAD) by using deep belief networks (DBN) with a likelihood ratio (LR). The likelihood ratio is derived from the speech and noise spectral components, which are assumed to follow the Gaussian probability density function (PDF). The proposed algorithm employs DBN learning to classify voice activity, using the input signal to calculate the likelihood ratio. Experiments show that the proposed algorithm yields improved results in various noise environments compared to conventional VAD algorithms. Furthermore, the DBN-based algorithm decreases the detection probability of error by [0.7, 2.6] compared to the support vector machine based algorithm.
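A hedged sketch of the per-bin Gaussian likelihood ratio that such statistical-model VADs compute is shown below; the variances `noise_var` and `speech_var` are assumed known here (in practice they are estimated, and the paper feeds the resulting LRs to a DBN classifier).

```python
# Sketch of the per-bin Gaussian log likelihood ratio for VAD: spectral
# coefficients are modeled as complex Gaussian, so |X|^2 is exponential.
# noise_var and speech_var are assumed known for this illustration.
import numpy as np

def frame_log_lr(power_spec, noise_var, speech_var):
    """Mean log LR of H1 (speech+noise) vs H0 (noise only) over a frame."""
    gamma = power_spec / noise_var        # a posteriori SNR per bin
    xi = speech_var / noise_var           # a priori SNR per bin
    log_lr = gamma * xi / (1.0 + xi) - np.log(1.0 + xi)
    return log_lr.mean()

rng = np.random.default_rng(3)
noise_var = np.full(128, 1.0)
speech_var = np.full(128, 4.0)
noise_frame = rng.exponential(noise_var)                # |X|^2 under H0
speech_frame = rng.exponential(noise_var + speech_var)  # |X|^2 under H1
```

A speech frame yields a clearly positive mean log LR and a noise-only frame a negative one, which is what makes the LR a useful feature for the classifier.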
Suppose that several different imperfect instruments and one perfect instrument are independently used to measure some characteristics of a population. Thus, measurements of two or more sets of samples with varying accuracies are obtained, and statistical inference should be based on the pooled samples. In this article, the authors also assume that all the imperfect instruments are unbiased. They consider the problem of combining this information to make statistical tests for parameters more relevant. They define the empirical likelihood ratio functions and obtain their asymptotic distributions in the presence of measurement error.
This paper investigates the modified likelihood ratio test (LRT) for homogeneity in normal mixtures of two samples with unknown mixing proportions. It is proved that the limit distribution of the modified likelihood ratio test is χ²(1).
In normal theory exploratory factor analysis, the likelihood ratio (LR) statistic plays an important role in evaluating the goodness-of-fit of the model. In this paper, we derive an approximation of the LR statistic. The approximation is then used to show explicitly that the expectation of the LR statistic agrees with the degrees of freedom of the asymptotic chi-square distribution.
By virtue of the notion of likelihood ratio and the tool of the moment generating function, this paper studies the limit properties of sequences of discrete random variables and obtains a class of strong deviation theorems, represented by inequalities between random variables and their expectations. As a result, strong deviation theorems for the Poisson distribution and the binomial distribution are obtained.
It is well known that the power of Cochran's Q test to assess the presence of heterogeneity among treatment effects in a clinical meta-analysis is low due to the small number of studies combined. Two modified tests (PL1, PL2) were proposed by substituting the profile maximum likelihood estimator (PMLE) into the variance formula of the logarithm of the risk ratio in the standard chi-square test statistic for testing the null common risk ratios across all k studies (i = 1, …, k). A simple naive test (SIM) was also considered as a comparative candidate. The performance of the tests, in terms of type I error rate under the null hypothesis and power under the random effects hypothesis, was evaluated via a simulation plan with various combinations of significance levels, numbers of studies, sample sizes in the treatment and control arms, and true risk ratios as effect sizes of interest. The results indicated that for moderate to large study sizes (k ≥ 16) in combination with moderate to large sample sizes (≥ 50), three tests (PL1, PL2, and Q) could control type I error rates in almost all situations. The two proposed tests (PL1, PL2) performed best, with the highest power, when k ≥ 16 and sample sizes were moderate (= 50, 100); this finding supports a recommendation to use them in such practical situations. Meanwhile, the standard Q test performed best when k ≥ 16 and sample sizes were large (≥ 500). Moreover, no test was reasonable for small sample sizes (≤ 10), regardless of study size k. The simple naive test (SIM) is recommended, with high performance, when k = 4 in combination with large sample sizes (≥ 500).
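For reference, the baseline Cochran's Q statistic that PL1/PL2 modify can be sketched as follows; the `log_rr` and `var_log_rr` inputs are illustrative values, not the paper's data.

```python
# Minimal sketch of Cochran's Q for heterogeneity of log risk ratios
# across k studies (the baseline test that PL1/PL2 modify).
import numpy as np
from scipy import stats

def cochran_q(log_rr, var_log_rr):
    log_rr = np.asarray(log_rr)
    w = 1.0 / np.asarray(var_log_rr)              # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)       # pooled log risk ratio
    q = np.sum(w * (log_rr - pooled) ** 2)        # ~ chi2(k-1) under H0
    return q, stats.chi2.sf(q, len(log_rr) - 1)

# Illustrative study-level estimates (hypothetical, not from the paper):
log_rr = np.log([1.2, 0.9, 1.1, 1.3])
var_log_rr = [0.05, 0.04, 0.06, 0.05]
q, pval = cochran_q(log_rr, var_log_rr)
```

The modified tests keep this quadratic-form structure but replace the variance estimates with PMLE-based ones.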
In this paper, we extend the generalized likelihood ratio test to varying-coefficient models with censored data. We investigate the asymptotic behavior of the proposed test and demonstrate that its limiting null distribution follows a chi-squared distribution, with the scale constant and the number of degrees of freedom independent of nuisance parameters or functions; this is called the Wilks phenomenon. Both simulated and real data examples are given to illustrate the performance of the testing approach.
The wireless communication environment in underground coal mines has unique characteristics: heavy noise and strong multipath interference. Orthogonal frequency division multiplexing (OFDM) communication underground is sensitive to the frequency-selective multipath fading channel, and decoding is separated from the traditional channel estimation algorithm. To improve accuracy and reliability, a new iterative channel estimation algorithm is proposed that combines logarithm likelihood ratio (LLR) decoding iterations with maximum likelihood (ML) channel estimation. Without estimating the channel noise power, it exchanges information between the ML channel estimator and the LLR decoder using the decoder's feedback information. Decoding is fast, and a satisfactory result is obtained after a few iterations. Simulation results for the shortwave broadband channel in a coal mine show that the error rate of the system essentially converges after two iterations.
A distributed turbo codes (DTC) scheme with a log likelihood ratio (LLR)-based threshold at the relay is proposed for two-hop relay networks. Unlike traditional DTC schemes, the proposed scheme accounts for the retransmission scheme at the relay, where imperfect decoding occurs. By employing an LLR-based threshold at the relay, the reliability of the decoder LLRs can be measured; as a result, only reliable symbols are forwarded to the destination, and a maximum ratio combiner (MRC) is used to combine the signals received from the source and the relay. To obtain the optimal threshold at the relay, an equivalent model of the decoder LLRs is investigated, from which the expression for the bit error probability (BEP) of the proposed scheme under binary phase shift keying (BPSK) modulation is derived. Simulation results demonstrate that the proposed scheme effectively mitigates error propagation at the relay and outperforms other existing methods.
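The LLR-threshold forwarding idea can be illustrated with a minimal sketch for uncoded BPSK over AWGN; here channel LLRs stand in for the decoder LLRs of the actual scheme, and the threshold `T` is an arbitrary illustrative value, not the paper's optimized one.

```python
# Sketch of BPSK channel LLRs and the LLR-threshold forwarding rule:
# only symbols whose LLR magnitude exceeds a threshold T are relayed.
import numpy as np

def bpsk_llr(y, noise_var):
    """Channel LLR of received sample y for BPSK (+1/-1) over AWGN."""
    return 2.0 * y / noise_var

rng = np.random.default_rng(4)
bits = rng.integers(0, 2, 1000)
symbols = 1.0 - 2.0 * bits                  # bit 0 -> +1, bit 1 -> -1
y = symbols + rng.normal(scale=0.7, size=1000)
llr = bpsk_llr(y, 0.7 ** 2)

T = 2.0                                     # illustrative threshold
reliable = np.abs(llr) > T                  # forward only these to the MRC
hard = (llr < 0).astype(int)                # negative LLR -> bit 1
```

Symbols passing the threshold have a markedly lower error rate than the full stream, which is exactly why thresholding mitigates error propagation at the relay.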
In this paper, a varying-coefficient density-ratio model for case-control studies is developed. We investigate the local empirical likelihood diagnosis of the varying-coefficient density-ratio model for case-control data. The local empirical log-likelihood ratios for the nonparametric coefficient functions are introduced. First, the estimating equations based on the empirical likelihood method are established. Then, several diagnostic statistics are proposed. Finally, we examine the performance of the proposed method for finite sample sizes through simulation studies.
Starting from an analysis of coupling sites, coupling reagents, carrier proteins, and coupling steps, a method for preparing a complete antigen of microcystin-LR (MC-LR) was designed. A free amino group was introduced at the seventh amino acid of the hapten molecule; the intermediate product (H2N-et-MC-LR) was then conjugated to BSA and OVA, respectively, by the two-step glutaraldehyde method. After the intermediate and conjugate products were purified by solid-phase extraction and dialysis, respectively, identification by SDS gel electrophoresis, UV scanning, and biological mass spectrometry showed that the average coupling ratio of MC-LR to BSA exceeded 5, meeting the requirements for further immunization.
Extended target detection usually employs energy accumulation over the range profile; because range profile information is incompletely known, collapsing loss degrades detection performance. This paper proposes a range-profile-prior-guided extended target detection method that accumulates the signal using prior knowledge of the range profile envelope modulus to improve detection performance. Considering the coherent superposition of the complex range profile with complex Gaussian white noise and the inaccuracy of phase prediction, a detection model that takes the modulus of the observed data is adopted, and a feature squared-matching detector for low signal-to-noise ratio (SNR) is derived based on Likelihood Ratio Test (LRT) theory. This detector square-matches the envelope modulus of the target's complex range profile against the envelope modulus of the observed data and decides whether the target is present by threshold comparison. The envelope modulus prior is obtained by extracting two-dimensional scattering centers from ISAR images and projecting them onto the radar line of sight at the corresponding aspect angle, yielding an approximate one-dimensional scattering center model of the target, from which the envelope modulus prior of the target range profile is generated. The relationship between the energy detector and the feature squared-matching detector is analyzed both theoretically and experimentally, with experimental data obtained through scattering center model reconstruction and anechoic chamber measurements. Experimental results show that, at low SNR, the prior-guided feature squared-matching detector effectively improves target detection performance and adapts well to mismatch of the prior model.
Funding (VAD/DBN study): supported by the KERI Primary Research Program through the Korea Research Council for Industrial Science & Technology, funded by the Ministry of Science, ICT and Future Planning (No. 15-12-N0101-46).
Funding (pooled-samples empirical likelihood study): supported by the NNSF of China (10571093).
Funding (modified LRT for homogeneity study): supported by the National Natural Science Foundation of China (10661003), the SRF for ROCS, SEM ([2004]527), and the NSF of Guangxi (0728092).