In this paper, asymptotic expansions of the distribution of the likelihood ratio statistic for testing sphericity in a growth curve model are derived in the null and nonnull cases when the alternatives are close to the null hypothesis. These expansions are given as series of beta distributions.
A novel technique is proposed to improve the performance of voice activity detection (VAD) by using deep belief networks (DBN) with a likelihood ratio (LR). The likelihood ratio is derived from the speech and noise spectral components, which are assumed to follow Gaussian probability density functions (PDF). The proposed algorithm employs DBN learning to classify voice activity, using the likelihood ratio computed from the input signal. Experiments show that the proposed algorithm yields improved results in various noise environments compared to conventional VAD algorithms. Furthermore, the DBN-based algorithm decreases the detection error probability by 0.7 to 2.6 compared to the support vector machine based algorithm.
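For context, the per-frame likelihood ratio under the Gaussian spectral model can be written down directly; the sketch below is a minimal, classical version of that frame LR (in the spirit of Sohn-style statistical VAD), not the paper's DBN classifier. The noise-PSD estimate, the crude maximum-likelihood a priori SNR and the toy test signal are my assumptions.

```python
import numpy as np

def frame_log_lr(noisy_power, noise_power, snr_floor=1e-3):
    """Per-frame log likelihood ratio under Gaussian speech/noise spectral models.

    noisy_power : |X_k|^2 for the K frequency bins of one frame
    noise_power : estimated noise PSD (e.g. from leading noise-only frames)
    Returns the average log LR over bins (the frame decision statistic).
    """
    gamma = noisy_power / np.maximum(noise_power, 1e-12)   # a posteriori SNR
    xi = np.maximum(gamma - 1.0, snr_floor)                # crude ML a priori SNR estimate
    log_lr = gamma * xi / (1.0 + xi) - np.log1p(xi)        # log of the per-bin Gaussian LR
    return log_lr.mean()

# toy usage: a noise-only frame versus a frame with extra narrowband energy
rng = np.random.default_rng(0)
K = 128
noise_psd = np.ones(K)
noise_frame = np.abs(rng.normal(size=K) + 1j * rng.normal(size=K)) ** 2 / 2
speech_frame = noise_frame + 8.0 * (np.arange(K) == 20)    # crude "speech" energy
print(frame_log_lr(noise_frame, noise_psd), frame_log_lr(speech_frame, noise_psd))
```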
Suppose that several different imperfect instruments and one perfect instrument are independently used to measure some characteristics of a population. Thus, measurements of two or more sets of samples with varying accuracies are obtained. Statistical inference should be based on the pooled samples. In this article, the authors also assume that all the imperfect instruments are unbiased. They consider the problem of combining this information to make more relevant statistical tests for the parameters. They define empirical likelihood ratio functions and obtain their asymptotic distributions in the presence of measurement error.
This paper investigates the modified likelihood ratio test (LRT) for homogeneity in normal mixtures of two samples with unknown mixing proportions. It is proved that the limit distribution of the modified likelihood ratio test statistic is χ²(1).
The wireless communication environment in an underground coal mine has unique characteristics: heavy noise and strong multipath interference. Orthogonal frequency division multiplexing (OFDM) communication underground is therefore sensitive to the frequency selectivity of the multipath fading channel, and in the traditional approach decoding is separated from channel estimation. To increase accuracy and reliability, this paper proposes a new iterative channel estimation algorithm that combines maximum likelihood (ML) channel estimation with log-likelihood ratio (LLR) decoding. Without estimating the channel noise power, it exchanges information between the ML channel estimator and the LLR decoder using the decoder's feedback. Decoding is fast, and a satisfactory result is obtained after a few iterations. Simulation results for a shortwave broadband coal mine channel show that the system error rate essentially converges after two iterations.
A distributed turbo codes (DTC) scheme with a log likelihood ratio (LLR)-based threshold at the relay is proposed for two-hop relay networks. Different from traditional DTC schemes, the proposed scheme considers the retransmission at the relay, where imperfect decoding occurs. By employing an LLR-based threshold at the relay, the reliability of the decoder LLRs can be measured. As a result, only reliable symbols are forwarded to the destination, and a maximum ratio combiner (MRC) is used to combine the signals received from the source and the relay. In order to obtain the optimal threshold at the relay, an equivalent model of the decoder LLRs is investigated, and the expression of the bit error probability (BEP) of the proposed scheme under binary phase shift keying (BPSK) modulation is derived. Simulation results demonstrate that the proposed scheme effectively mitigates error propagation at the relay and outperforms existing methods.
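To make the LLR-threshold idea concrete, here is a toy sketch for uncoded BPSK over AWGN: the relay computes channel LLRs, forwards hard symbols only where |LLR| exceeds a threshold, and the destination combines the source and relay observations with maximum ratio combining. Using raw channel LLRs in place of turbo decoder LLRs, the particular threshold value and the SNR settings are simplifying assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr_sd, snr_sr, snr_rd = 100_000, 1.0, 4.0, 4.0     # linear SNRs per link (assumed)
bits = rng.integers(0, 2, n)
x = 1.0 - 2.0 * bits                                    # BPSK mapping: 0 -> +1, 1 -> -1

# source -> destination and source -> relay links (unit-energy symbols, AWGN)
y_sd = x + rng.normal(scale=np.sqrt(1 / (2 * snr_sd)), size=n)
y_sr = x + rng.normal(scale=np.sqrt(1 / (2 * snr_sr)), size=n)

# relay: channel LLRs (stand-in for decoder LLRs) and a reliability threshold
llr_relay = 4 * snr_sr * y_sr                           # LLR = 2y/sigma^2 for BPSK/AWGN
threshold = 8.0                                         # assumed LLR threshold
reliable = np.abs(llr_relay) > threshold
x_relay = np.sign(llr_relay)                            # hard re-modulated symbols

# relay -> destination only for reliable symbols; unreliable ones are not forwarded
y_rd = np.where(reliable,
                x_relay + rng.normal(scale=np.sqrt(1 / (2 * snr_rd)), size=n),
                0.0)

# destination: maximum ratio combining of the two observations
combined = snr_sd * y_sd + snr_rd * y_rd
ber = np.mean((combined < 0) != (x < 0))
print(f"BER with LLR-thresholded relaying: {ber:.4f}")
```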
In normal theory exploratory factor analysis, the likelihood ratio (LR) statistic plays an important role in evaluating the goodness-of-fit of the model. In this paper, we derive an approximation of the LR statistic. The approximation is then used to show explicitly that the expectation of the LR statistic agrees with the degrees of freedom of the asymptotic chi-square distribution.
By virtue of the notion of the likelihood ratio and the moment generating function, the limit properties of sequences of discrete random variables are studied, and a class of strong deviation theorems, expressed as inequalities between random variables and their expectations, is obtained. As a result, some strong deviation theorems for the Poisson and binomial distributions are derived.
In this paper, we extend the generalized likelihood ratio test to varying-coefficient models with censored data. We investigate the asymptotic behavior of the proposed test and demonstrate that its limiting null distribution is a rescaled chi-square distribution, with the scale constant and the number of degrees of freedom independent of nuisance parameters or functions; this is known as the Wilks phenomenon. Both simulated and real data examples are given to illustrate the performance of the testing approach.
Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed, but the problem of over-dispersion remains unresolved, particularly in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in count data assumed to follow the Negative Binomial distribution. Discrete change point procedures in the literature work well for equi-dispersed data. The new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, which is its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested on simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm was found to work best for large datasets, though it also performs well for small and medium-sized datasets with little to no error in the location of change points. The algorithm correctly detects changes when they are present and does not signal change when none is present. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located midway through the sample, as opposed to near the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample than when the change point is closer (one quarter of the way) to the first observation.
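The core step, a likelihood ratio test for a single change point in Negative Binomial counts, can be sketched as follows; the moment-based parameter estimates, the grid over candidate split points and the simulated data are my assumptions, and the paper's full algorithm adds recursive binary segmentation and simulated critical values on top of this.

```python
import numpy as np
from scipy.stats import nbinom

def nb_loglik(x):
    """Negative Binomial log-likelihood at moment-based estimates (requires var > mean)."""
    m, v = x.mean(), x.var(ddof=0)
    v = max(v, m + 1e-8)                       # guard against under-dispersed segments
    p = m / v
    r = m * p / (1 - p)
    return nbinom.logpmf(x, r, p).sum()

def single_changepoint_lrt(x, min_seg=5):
    """Return (max LRT statistic, argmax split position) over candidate change points."""
    best_stat, best_tau = -np.inf, None
    base = nb_loglik(x)                        # log-likelihood under 'no change'
    for tau in range(min_seg, len(x) - min_seg):
        stat = 2 * (nb_loglik(x[:tau]) + nb_loglik(x[tau:]) - base)
        if stat > best_stat:
            best_stat, best_tau = stat, tau
    return best_stat, best_tau

rng = np.random.default_rng(42)
x = np.concatenate([rng.negative_binomial(5, 0.5, 120),    # mean 5 before the change
                    rng.negative_binomial(5, 0.25, 80)])   # mean 15 after the change
print(single_changepoint_lrt(x))                           # peaks near index 120
```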
The Laplace distribution can be compared with the normal distribution. The Laplace distribution has an unusual, symmetric shape with a sharp peak and tails that are heavier than the tails of a normal distribution. It has recently become quite popular for modeling financial variables (Brownian Laplace motion), such as stock returns, because of its heavier tails. The Laplace distribution is extensively reviewed in the monograph by Kotz et al. (The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance. Birkhauser, Boston, 2001). In this article, we propose a density-based empirical likelihood ratio (DBELR) goodness-of-fit test statistic for the Laplace distribution. The test statistic is constructed following the approach proposed by Vexler and Gurevich (Comput Stat Data Anal 54:531-545, 2010). To compute the test statistic, the parameters of the Laplace distribution are estimated by the maximum likelihood method. Critical values and power values of the proposed test are obtained by Monte Carlo simulations. Power comparisons of the proposed test with some known competing tests are also carried out. Finally, two illustrative examples are presented and analyzed.
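The DBELR statistic itself is not reproduced here, but the supporting machinery the abstract mentions, maximum likelihood estimation of the Laplace parameters and Monte Carlo critical values, can be illustrated. The Kolmogorov-Smirnov distance with estimated parameters is used below purely as a stand-in statistic; that substitution, and the sample sizes, are my assumptions.

```python
import numpy as np
from scipy.stats import laplace

def laplace_mle(x):
    """MLE for the Laplace distribution: location = sample median, scale = mean |x - median|."""
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def gof_stat(x):
    """KS distance with estimated parameters, used here as a stand-in for the DBELR statistic."""
    mu, b = laplace_mle(x)
    u = np.sort(laplace.cdf(x, loc=mu, scale=b))
    i = np.arange(1, len(x) + 1)
    return np.max(np.maximum(i / len(x) - u, u - (i - 1) / len(x)))

def mc_critical_value(n, alpha=0.05, reps=5000, seed=0):
    """Simulate the null distribution of the statistic under the fitted-Laplace model."""
    rng = np.random.default_rng(seed)
    stats = [gof_stat(rng.laplace(size=n)) for _ in range(reps)]
    return np.quantile(stats, 1 - alpha)

n = 50
crit = mc_critical_value(n)
x = np.random.default_rng(1).normal(size=n)          # data that are not Laplace
print(gof_stat(x), crit, gof_stat(x) > crit)         # reject if the statistic exceeds the critical value
```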
To detect and estimate a shift in the mean, the standard deviation, or both in preliminary analysis, the control chart based on the likelihood ratio test (LRT) is the most popular statistical process control (SPC) tool. Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ²(2) when the sample sizes n, n1 and n2 are very large, where n1 = 2, 3, ..., n - 2 and n2 = n - n1; so it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for the expectation and the variance of the limit distribution are obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains a lot of information, and a cumulative sum (CUSUM) control chart can exploit it. We therefore propose two CUSUM control charts based on the likelihood ratio statistics for the preliminary analysis of individual observations. One focuses on detecting shifts in location in the historical data, and the other is more general, detecting a shift in location, in scale, or in both. Moreover, simulation results show that the two proposed control charts are superior to their competitors not only in detecting sustained shifts but also in detecting several other out-of-control situations considered in this paper.
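For a concrete form of the statistic: with a normal sample split into the first n1 and the last n2 = n - n1 observations, the likelihood ratio test for a simultaneous change in mean and variance reduces to comparing the pooled and segment-wise variance MLEs. The sketch below assumes this standard reduction corresponds to the lrt(n1, n2) of the abstract; the simulated data are my assumption.

```python
import numpy as np

def lrt(x, n1):
    """-2 log likelihood ratio for 'one normal segment' vs. 'two normal segments
    split after observation n1' (both the mean and the variance may change)."""
    x1, x2 = x[:n1], x[n1:]
    n, n2 = len(x), len(x) - n1
    s0 = np.var(x, ddof=0)           # pooled variance MLE under H0
    s1 = np.var(x1, ddof=0)          # segment variance MLEs under H1
    s2 = np.var(x2, ddof=0)
    return n * np.log(s0) - n1 * np.log(s1) - n2 * np.log(s2)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 2, 40)])   # shift in mean and scale
stats = {n1: lrt(x, n1) for n1 in range(3, len(x) - 2)}
print(max(stats, key=stats.get), max(stats.values()))                # split location, max statistic
```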
Due to the openness of cognitive radio networks, spectrum sensing data falsification (SSDF) attacks can easily disrupt spectrum sensing, and no effective countermeasure has been proposed in current research, so this paper introduces malicious user removal into the weighted sequential probability ratio test (WSPRT). Each terminal's weight is determined by the accuracy of its spectrum sensing information, which can also be used to detect malicious users: a terminal with a low weight can be treated as a malicious user and removed by the aggregation center. Simulation results show that the improved WSPRT achieves higher performance than two conventional sequential detection methods under different numbers of malicious users.
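A toy sketch of the weighting-and-removal idea follows: each terminal's weight tracks how often its reports agree with the fused decision, and terminals whose weight falls below a cutoff are excluded from the weighted statistic. The weight update rule, the cutoff, the sensing probabilities and the always-lying attacker model are assumptions, not the paper's exact WSPRT formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_rounds, n_malicious = 10, 200, 3
p_detect, p_false = 0.9, 0.1                 # honest sensing quality (assumed)
weights = np.ones(n_users)
removed = np.zeros(n_users, dtype=bool)
cutoff = 0.3                                 # weight below which a terminal is dropped

for t in range(n_rounds):
    truth = rng.random() < 0.5               # primary user present or not
    honest = np.where(truth, rng.random(n_users) < p_detect, rng.random(n_users) < p_false)
    reports = honest.copy()
    reports[:n_malicious] = ~truth           # malicious terminals always report the opposite

    active = ~removed
    # weighted vote as a stand-in for the weighted sequential test statistic
    score = np.sum(weights[active] * (2 * reports[active] - 1))
    decision = score > 0

    # update weights by agreement with the fused decision, then drop low-weight users
    agree = reports == decision
    weights = np.clip(weights + np.where(agree, 0.05, -0.1), 0.0, 1.0)
    removed |= weights < cutoff

print("flagged as malicious:", np.where(removed)[0])   # typically the first three indices
```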
Forensic speaker recognition is experiencing a remarkable paradigm shift in terms of the evaluation framework and the presentation of voice evidence. This paper proposes a new method of forensic automatic speaker recognition using the likelihood ratio framework to quantify the strength of voice evidence. The proposed method uses a reference database to calculate the within- and between-speaker variability. Some acoustic-phonetic features are extracted automatically using the software VoiceSauce. The effectiveness of the approach was tested using two Mandarin databases: a mobile telephone database and a landline database. The experimental results indicate that these acoustic-phonetic features have some discriminating potential and are worth trying in discrimination. The automatic acoustic-phonetic features have acceptable discriminative performance and can provide more reliable results in evidence analysis when fused with other kinds of voice features.
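As a minimal illustration of the likelihood ratio framework (not the paper's feature set, databases or fusion), one can model a single acoustic-phonetic feature with a Gaussian fitted to the suspect's recordings and a Gaussian fitted to the reference population, and report the ratio of the two densities at the questioned value; the data and the univariate Gaussian modelling below are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# reference database: one mean feature value per background speaker (assumed data)
background = rng.normal(loc=120.0, scale=20.0, size=500)     # between-speaker variability
suspect_samples = rng.normal(loc=150.0, scale=5.0, size=10)  # within-speaker variability

def forensic_lr(questioned_value):
    """LR = p(evidence | same speaker as suspect) / p(evidence | someone from the population)."""
    num = norm.pdf(questioned_value, loc=suspect_samples.mean(), scale=suspect_samples.std(ddof=1))
    den = norm.pdf(questioned_value, loc=background.mean(), scale=background.std(ddof=1))
    return num / den

for q in (150.0, 115.0):
    print(f"questioned value {q}: LR = {forensic_lr(q):.2f}")   # LR > 1 supports the same-speaker hypothesis
```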
This paper investigates the asymptotic properties of the modified likelihood ratio statistic for testing homogeneity in bivariate normal mixture models with an unknown structural parameter. It is shown that the modified likelihood ratio statistic has a χ²(2) null limiting distribution.
In this paper we consider some related negative hypergeometric distributions arising from the problem of sampling without replacement from an urn containing balls of different colours in different proportions, but stopping only after specified numbers of balls of the different colours have been obtained. With the aid of some simple recurrence relations and identities we obtain, in the case of two colours, the moments of the maximum negative hypergeometric distribution, the minimum negative hypergeometric distribution, the likelihood ratio negative hypergeometric distribution and, consequently, the likelihood proportional negative hypergeometric distribution. To the extent that the sampling scheme is applicable to modelling data, as illustrated with a biological example, and indeed to many situations of estimating Bernoulli parameters for binary traits within a finite population, these are important first-step results.
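To make the sampling scheme tangible, the sketch below simulates drawing without replacement from a two-colour urn, stopping once r balls of each colour have been obtained, and checks the first two moments by Monte Carlo. Reading that stopping rule as the 'maximum' variant, together with the urn composition, is my assumption.

```python
import numpy as np

def draws_until_r_of_each(n_red, n_blue, r, rng):
    """Sample without replacement until at least r balls of *each* colour
    have appeared; return the total number of draws."""
    urn = np.array([0] * n_red + [1] * n_blue)
    rng.shuffle(urn)
    red = blue = 0
    for t, ball in enumerate(urn, start=1):
        red += (ball == 0)
        blue += (ball == 1)
        if red >= r and blue >= r:
            return t
    return len(urn)

rng = np.random.default_rng(1)
samples = [draws_until_r_of_each(30, 70, 3, rng) for _ in range(20000)]
print(np.mean(samples), np.var(samples))   # Monte Carlo check of the first two moments
```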
In lifetime data analysis, naturally recorded observations are length-biased if the probability of selecting an item is proportional to its length. Based on i.i.d. observations from the true distribution, an empirical likelihood (EL) procedure is proposed for inference on the mean residual life (MRL) of a naturally recorded item. The limit distribution of the EL-based log-likelihood ratio is proved to be the chi-square distribution. Under right censorship, since the EL-based log-likelihood ratio leads to a scaled chi-square distribution and estimating the scale parameter lowers the coverage of the confidence interval, we propose an algorithm to calculate the likelihood ratio (LR) directly. The corresponding log-likelihood ratio converges to the standard chi-square distribution and the corresponding confidence interval has better coverage. Simulation studies support the theoretical results.
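The EL machinery behind such results can be shown in its simplest form, the empirical log-likelihood ratio for a population mean, whose -2 log ratio is asymptotically chi-square with one degree of freedom; the Lagrange-multiplier root-finding below is the standard Owen construction rather than the authors' MRL or censored-data algorithm, and the simulated lifetimes are an assumption.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen's construction)."""
    d = x - mu
    if d.max() <= 0 or d.min() >= 0:
        return np.inf                                   # mu outside the convex hull of the data
    # the multiplier must keep all implied weights positive: 1 + lambda * d_i > 0
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    g = lambda lam: np.sum(d / (1.0 + lam * d))         # estimating equation, monotone in lambda
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=80)                 # skewed lifetimes with true mean 2
stat = el_log_ratio(x, mu=2.0)
print(stat, stat < chi2.ppf(0.95, df=1))                # H0 retained at the 5% level (usually)
```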
Multivariate likelihood ratio orders of order statistics conditioned on the right tail and on the left tail are established. These results strengthen and generalize the corresponding conclusions for the univariate likelihood ratio order obtained by Khaledi and Shaked (2007), Li and Zhao (2006), Hu, et al. (2006), and Hu, Jin, and Khaledi (2007).
This paper investigates the asymptotic properties of a modified likelihood ratio statistic for testing homogeneity in bivariate normal mixture models of two samples. The asymptotic null distribution of the modified likelihood ratio statistic is found to be χ²(2), the chi-squared distribution with 2 degrees of freedom.
The double-threshold autoregressive conditional heteroscedastic (DTARCH) model is a useful tool to measure and forecast the mean and volatility of an asset return in a financial time series. The DTARCH model can handle situations wherein the conditional mean and conditional variance specifications are piecewise linear based on previous information. In practical applications, it is important to check whether the model has a double threshold for the conditional mean and the conditional heteroscedastic variance. In this study, we develop a likelihood ratio test based on the estimated residual error for hypothesis testing in DTARCH models. We first investigate DTARCH models with restrictions on parameters and propose unrestricted and restricted weighted composite quantile regression (WCQR) estimation for the model parameters. These estimators can be used to construct the likelihood ratio-type test statistic. We establish the asymptotic results for the WCQR estimators and the asymptotic distribution of the proposed test statistics. The finite sample performance of the proposed WCQR estimation and the test statistic is shown to be acceptable and promising in simulation studies. We use two real datasets derived from the Shanghai and Shenzhen Composite Indexes to illustrate the methodology.
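To indicate what the composite quantile objective behind WCQR looks like in its simplest setting (a plain linear model, equal weights, no DTARCH structure, all simplifications relative to the paper), the sketch below minimizes the composite check loss over several quantile levels.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile (check) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqr_objective(params, y, x, taus, weights):
    """Composite quantile loss: a separate intercept a_k per quantile, one common slope beta."""
    a = params[:len(taus)]
    beta = params[len(taus):]
    total = 0.0
    for k, tau in enumerate(taus):
        resid = y - a[k] - x @ beta
        total += weights[k] * np.sum(check_loss(resid, tau))
    return total

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x[:, 0] + rng.standard_t(df=3, size=n)      # heavy-tailed errors

taus = np.array([0.25, 0.5, 0.75])
weights = np.ones(len(taus)) / len(taus)                    # equal weights (assumed)
x0 = np.zeros(len(taus) + 1)
res = minimize(cqr_objective, x0, args=(y, x, taus, weights), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
print("intercepts per quantile:", res.x[:3], "common slope:", res.x[3])   # slope close to 2
```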