Count data is almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed by researchers, but the problem of over-dispersion remains unresolved, especially in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the Negative Binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data; the new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, which is its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested on simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm was found to work best for large datasets, though it also works well for small and medium-sized datasets, with little to no error in the location of change points. The algorithm correctly detects changes when they are present and reports none when no change has actually occurred. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located midway through the sample data, as opposed to at the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample data than when the change point lies closer (a quarter of the way) to the first observation.
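To make the recursive procedure concrete, here is a minimal single-split sketch in Python (not the paper's exact algorithm): fit the Negative Binomial by maximum likelihood on the whole series and on each candidate bisection, and keep the split with the largest likelihood ratio statistic. Binary segmentation would then recurse on the two resulting segments, comparing each statistic against simulated critical values.

```python
# Minimal sketch: likelihood ratio scan for one change point in NB counts.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def nb_loglik(x):
    """Maximized Negative Binomial log-likelihood of a segment."""
    def neg_ll(params):
        r, p = params
        return -nbinom.logpmf(x, r, p).sum()
    # moment-based start: mean m, variance v  =>  p = m/v, r = m*p/(1-p)
    m, v = x.mean(), max(x.var(), x.mean() + 1e-6)
    p0 = min(max(m / v, 1e-3), 1 - 1e-3)
    r0 = max(m * p0 / (1 - p0), 1e-3)
    res = minimize(neg_ll, x0=[r0, p0],
                   bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
    return -res.fun

def lrt_scan(x, min_seg=5):
    """Return the split maximizing the LR statistic, and that statistic."""
    x = np.asarray(x)
    ll_full = nb_loglik(x)
    best_tau, best_stat = None, -np.inf
    for tau in range(min_seg, len(x) - min_seg):
        stat = 2 * (nb_loglik(x[:tau]) + nb_loglik(x[tau:]) - ll_full)
        if stat > best_stat:
            best_tau, best_stat = tau, stat
    return best_tau, best_stat  # compare best_stat to a simulated critical value
```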
This paper investigates the modified likelihood ratio test (LRT) for homogeneity in normal mixtures of two samples with unknown mixing proportions. It is proved that the limit distribution of the modified likelihood ratio test is χ²(1).
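For orientation, the modified-LRT literature (e.g. Chen, Chen and Kalbfleisch) penalizes the mixing proportion β so that it stays away from 0 and 1; a generic sketch of that construction is below, with C > 0 a tuning constant. The exact two-sample form used in this paper may differ in detail; per the abstract, the resulting statistic converges to χ²(1).

```latex
% Penalized (modified) log-likelihood and test statistic -- a generic sketch.
\tilde{\ell}(\beta,\theta_1,\theta_2)
  = \ell(\beta,\theta_1,\theta_2) + C\log\bigl(4\beta(1-\beta)\bigr),
\qquad
M_n = 2\Bigl\{\sup_{\beta,\theta_1,\theta_2}\tilde{\ell}(\beta,\theta_1,\theta_2)
      - \sup_{\theta}\tilde{\ell}\bigl(\tfrac{1}{2},\theta,\theta\bigr)\Bigr\}.
```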
Wireless communication is a system for communicating information from one point to another without using any physical connection such as wire or cable. Cognitive Radio (CR) based systems and networks are a revolutionary new paradigm in wireless communications. Spectrum sensing is a vital task of CR: it averts destructive interference with licensed primary users and discovers accessible spectrum for efficient utilization. Centralized Cooperative Spectrum Sensing (CSS) is one kind of spectrum sensing. Most test metrics designed so far for sensing the spectrum are produced from the Sample Covariance Matrix (SCM) of the received signal. Methods that use the SCM for detection include the Pietra-Ricci Index Detector (PRIDe), the Hadamard Ratio (HR) detector, and the Gini Index Detector (GID). This paper presents a simulation and comparative performance analysis of PRIDe against various other detectors, namely GID, HR, Arithmetic to Geometric Mean (AGM), Volume-based Detector number 1 (VD1), Maximum-to-Minimum Eigenvalue Detection (MMED), and the Generalized Likelihood Ratio Test (GLRT), using MATLAB. PRIDe provides better performance in the presence of variations in signal and noise power, with less computational complexity.
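To illustrate how such SCM-based statistics are computed, the sketch below implements two of the detectors named above whose forms are standard, the Hadamard ratio and MMED; PRIDe follows the same pattern but applies its own function of the SCM entries (its exact formula is not reproduced here).

```python
# Sketch of two blind SCM-based spectrum-sensing statistics.
import numpy as np

def scm(Y):
    """Sample covariance matrix of an m x n matrix of received samples."""
    n = Y.shape[1]
    return (Y @ Y.conj().T) / n

def hadamard_ratio(R):
    """HR detector: det(R) over the product of its diagonal entries.
    Values well below 1 indicate correlation, i.e. a primary-user signal."""
    return np.linalg.det(R).real / np.prod(np.diag(R).real)

def mmed(R):
    """Maximum-to-minimum eigenvalue detector."""
    eig = np.linalg.eigvalsh(R)  # ascending order, Hermitian input
    return eig[-1] / eig[0]

# Usage: m sensors, n samples; each statistic is compared to a threshold
# calibrated for the desired false-alarm probability.
rng = np.random.default_rng(0)
Y = rng.standard_normal((6, 500)) + 1j * rng.standard_normal((6, 500))
R = scm(Y)
print(hadamard_ratio(R), mmed(R))
```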
The occurrence of lightning-induced forest fires during a time period is count data featuring over-dispersion (i.e., variance larger than mean) and a high frequency of zero counts. In this study, we used six generalized linear models to examine the relationship between the occurrence of lightning-induced forest fires and meteorological factors in the Northern Daxing'an Mountains of China. The six models were the Poisson, negative binomial (NB), zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), Poisson hurdle (PH), and negative binomial hurdle (NBH) models. Goodness-of-fit was compared and tested among the six models using the Akaike information criterion (AIC), sum of squared errors, likelihood ratio test, and Vuong test. The predictive performance of the models was assessed and compared on independent validation data obtained by data splitting. Based on AIC, the ZINB model best fitted the fire occurrence data, followed by (in order of increasing AIC) the NBH, ZIP, NB, PH, and Poisson models. The ZINB model was also best for predicting either zero counts or positive counts (≥1). The two hurdle models (PH and NBH) were better than the ZIP, Poisson, and NB models for predicting positive counts, but worse than these three models for predicting zero counts. Thus, the ZINB model was the first choice for modeling the occurrence of lightning-induced forest fires in this study, which implies that the excessive zero counts of lightning-induced fires come from both structural and sampling zeros.
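A hedged sketch of the model-comparison step, using statsmodels on toy zero-inflated counts; the hurdle models and the paper's actual meteorological covariates are omitted, and all variable names are illustrative.

```python
# Fit Poisson, NB, ZIP and ZINB to a count response and rank by AIC.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(1)
n = 400
X = sm.add_constant(rng.standard_normal((n, 2)))      # e.g. rainfall, temperature
y = rng.poisson(0.4, size=n) * (rng.random(n) > 0.5)  # zero-inflated toy counts

fits = {
    "Poisson": sm.Poisson(y, X).fit(disp=0),
    "NB":      sm.NegativeBinomial(y, X).fit(disp=0),
    "ZIP":     ZeroInflatedPoisson(y, X, exog_infl=X).fit(disp=0),
    "ZINB":    ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2).fit(disp=0),
}
for name, res in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"{name:7s} AIC = {res.aic:.1f}")  # smaller AIC = better fit
```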
Hypothesis testing analysis and unknown parameter estimation for both intermediate frequency (IF) and baseband GPS signal detection are given using the generalized likelihood ratio test (GLRT) approach, applying the model of a GPS signal in white Gaussian noise. It is proved that the test statistic follows a central or noncentral F distribution, and it is pointed out that the test statistic is nearly identical to a central or noncentral chi-squared distribution because the number of processing samples is large enough to be considered infinite in the GPS acquisition problem. It is also proved that the probability of false alarm, the probability of detection, and the threshold are strongly affected when the hypothesis test refers to the full pseudorandom noise (PRN) code phase and Doppler frequency search space rather than to each individual cell. The performance of the test statistic combined with noncoherent integration is also given.
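The search-space effect mentioned above follows from elementary probability: with N independent cells, keeping an overall false-alarm probability P_FA requires a much smaller per-cell probability and hence a larger threshold. A small numeric illustration under the chi-squared approximation (the cell counts are illustrative, not the paper's):

```python
# Per-cell threshold needed to hold the overall false-alarm rate over N cells.
from scipy.stats import chi2

P_FA_total = 1e-3
N_cells = 1023 * 41                         # e.g. 1023 code phases x 41 Doppler bins
P_fa_cell = 1 - (1 - P_FA_total) ** (1 / N_cells)

dof = 2                                     # noncoherent I^2 + Q^2 statistic
thr_cell  = chi2.isf(P_fa_cell, dof)        # threshold per search cell
thr_naive = chi2.isf(P_FA_total, dof)       # threshold ignoring the search space
print(thr_cell, thr_naive)                  # the cell-level threshold is much larger
```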
As location-based techniques and applications have become ubiquitous in emerging wireless networks, the verification of location information has become more important. In recent years, there has been an explosion of activity related to location-verification techniques in wireless networks, with a specific focus on intelligent transport systems because of the mission-critical nature of vehicle location verification. In this paper, we review recent research on wireless location verification related to vehicular networks. We focus on location verification systems that rely on formal mathematical classification frameworks and show how many systems are either partially or fully encompassed by such frameworks.
To detect and estimate a shift in either the mean or the deviation, or both, in preliminary analysis, the control chart based on the likelihood ratio test (LRT) is the most popular statistical process control (SPC) tool. Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ²(2) when the sample sizes n, n1 and n2 are all very large; since n1 = 2, 3, ..., n − 2 and n2 = n − n1, it is inevitable that n1 or n2 is not large for some partitions. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for the expectation and the variance of the limit distribution are obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains useful information, and the cumulative sum (CUSUM) control chart can exploit it. We therefore propose two CUSUM control charts based on the likelihood ratio statistics for preliminary analysis of individual observations: one focuses on detecting shifts in location in the historical data, and the other is more general, detecting a shift in the location, the scale, or both. Simulation results show that the two proposed control charts are superior to their competitors not only in detecting sustained shifts but also in detecting the other out-of-control situations considered in this paper.
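For orientation, the sketch below shows a standard two-sided CUSUM for the mean on individual observations; the charts proposed here accumulate likelihood ratio statistics slr(i, n) instead of standardized deviations, but the recursion has the same shape.

```python
# Standard two-sided CUSUM on individual observations.
import numpy as np

def cusum(x, mu0, sigma, k=0.5, h=5.0):
    """Return the first index where either one-sided CUSUM exceeds h."""
    cp = cm = 0.0
    for t, xt in enumerate(x):
        z = (xt - mu0) / sigma
        cp = max(0.0, cp + z - k)   # accumulates upward shifts
        cm = max(0.0, cm - z - k)   # accumulates downward shifts
        if cp > h or cm > h:
            return t, cp, cm
    return None, cp, cm

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])  # shift at t=50
print(cusum(x, mu0=0.0, sigma=1.0))
```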
When all the sensor decision rules are known, the optimal distributed decision fusion, which relies only on the joint conditional probability densities, can be derived for very general decision systems, including those with interdependent sensor observations and any network structure. The result is also valid for m-ary Bayesian decision problems and for binary problems under the Neyman-Pearson criterion. Local decision rules of a sensor with communication from other sensors that are optimal for the sensor itself are also presented; they take the form of a generalized likelihood ratio test. Numerical examples reveal an interesting phenomenon: communication between sensors can improve the performance of an individual sensor's decision, but it cannot guarantee improved global fusion performance when the sensor rules were fixed before fusing.
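As a concrete special case, under conditionally independent sensor decisions the optimal fusion rule reduces to the classical Chair-Varshney log-likelihood-ratio form sketched below; the result above is more general and also covers dependent observations via the joint conditional densities.

```python
# Chair-Varshney fusion of binary local decisions (independence assumed).
import numpy as np

def fuse(u, pd, pf, prior_ratio=1.0):
    """u[i] in {0,1}: decision of sensor i; pd, pf: its detection and
    false-alarm probabilities. Declare H1 if the log-likelihood ratio of
    the decision vector exceeds the prior threshold."""
    u = np.asarray(u, dtype=float)
    llr = np.sum(u * np.log(pd / pf) + (1 - u) * np.log((1 - pd) / (1 - pf)))
    return llr > np.log(prior_ratio)   # prior_ratio = P(H0)/P(H1) for minimum error

pd = np.array([0.9, 0.8, 0.7])
pf = np.array([0.05, 0.10, 0.10])
print(fuse([1, 1, 0], pd, pf))
```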
Traditional two-parameter constant false alarm rate (CFAR) target detection algorithms for SAR images ignore the target distribution, so their performance is not the best or near-best. As the resolution of SAR images increases, small targets occupy more pixels, and the target distribution becomes significant. A distribution-based CFAR detection algorithm is presented: we pool the pixels around the test cell and use them to estimate the distribution of the test cell, and the Generalized Likelihood Ratio Test (GLRT) is used to derive the detectors. The performance of the distribution-based CFAR (DBCFAR) detectors is analyzed theoretically: false alarms of DBCFAR detection are fewer than those of conventional CFAR at the same detection rate. Finally, experiments show that DBCFAR outperforms conventional CFAR; the false alarms of DBCFAR detection are concentrated, while those of CFAR detection are dispersed.
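For contrast with the distribution-based detector, here is a minimal cell-averaging CFAR sketch, the classical baseline for exponentially distributed clutter power; the scale factor alpha keeps the false-alarm rate constant whatever the unknown clutter level.

```python
# Cell-averaging CFAR on a 1-D power profile.
import numpy as np

def ca_cfar(power, guard=2, ref=16, pfa=1e-4):
    """Return boolean detections; ref cells on each side, guard cells excluded."""
    N = 2 * ref
    alpha = N * (pfa ** (-1.0 / N) - 1.0)   # exact for exponential clutter power
    det = np.zeros_like(power, dtype=bool)
    for i in range(ref + guard, len(power) - ref - guard):
        lead = power[i - ref - guard : i - guard]
        lag  = power[i + guard + 1 : i + guard + ref + 1]
        det[i] = power[i] > alpha * np.concatenate([lead, lag]).mean()
    return det
```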
Varying-coefficient models are a useful extension of the classical linear model. They are widely applied in economics, biomedicine, epidemiology, and other fields, and they have been studied extensively over the last three decades. In this paper, many models related to varying-coefficient models are gathered together, and the estimation procedures and the theory of hypothesis testing for the varying-coefficient model are summarized. Some aspects deserving further study are also proposed.
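For reference, the basic varying-coefficient model around which this literature is organized lets the regression coefficients be unknown smooth functions of an index variable U:

```latex
% Varying-coefficient model: coefficients a_j are smooth functions of U.
Y = \sum_{j=1}^{p} a_j(U)\, X_j + \varepsilon,
\qquad \mathbb{E}\bigl(\varepsilon \mid U, X_1,\dots,X_p\bigr) = 0.
```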
In this paper, we give an approach for detecting one or more outliers in a randomized linear model. The likelihood ratio test statistic and its distributions under the null hypothesis and the alternative hypothesis are given. Furthermore, the robustness of the test statistic in a certain sense is proved. Finally, the optimality properties of the test are derived.
The problem of detecting a signal with multiple input multiple output (MIMO) radar in a correlated-Gaussian-clutter-dominated scenario with unknown covariance matrix is dealt with. The general MIMO model, with widely separated sub-arrays and co-located antennas at each sub-array, is adopted. First, the generalized likelihood ratio test (GLRT) with known covariance matrix is obtained; then the Rao and Wald detectors are devised, and it is proved that the Rao and Wald tests coincide with the GLRT detector. To make the detectors fully adaptive, signal-free secondary data are collected to estimate the covariance. The performance of the proposed detector is analyzed, although this analysis is ancillary. A thorough performance assessment by several numerical examples is also given, considering co-located antenna configurations of the transmit and receive arrays. The results show that the performance of the proposed adaptive detector is better than that of the LJ-GLRT, and the loss relative to their non-adaptive counterparts is acceptable.
The problem of adaptive detection under signal mismatch is considered; that is, the actual signal steering vector is not aligned with the nominal one. Two novel tunable detectors are proposed. They can control the degree to which mismatched signals are rejected. Remarkably, it is found that they both cover existing well-known detectors as special cases. More importantly, they possess the constant false alarm rate (CFAR) property and achieve enhanced mismatched-signal rejection or improved robustness compared with their natural competitors. Besides, they can provide slightly better matched-signal detection performance than the existing detectors.
An optimized detection model based on weighted entropy for multiple input multiple output (MIMO) radar in a multipath environment is presented. After defining the multipath distance difference (MDD), the multipath received-signal model with four paths is built systematically. Both the variance and the correlation coefficient of the multipath scattering coefficient with respect to MDD are analyzed, which indicates that multipath variables can degrade detection performance by reducing the echo power. Using the likelihood ratio test (LRT), a new method based on weighted entropy is introduced to exploit positive multipath echo power and suppress negative echo power, resulting in better performance. Simulation results show that, compared with the non-multipath environment or other recently developed methods, the proposed method achieves improved detection performance as the number of sensors increases.
The problem of adaptive radar detection in compound-Gaussian clutter without secondary data is considered in this paper. In most practical applications, the number of training data is limited. To overcome the lack of training data, an autoregressive (AR)-process-based covariance matrix estimator is proposed. Then, with the estimated covariance matrix, a one-step generalized likelihood ratio test (GLRT) detector is designed without training data. Finally, the detection performance of the proposed detector is assessed.
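A hedged sketch of the covariance-estimation idea: fit a low-order AR model by Yule-Walker to the data under test and convert it to a Toeplitz covariance matrix, in place of a sample covariance estimated from (unavailable) secondary data. The paper's estimator may differ in detail.

```python
# AR-based covariance matrix estimate from a single data snapshot.
import numpy as np
from scipy.linalg import toeplitz
from statsmodels.regression.linear_model import yule_walker
from statsmodels.tsa.arima_process import arma_acovf

def ar_covariance(x, order=2):
    """Toeplitz covariance implied by an AR(order) fit to the series x."""
    rho, sigma = yule_walker(x, order=order, method="mle")
    acov = arma_acovf(np.r_[1, -rho], np.r_[1.0],
                      nobs=len(x), sigma2=sigma**2)
    return toeplitz(acov)
```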
Mainly due to its implementation simplicity, the non-coherent Ultra-Wide Band (UWB) receiver is attractive for lower-data-rate applications and has regained much attention in recent years. In this paper, a Generalized Likelihood Ratio Test (GLRT) based non-coherent receiver for UWB Pulse-Position-Modulation (PPM) signals in multipath channels is derived, and a novel structure is proposed as well. Subsequently, closed-form expressions for the asymptotic error-rate performance of the non-coherent receiver are derived and verified.
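For background, the elementary non-coherent PPM decision that such receivers build on is simply an energy comparison between the two pulse positions; the GLRT-based structure derived in the paper refines this. A minimal sketch:

```python
# Non-coherent binary PPM decision by slot-energy comparison.
import numpy as np

def ppm_decide(r, slot_len):
    """r: received samples covering one symbol of two PPM slots."""
    e0 = np.sum(r[:slot_len] ** 2)                # energy in slot for bit 0
    e1 = np.sum(r[slot_len:2 * slot_len] ** 2)    # energy in slot for bit 1
    return int(e1 > e0)
```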
In this paper, we propose a test statistic to check whether the nonparametric function in partially linear models is linear. We estimate the nonparametric function under the alternative by the local linear method, and then estimate the parameters by the two-stage method. The test statistic under the null hypothesis is calculated and shown to be asymptotically normal.
The non-inferiority (NI) trial design based on the likelihood ratio test eliminates the dependency on the conventional NI margin, and it explicitly uses the minimum clinically important difference (MCID), which links the statistical analysis to clinical meaning. Unlike the conventional trial design, the new methodology is self-adaptive to changes in the sample size and overall cure rate, and it has an asymptotic property. It is shown that the MCID decomposes into a constant MCID and a statistical MCID. Under this framework, the concept of allowed inferiority no longer exists, and the interpretation of the trial result is more accurate and more consistent with statistical theory as well as with clinical interpretation.
This paper investigates the asymptotic properties of the modified likelihood ratio statistic for testing homogeneity in bivariate normal mixture models with an unknown structural parameter. It is shown that the modified likelihood ratio statistic has a χ²(2) null limiting distribution.
In recent years, machine learning algorithms, and in particular deep learning, have shown promising results in the legal domain. The legal field is strongly affected by information overload, due to the large amount of legal material stored in textual form. Legal text processing is essential in the legal domain for analyzing the texts of court events to automatically predict smart decisions. With an increasing number of digitally available documents, legal text processing is essential for analyzing documents and helps automate various legal domain tasks. Legal document classification is a valuable tool in legal services for enhancing the quality and efficiency of legal document review. In this paper, we propose the Sammon Keyword Mapping-based Quadratic Discriminant Recurrent Multilayer Perceptive Deep Neural Classifier (SKM-QDRMPDNC), a system that applies deep neural methods to the problem of legal document classification. The SKM-QDRMPDNC technique consists of many layers that perform keyword extraction and classification. First, a set of legal documents is collected from the dataset. Then keyword extraction is performed using the Sammon Mapping technique based on a distance measure. With the extracted features, Quadratic Discriminant Analysis is applied to perform document classification based on the likelihood ratio test. Finally, the classified legal documents are obtained at the output layer. This process is repeated until a minimum error is attained. The experimental assessment is carried out using several performance metrics (accuracy, precision, recall, F-measure, and computation time) on legal documents collected from the dataset. The results validate that the proposed SKM-QDRMPDNC technique provides improved accuracy, precision, recall, and F-measure with minimum computation time compared with existing methods.