Abstract: In the case of Z_+^d (d ≥ 2), the positive d-dimensional lattice points with the partial ordering ≤, let {X_k, k ∈ Z_+^d} be i.i.d. random variables with mean 0, and set S_n = ∑_(k≤n) X_k and V_n^2 = ∑_(j≤n) X_j^2. The precise asymptotics, as ε ↓ 0, of ∑_n 1/(|n|(log|n|)^d) P(|S_n/V_n| ≥ ε√(log log|n|)) and of ∑_n (log|n|)^b/(|n|(log|n|)^(d-1)) P(|S_n/V_n| ≥ ε√(log|n|)) are established.
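As a concrete illustration of the multi-indexed quantities above, here is a minimal numpy sketch (our illustration, not the paper's code; all names are ours) that computes S_n and V_n^2 for every index n in a 2-dimensional rectangle via cumulative sums:

```python
import numpy as np

# Minimal sketch (our illustration): for d = 2, compute
# S_n = sum_{k <= n} X_k and V_n^2 = sum_{k <= n} X_k^2 for every lattice
# point n in a rectangle, using 2-D cumulative sums over the partial order.
rng = np.random.default_rng(0)
N1, N2 = 200, 200
X = rng.standard_normal((N1, N2))          # i.i.d. mean-zero field X_k

S = X.cumsum(axis=0).cumsum(axis=1)        # S[n1-1, n2-1] = S_n
V2 = (X**2).cumsum(axis=0).cumsum(axis=1)  # V2[n1-1, n2-1] = V_n^2

T = S / np.sqrt(V2)                        # self-normalized sums S_n / V_n
print(T[-1, -1])                           # value at n = (N1, N2)
```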
Funding: National Natural Science Foundation of China (Grant Nos. 10471126 and 10671176).
Abstract: In this article, the unit root test for the AR(p) model with GARCH errors is considered. The Dickey-Fuller test statistics are rewritten in the form of self-normalized sums, and the asymptotic distribution of the test statistics is derived under weak conditions.
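To make the objects concrete, here is a minimal sketch (our illustration; the function name is ours, and the regression is the simplest no-constant AR(1) case rather than the paper's AR(p)-GARCH setting) of the Dickey-Fuller t-type statistic, which is indeed a ratio of sums normalized by a data-dependent quantity:

```python
import numpy as np

# Hedged sketch: Dickey-Fuller t-statistic for the no-constant AR(1) case
# y_t = rho * y_{t-1} + e_t, testing H0: rho = 1. Textbook form only,
# not the paper's AR(p)-GARCH construction.
def df_tstat(y):
    ylag, dy = y[:-1], np.diff(y)
    phi_hat = (ylag * dy).sum() / (ylag**2).sum()   # OLS estimate of rho - 1
    resid = dy - phi_hat * ylag
    s2 = (resid**2).sum() / (len(dy) - 1)           # residual variance
    se = np.sqrt(s2 / (ylag**2).sum())
    return phi_hat / se                             # self-normalized ratio

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(500))             # random walk under H0
print(df_tstat(y))                                  # compare to DF critical values
```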
Funding: National Natural Science Foundation of China (Grant Nos. 10671176 and 10771192).
Abstract: Let {X, X_n, n ≥ 1} be a sequence of strictly stationary φ-mixing positive random variables in the domain of attraction of the normal law. Under some suitable conditions, the principle for self-normalized products of partial sums is obtained.
Abstract: In this case study, we would like to illustrate the utility of characteristic functions, using an example of a sample statistic defined for samples from the Cauchy distribution. The derivation of the corresponding asymptotic probability density function is based on [1], elaborating and expanding the individual steps of their presentation and including a small extension; our reason for such plagiarism is to make the technique, its mathematical tools, and its ingenious arguments available to the widest possible audience.
Abstract: With the continuous growth of online news articles, there arises the need for an efficient abstractive summarization technique to address information overload. Abstractive summarization is highly complex and requires deeper understanding and proper reasoning to come up with its own summary outline. The abstractive summarization task is framed as seq2seq modeling. Existing seq2seq methods perform better on short sequences; for long sequences, however, performance degrades due to high computation cost. Hence, a two-phase self-normalized deep neural document summarization model, consisting of an improvised extractive cosine-normalization phase and a seq2seq abstractive phase, is proposed in this paper. The novelty is to parallelize the sequence computation in training by incorporating a feed-forward, self-normalized neural network in the extractive phase using Intra Cosine Attention Similarity (Ext-ICAS) with sentence dependency position; no explicit normalization technique is required. The proposed abstractive Bidirectional Long Short-Term Memory (Bi-LSTM) encoder sequence model performs better than the Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, with minimum training loss and fast convergence. The model was evaluated on the Cable News Network (CNN)/Daily Mail dataset, achieving an average ROUGE score of 0.435, and the number of similarity computations in the extractive phase was reduced by 59% on average.
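For orientation, here is a minimal, generic sketch of cosine-similarity-based extractive sentence scoring (our illustration only; it is not the paper's Ext-ICAS algorithm, whose attention and dependency-position details are not reproduced here):

```python
import numpy as np

# Generic sketch: score each sentence by its average cosine similarity to the
# other sentences of the document, then keep the top-k as an extractive phase.
# Illustration of the idea only, not the paper's Ext-ICAS method.
def extract_top_k(sentence_vecs, k=3):
    V = np.asarray(sentence_vecs, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit-normalize rows
    sim = V @ V.T                                      # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)                         # ignore self-similarity
    scores = sim.mean(axis=1)                          # intra-document centrality
    return np.argsort(scores)[::-1][:k]                # indices of top-k sentences

rng = np.random.default_rng(2)
vecs = rng.standard_normal((10, 64))                   # 10 toy sentence embeddings
print(extract_top_k(vecs, k=3))
```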
Funding: Supported by the Singapore Ministry of Education Academic Research Fund Tier 2 (Grant No. MOE2018-T2-2-076).
Abstract: The Berry-Esseen bound provides an upper bound on the Kolmogorov distance between a random variable and the normal distribution. In this paper, we establish Berry-Esseen bounds with optimal rates for self-normalized sums of locally dependent random variables, assuming only a second-moment condition. Our proof leverages Stein's method and introduces a novel randomized concentration inequality, which may be of independent interest for other applications. Our main results are applied to self-normalized sums of m-dependent random variables and graph dependency models.
Funding: Supported by National Natural Science Foundation of China (Grant No. 11971063).
Abstract: In this paper, we establish normalized and self-normalized Cramér-type moderate deviations for the Euler-Maruyama scheme for stochastic differential equations (SDEs). As consequences of our results, Berry-Esseen bounds and moderate deviation principles are also obtained. Our normalized Cramér-type moderate deviations refine the recent work of Lu et al. (2022).
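As background, a minimal numpy sketch of the Euler-Maruyama scheme itself (the standard textbook discretization; the drift and diffusion below are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

# Standard Euler-Maruyama discretization of dX_t = b(X_t) dt + sigma(X_t) dW_t:
# X_{k+1} = X_k + b(X_k) h + sigma(X_k) sqrt(h) Z_k, with Z_k ~ N(0, 1) i.i.d.
def euler_maruyama(b, sigma, x0, T=1.0, n_steps=1000, rng=None):
    rng = rng or np.random.default_rng()
    h = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        z = rng.standard_normal()
        x[k + 1] = x[k] + b(x[k]) * h + sigma(x[k]) * np.sqrt(h) * z
    return x

# Example: an Ornstein-Uhlenbeck-type SDE (illustrative coefficients).
path = euler_maruyama(b=lambda x: -x, sigma=lambda x: 1.0, x0=0.0)
print(path[-1])
```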
Funding: Supported by Hong Kong Research Grants Council (Grant Nos. HKUST6019/10P and HKUST6019/12P), National Natural Science Foundation of China (Grant Nos. 10871146 and 11271286), and the National University of Singapore (Grant No. R-155-000-106-112).
Abstract: Let X_1, X_2, ... be a sequence of independent random variables (r.v.s) belonging to the domain of attraction of a normal or stable law. In this paper, we study moderate deviations for the self-normalized sum ∑_(i=1)^n X_i / V_(n,p), where V_(n,p) = (∑_(i=1)^n |X_i|^p)^(1/p) (p > 1). Applications to the self-normalized law of the iterated logarithm, Studentized increments of partial sums, the t-statistic, and weighted sums of independent and identically distributed (i.i.d.) r.v.s are considered.
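A minimal sketch (our illustration; names are ours) of the p-norm self-normalizer V_(n,p) defined above:

```python
import numpy as np

# Compute the self-normalized sum sum(X_i) / V_{n,p} with
# V_{n,p} = (sum |X_i|^p)^(1/p), p > 1, as defined in the abstract above.
def self_normalized_sum(x, p=2.0):
    x = np.asarray(x, dtype=float)
    v_np = (np.abs(x) ** p).sum() ** (1.0 / p)
    return x.sum() / v_np

rng = np.random.default_rng(3)
x = rng.standard_cauchy(10_000)        # heavy-tailed: attraction to a stable law
print(self_normalized_sum(x, p=2.0))   # p = 2 recovers S_n / V_n
```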
Abstract: A Berry–Esseen bound is obtained for self-normalized martingales under the assumption of finite moments. The bound coincides with the classical Berry–Esseen bound for standardized martingales. An example is given to show the optimality of the bound. Applications to Student's statistic and autoregressive processes are also discussed.
Funding: Partially supported by Hong Kong Research Grants Council General Research Fund 14304917.
Abstract: Let {x_n, n ≥ 0} be a Markov chain with a countable state space S, let f(·) be a measurable function from S to R, and consider the functionals of the Markov chain y_n := f(x_n). We construct a new type of self-normalized sums based on the random-block scheme and establish Cramér-type moderate deviations for self-normalized sums of functionals of the Markov chain.
Funding: Supported by an NSERC Canada Discovery Grant of M. Csörgő at Carleton University, National Natural Science Foundation of China (Grant No. 10801122), Research Fund for the Doctoral Program of Higher Education of China (Grant No. 200803581009), and the Fundamental Research Funds for the Central Universities.
Abstract: Let {X, X_n, n ≥ 1} be a sequence of independent identically distributed random variables with EX = 0, and assume that EX^2 I(|X| ≤ x) is slowly varying as x → ∞, i.e., X is in the domain of attraction of the normal law. In this paper, a Strassen-type strong approximation is established for self-normalized sums of such random variables.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10971081 and 11101180).
Abstract: Let X, X_1, X_2, ... be a sequence of nondegenerate i.i.d. random variables with zero means, which is in the domain of attraction of the normal law. Let {a_(ni), 1 ≤ i ≤ n, n ≥ 1} be an array of real numbers satisfying some suitable conditions. In this paper, we show that a central limit theorem for self-normalized weighted sums holds. We also deduce a version of the almost sure central limit theorem (ASCLT) for self-normalized weighted sums.
Funding: Supported by Hong Kong Research Grants Council General Research Fund (Grant Nos. 14302515 and 14304917).
Abstract: Let X_1, X_2, ... be a sequence of independent random variables, and set S_n = ∑_(i=1)^n X_i and V_n^2 = ∑_(i=1)^n X_i^2. When the elements of the sequence are i.i.d., it is known that the self-normalized sum S_n/V_n converges to a standard normal distribution if and only if max_(1≤i≤n) |X_i|/V_n → 0 in probability and the mean of X_1 is zero. In this paper, sufficient conditions for the self-normalized central limit theorem are obtained for general independent random variables. It is also shown that if max_(1≤i≤n) |X_i|/V_n → 0 in probability, then these sufficient conditions are necessary.
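A quick numerical illustration (our sketch, not from the paper) of the two quantities in this characterization, S_n/V_n and max_(1≤i≤n) |X_i|/V_n, using the t(2) distribution, which has infinite variance yet lies in the domain of attraction of the normal law:

```python
import numpy as np

# Illustration (ours): for X ~ t(2), which has infinite variance but lies in
# the domain of attraction of the normal law, S_n/V_n is approximately N(0, 1)
# while max_i |X_i| / V_n becomes small, matching the characterization above.
rng = np.random.default_rng(4)
n, reps = 5000, 2000
x = rng.standard_t(df=2, size=(reps, n))
s = x.sum(axis=1)
v = np.sqrt((x**2).sum(axis=1))
t = s / v
print("sample variance of S_n/V_n (should be close to 1):", t.var())
print("typical max_i |X_i|/V_n:", np.median(np.abs(x).max(axis=1) / v))
```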
Funding: Grants from the National Natural Science Foundation of China (No. 11225104), the 973 Program (No. 2015CB352302), and the Fundamental Research Funds for the Central Universities.
Abstract: The sub-linear expectation, also called G-expectation, is a non-linear expectation with the advantage of modeling non-additive probability problems and volatility uncertainty in finance. Let {X_n; n ≥ 1} be a sequence of independent random variables in a sub-linear expectation space (Ω, H, Ê). Denote S_n = ∑_(k=1)^n X_k and V_n^2 = ∑_(k=1)^n X_k^2. In this paper, a moderate deviation for self-normalized sums, that is, the asymptotic capacity of the event {S_n/V_n ≥ x_n} for x_n = o(√n), is found both for identically distributed random variables and for independent but not necessarily identically distributed random variables. As an application, the self-normalized laws of the iterated logarithm are obtained. A Bernstein-type inequality is also established for proving the law of the iterated logarithm.
Funding: Research supported by grants from the National Natural Science Foundation of China (No. 11225104), the 973 Program (No. 2015CB352302), and the Fundamental Research Funds for the Central Universities.
Abstract: G-Brownian motion has a very rich and interesting new structure that nontrivially generalizes the classical Brownian motion. Its quadratic variation process is also a continuous process with independent and stationary increments. We prove a self-normalized functional central limit theorem for independent and identically distributed random variables under the sub-linear expectation, with the limit process being a G-Brownian motion self-normalized by its quadratic variation. To prove the self-normalized central limit theorem, we also establish a new Donsker's invariance principle with the limit process being a generalized G-Brownian motion.
Funding: Supported by National Natural Science Foundation of China (Grant Nos. 11301481, 11371321 and 10901138), National Statistical Science Research Project of China (Grant No. 2012LY174), Zhejiang Provincial Natural Science Foundation of China (Grant No. LQ12A01018), the Fundamental Research Funds for the Central Universities, and Zhejiang Provincial Key Research Base for Humanities and Social Science Research (Statistics).
Abstract: Let {X, X_n; n ≥ 0} be a sequence of independent and identically distributed random variables with EX = 0, and assume that EX^2 I(|X| ≤ x) is slowly varying as x → ∞, i.e., X is in the domain of attraction of the normal law. In this paper, a self-normalized law of the iterated logarithm for the geometrically weighted random series ∑_(n=0)^∞ β^n X_n (0 < β < 1) is obtained under some minimal conditions.
Funding: National Key R&D Program of China (2018AAA0102600), National Natural Science Foundation of China (Nos. 61876215 and 62106119), Beijing Academy of Artificial Intelligence (BAAI), China, Chinese Institute for Brain Research, Beijing, and the Science and Technology Major Project of Guangzhou, China (202007030006).
Abstract: Self-normalizing neural networks (SNNs) regulate the activation and gradient flows through activation functions with the self-normalization property. As SNNs do not rely on norms computed from minibatches, they are more friendly to data parallelism, kernel fusion, and emerging architectures such as ReRAM-based accelerators. However, existing SNNs have mainly demonstrated their effectiveness on toy datasets and fall short in accuracy when dealing with large-scale tasks like ImageNet. They lack the strong normalization, regularization, and expression power required for wider, deeper models and larger-scale tasks. To enhance the normalization strength, this paper introduces a comprehensive and practical definition of the self-normalization property in terms of the stability and attractiveness of the statistical fixed points. It is comprehensive as it jointly considers all the fixed points used by existing studies: the first and second moments of the forward activation and the expected Frobenius norm of the backward gradient. The practicality comes from the analytical equations provided by our paper to assess the stability and attractiveness of each fixed point, which are derived from theoretical analysis of the forward and backward signals. The proposed definition is applied to a meta activation function inspired by prior research, leading to a stronger self-normalizing activation function named "bi-scaled exponential linear unit with backward standardized" (bSELU-BSTD). We provide both theoretical and empirical evidence to show that it is superior to existing studies. To enhance the regularization and expression power, we further propose scaled-Mixup and channel-wise scale & shift. With these three techniques, our approach achieves 75.23% top-1 accuracy on ImageNet with Conv MobileNet V1, surpassing the performance of existing self-normalizing activation functions. To the best of our knowledge, this is the first SNN that achieves comparable accuracy to batch normalization on ImageNet.
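For reference, a minimal numpy sketch of the baseline self-normalizing activation, SELU (Klambauer et al., 2017), whose fixed-point constants λ ≈ 1.0507 and α ≈ 1.6733 keep zero-mean, unit-variance inputs near that fixed point; this is the classical baseline, not the paper's bSELU-BSTD variant:

```python
import numpy as np

# SELU (Klambauer et al., 2017): scale * (x if x > 0 else alpha * (exp(x) - 1)).
# With these constants, zero-mean unit-variance inputs are driven toward the
# (0, 1) moment fixed point: the self-normalization property discussed above.
# Note: classical baseline activation, not the paper's bSELU-BSTD.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

z = np.random.default_rng(5).standard_normal(1_000_000)
y = selu(z)
print(y.mean(), y.var())   # both stay close to 0 and 1 at the fixed point
```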
Abstract: The past two decades have witnessed the active development of a rich probability theory of Studentized statistics or self-normalized processes, typified by Student's t-statistic as introduced by W. S. Gosset more than a century ago, and their applications to statistical problems in high dimensions, including feature selection and ranking, large-scale multiple testing and sparse, high-dimensional signal detection. Many of these applications rely on the robustness property of Studentization/self-normalization against heavy-tailed sampling distributions. This paper gives an overview of the salient progress of self-normalized limit theory, from Student's t-statistic to more general Studentized nonlinear statistics. Prototypical examples include Studentized one- and two-sample U-statistics. Furthermore, we go beyond independence and glimpse some very recent advances in self-normalized moderate deviations under dependence.
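The bridge between Student's t-statistic and the self-normalized sum S_n/V_n that recurs throughout this collection is a classical identity (here X̄_n denotes the sample mean and s_n the sample standard deviation; notation ours):

$$ t_n = \frac{\sqrt{n}\,\bar{X}_n}{s_n} = \frac{S_n}{V_n}\left(\frac{n-1}{n-(S_n/V_n)^2}\right)^{1/2}, \qquad S_n=\sum_{i=1}^{n} X_i,\quad V_n^2=\sum_{i=1}^{n} X_i^2. $$

Since the map is monotone, the tail events are interchangeable: {t_n ≥ x} = {S_n/V_n ≥ x(n/(n+x^2-1))^(1/2)}, which is why limit theorems for S_n/V_n translate directly into results for the t-statistic.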
Funding: National Natural Science Foundation of China (Grant Nos. 71032005 and 70802035), the MOE Project of Key Research Institute of Humanities and Social Science in University (Grant No. 07JJD63007), and supported in part by the National University of Singapore (Grant No. R-155-050-095-112).
Abstract: Saddlepoint approximations for studentized compound Poisson sums with no moment conditions in audit sampling are derived. This result not only provides a very accurate approximation for studentized compound Poisson sums, but can also be applied much more widely in statistical inference on the error amount in an audit population of accounts to check the validity of a firm's financial statements. Some numerical illustrations and a comparison with the normal approximation method are presented.
Funding: Project supported by the National Natural Science Foundation of China, an NSERC Canada grant of M. Csörgő at Carleton University of Canada, the Fok Yingtung Education Foundation, and an NSERC Canada Scientific Exchange Award at Carleton University.
Abstract: Using a suitable self-normalization for partial sums of i.i.d. random variables, Griffin and Kuelbs established the law of the iterated logarithm for all distributions in the domain of attraction of a normal law. We obtain the corresponding results for Studentized increments of partial sums under the same condition.