Journal articles: 3 results found
1. Variable Selection for Doubly Truncated Regression Models (双侧截断回归模型的变量选择)
Authors: 郑明, 林婵娟, 郁文. 《中国科学:数学》 (Scientia Sinica Mathematica), CSCD, Peking University Core, 2022, No. 12, pp. 1433-1448 (16 pages)
Abstract: In regression analysis, when the response variable is doubly truncated, existing statistical methods produce biased coefficient estimates and biased variable selection for the regression model. This paper proposes a coefficient-estimation and variable-selection method for doubly truncated regression models, and the method allows the number of covariates to diverge to infinity as the sample size grows. The main idea is to construct a Mann-Whitney-type loss function to correct the truncation bias and then add an adaptive least absolute shrinkage and selection operator (LASSO) penalty for variable selection. An iterative algorithm is designed to optimize the loss function; consistency and asymptotic normality of the proposed estimator are established, and the oracle property of the proposed variable-selection procedure is proved. Simulation studies demonstrate the finite-sample performance of the method, and it is applied to a real data set from astronomy.
Keywords: double truncation; variable selection; diverging dimension; adaptive LASSO; least absolute deviation; oracle property
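The estimation recipe described in the abstract — a pairwise Mann-Whitney-type loss for bias correction plus an adaptive LASSO penalty, fitted by an iterative optimizer — can be illustrated with a minimal sketch. The code below is a generic stand-in only: it uses a Gehan-type pairwise rank loss on fully observed (untruncated) data, SciPy's Powell optimizer in place of the paper's iterative algorithm, and an arbitrary penalty level lam; none of these choices are taken from the paper.

```python
# Hedged sketch: pairwise rank (Gehan-type) loss with an adaptive LASSO penalty.
# A generic illustration of the "pairwise loss + adaptive L1" recipe, NOT the
# paper's bias-corrected estimator for doubly truncated responses.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 0.8])
y = X @ beta_true + rng.normal(size=n)

def gehan_loss(beta):
    """Pairwise Mann-Whitney/Gehan-type loss: mean of (e_i - e_j)_+ over all pairs."""
    e = y - X @ beta
    diff = e[:, None] - e[None, :]          # e_i - e_j for every pair (i, j)
    return np.maximum(diff, 0.0).mean()

# Step 1: unpenalized rank estimate, used to build the adaptive weights.
beta_init = minimize(gehan_loss, np.zeros(p), method="Powell").x
weights = 1.0 / (np.abs(beta_init) + 1e-8)  # adaptive LASSO weights ~ 1/|beta_init|

# Step 2: penalized objective = rank loss + lam * weighted L1 norm.
lam = 0.05
def penalized(beta):
    return gehan_loss(beta) + lam * np.sum(weights * np.abs(beta))

beta_hat = minimize(penalized, beta_init, method="Powell").x
print(np.round(beta_hat, 3))   # small coefficients should be shrunk toward zero
```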
2. The Adaptive LASSO Spline Estimation of Single-Index Model (cited 4 times)
Authors: LU Yiqiang, ZHANG Riquan, HU Bin. Journal of Systems Science & Complexity, SCIE, EI, CSCD, 2016, No. 4, pp. 1100-1111 (12 pages)
Abstract: In this paper, based on spline approximation, the authors propose a unified variable selection approach for the single-index model via an adaptive L1 penalty. Calculation methods for the proposed estimators are given on the basis of the known LARS algorithm. Under some regularity conditions, the authors demonstrate the asymptotic properties of the proposed estimators and the oracle property of adaptive LASSO (aLASSO) variable selection. Simulations are used to investigate the performance of the proposed estimator and illustrate that it is effective for simultaneous variable selection as well as estimation of single-index models.
Keywords: adaptive LASSO; B-spline; oracle property; single-index model; variable selection
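The spline-plus-adaptive-L1 idea can be sketched as a profile fit: for a candidate index direction, the unknown link is approximated by a spline least-squares fit, and the resulting residual sum of squares plus a weighted L1 penalty is minimized over the direction. The sketch below swaps in a truncated-power cubic spline basis and a generic optimizer for the paper's B-spline/LARS procedure; knot placement, the penalty level, and the beta_1 = 1 identifiability convention are illustrative assumptions.

```python
# Hedged sketch: spline approximation + adaptive L1 penalty for a single-index model
# y = g(x' beta) + eps. Truncated-power cubic splines and a generic optimizer stand
# in for the paper's B-spline/LARS machinery.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 300, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.75, 0.0, 0.0])         # beta_1 fixed at 1 for identifiability
y = np.sin(X @ beta_true) + 0.1 * rng.normal(size=n)

def spline_basis(u, knots):
    """Cubic truncated-power spline basis evaluated at the index values u."""
    cols = [np.ones_like(u), u, u**2, u**3]
    cols += [np.maximum(u - k, 0.0) ** 3 for k in knots]
    return np.column_stack(cols)

def profile_rss(theta):
    """Profile out the unknown link g by spline least squares at beta = (1, theta)."""
    u = X @ np.concatenate(([1.0], theta))
    B = spline_basis(u, np.quantile(u, [0.25, 0.5, 0.75]))
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.mean((y - B @ coef) ** 2)

# Unpenalized profile fit gives adaptive weights; the penalized refit shrinks
# irrelevant components of the index direction toward zero.
theta0 = minimize(profile_rss, np.zeros(p - 1), method="Powell").x
w = 1.0 / (np.abs(theta0) + 1e-8)
lam = 0.02
penalized = lambda th: profile_rss(th) + lam * np.sum(w * np.abs(th))
theta_hat = minimize(penalized, theta0, method="Powell").x
print("estimated index direction:", np.round(np.concatenate(([1.0], theta_hat)), 3))
```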
3. Penalized least squares estimation with weakly dependent data (cited 2 times)
Authors: FAN JianQing, QI Lei, TONG Xin. Science China Mathematics, SCIE, CSCD, 2016, No. 12, pp. 2335-2354 (20 pages)
Abstract: In the statistics and machine learning communities, the last fifteen years have witnessed a surge of high-dimensional models backed by penalized methods and other state-of-the-art variable selection techniques. The high-dimensional models referred to here differ from conventional models in that the number of parameters p and the number of significant parameters s are both allowed to grow with the sample size T. When field-specific knowledge is preliminary, and in view of the recent and potential affluence of data from genetics, finance, on-line social networks and other fields, such (s, T, p)-triply diverging models enjoy great flexibility in modeling and can be used as a data-guided first step of investigation. However, model selection consistency and other theoretical properties had been addressed only for independent data, leaving time series largely uncovered. On a simple linear regression model endowed with a weakly dependent sequence, this paper applies a penalized least squares (PLS) approach. Under regularity conditions, the authors show sign consistency, derive a finite-sample bound that holds with high probability for the estimation error, and prove that the PLS estimate is consistent in the L_2 norm with rate (s log s / T)^{1/2}.
Keywords: weakly dependent; high-dimensional model; oracle property; model selection consistency; penalized least squares
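The paper's setting — L1-penalized least squares on a linear model whose covariates form a weakly dependent sequence — can be illustrated with a small simulation. The sketch below generates AR(1) covariates and noise and fits scikit-learn's Lasso; the AR coefficient phi, the penalty level, and the (T, p, s) values are illustrative assumptions, and the sign check is only a rough proxy for the paper's sign-consistency result.

```python
# Hedged sketch: L1-penalized least squares (Lasso) on weakly dependent data.
# Covariates and noise follow AR(1) processes; all tuning choices are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
T, p, s = 400, 50, 5                               # sample size, dimension, sparsity
beta = np.zeros(p)
beta[:s] = [3.0, -2.0, 1.5, -1.0, 2.5]             # the s significant coefficients

phi = 0.5                                          # AR(1) coefficient -> weak dependence
X, eps = np.zeros((T, p)), np.zeros(T)
x_prev, e_prev = rng.normal(size=p), 0.0
for t in range(T):
    x_prev = phi * x_prev + np.sqrt(1 - phi**2) * rng.normal(size=p)
    e_prev = phi * e_prev + np.sqrt(1 - phi**2) * rng.normal()
    X[t], eps[t] = x_prev, e_prev
y = X @ beta + eps

# Penalized least squares; the penalty level scales like sqrt(log p / T).
fit = Lasso(alpha=0.5 * np.sqrt(np.log(p) / T)).fit(X, y)
print("signs of significant coefficients recovered:",
      np.array_equal(np.sign(fit.coef_[:s]), np.sign(beta[:s])))
print("L2 estimation error:", np.round(np.linalg.norm(fit.coef_ - beta), 3))
```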