Abstract: To address two drawbacks of existing algorithms, namely the slow convergence of their multi-label classifiers and label-query strategies that ignore the discriminative power of features, this paper proposes a multi-label online active learning algorithm based on discrimination sampling and the mirror descent rule (multi-label active mirror descent by discrimination sampling, MLAMD_D). MLAMD_D uses the binary relevance strategy to decompose a multi-label classification problem with C labels into C independent binary classification problems, updates each binary classifier with the mirror descent rule, and queries labels with a discrimination-based sampling strategy. MLAMD_D is compared with existing algorithms and with a counterpart based on random sampling and the mirror descent rule (multi-label active mirror descent by random sampling, MLAMD_R) on six multi-label classification datasets. The experimental results show that MLAMD_D outperforms the other multi-label online active learning algorithms in multi-label classification performance, demonstrating its feasibility and effectiveness for multi-label online active learning tasks.
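The abstract states the pipeline but not its implementation details. The sketch below illustrates one plausible reading in Python: binary relevance splits the C-label problem into C online binary classifiers, each updated by a mirror-descent rule, and a discrimination-style criterion decides when to query the oracle for labels. The entropic mirror map (EG+/- update), the hinge loss, the smallest-absolute-margin query rule, and all names (BinaryEGClassifier, MLAMD, threshold) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

class BinaryEGClassifier:
    """One binary relevance component trained with exponentiated gradient
    (online mirror descent under the entropy mirror map) on the hinge loss.
    This mirror-map choice is an assumption for illustration."""

    def __init__(self, dim, eta=0.1, U=10.0):
        self.wp = np.full(dim, U / (2 * dim))  # positive weight half
        self.wn = np.full(dim, U / (2 * dim))  # negative weight half
        self.eta, self.U = eta, U

    def margin(self, x):
        return float((self.wp - self.wn) @ x)

    def update(self, x, y):                    # y in {-1, +1}
        if y * self.margin(x) >= 1.0:          # zero hinge loss: no update
            return
        g = -y * x                             # hinge subgradient
        self.wp *= np.exp(-self.eta * g)       # multiplicative (mirror) step
        self.wn *= np.exp(self.eta * g)
        z = (self.wp.sum() + self.wn.sum()) / self.U
        self.wp /= z                           # renormalize to total mass U
        self.wn /= z

class MLAMD:
    """Binary relevance wrapper with an assumed discrimination-based
    query rule: ask the oracle when the least confident per-label margin
    falls below a threshold."""

    def __init__(self, dim, n_labels, threshold=1.0):
        self.clfs = [BinaryEGClassifier(dim) for _ in range(n_labels)]
        self.threshold = threshold             # query threshold (assumed)

    def predict(self, x):
        return np.array([c.margin(x) > 0 for c in self.clfs], dtype=int)

    def observe(self, x, oracle):
        disc = min(abs(c.margin(x)) for c in self.clfs)
        if disc < self.threshold:              # uncertain: query the labels
            y = oracle(x)                      # label vector in {0,1}^C
            for c, yl in zip(self.clfs, y):
                c.update(x, 2 * yl - 1)

# Toy usage with a hypothetical linear oracle over 3 labels.
rng = np.random.default_rng(1)
W_true = rng.normal(size=(3, 5))
oracle = lambda x: (W_true @ x > 0).astype(int)
model = MLAMD(dim=5, n_labels=3)
for _ in range(2000):
    model.observe(rng.normal(size=5), oracle)
```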
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61170092, 61133011, and 61103091)
Abstract: This paper develops a novel online algorithm, moving average stochastic variational inference (MASVI), which uses the results of previous iterations to smooth out noisy natural gradients. We analyze the convergence of the proposed algorithm and conduct experiments on two large-scale collections containing millions of documents. The results show that, compared with the stochastic variational inference (SVI) and SGRLD algorithms, MASVI converges faster and achieves better performance.
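To make the moving-average idea concrete, the sketch below applies MASVI-style smoothing to SVI natural-gradient updates on a toy conjugate model (the mean of a Gaussian with known variance), where the noisy natural gradient has the closed form lambda_hat - lambda. The exponential moving average with weight alpha, the step-size schedule, and the toy model itself are assumptions for illustration; the paper targets topic models over millions of documents, and its exact averaging scheme may differ from a plain EMA.

```python
import numpy as np

rng = np.random.default_rng(0)
N, true_mu, sigma = 100_000, 2.5, 1.0
data = rng.normal(true_mu, sigma, size=N)

# Natural parameters (eta1, eta2) of the Gaussian prior N(0, 10^2) on mu,
# for the family  q(mu) ∝ exp(eta1 * mu + eta2 * mu^2).
prior = np.array([0.0, -1.0 / (2 * 10.0**2)])
lam = prior.copy()            # global variational natural parameters
g_avg = np.zeros(2)           # moving average of natural gradients

batch_size, alpha = 100, 0.1  # alpha: EMA weight (assumed, not from the paper)
for t in range(1, 501):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Rescaled minibatch sufficient statistics: each x contributes
    # (x / sigma^2, -1 / (2 sigma^2)) to the natural parameters.
    stats = np.array([batch.sum() / sigma**2,
                      -batch_size / (2 * sigma**2)])
    lam_hat = prior + (N / batch_size) * stats   # noisy optimum estimate
    g = lam_hat - lam                            # noisy natural gradient
    g_avg = (1 - alpha) * g_avg + alpha * g      # moving-average smoothing
    rho = (t + 10.0) ** -0.7                     # Robbins-Monro step size
    lam = lam + rho * g_avg

post_mean = -lam[0] / (2 * lam[1])               # recover the mean from
print(f"posterior mean estimate: {post_mean:.3f}")  # natural parameters
```

With smoothing, each step averages information across minibatches, so the variance of the update direction shrinks relative to plain SVI at the cost of a small bias from stale gradients, which is the trade-off the abstract's faster-convergence claim concerns.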