Journal Articles
3 articles found
1. Stochastic Variational Inference-Based Parallel and Online Supervised Topic Model for Large-Scale Text Processing (Cited by: 1)
Authors: Yang Li, Wen-Zhuo Song, Bo Yang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, Issue 5, pp. 1007-1022 (16 pages)
Topic modeling is a mainstream and effective technology for text data, with wide applications in text analysis, natural language processing, personalized recommendation, computer vision, etc. Among known topic models, supervised Latent Dirichlet Allocation (sLDA) is acknowledged as a popular and competitive supervised topic model. However, the growing scale of datasets makes sLDA increasingly inefficient and time-consuming, limiting its applications to a narrow range. To address this, a parallel online sLDA, named PO-sLDA (Parallel and Online sLDA), is proposed in this study. It uses stochastic variational inference as the learning method to make the training procedure more rapid and efficient, and a parallel computing mechanism implemented via the MapReduce framework is proposed to exploit the capacity of cloud computing and big data processing. The online training capacity supported by PO-sLDA expands the application scope of the approach, making it instrumental for real-life applications with high real-time demands. Validation on two datasets of different sizes shows that the proposed approach achieves accuracy comparable to sLDA while substantially accelerating the training procedure. Moreover, its good convergence and online training capacity make it well suited for large-scale text data analysis and processing.
Keywords: topic modeling, large-scale text classification, stochastic variational inference, cloud computing, online learning
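The abstract above builds on the classical SVI recipe: local variational steps on a minibatch of documents, followed by a noisy natural-gradient update of the global topic parameters with a decreasing learning rate. Below is a minimal Python sketch of that core loop for plain (unsupervised) LDA in the style of Hoffman et al.; the supervised response variable and the MapReduce layer from the paper are omitted, and all names, shapes, and hyperparameters (`K`, `V`, `tau0`, `kappa`, the synthetic `minibatch_stream`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the SVI core that PO-sLDA builds on; synthetic data only.
import numpy as np
from scipy.special import digamma

K, V = 20, 2000          # number of topics, vocabulary size (assumed)
D = 100_000              # nominal corpus size (assumed)
alpha, eta = 0.1, 0.01   # Dirichlet priors
tau0, kappa = 64.0, 0.7  # step-size schedule: rho_t = (t + tau0)^(-kappa)

rng = np.random.default_rng(0)
lam = rng.gamma(100.0, 0.01, size=(K, V))   # global topic-word parameters

def minibatch_stream(n_steps=200):
    """Yields synthetic (word_ids, counts) single-document minibatches."""
    for _ in range(n_steps):
        ids = rng.choice(V, size=50, replace=False)
        yield ids, rng.integers(1, 5, size=50).astype(float)

def local_step(doc_ids, doc_counts, lam, iters=30):
    """Per-document E-step: returns expected topic-word counts."""
    Elog_beta = digamma(lam) - digamma(lam.sum(1, keepdims=True))
    gamma = np.ones(K)                       # per-document topic proportions
    for _ in range(iters):
        Elog_theta = digamma(gamma) - digamma(gamma.sum())
        # phi[k, n] proportional to exp(E[log theta_k] + E[log beta_{k, w_n}])
        log_phi = Elog_theta[:, None] + Elog_beta[:, doc_ids]
        phi = np.exp(log_phi - log_phi.max(0))
        phi /= phi.sum(0)
        gamma = alpha + phi @ doc_counts
    stats = np.zeros_like(lam)
    stats[:, doc_ids] = phi * doc_counts
    return stats

# In PO-sLDA, the local steps over a minibatch would be the "map" tasks
# and the summation of their statistics the "reduce" step.
for t, (doc_ids, doc_counts) in enumerate(minibatch_stream()):
    stats = local_step(doc_ids, doc_counts, lam)
    rho = (t + tau0) ** (-kappa)             # decreasing learning rate
    lam = (1 - rho) * lam + rho * (eta + D * stats)  # noisy natural-gradient step
```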
2. Trust-Region Based Stochastic Variational Inference for Distributed and Asynchronous Networks
Authors: FU Weiming, QIN Jiahu, LING Qing, KANG Yu, YE Baijia. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2022, Issue 6, pp. 2062-2076 (15 pages)
Stochastic variational inference is an efficient Bayesian inference technique for massive datasets, which approximates posteriors by using noisy gradient estimates. Traditional stochastic variational inference can only be performed in a centralized manner, which limits its applications in the wide range of situations where data is possessed by multiple nodes. Therefore, this paper develops a novel trust-region based stochastic variational inference algorithm for a general class of conjugate-exponential models over distributed and asynchronous networks, where the global parameters are diffused over the network by using the Metropolis rule and the local parameters are updated by using the trust-region method. In addition, a simple rule is introduced to balance the transmission frequencies between neighboring nodes so that the proposed distributed algorithm can be performed in an asynchronous manner. The utility of the proposed algorithm is tested by fitting the Bernoulli model and the Gaussian model to different datasets on a synthetic network, and experimental results demonstrate its effectiveness and advantages over existing works.
Keywords: asynchronous networks, Bayesian inference, distributed algorithm, stochastic variational inference, trust-region method
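As a rough illustration of the diffusion pattern this abstract describes, the sketch below mixes each node's copy of the global parameters with its neighbors' copies using Metropolis weights, then lets every node take a local step on its own data. This is only a schematic under assumed simplifications: the trust-region subproblem is replaced by a plain damped update (`local_update`), the asynchrony and transmission-balancing rule are not modeled, and the network, model, and step size are synthetic.

```python
# Hedged sketch: Metropolis-weight diffusion plus local updates on a
# synthetic network; NOT the paper's trust-region algorithm.
import numpy as np

def metropolis_weights(adj):
    """Metropolis rule: w_ij = 1 / (1 + max(deg_i, deg_j)) for each edge,
    with w_ii chosen so every row sums to 1 (doubly stochastic)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def local_update(theta_i, data_i, rho=0.05):
    """Stand-in for the per-node trust-region SVI step: a plain damped
    move toward the node's local sufficient statistics."""
    return (1 - rho) * theta_i + rho * data_i.mean(axis=0)

rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])            # synthetic 4-node network
W = metropolis_weights(adj)
theta = rng.normal(size=(4, 8))           # each node keeps its own copy of the global parameters
local_data = [rng.normal(loc=3.0, size=(100, 8)) for _ in range(4)]

for _ in range(200):
    theta = W @ theta                     # diffuse parameters over the network
    for i in range(4):                    # (true asynchrony abstracted away)
        theta[i] = local_update(theta[i], local_data[i])
# Nodes drift toward a consensus estimate around the shared data mean (~3).
```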
3. Tuning the Learning Rate for Stochastic Variational Inference
Authors: Xi-Ming Li, Ji-Hong Ouyang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2016, Issue 2, pp. 428-436 (9 pages)
Stochastic variational inference (SVI) can learn topic models from very large corpora. It optimizes the variational objective with the stochastic natural gradient algorithm and a decreasing learning rate. This rate is crucial for SVI; however, it is often tuned by hand in real applications. To address this, we develop a novel algorithm that tunes the learning rate of each iteration adaptively. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution with a noisy update and that with a batch update, and then optimizes the learning rate by minimizing this KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better and converges faster than commonly used learning rates.
Keywords: stochastic variational inference, online learning, adaptive learning rate, topic model
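The idea in this abstract, choosing the learning rate by minimizing a KL divergence between the noisily-updated and batch-updated variational distributions, can be illustrated with a toy grid search. The sketch below assumes Dirichlet variational factors (for which the KL divergence has a closed form) and a synthetic stand-in for the batch target; the paper's actual derivation for LDA and HDP updates is more involved.

```python
# Toy illustration of KL-based learning-rate selection; synthetic values only.
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(a, b):
    """Closed-form KL( Dir(a) || Dir(b) )."""
    a0 = a.sum()
    return (gammaln(a0) - gammaln(a).sum()
            - gammaln(b.sum()) + gammaln(b).sum()
            + ((a - b) * (digamma(a) - digamma(a0))).sum())

rng = np.random.default_rng(2)
lam = rng.gamma(2.0, 1.0, size=50)         # current variational parameter
lam_noisy = rng.gamma(2.0, 1.0, size=50)   # target implied by one noisy minibatch step
lam_batch = 0.6 * lam + 0.4 * lam_noisy    # synthetic stand-in for the batch-update target

# lambda(rho) = (1 - rho) * lam + rho * lam_noisy; pick the rho whose
# updated distribution is closest (in KL) to the batch-updated one.
grid = np.linspace(0.01, 1.0, 100)
kls = [kl_dirichlet((1 - r) * lam + r * lam_noisy, lam_batch) for r in grid]
print(f"adaptive learning rate: {grid[int(np.argmin(kls))]:.3f}")  # ~0.4 here
```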