
Naïve Parallel LDA  (Cited by: 8)
Abstract: Parallel latent Dirichlet allocation (LDA) incurs heavy time costs in both computation and communication, which makes training a model slow and has kept the method from being widely applied. This paper proposes a naive parallel LDA algorithm with a separate improvement for each bottleneck. First, it reduces the granularity of text training by assigning each word an impact factor and filtering with a threshold; second, it reduces communication time by lowering the communication frequency. Experimental results show that the optimized parallel LDA speeds up training by 36% while keeping the accuracy loss within 1%, effectively improving the parallel speedup ratio.
Source: Computer Science (《计算机科学》), CSCD, Peking University Core Journal, 2015, No. 6, pp. 243-246 (4 pages)
Funding: National Natural Science Foundation of China (61003154, 61373092, 61033013, 61272449, 61202029); Major Project of the Jiangsu Provincial Department of Education (12KJA520004); Soochow University Innovation Team (SDT2012B02); Guangdong Provincial Key Laboratory Open Project (SZU-GDPHPCL-2012-09)
Keywords: latent Dirichlet allocation; parallelism; speedup optimization
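
The abstract names the two optimizations but not their exact formulations. The sketch below is a rough illustration that layers both on an AD-LDA-style collapsed Gibbs sampler (in the spirit of references 6 and 8 below); the tf-idf-style impact factor, the quantile threshold, and parameter names such as sync_interval and impact_quantile are assumptions made for this sketch, not definitions taken from the paper.

import numpy as np

def impact_factors(docs, V):
    """Assumed tf-idf-like impact factor for each word type."""
    tf = np.zeros(V)
    df = np.zeros(V)
    for doc in docs:
        counts = np.bincount(doc, minlength=V)
        tf += counts
        df += (counts > 0)
    return tf * np.log((len(docs) + 1) / (df + 1))

def train(docs, V, K=10, iters=50, alpha=0.1, beta=0.01,
          n_workers=4, sync_interval=5, impact_quantile=0.2, seed=0):
    rng = np.random.default_rng(seed)
    D = len(docs)

    # Optimization 1 (computation): words whose impact factor falls below
    # a threshold are never resampled, reducing the training granularity.
    imp = impact_factors(docs, V)
    keep = imp >= np.quantile(imp, impact_quantile)

    # Random topic initialization and count matrices.
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    ndk = np.zeros((D, K))   # document-topic counts
    nkv = np.zeros((K, V))   # global topic-word counts
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            ndk[d, z[d][n]] += 1
            nkv[z[d][n], w] += 1

    # Partition documents across workers. The workers are simulated
    # sequentially here; a real implementation would use processes or MPI.
    parts = np.array_split(np.arange(D), n_workers)
    local_nkv = [nkv.copy() for _ in range(n_workers)]

    for it in range(iters):
        for p, doc_ids in enumerate(parts):
            lkv = local_nkv[p]          # this worker's topic-word counts
            lk = lkv.sum(axis=1)        # this worker's topic totals
            for d in doc_ids:
                for n, w in enumerate(docs[d]):
                    if not keep[w]:
                        continue        # low-impact word: skip resampling
                    k_old = z[d][n]
                    ndk[d, k_old] -= 1
                    lkv[k_old, w] -= 1
                    lk[k_old] -= 1
                    # Collapsed Gibbs conditional p(z = k | everything else).
                    prob = (ndk[d] + alpha) * (lkv[:, w] + beta) / (lk + V * beta)
                    k_new = rng.choice(K, p=prob / prob.sum())
                    z[d][n] = k_new
                    ndk[d, k_new] += 1
                    lkv[k_new, w] += 1
                    lk[k_new] += 1
        # Optimization 2 (communication): merge local counts into the global
        # counts only every `sync_interval` iterations rather than every
        # iteration, lowering the communication frequency.
        if (it + 1) % sync_interval == 0:
            nkv += sum(m - nkv for m in local_nkv)
            local_nkv = [nkv.copy() for _ in range(n_workers)]
    return nkv

Toy usage under the same assumptions: build docs = [rng.integers(50, size=12) for _ in range(20)] over a 50-word vocabulary and call train(docs, V=50, K=5, iters=20). Raising sync_interval trades a little sampling accuracy for fewer synchronizations, which is the communication-side trade-off behind the 36% speedup at no more than 1% accuracy loss reported in the abstract.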

References (11)

  • 1 Deerwester S C, Dumais S T, Landauer T K, et al. Indexing by latent semantic analysis[J]. Journal of the American Society for Information Science, 1990, 41(6): 391-407.
  • 2 Hofmann T. Probabilistic latent semantic indexing[C]//SIGIR. 1999: 50-57.
  • 3 Blei D M, Ng A Y, Jordan M I. Latent Dirichlet allocation[C]//Neural Information Processing Systems. 2001: 601-608.
  • 4 Griffiths T L, Steyvers M. Finding scientific topics[J]. Proceedings of the National Academy of Sciences, 2004, 101(Suppl 1): 5228-5235.
  • 5 Zeng J, Cheung W K, Liu J. Learning topic models by belief propagation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(5): 1121-1134.
  • 6 Newman D, Asuncion A U, Smyth P, et al. Distributed inference for latent Dirichlet allocation[C]//Neural Information Processing Systems. 2007.
  • 7 Asuncion A U, Smyth P, Welling M. Asynchronous distributed learning of topic models[C]//Neural Information Processing Systems. 2008: 81-88.
  • 8 Wang Y, Bai H, Stanton M, et al. PLDA: Parallel latent Dirichlet allocation for large-scale applications[C]//AAIM. 2009: 301-314.
  • 9 Liu Z, Zhang Y, Chang E Y, et al. PLDA+: Parallel latent Dirichlet allocation with data placement and pipeline processing[J]. ACM TIST, 2011, 2(3): 1-18.
  • 10 Zhai K, Boyd-Graber J L, Asadi N, et al. Mr. LDA: a flexible large scale topic modeling package using variational inference in MapReduce[C]//WWW. 2012: 879-888.

Co-cited references: 85

Citing articles: 8

Second-level citing articles: 99
