Funding: Project supported by the National Natural Science Foundation of China (No. 61602204)
Abstract: Supervised topic modeling algorithms have been successfully applied to multi-label document classification tasks. Representative models include labeled latent Dirichlet allocation (L-LDA) and dependency-LDA. However, these models neglect the class frequency information of words (i.e., the number of classes in which a word occurs in the training data), which is significant for classification. To address this, we propose a method, namely the class frequency weight (CF-weight), that weights words using class frequency knowledge. The CF-weight is based on the intuition that a word with a higher (lower) class frequency is less (more) discriminative. In this study, the CF-weight is used to improve L-LDA and dependency-LDA. A number of experiments have been conducted on real-world multi-label datasets. Experimental results demonstrate that the CF-weight-based algorithms are competitive with existing supervised topic models.
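The abstract does not give the CF-weight formula, so the sketch below only illustrates the underlying idea: count, for each word, the number of distinct classes it occurs in, and give smaller weights to words that appear in more classes. The IDF-like weighting form, the toy corpus, and the function name class_frequency_weights are illustrative assumptions, not the paper's definition.

```python
from collections import defaultdict
import math

def class_frequency_weights(docs, labels, num_classes):
    """Compute a class-frequency (CF) weight for every word in the corpus.

    docs:   list of token lists, one per training document
    labels: list of label sets, one per document (multi-label)
    Assumes an IDF-like form w = log(num_classes / cf) + 1; the paper's
    actual CF-weight formula is not given in the abstract.
    """
    # cf(word) = number of distinct classes in which the word occurs
    word_classes = defaultdict(set)
    for tokens, doc_labels in zip(docs, labels):
        for word in set(tokens):
            word_classes[word].update(doc_labels)

    weights = {}
    for word, classes in word_classes.items():
        cf = len(classes)
        # higher class frequency -> less discriminative -> smaller weight
        weights[word] = math.log(num_classes / cf) + 1.0
    return weights

# Hypothetical toy corpus: "topic" and "model" occur under both classes,
# so they receive a lower weight than the class-specific words.
docs = [["topic", "model", "sports"], ["topic", "model", "finance"]]
labels = [{"sports"}, {"finance"}]
print(class_frequency_weights(docs, labels, num_classes=2))
```

How such weights are then injected into L-LDA and dependency-LDA inference is not specified in the abstract.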
Funding: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61170092, 61133011, and 61103091.
Abstract: Stochastic variational inference (SVI) can learn topic models from very large corpora. It optimizes the variational objective using the stochastic natural gradient algorithm with a decreasing learning rate. This rate is crucial for SVI; however, in practice it is often tuned by hand. To address this, we develop a novel algorithm that tunes the learning rate of each iteration adaptively. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution with a noisy update and that with a batch update, and then optimizes the learning rate by minimizing this KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better and converges faster than SVI with commonly used learning rates.
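As a rough illustration of the rate-selection idea only (the abstract does not give the optimization details), the sketch below evaluates a small grid of candidate learning rates and keeps the one whose noisy SVI step lands closest, in KL divergence, to the batch-updated variational Dirichlet. The grid search, the Dirichlet parameterization, and all names are assumptions; the paper minimizes the KL divergence over the learning rate itself rather than searching a fixed grid.

```python
import numpy as np
from scipy.special import digamma, gammaln

def dirichlet_kl(a, b):
    """KL( Dir(a) || Dir(b) ) between two Dirichlet distributions."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (gammaln(a.sum()) - gammaln(b.sum())
            - gammaln(a).sum() + gammaln(b).sum()
            + ((a - b) * (digamma(a) - digamma(a.sum()))).sum())

def choose_learning_rate(lam, noisy_target, batch_target,
                         candidates=(0.001, 0.01, 0.05, 0.1, 0.5, 1.0)):
    """Pick the candidate rate whose noisy SVI step stays closest (in KL)
    to the batch-updated variational Dirichlet.

    lam:          current variational parameter
    noisy_target: intermediate parameter from one sampled document
    batch_target: intermediate parameter computed from the full corpus
    """
    best_rho, best_kl = candidates[0], np.inf
    for rho in candidates:
        updated = (1 - rho) * lam + rho * noisy_target  # noisy natural-gradient step
        kl = dirichlet_kl(updated, batch_target)
        if kl < best_kl:
            best_rho, best_kl = rho, kl
    return best_rho

# Hypothetical numbers: the noisy estimate overshoots, so a moderate rate wins.
lam = np.array([1.0, 1.0, 1.0])
noisy = np.array([5.0, 0.5, 0.5])
batch = np.array([3.0, 2.0, 1.0])
print(choose_learning_rate(lam, noisy, batch))
```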
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61170092, 61133011, 61272208, 61103091, and 61202308) and the Fundamental Research Funds for the Central Universities, China (Nos. 450060445674 and 450060481512)
Abstract: This paper presents a novel local arc length estimator for curves in gray-scale images. The method first fits a cubic spline curve to the boundary points using the gray-level information of nearby pixels, and then computes the sum of the spline segments' lengths. In this model, the second derivatives and y coordinates at the knots are required in the computation; the spline polynomial coefficients need not be computed explicitly. We provide pseudocode for the estimation and preprocessing algorithms, both of which run in linear time. Our implementation shows that the proposed model achieves a smaller relative error than other state-of-the-art methods.
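The paper's gray-level-based knot estimation and closed-form segment lengths are not reproduced here; the sketch below only shows the generic "fit a cubic spline to boundary points and sum the segment lengths" pipeline. The chord-length parametrization and the dense-sampling quadrature are illustrative choices, not the paper's method.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_arc_length(points, samples_per_segment=50):
    """Estimate the arc length of a curve through `points` (an N x 2 array)
    by fitting a parametric cubic spline and summing the segment lengths.

    Chord-length parametrization and dense-sampling quadrature are
    illustrative choices, not the paper's closed-form segment lengths.
    """
    pts = np.asarray(points, float)
    # chord-length parameter value at each knot
    t = np.concatenate(([0.0],
                        np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
    sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])

    total = 0.0
    for t0, t1 in zip(t[:-1], t[1:]):
        # approximate each spline segment's length by dense sampling
        u = np.linspace(t0, t1, samples_per_segment)
        dx, dy = np.diff(sx(u)), np.diff(sy(u))
        total += float(np.sqrt(dx**2 + dy**2).sum())
    return total

# Sanity check on a quarter circle of radius 1 (true arc length = pi/2).
theta = np.linspace(0.0, np.pi / 2, 8)
boundary = np.column_stack((np.cos(theta), np.sin(theta)))
print(spline_arc_length(boundary))   # ~1.5708
```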