Funding: Project supported by the National Natural Science Foundation of China (No. 61602204).
Abstract: Supervised topic modeling algorithms have been successfully applied to multi-label document classification tasks. Representative models include labeled latent Dirichlet allocation (L-LDA) and dependency-LDA. However, these models neglect the class frequency information of words (i.e., the number of classes in which a word occurs in the training data), which is significant for classification. To address this, we propose a method, namely the class frequency weight (CF-weight), that weights words using this class frequency knowledge. The CF-weight is based on the intuition that a word with a higher (lower) class frequency will be less (more) discriminative. In this study, the CF-weight is used to improve L-LDA and dependency-LDA. A number of experiments have been conducted on real-world multi-label datasets. Experimental results demonstrate that CF-weight-based algorithms are competitive with existing supervised topic models.
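The abstract states the intuition behind the CF-weight (a word occurring in many classes is less discriminative) but not its formula. The sketch below is only a hedged illustration of that intuition, assuming an inverse-class-frequency form analogous to IDF; the function name `class_frequency_weights`, its arguments, and the +1 smoothing terms are all hypothetical choices, not the paper's actual definition.

```python
import math

def class_frequency_weights(class_vocab, num_classes):
    """Illustrative CF-style weight for each word.

    class_vocab: dict mapping class label -> set of words that occur in
                 training documents of that class.
    num_classes: total number of classes C.

    A word appearing in many classes gets a low weight (less
    discriminative); a word confined to few classes gets a high one.
    """
    # Count, for each word, the number of classes it occurs in.
    cf = {}
    for words in class_vocab.values():
        for w in words:
            cf[w] = cf.get(w, 0) + 1
    # Inverse-class-frequency style weight (one plausible, smoothed choice).
    return {w: math.log((num_classes + 1) / (c + 1)) + 1 for w, c in cf.items()}

# Toy example: three classes share the word "model", so it is down-weighted.
vocab = {
    "sports":  {"goal", "model"},
    "finance": {"stock", "model"},
    "tech":    {"gpu", "model"},
}
weights = class_frequency_weights(vocab, num_classes=3)
assert weights["model"] < weights["gpu"]
```

In a weighted L-LDA or dependency-LDA, such weights would typically scale each word's contribution to the topic-word counts; the exact integration is described in the paper, not here.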
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61472157 and 61872162, and the Open Project Program of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, under Grant No. VRLAB2020C05.
Abstract: Image super-resolution is essential for a variety of applications such as medical imaging, surveillance imaging, and satellite imaging, among others. Traditionally, the most popular color image super-resolution approaches process each color channel independently. In this paper, we show that super-resolution quality can be further enhanced by exploiting cross-channel correlation. Inspired by the High-Quality Linear Interpolation (HQLI) demosaicking algorithm by Malvar et al., we design an image super-resolution scheme that integrates intra-channel interpolation with cross-channel details via isotropic linear combinations. Despite its simplicity, our super-resolution method achieves accuracy comparable to that of the fastest existing state-of-the-art super-resolution algorithm while running 20 times faster. It is well suited to applications that rely on traditional interpolation, improving visual quality at trivial computational cost. Our comparative study verifies the effectiveness and efficiency of the proposed super-resolution algorithm.
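The abstract describes the scheme only at a high level: intra-channel interpolation plus a cross-channel detail term formed by isotropic linear combinations. The sketch below illustrates that general idea under stated assumptions, and is not Malvar et al.'s actual filter coefficients or the paper's scheme: each channel is upsampled on its own, then sharpened by the Laplacian (an isotropic operator) of the other channels. The function names, the fixed 2x factor, and the `gain` value are illustrative assumptions.

```python
import numpy as np

def laplacian(img):
    """Isotropic 4-neighbour Laplacian with replicated borders."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def upsample2x(ch):
    """Plain 2x upsampling of one channel: nearest-neighbour repeat
    followed by a small smoothing filter (a bilinear-like stand-in)."""
    out = np.repeat(np.repeat(ch, 2, axis=0), 2, axis=1)
    p = np.pad(out, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + 4 * out) / 8.0

def cross_channel_sr(rgb, gain=0.25):
    """Upsample each channel, then add a cross-channel detail term:
    each channel is sharpened by subtracting the Laplacian of the mean
    of the other two channels (an isotropic linear combination)."""
    up = [upsample2x(rgb[..., c]) for c in range(3)]
    out = np.empty_like(np.stack(up, axis=-1))
    for c in range(3):
        others = (up[(c + 1) % 3] + up[(c + 2) % 3]) / 2.0
        out[..., c] = up[c] - gain * laplacian(others)
    return np.clip(out, 0.0, 1.0)

lowres = np.random.default_rng(0).random((8, 8, 3))
highres = cross_channel_sr(lowres)
assert highres.shape == (16, 16, 3)
```

With `gain=0` the scheme reduces to plain per-channel interpolation, which makes the cross-channel detail term easy to ablate.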
Funding: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61170092, 61133011, and 61103091.
Abstract: Stochastic variational inference (SVI) can learn topic models on very large corpora. It optimizes the variational objective using the stochastic natural gradient algorithm with a decreasing learning rate. This rate is crucial for SVI; however, it is often tuned by hand in real applications. To address this, we develop a novel algorithm that tunes the learning rate of each iteration adaptively. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution obtained with a noisy (mini-batch) update and that obtained with a full-batch update, and then optimizes the learning rate by minimizing this KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better and converges faster than SVI with commonly used learning-rate schedules.
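A simplified sketch of the rate-selection idea, under assumptions the abstract does not pin down: the candidate rate is chosen so that the mini-batch (noisy) update stays KL-close to a full-batch reference, with the KL divergence computed between normalized variational parameters (a categorical proxy) rather than full Dirichlet densities. The grid search, function names, and toy numbers are illustrative, not the paper's derivation.

```python
import numpy as np

def kl(p, q):
    """KL(p || q) between two discrete distributions (no zeros assumed)."""
    return float(np.sum(p * np.log(p / q)))

def adaptive_rate(lam, lam_hat_noisy, lam_hat_batch, grid=None):
    """Grid-search the learning rate rho minimizing the KL divergence
    between the noisy-update distribution and the batch reference.

    lam:           current variational parameter (e.g. topic-word counts)
    lam_hat_noisy: intermediate parameter from the current mini-batch
    lam_hat_batch: intermediate parameter from the full batch
    """
    if grid is None:
        grid = np.linspace(0.01, 1.0, 100)
    target = lam_hat_batch / lam_hat_batch.sum()
    best_rho, best_kl = grid[0], np.inf
    for rho in grid:
        lam_new = (1 - rho) * lam + rho * lam_hat_noisy  # SVI-style blend
        p = lam_new / lam_new.sum()
        d = kl(p, target)
        if d < best_kl:
            best_rho, best_kl = rho, d
    return best_rho

# Toy example: the mini-batch estimate overshoots the full-batch one,
# so an intermediate rate beats rho = 1.
lam = np.array([1.0, 1.0, 1.0])
lam_hat_batch = np.array([2.0, 1.0, 1.0])
lam_hat_noisy = np.array([4.0, 1.0, 1.0])
rho = adaptive_rate(lam, lam_hat_noisy, lam_hat_batch)
assert 0.0 < rho < 1.0
```

In practice the paper optimizes the rate analytically against full Dirichlet variational distributions; the grid search above only makes the objective concrete.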