Journal Articles
14 articles found
1. Variational Inference Based Kernel Dynamic Bayesian Networks for Construction of Prediction Intervals for Industrial Time Series With Incomplete Input (Cited by 1)
Authors: Long Chen, Linqing Wang, Zhongyang Han, Jun Zhao, Wei Wang. 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2020, No. 5, pp. 1437-1445 (9 pages)
Prediction intervals (PIs) for industrial time series can provide useful guidance for workers. Given that the failure of industrial sensors may cause missing points in the inputs, the existing kernel dynamic Bayesian networks (KDBN), which serve as an effective method for PI construction, suffer from a high computational load when a stochastic algorithm is used for inference. This study proposes a variational inference method for the KDBN for the purpose of fast inference, which avoids time-consuming stochastic sampling. The proposed algorithm contains two stages. The first stage infers the missing inputs by using a local-linearization-based variational inference, and, based on the computed posterior distributions over the missing inputs, the second stage forms a Gaussian approximation for the probability over the nodes in future time slices. To verify the effectiveness of the proposed method, a synthetic dataset and a practical dataset of the generation flow of blast furnace gas (BFG) are employed with different ratios of missing inputs. The experimental results indicate that the proposed method can provide reliable PIs for the generation flow of BFG, and it exhibits shorter computing time than the stochastic-based one.
Keywords: industrial time series, kernel dynamic Bayesian networks (KDBN), prediction intervals (PIs), variational inference
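For context, once an approximate-inference step yields a Gaussian predictive mean and variance for a future node, a two-sided PI follows directly. A minimal sketch of that last step only, with an illustrative function name and example numbers that are assumptions rather than the paper's code:

```python
import numpy as np
from scipy.stats import norm

def gaussian_prediction_interval(mean, variance, confidence=0.95):
    """Two-sided prediction interval from a Gaussian predictive distribution."""
    z = norm.ppf(0.5 + confidence / 2.0)          # ~1.96 for a 95% interval
    half_width = z * np.sqrt(variance)
    return mean - half_width, mean + half_width

# Example: predictive moments assumed to come from the approximate-inference step.
lower, upper = gaussian_prediction_interval(mean=1200.0, variance=350.0)
print(f"95% PI: [{lower:.1f}, {upper:.1f}]")
```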
2. Stochastic Variational Inference-Based Parallel and Online Supervised Topic Model for Large-Scale Text Processing (Cited by 1)
Authors: Yang Li, Wen-Zhuo Song, Bo Yang. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2018, No. 5, pp. 1007-1022 (16 pages)
Topic modeling is a mainstream and effective technology for dealing with text data, with wide applications in text analysis, natural language processing, personalized recommendation, computer vision, etc. Among the known topic models, supervised latent Dirichlet allocation (sLDA) is acknowledged as a popular and competitive supervised topic model. However, the growing scale of datasets makes sLDA increasingly inefficient and time-consuming, and confines its applications to a very narrow range. To address this, a parallel online sLDA, named PO-sLDA (Parallel and Online sLDA), is proposed in this study. It uses stochastic variational inference as the learning method to make the training procedure more rapid and efficient, and a parallel computing mechanism implemented via the MapReduce framework is proposed to exploit the capacity of cloud computing and big data processing. The online training capacity supported by PO-sLDA expands the application scope of this approach, making it instrumental for real-life applications with high real-time demands. Validation on two datasets of different sizes shows that the proposed approach has accuracy comparable to sLDA and can efficiently accelerate the training procedure. Moreover, its good convergence and online training capacity make it well suited to large-scale text data analysis and processing.
Keywords: topic modeling, large-scale text classification, stochastic variational inference, cloud computing, online learning
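The PO-sLDA training loop builds on the generic stochastic variational inference update for a global variational parameter. A hedged sketch of that generic step (in the style of standard SVI, not the PO-sLDA/MapReduce implementation; the prior value `eta`, the schedule constants, and the shapes are illustrative assumptions):

```python
import numpy as np

def svi_global_step(lam, batch_stats, t, num_docs, batch_size, eta=0.01,
                    tau=1.0, kappa=0.7):
    """One stochastic natural-gradient update of a global variational parameter.

    lam         : current global parameter, e.g. topic-word Dirichlet parameters
    batch_stats : sufficient statistics from the mini-batch's local (per-document) step
    t           : iteration counter, driving the decreasing learning rate
    """
    rho = (t + tau) ** (-kappa)                       # Robbins-Monro step size
    # Rescale the mini-batch statistics as if the whole corpus looked like it.
    lam_hat = eta + (num_docs / batch_size) * batch_stats
    return (1.0 - rho) * lam + rho * lam_hat
```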
3. Trust-Region Based Stochastic Variational Inference for Distributed and Asynchronous Networks
Authors: FU Weiming, QIN Jiahu, LING Qing, KANG Yu, YE Baijia. 《Journal of Systems Science & Complexity》 SCIE EI CSCD, 2022, No. 6, pp. 2062-2076 (15 pages)
Stochastic variational inference is an efficient Bayesian inference technique for massive datasets, which approximates posteriors by using noisy gradient estimates. Traditional stochastic variational inference can only be performed in a centralized manner, which limits its applications in the wide range of situations where data are possessed by multiple nodes. Therefore, this paper develops a novel trust-region based stochastic variational inference algorithm for a general class of conjugate-exponential models over distributed and asynchronous networks, where the global parameters are diffused over the network by using the Metropolis rule and the local parameters are updated by using the trust-region method. In addition, a simple rule is introduced to balance the transmission frequencies between neighboring nodes so that the proposed distributed algorithm can be performed in an asynchronous manner. The utility of the proposed algorithm is tested by fitting the Bernoulli model and the Gaussian model to different datasets on a synthetic network, and experimental results demonstrate its effectiveness and advantages over existing works.
Keywords: asynchronous networks, Bayesian inference, distributed algorithm, stochastic variational inference, trust-region method
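The Metropolis rule mentioned above is the standard recipe for building symmetric averaging weights on an undirected network; a minimal sketch under that assumption (the function name and the single-step usage comment are illustrative, not the authors' code):

```python
import numpy as np

def metropolis_weights(adjacency):
    """Metropolis weight matrix for parameter averaging on an undirected graph.

    adjacency : (n, n) symmetric 0/1 matrix with zero diagonal.
    w_ij = 1 / (1 + max(deg_i, deg_j)) for neighbours, w_ii = 1 - sum_j w_ij.
    """
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# One diffusion step: each node mixes its neighbours' global parameters.
# theta_next = W @ theta   (theta stacks one parameter vector per node, row-wise)
```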
4. Tuning the Learning Rate for Stochastic Variational Inference
Authors: Xi-Ming Li, Ji-Hong Ouyang. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2016, No. 2, pp. 428-436 (9 pages)
Stochastic variational inference (SVI) can learn topic models from very large corpora. It optimizes the variational objective by using the stochastic natural gradient algorithm with a decreasing learning rate. This rate is crucial for SVI; however, it is often tuned by hand in real applications. To address this, we develop a novel algorithm that tunes the learning rate of each iteration adaptively. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution with a noisy update and that with a batch update, and then optimizes the learning rates by minimizing the KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better and converges faster than commonly used learning rates.
Keywords: stochastic variational inference, online learning, adaptive learning rate, topic model
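Two standard ingredients behind this line of work can be sketched directly: the hand-set Robbins-Monro schedule that adaptive tuning replaces, and the closed-form KL divergence between two Dirichlet distributions, the kind of quantity used to compare a noisy update with a batch update. The sketch is illustrative only and does not reproduce the paper's adaptive rule:

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(alpha, beta):
    """Closed-form KL( Dirichlet(alpha) || Dirichlet(beta) )."""
    a0, b0 = alpha.sum(), beta.sum()
    return (gammaln(a0) - gammaln(alpha).sum()
            - gammaln(b0) + gammaln(beta).sum()
            + np.dot(alpha - beta, digamma(alpha) - digamma(a0)))

def robbins_monro_rate(t, tau=1.0, kappa=0.7):
    """Hand-tuned decreasing schedule; satisfies the Robbins-Monro conditions
    for kappa in (0.5, 1]. This is what adaptive tuning aims to replace."""
    return (t + tau) ** (-kappa)
```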
5. Skew t Distribution-Based Nonlinear Filter with Asymmetric Measurement Noise Using Variational Bayesian Inference (Cited by 1)
Authors: Chen Xu, Yawen Mao, Hongtian Chen, Hongfeng Tao, Fei Liu. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2022, No. 4, pp. 349-364 (16 pages)
This paper focuses on the state estimation problem for nonlinear systems with unknown statistics of the measurement noise. Based on the cubature Kalman filter, we propose a new nonlinear filtering algorithm that employs a skew t distribution to characterize the asymmetry of the measurement noise. The system states and the statistics of the skew t noise distribution, including the shape matrix, the scale matrix, and the degrees of freedom (DOF), are estimated jointly by employing variational Bayesian (VB) inference. The proposed method is validated in a target tracking example. Simulation results indicate that the proposed nonlinear filter performs satisfactorily in the presence of unknown measurement noise statistics and outperforms the existing state-of-the-art nonlinear filters.
Keywords: nonlinear filter, asymmetric measurement noise, skew t distribution, unknown noise statistics, variational Bayesian inference
6. Gridless Variational Bayesian Inference of Line Spectral from Quantized Samples
Authors: Jiang Zhu, Qi Zhang, Xiangming Meng. 《China Communications》 SCIE CSCD, 2021, No. 10, pp. 77-95 (19 pages)
Efficient estimation of line spectra from quantized samples is of significant importance in information theory and signal processing, e.g., channel estimation in energy-efficient massive MIMO systems and direction of arrival estimation. The goal of this paper is to recover the line spectrum as well as its corresponding parameters, including the model order, frequencies and amplitudes, from heavily quantized samples. To this end, we propose an efficient gridless Bayesian algorithm named VALSE-EP, which combines the high-resolution and low-complexity gridless variational line spectral estimation (VALSE) with expectation propagation (EP). The basic idea of VALSE-EP is to iteratively approximate the challenging quantized model of line spectral estimation as a sequence of simple pseudo unquantized models, to which VALSE is applied. Moreover, to obtain a benchmark for the performance of the proposed algorithm, the Cramér-Rao bound (CRB) is derived. Finally, numerical experiments on both synthetic and real data are performed, demonstrating the near-CRB performance of the proposed VALSE-EP for line spectral estimation from quantized samples.
Keywords: variational Bayesian inference, expectation propagation, quantization, line spectral estimation, MMSE, gridless
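To make the observation model concrete, the following sketch generates a noisy mixture of complex sinusoids and applies a coarse uniform quantizer to its real and imaginary parts; the quantizer design, bit depth, and frequencies are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def quantize(x, bits=2, step=0.5):
    """Element-wise uniform mid-rise quantizer (bits=1 reduces to the sign)."""
    if bits == 1:
        return np.sign(x)
    levels = 2 ** bits
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return step * (idx - levels // 2 + 0.5)

# Noisy mixture of complex sinusoids, then coarse quantization of both parts.
rng = np.random.default_rng(0)
n = np.arange(64)
freqs, amps = np.array([0.12, 0.37]), np.array([1.0, 0.6])
clean = (amps[None, :] * np.exp(2j * np.pi * n[:, None] * freqs[None, :])).sum(axis=1)
noisy = clean + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
y_quantized = quantize(noisy.real) + 1j * quantize(noisy.imag)
```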
7. Adaptive cubature Kalman filter based on variational Bayesian inference under measurement uncertainty
Authors: HU Zhentao, JIA Haoqian, GONG Delong. 《High Technology Letters》 EI CAS, 2022, No. 4, pp. 354-362 (9 pages)
A novel variational Bayesian inference-based adaptive cubature Kalman filter (VBACKF) algorithm is proposed for the problem of state estimation in a target tracking system with time-varying measurement noise and random measurement losses. Firstly, the Inverse-Wishart (IW) distribution is chosen to model the covariance matrix of the time-varying measurement noise in the cubature Kalman filter framework. Secondly, a Bernoulli random variable is introduced as the judgement factor for measurement losses, and the Beta distribution is selected as the conjugate prior distribution of the measurement loss probability to ensure that the posterior distribution and the prior distribution have the same functional form. Finally, the joint posterior probability density function of the estimated variables is approximately decoupled by variational Bayesian inference, and a fixed-point iteration approach is used to update the estimated variables. The simulation results show that the proposed VBACKF algorithm accounts for the combined effects of system nonlinearity, time-varying measurement noise and unknown measurement loss probability, and effectively improves the accuracy of target state estimation in complex scenes.
Keywords: variational Bayesian inference, cubature Kalman filter (CKF), measurement uncertainty, Inverse-Wishart (IW) distribution
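The fixed-point structure of a variational Bayesian adaptive measurement update with an Inverse-Wishart posterior on the noise covariance can be sketched for a deliberately simplified linear measurement model; the paper itself embeds the idea in a cubature Kalman filter and additionally models measurement losses with a Beta-Bernoulli pair, which this sketch omits. All names, the IW parameterization, and the iteration count are assumptions:

```python
import numpy as np

def vb_adaptive_update(x_pred, P_pred, y, H, Psi, nu, n_iter=5):
    """Variational measurement update with q(R) = Inverse-Wishart(nu, Psi).

    Simplified to a linear model y = H x + r, r ~ N(0, R); the Kalman step uses
    (E[R^{-1}])^{-1} = Psi / nu, and the IW statistics are refreshed from the
    current state posterior in a fixed-point loop.
    """
    nu_post = nu + 1.0
    Psi_post = Psi.copy()
    x, P = x_pred.copy(), P_pred.copy()
    for _ in range(n_iter):
        R_eff = Psi_post / nu_post                    # (E[R^{-1}])^{-1}
        S = H @ P_pred @ H.T + R_eff
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (y - H @ x_pred)
        P = P_pred - K @ S @ K.T
        resid = y - H @ x
        Psi_post = Psi + np.outer(resid, resid) + H @ P @ H.T
    return x, P, nu_post, Psi_post
```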
8. Gaussian-Student's t mixture distribution PHD robust filtering algorithm based on variational Bayesian inference
Authors: HU Zhentao, YANG Linlin, HU Yumei, YANG Shibo. 《High Technology Letters》 EI CAS, 2022, No. 2, pp. 181-189 (9 pages)
Aiming at the problem of filtering precision degradation caused by random outliers in the process noise and measurement noise of a multi-target tracking (MTT) system, a new Gaussian-Student's t mixture distribution probability hypothesis density (PHD) robust filtering algorithm based on variational Bayesian inference (GST-vbPHD) is proposed. Firstly, since it can accurately describe the heavy-tailed characteristics of noise with outliers, the Gaussian-Student's t mixture distribution is employed to model the process noise and the measurement noise respectively. Then a Bernoulli random variable is introduced to correct the likelihood distribution of the mixture probability, so that the hierarchical Gaussian distribution constructed from the Gaussian-Student's t mixture distribution is suitable for modeling non-stationary noise. Finally, the approximate solutions, including the target weights, the measurement noise covariance and the state estimation error covariance, are obtained via the variational Bayesian inference approach. The simulation results show that, in heavy-tailed noise environments, the proposed algorithm yields strong improvements over the traditional PHD filter and the Student's t distribution PHD filter.
Keywords: multi-target tracking (MTT), variational Bayesian inference, Gaussian-Student's t mixture distribution, heavy-tailed noise
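As an illustration of why a Gaussian-Student's t mixture captures outlier-corrupted noise, the following sketch draws samples from such a mixture; the mixing weight, degrees of freedom, and scales are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_student_t_mixture_noise(size, w=0.9, sigma=1.0, nu=3.0, t_scale=2.0, rng=None):
    """Draw noise from w * N(0, sigma^2) + (1 - w) * t_scale * Student-t(nu),
    a heavy-tailed mixture of the kind used to model outlier-corrupted noise."""
    if rng is None:
        rng = np.random.default_rng()
    gaussian = rng.normal(0.0, sigma, size)
    heavy = t_scale * rng.standard_t(nu, size)
    use_gaussian = rng.random(size) < w
    return np.where(use_gaussian, gaussian, heavy)

# Example: mostly Gaussian samples with occasional heavy-tailed outliers.
noise = gaussian_student_t_mixture_noise(1000)
```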
9. Sparse underwater acoustic channel estimation method based on Gibbs sampling
Authors: TONG Wentao, GE Wei, JIA Yizhen, ZHANG Jiaheng. 《Journal of Harbin Engineering University (English Edition)》 CSCD, 2024, No. 2, pp. 434-442 (9 pages)
The estimation of sparse underwater acoustic (UWA) channels can be regarded as an inference problem involving hidden variables within the Bayesian framework. While classical sparse Bayesian learning (SBL), derived through the expectation maximization (EM) algorithm, has been widely employed for UWA channel estimation, its estimate still differs from the true posterior expectation of the channel. In this paper, we propose an approach that combines variational inference (VI) and Markov chain Monte Carlo (MCMC) methods to provide a more accurate posterior estimate. Specifically, SBL is first re-derived with VI, allowing us to replace the posterior distribution of the hidden variables with a variational distribution. Then, we determine the full conditional probability distribution of each variable in the variational distribution and iteratively perform random Gibbs sampling in MCMC until the Markov chain converges. The results of simulations and experiments indicate that our estimation method achieves lower mean square error and bit error rate compared with the classical SBL approach. Additionally, it demonstrates an acceptable convergence speed.
Keywords: sparse Bayesian learning, channel estimation, variational inference, Gibbs sampling
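A minimal Gibbs-sampling sketch for the SBL-style hierarchical model y = Ah + n with per-tap precisions, in the spirit of the sampling stage described above; the hyperparameters, the fixed noise variance, and the function name are assumptions, and the paper's full method also re-derives SBL with variational inference, which is not shown here:

```python
import numpy as np

def sbl_gibbs(A, y, sigma2=0.01, a=1e-4, b=1e-4, n_iter=500, seed=0):
    """Gibbs sampler for y = A h + n, n ~ N(0, sigma2 I),
    h_i ~ N(0, 1/gamma_i), gamma_i ~ Gamma(a, rate=b).

    Returns the posterior-mean channel estimate from the post-burn-in samples.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    gamma = np.ones(N)
    kept = []
    for it in range(n_iter):
        # h | gamma, y  ~  N(mu, Sigma)
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(gamma))
        mu = Sigma @ A.T @ y / sigma2
        h = rng.multivariate_normal(mu, Sigma)
        # gamma_i | h_i  ~  Gamma(a + 1/2, rate = b + h_i^2 / 2)
        gamma = rng.gamma(shape=a + 0.5, scale=1.0 / (b + 0.5 * h ** 2))
        if it >= n_iter // 2:                      # discard burn-in
            kept.append(h)
    return np.mean(kept, axis=0)
```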
10. Variational learning for finite Beta-Liouville mixture models
Authors: LAI Yu-ping, ZHOU Ya-jian, PING Yuan, GUO Yu-cui, YANG Yi-xian. 《The Journal of China Universities of Posts and Telecommunications》 EI CSCD, 2014, No. 2, pp. 98-103 (6 pages)
In this article, an improved variational inference (VI) framework for learning finite Beta-Liouville mixture models (BLM) is proposed for proportional data classification and clustering. Within the VI framework, some nonlinear approximation techniques are adopted to obtain the approximated variational objective functions. Analytical solutions are obtained for the variational posterior distributions. Compared with the expectation maximization (EM) algorithm, which is commonly used for learning mixture models, underfitting and overfitting can be prevented. Furthermore, the parameters and the complexity of the mixture model (model order) can be estimated simultaneously. Experiments on both synthetic and real-world data sets demonstrate the feasibility and advantages of the proposed method.
Keywords: variational inference, model selection, factorized approximation, Beta-Liouville distribution, mixture modeling
11. Heterogeneous clustering via adversarial deep Bayesian generative model
Authors: Xulun YE, Jieyu ZHAO. 《Frontiers of Computer Science》 SCIE EI CSCD, 2023, No. 3, pp. 103-112 (10 pages)
This paper studies the deep clustering problem with heterogeneous features and an unknown cluster number. To address this issue, a novel deep Bayesian clustering framework is proposed. In particular, a heterogeneous feature metric is first constructed to measure the similarity between different types of features. Then, a feature-metric-restricted hierarchical sample generation process is established, in which samples with heterogeneous features are clustered by generating them from a similarity-constrained hidden space. When estimating the model parameters and the posterior probability, the corresponding variational inference algorithm is derived and implemented. To verify the model's capability, we demonstrate it on a synthetic dataset and show the superiority of the proposed method on several real datasets. Our source code is released at: Github.com/yexlwh/Heterogeneousclustering.
Keywords: Dirichlet process, heterogeneous clustering, generative adversarial network, Laplacian approximation, variational inference
12. Deep Learning in Power Systems Research: A Review (Cited by 6)
Authors: Mahdi Khodayar, Guangyi Liu, Jianhui Wang, Mohammad E. Khodayar. 《CSEE Journal of Power and Energy Systems》 SCIE CSCD, 2021, No. 2, pp. 209-220 (12 pages)
With the rapid growth of power system measurements in terms of size and complexity, discovering statistical patterns for a large variety of real-world applications, such as renewable energy prediction, demand response, energy disaggregation, and state estimation, is considered a crucial challenge. In recent years, deep learning has emerged as a novel class of machine learning algorithms that represents power systems data via a large hypothesis space, leading to state-of-the-art performance compared with the most recent data-driven algorithms. This study explores the theoretical advantages of deep representation learning in power systems research. We review deep learning methodologies presented and applied in a wide range of supervised, unsupervised, and semi-supervised applications as well as reinforcement learning tasks. We discuss various settings of problems solved by discriminative deep models, including stacked autoencoders and convolutional neural networks, as well as generative deep architectures such as deep belief networks and variational autoencoders. The theoretical and experimental analysis of deep neural networks in this study motivates long-term research on optimizing this cutting-edge class of models to achieve significant improvements in future power systems research.
Keywords: autoencoder, convolutional neural network, deep learning, discriminative model, deep belief network, generative architecture, variational inference
13. Simultaneous image classification and annotation based on probabilistic model (Cited by 2)
Authors: LI Xiao-xu, SUN Chao-bo, LU Peng, WANG Xiao-jie, ZHONG Yi-xin. 《The Journal of China Universities of Posts and Telecommunications》 EI CSCD, 2012, No. 2, pp. 107-115 (9 pages)
The paper proposes a novel probabilistic generative model for simultaneous image classification and annotation. The model exploits the fact that category information can provide valuable guidance for image annotation. Once the category of an image is ascertained, the scope of annotation words can be narrowed, and the probability of generating irrelevant annotation words can be reduced. To this end, the idea of annotating images according to their class is introduced into the model. Using variational methods, the approximate inference and parameter estimation algorithms of the model are derived, and efficient approximations for classifying and annotating new images are also given. The power of the model is demonstrated on two real-world datasets: a 1,600-image LabelMe dataset and a 1,791-image UIUC-Sport dataset. The experimental results show that the classification performance is on par with several state-of-the-art classification models, while the annotation performance is better than that of several state-of-the-art annotation models.
Keywords: image classification, image annotation, probabilistic model, variational inference
14. Topic model for graph mining based on hierarchical Dirichlet process
Authors: Haibin Zhang, Shang Huating, Xianyi Wu. 《Statistical Theory and Related Fields》, 2020, No. 1, pp. 66-77 (12 pages)
In this paper, a nonparametric Bayesian graph topic model (GTM) based on the hierarchical Dirichlet process (HDP) is proposed. The HDP allows the number of topics to be selected flexibly, which removes the limitation that the number of topics must be given in advance. Moreover, the GTM relaxes the 'bag of words' assumption and considers the graph structure of the text. The combination of HDP and GTM, named HDP-GTM, takes advantage of both. The variational inference algorithm is used for posterior inference, and the convergence of the algorithm is analysed. We apply the proposed model to text categorisation, comparing it with three related topic models: latent Dirichlet allocation (LDA), GTM and HDP.
Keywords: graph topic model, hierarchical Dirichlet process, variational inference, text classification
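The flexibility in the number of topics comes from the Dirichlet-process machinery underlying the HDP; as a hedged illustration (not the authors' inference code), a truncated stick-breaking draw of DP mixture weights looks like this:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng=None):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights:
    v_k ~ Beta(1, alpha),  pi_k = v_k * prod_{j<k} (1 - v_j)."""
    if rng is None:
        rng = np.random.default_rng()
    v = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

# Example: most of the mass typically concentrates on the first few topics.
weights = stick_breaking_weights(alpha=1.0, truncation=20)
```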