Max Welling will be co-presenting a tutorial, Bayesian Posterior Inference in the Big Data Arena, at the International Conference on Machine Learning (ICML). ICML is the leading conference on machine learning.
Abstract: Traditional algorithms for Bayesian posterior inference require processing the entire dataset in each iteration and are quickly becoming obsolete in the face of the data deluge across application domains. Most successful applications of learning with big data use very simple algorithms such as Stochastic Gradient Descent, because these are the only ones that can computationally handle today's large datasets. By restricting ourselves to such algorithms, however, we miss out on the advantages of Bayesian modeling, such as quantifying uncertainty and avoiding over-fitting. In this tutorial, we will explore recent advances in scalable Bayesian posterior inference. We will present a new generation of MCMC algorithms and variational methods that use only a mini-batch of data points per iteration, whether to generate an MCMC sample or to update a variational parameter. We will also present applications to a range of real-world problems and datasets.
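One representative member of this mini-batch MCMC family is Stochastic Gradient Langevin Dynamics (SGLD), which perturbs a stochastic-gradient step with Gaussian noise so that, for small step sizes, the iterates approximately sample from the posterior. The sketch below applies SGLD to inferring the mean of a Gaussian; the model, step size, and batch size are illustrative assumptions, not details from the tutorial itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from a Gaussian with unknown mean
# (illustrative model, not from the tutorial).
N, true_mean = 10_000, 2.0
x = rng.normal(true_mean, 1.0, size=N)

sigma0_sq = 10.0  # prior: theta ~ N(0, sigma0_sq)

def sgld(x, n_iters=5_000, batch=100, eps=1e-4):
    """SGLD: each update touches only a mini-batch, never the full dataset."""
    theta = 0.0
    samples = []
    for _ in range(n_iters):
        idx = rng.integers(0, N, size=batch)             # draw a mini-batch
        grad_prior = -theta / sigma0_sq                  # d/dtheta log p(theta)
        # Mini-batch likelihood gradient, rescaled by N/batch to be an
        # unbiased estimate of the full-data gradient.
        grad_lik = (N / batch) * np.sum(x[idx] - theta)
        noise = rng.normal(0.0, np.sqrt(eps))            # injected Langevin noise
        theta = theta + 0.5 * eps * (grad_prior + grad_lik) + noise
        samples.append(theta)
    return np.array(samples)

samples = sgld(x)
# Discard the first half as burn-in before summarizing the posterior.
posterior_mean = samples[len(samples) // 2:].mean()
```

With a fixed (rather than decreasing) step size, as here, the stationary distribution is only an approximation to the true posterior, but the posterior mean estimate still lands close to the data mean while each iteration processes just 1% of the dataset.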