Variational Bayesian inference and beyond: Bayesian inference for big data
This short course took place at Bocconi University, Department of Decision Sciences, Via Roentgen 1, Milano.
See this link for other tutorials. The schedule of lectures is as follows.
Lecture 1: Tuesday, January 9, 2018, 2:30--4 PM
Lecture 2: Wednesday, January 10, 2018, 2:30--4 PM
Lecture 3: Thursday, January 11, 2018, 10:30 AM--12 noon
Related research seminar: Thursday, January 11, 2018, 12:30 PM
Lecture 4: Monday, January 15, 2018, 2:30--4 PM
Lecture 5: Tuesday, January 16, 2018, 2:30--4 PM
Lecture 6: Wednesday, January 17, 2018, 10:30 AM--12 noon
Professor Tamara Broderick
Bayesian methods have a number of desirable properties for modern data analysis---including
coherent quantification of uncertainty, a modular modeling framework that allows a practitioner to capture
complex phenomena, and the ability to incorporate prior information from an expert source.
However, Bayesian inference typically requires computing a high-dimensional integral, and in most moderately complex
problems this integral must be approximated.
This tutorial will introduce variational Bayes (VB) as a tool for approximate
Bayesian inference that can scale to modern data and model sizes. We will discuss the benefits and drawbacks
of variational Bayes in the context of recent research.
A likely collection of topics we will cover includes:
variational Bayes, mean-field variational Bayes, latent
Dirichlet allocation, stochastic gradient and stochastic variational
inference, streaming and distributed methods, automatic
differentiation and black-box variational inference, linear response
variational Bayes, and robustness quantification.
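To give a concrete flavor of the black-box variational inference topics listed above, here is a minimal sketch (our own toy illustration, not course material): we fit a Gaussian variational approximation q(mu) = N(m, s^2) to the posterior of a Gaussian mean by stochastic gradient ascent on the ELBO, using the reparameterization trick mu = m + s * eps, and compare against the exact conjugate posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(mu, 1) with prior mu ~ N(0, 1).
mu_true = 2.0
x = rng.normal(mu_true, 1.0, size=50)
n = len(x)

# Exact conjugate posterior: N(sum(x) / (n + 1), 1 / (n + 1)).
exact_mean = x.sum() / (n + 1)
exact_sd = 1.0 / np.sqrt(n + 1)

# Variational family q(mu) = N(m, exp(log_s)^2); maximize the ELBO by
# stochastic gradient ascent with the reparameterization mu = m + s * eps.
m, log_s = 0.0, 0.0
lr = 0.001
for _ in range(5000):
    eps = rng.normal()
    s = np.exp(log_s)
    mu = m + s * eps                      # reparameterized sample from q
    # Gradient of the log joint, log p(x, mu), with respect to mu:
    dlogp = (x - mu).sum() - mu
    # Pathwise ELBO gradients; the +1 is d/d(log_s) of q's entropy (log s + const).
    m += lr * dlogp
    log_s += lr * (dlogp * s * eps + 1.0)

# After optimization, (m, exp(log_s)) approximates (exact_mean, exact_sd).
```

Because this toy model is conjugate, the exact posterior is available and the variational optimum matches it; the course discusses the non-conjugate settings where such checks are unavailable and VB earns its keep.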
The related research seminar will cover new data summarization methods for scalable Bayesian inference
with finite-data theoretical guarantees on approximation quality.
Prerequisites: basic familiarity with Bayesian data analysis and its goals, including the following concepts: priors, likelihoods, posteriors, Bayes' theorem,
and conjugacy (for discrete and continuous distributions).
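As a pointer to the conjugacy prerequisite, here is the standard Beta-Bernoulli example (a toy sketch of our own, not course material): with a Beta prior on a coin's success probability and Bernoulli observations, the posterior has a closed form, which is exactly the tractability that the course's approximate methods aim to recover when conjugacy fails.

```python
import numpy as np

# Conjugacy: a Beta(a, b) prior on the success probability of a Bernoulli
# likelihood yields a Beta(a + #successes, b + #failures) posterior,
# with no integration required.
a, b = 2.0, 2.0                             # Beta prior hyperparameters
data = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # toy coin flips
a_post = a + data.sum()
b_post = b + len(data) - data.sum()
posterior_mean = a_post / (a_post + b_post)  # 8 / (8 + 4) = 2/3
```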