6.882 Bayesian Modeling and Inference (Spring 2016)
Room 4-153
Tuesday, Thursday 2:30–4:00 PM
Instructor: Professor Tamara Broderick
Office Hours: Tuesday & Thursday, 4–5pm, 32-G498
Introduction
As both the number of data sets and data set sizes grow, practitioners are interested in learning increasingly complex information and interactions from data. Probabilistic modeling in general, and Bayesian approaches in particular, provide a unifying framework for flexible modeling that includes prediction, estimation, and coherent uncertainty quantification. In this course, we will cover the modern challenges of Bayesian inference, including (but not limited to) the speed of approximate inference, making use of distributed architectures, streaming data, and complex data interactions. We will study Bayesian nonparametric models, wherein model complexity grows with the size of the data; this allows us to learn, for example, a greater diversity of topics as we read more documents from Wikipedia, or to identify more friend groups as we process more of Facebook's network structure.
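To make the claim that model complexity grows with the data concrete, here is a minimal sketch (our own illustration, not part of the course materials) that simulates a Chinese restaurant process, one of the models covered below, and counts the clusters it creates as the number of observations grows. The function name, the concentration parameter alpha = 1.0, and the use of NumPy are our own choices.

```python
import numpy as np

def crp_num_clusters(n, alpha=1.0, rng=None):
    """Sample CRP table assignments for n customers; return the table count."""
    rng = np.random.default_rng() if rng is None else rng
    counts = []  # counts[k] = number of customers seated at table k
    for i in range(n):
        # Customer i sits at existing table k with probability
        # counts[k] / (i + alpha), or at a new table with alpha / (i + alpha).
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # open a new table (cluster)
        else:
            counts[k] += 1
    return len(counts)

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    print(n, crp_num_clusters(n, rng=rng))
```

Under the CRP, the expected number of occupied tables grows roughly like alpha * log(n), so the model can represent more clusters as more data arrive, in contrast to a finite mixture whose number of components is fixed in advance.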
Piazza Site
Scribed notes, readings, discussions outside of class, and other resources can be found at the course Piazza page.
(Email Prof. Broderick if you are having trouble accessing Piazza.)
Description
This course will cover Bayesian modeling and inference at an advanced graduate level. A tentative list of topics (which may change depending on our interests) is as follows:
- Introduction to Bayesian inference; motivations from de Finetti, decision theory, etc. (a minimal worked example appears after this list)
- Hierarchical modeling, including popular models such as latent Dirichlet allocation
- Approximate posterior inference
  - Variational inference, mean-field, stochastic variational inference, challenges/limitations of VI, etc.
  - Monte Carlo, avoiding random-walk behavior, Hamiltonian Monte Carlo/NUTS/Stan, biasing, etc.
- Evaluation, sensitivity, robustness
- Bayesian nonparametrics: why and how
  - Mixture models, admixtures, Dirichlet process, Chinese restaurant process
  - Feature allocations, beta process, Indian buffet process
  - Combinatorial stochastic processes
- Learning functions, Gaussian processes
  - Probabilistic numerics
  - Bayesian optimization
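As a warm-up for the first topic in the list above, here is a minimal sketch of exact Bayesian updating in the conjugate Beta-Bernoulli model; it is our own illustration, not course material, and the prior hyperparameters, the simulated data, and the use of NumPy/SciPy are arbitrary choices.

```python
import numpy as np
from scipy.stats import beta

# Prior: theta ~ Beta(a, b); likelihood: x_i ~ Bernoulli(theta), i.i.d.
# Conjugacy gives the posterior in closed form:
#   theta | x ~ Beta(a + sum(x), b + n - sum(x))
a, b = 1.0, 1.0                       # uniform Beta(1, 1) prior; an arbitrary choice
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.7, size=50)     # simulated flips of a coin with bias 0.7

a_post = a + x.sum()
b_post = b + len(x) - x.sum()
print("posterior mean:", a_post / (a_post + b_post))
print("95% credible interval:", beta.ppf([0.025, 0.975], a_post, b_post))
```

Conjugate models like this one admit a closed-form posterior; the approximate-inference topics above (variational inference, Monte Carlo) target the far more common case where no closed form exists.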
Prerequisites
A graduate-level familiarity with statistics, machine learning, and probability is required. We will assume familiarity with graphical models, exponential families, finite-dimensional Gaussian mixture models, expectation maximization, linear and logistic regression, and hidden Markov models.
Some good courses to have taken beforehand include 6.437, 6.438, 6.867, and 6.260 (theoretical stats).
Assessment
- Project
  - A project proposal and proposal interview will be due before Spring Break.
  - A project final report and presentation will be due at the end of the semester.
  - Potential project ideas and more details on the project are coming soon.
- Presentation and Participation
  - There will be assigned reading (typically research papers) each week.
  - Each student is expected to take a turn presenting and leading the discussion in class.
  - Students will submit a weekly reflection on the reading (at most one page) before class: for example, commentary on what worked well or was surprising in the reading, what could be improved, or ideas for future directions of research and exploration.
- Scribe
  - Students are expected to take turns scribing notes from lectures. A template will be provided.