Bounded Rationality (Posts about Bayesian)
http://bjlkeng.github.io/
Last updated: Tue, 04 Jun 2024 00:49:16 GMT

Bayesian Learning via Stochastic Gradient Langevin Dynamics and Bayes by Backprop
http://bjlkeng.github.io/posts/bayesian-learning-via-stochastic-gradient-langevin-dynamics-and-bayes-by-backprop/
Brian Keng

<p>After a long digression, I'm finally back to one of the main lines of research
that I wanted to write about. The two main ideas in this post are not that
recent but have been quite impactful (one of the
<a class="reference external" href="https://icml.cc/virtual/2021/test-of-time/11808">papers</a> won a recent ICML
test of time award). They address two of the topics that are near and dear to
my heart: Bayesian learning and scalability. Dare I even ask who wouldn't be
interested in the intersection of these topics?</p>
<p>This post is about two techniques to perform scalable Bayesian inference. They
both address the problem using stochastic gradient descent (SGD) but in very
different ways. One leverages the observation that SGD plus some noise will
converge to Bayesian posterior sampling <a class="citation-reference" href="http://bjlkeng.github.io/posts/bayesian-learning-via-stochastic-gradient-langevin-dynamics-and-bayes-by-backprop/#welling2011" id="id1">[Welling2011]</a>, while the other generalizes the
"reparameterization trick" from variational autoencoders to enable non-Gaussian
posterior approximations <a class="citation-reference" href="http://bjlkeng.github.io/posts/bayesian-learning-via-stochastic-gradient-langevin-dynamics-and-bayes-by-backprop/#blundell2015" id="id2">[Blundell2015]</a>. Both are easily implemented in the modern deep
learning toolkit and thus benefit from the massive scalability of that toolchain.
As usual, I will go over the necessary background (or refer you to my previous
posts), intuition, some math, and a couple of toy examples that I implemented.</p>
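<p>To make the first idea concrete, here is a minimal sketch of a single SGLD update step. This is my own illustration rather than code from the post; the gradient callables, minibatch handling, and step-size schedule are placeholders you would supply for your model:</p>
<pre>
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_batch, n_total, n_batch, step_size, rng):
    # Unbiased stochastic estimate of the gradient of the log posterior:
    # grad log p(theta) + (N / n) * sum of grad log p(x_i | theta) over the minibatch.
    grad = grad_log_prior(theta) + (n_total / n_batch) * grad_log_lik_batch(theta)
    # An SGD step plus Gaussian noise whose variance matches the step size;
    # as the step size decays, the iterates behave like samples from the posterior.
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise
</pre>
<p>In practice the step size is annealed toward zero and an initial stretch of iterates is discarded as burn-in before treating the rest as approximate posterior samples.</p>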
<p><a href="http://bjlkeng.github.io/posts/bayesian-learning-via-stochastic-gradient-langevin-dynamics-and-bayes-by-backprop/">Read more…</a> (53 min remaining to read)</p></div>Bayes by BackpropBayesianelboHMCLangevinmathjaxrmspropsgdSGLDvariational inferencehttp://bjlkeng.github.io/posts/bayesian-learning-via-stochastic-gradient-langevin-dynamics-and-bayes-by-backprop/Wed, 08 Feb 2023 23:25:40 GMTHamiltonian Monte Carlohttp://bjlkeng.github.io/posts/hamiltonian-monte-carlo/Brian Keng<div><p>Here's a topic I thought that I would never get around to learning because it was "too hard".
When I first started learning about Bayesian methods, I knew enough that I
should learn a thing or two about MCMC since that's the backbone
of most Bayesian analysis; so I learned something about it
(see my <a class="reference external" href="http://bjlkeng.github.io/posts/markov-chain-monte-carlo-mcmc-and-the-metropolis-hastings-algorithm/">previous post</a>).
But I didn't dare attempt to learn about the infamous Hamiltonian Monte Carlo (HMC).
Even though it is among the standard algorithms used in Bayesian inference, it
always seemed too daunting because it required "advanced physics" to
understand. As usual, things only seem hard because you don't know them yet.
After having some time to digest MCMC methods, getting comfortable learning
more maths (see
<a class="reference external" href="http://bjlkeng.github.io/posts/tensors-tensors-tensors/">here</a>,
<a class="reference external" href="http://bjlkeng.github.io/posts/manifolds/">here</a>, and
<a class="reference external" href="http://bjlkeng.github.io/posts/hyperbolic-geometry-and-poincare-embeddings/">here</a>),
all of a sudden learning "advanced physics" didn't seem so tough (but there
sure was a lot of background needed)!</p>
<p>This post is the culmination of many different rabbit holes (many much deeper
than I needed to go) where I'm going to attempt to explain HMC in simple and
intuitive terms to a satisfactory degree (that's the tag line of this blog
after all). I'm going to begin by briefly motivating the topic by reviewing
MCMC and the Metropolis-Hastings algorithm, then move on to explaining
Hamiltonian dynamics (i.e., the "advanced physics"), and finally discuss the HMC
algorithm along with some toy experiments I put together. Most of the material
is based on [1] and [2], which I've found to be great sources for their
respective areas.</p>
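<p>For a rough sense of what the "advanced physics" boils down to in code, here is a hand-written sketch of the leapfrog integrator that sits at the core of HMC (not the post's implementation; grad_log_prob, the step size, and the number of steps are placeholders, and a unit mass matrix is assumed):</p>
<pre>
import numpy as np

def leapfrog(theta, momentum, grad_log_prob, step_size, n_steps):
    # Simulate Hamiltonian dynamics where the potential energy is -log p(theta),
    # so the force on the position variables is +grad_log_prob(theta).
    theta, momentum = theta.copy(), momentum.copy()
    momentum = momentum + 0.5 * step_size * grad_log_prob(theta)  # initial half step
    for _ in range(n_steps - 1):
        theta = theta + step_size * momentum                      # full position step
        momentum = momentum + step_size * grad_log_prob(theta)    # full momentum step
    theta = theta + step_size * momentum
    momentum = momentum + 0.5 * step_size * grad_log_prob(theta)  # final half step
    return theta, momentum
</pre>
<p>HMC then accepts or rejects the end point with a Metropolis correction based on the change in total energy (potential plus kinetic), which is what keeps the sampler exact despite the discretization error of the integrator.</p>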
<p><a href="http://bjlkeng.github.io/posts/hamiltonian-monte-carlo/">Read more…</a> (52 min remaining to read)</p></div>BayesianHamiltonianmathjaxMCMCMonte Carlohttp://bjlkeng.github.io/posts/hamiltonian-monte-carlo/Fri, 24 Dec 2021 00:07:05 GMTVariational Bayes and The Mean-Field Approximationhttp://bjlkeng.github.io/posts/variational-bayes-and-the-mean-field-approximation/Brian Keng<div><p>This post is going to cover Variational Bayesian methods and, in particular,
the most common one, the mean-field approximation. This is a topic that I've
been trying to understand for a while now but didn't quite have all the background
that I needed. After picking up the main ideas from
<a class="reference external" href="http://bjlkeng.github.io/posts/the-calculus-of-variations/">variational calculus</a> and
getting more fluent in manipulating probability statements like
in my <a class="reference external" href="http://bjlkeng.github.io/posts/the-expectation-maximization-algorithm/">EM</a> post,
this variational Bayes stuff seems a lot easier.</p>
<p>Variational Bayesian methods are a set of techniques to approximate posterior
distributions in <a class="reference external" href="https://en.wikipedia.org/wiki/Bayesian_inference">Bayesian Inference</a>.
If this sounds a bit terse, keep reading! I hope to provide some intuition
so that the big ideas are easy to understand (which they are), but of course we
can't do that well unless we have a healthy dose of mathematics. For some of the
background concepts, I'll try to refer you to good sources (including my own),
since missing background is what I find to be the main blocker to understanding this subject (admittedly, the
math can sometimes be a bit cryptic too). Enjoy!</p>
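<p>As a one-line preview (my paraphrase of the standard setup, not an excerpt from the post): variational Bayes picks an approximating distribution \(q\) and maximizes the evidence lower bound (ELBO), and the mean-field approximation restricts \(q\) to a fully factorized form:</p>
$$
\log p(x) \;\ge\; \mathbb{E}_{q(\theta)}\big[\log p(x, \theta) - \log q(\theta)\big],
\qquad
q(\theta) = \prod_i q_i(\theta_i).
$$
<p>Maximizing the right-hand side over \(q\) is equivalent to minimizing the Kullback-Leibler divergence \(\mathrm{KL}\big(q(\theta)\,\|\,p(\theta \mid x)\big)\) to the true posterior.</p>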
<p><a href="http://bjlkeng.github.io/posts/variational-bayes-and-the-mean-field-approximation/">Read more…</a> (24 min remaining to read)</p></div>BayesianKullback-Leiblermathjaxmean-fieldvariational calculushttp://bjlkeng.github.io/posts/variational-bayes-and-the-mean-field-approximation/Mon, 03 Apr 2017 13:02:46 GMTA Probabilistic Interpretation of Regularizationhttp://bjlkeng.github.io/posts/probabilistic-interpretation-of-regularization/Brian Keng<div><p>This post is going to look at a probabilistic (Bayesian) interpretation of
regularization. We'll take a look at both L1 and L2 regularization in the
context of ordinary linear regression. The discussion will start off
with a quick introduction to regularization, followed by a back-to-basics
explanation starting with the maximum likelihood estimate (MLE), then on to the
maximum a posteriori estimate (MAP), and finally playing around with priors to
end up with L1 and L2 regularization.</p>
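<p>The punchline, stated up front in my own words (with \(\lambda\) an appropriate constant determined by the noise and prior variances): under Gaussian noise, a zero-mean Gaussian prior on the coefficients makes the MAP estimate coincide with L2-regularized (ridge) least squares:</p>
$$
\hat{\beta}_{\text{MAP}}
= \arg\max_{\beta}\, \big[\log p(y \mid X, \beta) + \log p(\beta)\big]
= \arg\min_{\beta}\, \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2,
$$
<p>with \(\lambda \|\beta\|_1\) replacing the penalty term when the prior is a zero-mean Laplace distribution rather than a Gaussian, which gives the L1 (lasso) penalty.</p>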
<p><a href="http://bjlkeng.github.io/posts/probabilistic-interpretation-of-regularization/">Read more…</a> (9 min remaining to read)</p></div>Bayesianmathjaxprobabilityregularizationhttp://bjlkeng.github.io/posts/probabilistic-interpretation-of-regularization/Mon, 29 Aug 2016 12:52:33 GMTA Probabilistic View of Linear Regressionhttp://bjlkeng.github.io/posts/a-probabilistic-view-of-regression/Brian Keng<div><p>One thing that I always disliked about introductory material to linear
regression is how randomness is explained. The explanations always
seemed unintuitive because, as I have frequently seen it, they appear as an
afterthought rather than the central focus of the model.
In this post, I'm going to try to
take another approach to building an ordinary linear regression model starting
from a probabilistic point of view (which is pretty much just a Bayesian view).
After the general idea is established, I'll modify the model a bit and end up
with a Poisson regression using the exact same principles, showing how
generalized linear models aren't any more complicated. Hopefully, this will
help explain the "randomness" in linear regression in a more intuitive way.</p>
<p><a href="http://bjlkeng.github.io/posts/a-probabilistic-view-of-regression/">Read more…</a> (12 min remaining to read)</p></div>BayesianlogisticmathjaxPoissonprobabilityregressionhttp://bjlkeng.github.io/posts/a-probabilistic-view-of-regression/Sun, 15 May 2016 00:43:05 GMTNormal Approximation to the Posterior Distributionhttp://bjlkeng.github.io/posts/normal-approximations-to-the-posterior-distribution/Brian Keng<div class="cell border-box-sizing text_cell rendered"><div class="prompt input_prompt">
</div><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>In this post, I'm going to write about how the ever-versatile normal distribution can be used to approximate a Bayesian posterior distribution. Unlike some other normal approximations, this is <em>not</em> a direct application of the central limit theorem. The result has a straightforward proof using Laplace's method, whose main ideas I will attempt to present. I'll also simulate a simple scenario to see how it works in practice.</p>
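<p>In one line (my summary of Laplace's method rather than an excerpt from the post): the posterior is approximated by a Gaussian centered at the posterior mode \(\hat{\theta}\), with covariance given by the inverse of the negative Hessian of the log posterior at that mode:</p>
$$
p(\theta \mid x) \;\approx\; \mathcal{N}\!\Big(\hat{\theta},\ \big[-\nabla^2 \log p(\theta \mid x)\,\big|_{\theta = \hat{\theta}}\big]^{-1}\Big).
$$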
<p><a href="http://bjlkeng.github.io/posts/normal-approximations-to-the-posterior-distribution/">Read more…</a> (14 min remaining to read)</p></div></div></div>Bayesiannormal distributionposteriorpriorprobabilitysamplinghttp://bjlkeng.github.io/posts/normal-approximations-to-the-posterior-distribution/Sat, 02 Apr 2016 19:22:54 GMT