Bounded Rationality: The Expectation-Maximization Algorithm
by Brian Keng (http://bjlkeng.github.io/posts/the-expectation-maximization-algorithm/)
<div><p>This post is going to talk about a widely used method to find the
maximum likelihood (MLE) or maximum a posteriori (MAP) estimate of the parameters
of a latent variable model: the Expectation-Maximization (EM) algorithm. You
have probably heard of its most famous variant, the
k-means algorithm for clustering.
Even though the algorithm is ubiquitous, whenever I've tried to understand <em>why</em> it
works, I never quite got the intuition right. Now that I've taken
the time to work through the math, I'm going to <em>attempt</em> to explain the
algorithm with a bit more clarity. We'll start by going back to
basics with latent variable models and likelihood functions, then move on
to the math for a simple Gaussian mixture model <a class="footnote-reference brackets" href="http://bjlkeng.github.io/posts/the-expectation-maximization-algorithm/#id5" id="id1">1</a>.</p>
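To make the setting concrete before the full derivation, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture. This is my own illustration, not code from the post: the function name, initialization scheme, and iteration count are all assumptions, and the E-step (soft responsibilities) and M-step (weighted re-estimation) follow the standard textbook updates.

```python
# Minimal sketch of EM for a two-component 1-D Gaussian mixture.
# Illustration only; names and initializations are assumed, not from the post.
import math
import random

def em_gmm_1d(xs, iters=50):
    # Assumed starting point: equal mixing weight, means at the data extremes,
    # unit variances. Real implementations initialize more carefully.
    pi, mu1, mu2, var1, var2 = 0.5, min(xs), max(xs), 1.0, 1.0

    def pdf(x, mu, var):
        # Gaussian density N(x | mu, var)
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for _ in range(iters):
        # E-step: responsibility of component 1 for each point,
        # i.e. the posterior probability the point came from component 1.
        r = [pi * pdf(x, mu1, var1) /
             (pi * pdf(x, mu1, var1) + (1 - pi) * pdf(x, mu2, var2))
             for x in xs]

        # M-step: re-estimate parameters as responsibility-weighted averages.
        n1 = sum(r)
        n2 = len(xs) - n1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        var1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1 + 1e-9
        var2 = sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2 + 1e-9
        pi = n1 / len(xs)

    return pi, mu1, mu2, var1, var2
```

On data drawn from two well-separated Gaussians, the returned means should land near the true cluster centers; swapping soft responsibilities for hard nearest-mean assignments recovers k-means, which is why it can be viewed as a variant of EM.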
<p><a href="http://bjlkeng.github.io/posts/the-expectation-maximization-algorithm/">Read more…</a> (18 min remaining to read)</p></div>
Tags: expectation-maximization, gaussian mixture models, latent variables, mathjax
Fri, 07 Oct 2016 12:47:47 GMT