Bounded Rationality (Posts about information) | http://bjlkeng.github.io/en | Sat, 03 Aug 2024 01:42:51 GMT | Nikola (getnikola.com) | http://blogs.law.harvard.edu/tech/rss | The Logic Behind the Maximum Entropy Principle | http://bjlkeng.github.io/posts/the-logic-behind-entropy/ | Brian Keng<div><p>For a while now, I've really enjoyed diving deep to understand
probability and related fundamentals (see
<a class="reference external" href="http://bjlkeng.github.io/posts/probability-the-logic-of-science/">here</a>,
<a class="reference external" href="http://bjlkeng.github.io/posts/maximum-entropy-distributions/">here</a>, and
<a class="reference external" href="http://bjlkeng.github.io/posts/an-introduction-to-stochastic-calculus/">here</a>).
Entropy is a topic that comes up all over the place from physics to information
theory, and, of course, machine learning. I've written about it in various
forms but have always taken it as a given as the "expected information".
Well, I found a few good explanations of how to "derive" it and thought
that I should share.</p>
<p>In this post, I'll be showing a few derivations of the maximum entropy
principle, where entropy appears as part of the definition. These derivations
will show why it is a reasonable and natural thing to maximize, and how it
follows from some well-thought-out reasoning. This post will be more
math-heavy, but hopefully it will give you more insight into this wonderfully
surprising topic.</p>
<p><a href="http://bjlkeng.github.io/posts/the-logic-behind-entropy/">Read more…</a> (16 min remaining to read)</p></div>entropy, information, Jaynes, mathjax, Shannon, Wallis | http://bjlkeng.github.io/posts/the-logic-behind-entropy/ | Sat, 03 Aug 2024 00:44:59 GMT