<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Bounded Rationality (Posts about label refinery)</title><link>http://bjlkeng.github.io/</link><description></description><atom:link href="http://bjlkeng.github.io/categories/label-refinery.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><lastBuildDate>Tue, 10 Mar 2026 20:54:59 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Label Refinery: A Softer Approach</title><link>http://bjlkeng.github.io/posts/label-refinery/</link><dc:creator>Brian Keng</dc:creator><description>&lt;div&gt;&lt;p&gt;This post is going to be about a really simple idea that is surprisingly effective
from a paper by Bagherinezhad et al. called &lt;a class="reference external" href="https://arxiv.org/abs/1805.02641"&gt;Label Refinery: Improving ImageNet
Classification through Label Progression&lt;/a&gt;.
The title pretty much says it all, but I'll also discuss some intuition and show
some experiments on the CIFAR10 and SVHN datasets.  The idea is both simple and
surprising, my favourite kind of idea!  Let's take a look.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://bjlkeng.github.io/posts/label-refinery/"&gt;Read more…&lt;/a&gt; (10 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>CIFAR10</category><category>label refinery</category><category>mathjax</category><category>residual networks</category><category>svhn</category><guid>http://bjlkeng.github.io/posts/label-refinery/</guid><pubDate>Tue, 04 Sep 2018 11:26:02 GMT</pubDate></item></channel></rss>