<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title>{ Nadja Rhodes }</title>
		<description>Nadja Rhodes is a software engineer who likes to code for the web. She blogs (mainly) as she dabbles with data, NLP, and machine learning. Opinions expressed are hers.</description>
		<link>https://iconix.github.io</link>
		<atom:link href="https://iconix.github.io/feed.xml" rel="self" type="application/rss+xml" />
			<item>
				<title>My work year in review</title>
					<description>&lt;p&gt;I celebrated my 1-year work anniversary at &lt;a href=&quot;https://www.asapp.com/&quot;&gt;ASAPP&lt;/a&gt; this past week. So now that I possess one year of professional experience as a &lt;em&gt;machine learning engineer&lt;/em&gt; (aka ML engineer or MLE) – what has this past year been like for me?&lt;/p&gt;

</description>
				<pubDate>Thu, 30 Jan 2020 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/career/2020/01/30/asapp-year-1</link>
				<guid isPermaLink="true">https://iconix.github.io/career/2020/01/30/asapp-year-1</guid>
			</item>
			<item>
				<title>Advice on OpenAI Scholars</title>
					<description>&lt;p&gt;I am taking some blogging advice that I’ve read elsewhere: if multiple people ask you similar questions offline, answer those questions in a blog post!&lt;/p&gt;

</description>
				<pubDate>Sun, 07 Oct 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/notes/2018/10/07/what-i-learned</link>
				<guid isPermaLink="true">https://iconix.github.io/notes/2018/10/07/what-i-learned</guid>
			</item>
			<item>
				<title>deephypebot: an overview</title>
					<description>&lt;h2 id=&quot;motivation&quot;&gt;Motivation&lt;/h2&gt;

</description>
				<pubDate>Fri, 31 Aug 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/08/31/deephypebot-final</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/08/31/deephypebot-final</guid>
			</item>
			<item>
				<title>Notes on topic modeling</title>
					<description>&lt;p&gt;I’ve &lt;a href=&quot;/notes/2017/12/07/topics-and-dim-reduction&quot;&gt;talked a bit about topic modeling&lt;/a&gt; on this blog before, pre-Scholars program. I revisit the subject now as a potential &lt;em&gt;automatic reward function&lt;/em&gt; for the &lt;a href=&quot;/dl/2018/07/28/lcgan&quot;&gt;LC-GAN&lt;/a&gt; I am using in &lt;a href=&quot;/dl/2018/08/03/deephypebot&quot;&gt;my final project&lt;/a&gt;. My hypothesis is that topic modeling can distill the sentences in my music commentary data set down into distinct sentence types; this information can then be used to teach the LC-GAN what types of sentences to encourage the downstream language model to generate.&lt;/p&gt;

</description>
				<pubDate>Fri, 24 Aug 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/08/24/project-notes-2</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/08/24/project-notes-2</guid>
			</item>
			<item>
				<title>Notes on genre and inspiration</title>
					<description>&lt;p&gt;My &lt;a href=&quot;/dl/2018/08/03/deephypebot&quot;&gt;final project&lt;/a&gt; is underway, and it feels like I get to return to my software engineering roots with these powerful, new deep learning techniques in my tool belt. This is a welcome opportunity to flex what I’ve learned this summer.&lt;sup id=&quot;fnref:flex&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:flex&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:flex&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Perhaps this is the true meaning of &lt;a href=&quot;/dl/2018/06/20/arxiv-song-titles#best-of&quot;&gt;“Flexing To The Study”&lt;/a&gt; :slightly_smiling_face: &lt;a href=&quot;#fnref:flex&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
				<pubDate>Tue, 14 Aug 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/08/14/project-notes-1</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/08/14/project-notes-1</guid>
			</item>
			<item>
				<title>OpenAI Scholar: Final Project</title>
					<description>&lt;p&gt;&lt;em&gt;This post is a replica of my OpenAI Scholar final project proposal, also available &lt;a href=&quot;https://github.com/iconix/deephypebot/blob/master/proposal.md&quot;&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
				<pubDate>Fri, 03 Aug 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/08/03/deephypebot</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/08/03/deephypebot</guid>
			</item>
			<item>
				<title>Training an LC-GAN</title>
					<description>&lt;p&gt;I spent the last week before my 4-week final project understanding, implementing, and training a special kind of GAN. GAN is short for &lt;em&gt;generative adversarial network&lt;/em&gt;, a neural network that “simultaneously trains two models: a generative model &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;G&lt;/code&gt; that captures the data distribution, and a discriminative model &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;D&lt;/code&gt; that estimates the probability that a sample came from the training data rather than &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;G&lt;/code&gt;” (from the &lt;a href=&quot;https://arxiv.org/abs/1406.2661&quot;&gt;original GAN paper&lt;/a&gt;).&lt;/p&gt;

</description>
				<pubDate>Sat, 28 Jul 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/07/28/lcgan</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/07/28/lcgan</guid>
			</item>
			<item>
				<title>Interpreting Latent Space and Bias</title>
					<description>&lt;p&gt;For week 7, and my second week on model interpretability (see &lt;a href=&quot;/dl/2018/07/13/interpret-attn&quot;&gt;first week post&lt;/a&gt;), I focused in on one particularly cool VAE-based visualization example from Ha &amp;amp; Schmidhuber’s &lt;a href=&quot;https://worldmodels.github.io/&quot;&gt;World Models&lt;/a&gt; work. I also did some broader thinking around &lt;em&gt;selection bias&lt;/em&gt; in my song review training data.&lt;/p&gt;

</description>
				<pubDate>Sat, 21 Jul 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/07/21/bias-and-space</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/07/21/bias-and-space</guid>
			</item>
			<item>
				<title>Interpreting with Attention and More</title>
					<description>&lt;p&gt;This week, I picked up from &lt;a href=&quot;/dl/2018/07/06/not-enough-attention#understanding-attention&quot;&gt;last week’s study of attention&lt;/a&gt; and applied it towards &lt;strong&gt;model interpretability&lt;/strong&gt;, or the ability for humans to understand a model. I also explored some other &lt;a href=&quot;https://distill.pub/2018/building-blocks/&quot;&gt;“attribution-based”&lt;/a&gt; methods of interpretability that credit different parts of the model input with the prediction.&lt;/p&gt;

</description>
				<pubDate>Fri, 13 Jul 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/07/13/interpret-attn</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/07/13/interpret-attn</guid>
			</item>
			<item>
				<title>Not Enough Attention</title>
					<description>&lt;p&gt;&lt;em&gt;Attention week: what an ironic week to get distracted.&lt;/em&gt;&lt;/p&gt;

</description>
				<pubDate>Fri, 06 Jul 2018 00:00:00 +0000</pubDate>
				<link>https://iconix.github.io/dl/2018/07/06/not-enough-attention</link>
				<guid isPermaLink="true">https://iconix.github.io/dl/2018/07/06/not-enough-attention</guid>
			</item>
	</channel>
</rss>
