<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>korbonits | Math ∩ Data</title>
    <description>Born and raised in Seattle&lt;br&gt; Currently working at Amazon Web Services, Inc.
</description>
    <link>http://korbonits.github.io/</link>
    <atom:link href="http://korbonits.github.io/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Fri, 18 Nov 2022 06:33:50 +0000</pubDate>
    <lastBuildDate>Fri, 18 Nov 2022 06:33:50 +0000</lastBuildDate>
    <generator>Jekyll v3.9.2</generator>
    
      <item>
        <title>In re: Seattle's New Theory of Crime</title>
        <description>&lt;h1 id=&quot;november-14-weekend-edition&quot;&gt;November 14 Weekend Edition&lt;/h1&gt;

&lt;p&gt;Recently I became a subscriber to the online edition of the WSJ. They were offering a sweet deal, and I like to follow their business news. I tacked on weekend delivery too.&lt;/p&gt;

&lt;p&gt;On the last page of the first section of the November 14th paper, I came across an article that made my blood boil. I immediately responded by submitting a letter to the editor. Not having heard back, I assume my response will remain unpublished, so I am publishing it here on my blog.&lt;/p&gt;

&lt;p&gt;The thesis of their editorial is simple: Law and Order should rule the day, and Seattle’s proposed legislation to make poverty a suitable defense for certain misdemeanors is a descent into anarchy and proof that Seattle is an “anarchist jurisdiction” (so labeled by former Atty. Gen. Bill Barr following unrest in 2020 and Seattle’s well-known Capitol Hill Autonomous Zone). It was an article meant to fire up conservatives. It had the effect (in my mind) of firing up #45’s base, stoking racism, exacerbating white Seattle’s deep fears of poverty and people of color, and moralizing against the poor. It should instead be a mirror we hold up to ourselves, questioning why there is poverty amidst plenty. The proposed legislation is a step towards the decriminalization of poverty – a noble goal.&lt;/p&gt;

&lt;h1 id=&quot;quick-primer-of-the-wall-street-journal-editorial-board&quot;&gt;Quick primer on the Wall Street Journal editorial board&lt;/h1&gt;

&lt;p&gt;&lt;a href=&quot;https://www.wsj.com/articles/seattles-new-theory-of-crime-11605309361?st=njn5ejeyzgc1rtp&amp;amp;reflink=desktopwebshare_permalink&quot;&gt;Seattle’s New Theory of Crime&lt;/a&gt; is an article whose byline is &lt;a href=&quot;https://www.wsj.com/news/author/editorial-board&quot;&gt;The Editorial Board&lt;/a&gt; (and whose Twitter handle is &lt;a href=&quot;https://twitter.com/WSJopinion&quot;&gt;@WSJopinion&lt;/a&gt;). Their biography, linked above, reads like a mini manifesto courtesy of John Galt. But the board’s biography doesn’t even begin to touch on racism, crime, or poverty – subjects on which, this being the editorial board of the Wall Street Journal, it has strong opinions indeed.&lt;/p&gt;

&lt;h1 id=&quot;my-unpublished-response--unedited&quot;&gt;My unpublished response – unedited&lt;/h1&gt;

&lt;p&gt;Regarding “Seattle’s New Theory of Crime” (Review, Nov. 14): the characterization by the author of the realities of life in Seattle are baseless and polemical. The author, who parrots the Atty. Gen. Barr’s abhorrently political distinction of Seattle as an “anarchist jurisdiction”, paints a mythical canvas of Seattle that, while dramatic, is farther from fact than Renaissance painters depicting scenes from the Trojan Wars. Not only is it offensive to call on business owners to “keep their plywood up” when many small businesses have been struggling due to COVID-19 – thanks to Sen. Mitch McConnell’s obstructionism in the Senate to pass additional relief to business owners and wage-earners – but it is a thinly-veiled dog whistle to right wing groups and corporate America that reifies racist views of inner-cities as full of crime and lacking in so-called “Law and Order”.&lt;/p&gt;

&lt;p&gt;The riots in Seattle have been Police riots, where police waged chemical warfare against peaceful protesters for months in the wake of the disgusting murder of George Floyd in May. Neighbors near the East Precinct, including many with pre-existing conditions, elderly, and pregnant mothers, have been breathing pepper spray and tear gas all summer, as it wafted down streets and up into their windows. The violence perpetrated by the Seattle Police Department has led to open federal investigation for years.&lt;/p&gt;

&lt;p&gt;The theory behind recent proposed legislature in Seattle is to provide some legal relief to the most disadvantaged people in our city and our society. Their circumstances are in no uncertain terms caused by the machinations of the unbridled worship of raw capitalism that has devastated working people in neighborhoods and cities all across America. The WSJ excitedly reports on record highs in the S&amp;amp;P 500 and in the back pages of the same section, calls those who have had it worst “lawbreakers” and rioters. Isn’t crime profitable for S&amp;amp;P 500 companies? Is that not what we’re celebrating when the index breaks new highs when 20 million Americans have lost their jobs and 240,000 have died because we’ve put “the economy” first?&lt;/p&gt;

&lt;p&gt;Seattle is thriving. The city council is trying to make it slightly less illegal to be poor and underserved by public resources. When it’s safe to travel, I invite you to visit and see how well Seattle is doing compared to the rest of America.&lt;/p&gt;

&lt;h1 id=&quot;commentary&quot;&gt;Commentary&lt;/h1&gt;

&lt;p&gt;What an emotionally charged response! Now, this would not pass PR muster, unedited, if I were a CEO or an elected official. I think the pro-Seattle bias towards the end is natural hometown pride after reading an article whose sole purpose was to denigrate my city, but I didn’t intend it at the expense of other cities and parts of America that are struggling far worse – especially places where it’s not the fault of locals, but of gerrymandered representation and political gridlock that have led to stagnant progress for decades. Seattle has been incredibly lucky overall. However, that very success has bred a myriad of problems such as displacement of locals, increased segregation, unlivably expensive rents, and stagnant wages for many. We can do better.&lt;/p&gt;
</description>
        <pubDate>Sat, 26 Dec 2020 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2020/12/26/In-re-Seattle-New-Theory-of-Crime.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2020/12/26/In-re-Seattle-New-Theory-of-Crime.html</guid>
        
        
      </item>
    
      <item>
        <title>Le goût</title>
        <description>&lt;p&gt;food [delights]&lt;/p&gt;

&lt;p&gt;experience (so) special&lt;/p&gt;

&lt;p&gt;artful bouquets balanced, with care&lt;/p&gt;

&lt;p&gt;unexpected, combinatorial&lt;/p&gt;

&lt;p&gt;sichuanese cuisine. my favorite chinese restaurant. the nuanced sichuan pepper. its numbing aspect, bouquet of flowers, ground potpourri. vegetal&lt;/p&gt;

&lt;p&gt;pepper segway: the fresh flesh of the habanero, seductive, fruity, aromatic. Tantalizing yet deadly heat. Yearning for the burn 🔥&lt;/p&gt;

&lt;p&gt;gyro (euro, not GYRO[scope]): savory lamb souvlaki, tarty-salty feta, the crunch of lettuce, acidity of tomato, bitterness of onion. bite. pita is merely a conveyance, i have no allegiance (je suis fou)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;j’aime le pâté&lt;/em&gt;. intense and exciting. layers of &lt;em&gt;goût&lt;/em&gt;. weird body parts (livers &amp;amp;tc). langue de boeuf: excellent as charcuterie goes, bien sûr, but perhaps not a pâté… mais ouais, c’est… c’est vraiment délicieux&lt;/p&gt;

&lt;p&gt;PETRICHOR. earthy. smoky. umami. leather, (smoked, cold or hot, it doesn’t matter!) salmon, a fire crackling in the room, a wood burning stove (at home or encountered on a walk, blocks away at twilight in a sleepy neighborhood), a good cigar, pipe tobacco, even a decent cigarette for the right occasion (the drink before and the smoke after)&lt;/p&gt;

&lt;p&gt;hunkering down. inside in autumn, or en hiver, storms approaching, under the warm (scritchy-scratchy) blanket (with a loved one? kisses, magical taste &amp;lt;3). it gets cold. add a mug of something warm. like tea (smoky, grassy, vegetal), or a hot toddy (sweet). or a hot body. eggplant emoji.&lt;/p&gt;

&lt;p&gt;baba ganoush. baba-que, barbeque, brisket, ribs. albeit slightly carcinogenic (i often dream of it)&lt;/p&gt;

&lt;p&gt;aroma of lapsang souchong. unparalleled – stronger varieties of russian caravan &lt;em&gt;notwithstanding&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;the scotches of islay – dried over peat fires, said peat dug up from the very ground upon which it is burnt&lt;/p&gt;

&lt;p&gt;the texture(smoked salmon): savory, sweet, tangy, smoke gently flirting with its edges –&amp;gt; &lt;em&gt;brain zinger!&lt;/em&gt;. the way the salmon can go so oh well, just like so, with the &lt;em&gt;right&lt;/em&gt; company {perhaps a well-paired cheese} PINKY UP&lt;/p&gt;

&lt;p&gt;smoked bleu from a rogue creamerie, as it were&lt;/p&gt;

&lt;p&gt;smoked salt on a chocolate caramel from a local confectionery&lt;/p&gt;

&lt;p&gt;a proper connection between flavor(s) and smell(s) on a single manifold of olfactory phenomena. an equivalence relation ~ between scotch, leather, and good dirt, satisfying reflexivity, symmetry, and transitivity&lt;/p&gt;

&lt;p&gt;o(m(g))&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;http://www.napowrimo.net/wp-content/uploads/2020/03/napo2020button1-1.png&quot; alt=&quot;NaPoWriMo&quot; /&gt;&lt;/p&gt;

&lt;h1 id=&quot;national-poetry-writing-month&quot;&gt;National Poetry (Writing) Month&lt;/h1&gt;

&lt;p&gt;This is a poem in a series of poems during April, 2020, for #NationalPoetryMonth and #NaPoWriMo #NaPoWriMo2020&lt;/p&gt;

&lt;p&gt;My &lt;a href=&quot;/2020/03/31/national-poetry-writing-month.html&quot;&gt;NationalPoetryWritingMonth post&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Thu, 02 Apr 2020 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2020/04/02/go%C3%BBt.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2020/04/02/go%C3%BBt.html</guid>
        
        
      </item>
    
      <item>
        <title>Q(uarantine) P(hysics)</title>
        <description>&lt;p&gt;All day we’re solving a system of equations
A three body problem, f(you, me, the cat)&lt;/p&gt;

&lt;p&gt;Time has all but ended as we flirt with each other’s event horizons&lt;/p&gt;

&lt;p&gt;Dipping, dipping lower into an unstoppable force&lt;/p&gt;

&lt;p&gt;Together forever!&lt;/p&gt;

&lt;p&gt;Time has no meaning. The limit of love as our time together increases goes to infinity&lt;/p&gt;

&lt;p&gt;Power. Force&lt;/p&gt;

&lt;p&gt;What pulls us together? What unites us?&lt;/p&gt;

&lt;p&gt;I love you with the force of a black hole&lt;/p&gt;

&lt;p&gt;It’s all-consuming. The more of you I have the more of you I want. Am I addicted?&lt;/p&gt;

&lt;p&gt;Robert Palmer would suggest that we might as well face it&lt;/p&gt;

&lt;p&gt;Sometimes we merge, emitting a chirp of gravitational waves that echo across all of spacetime&lt;/p&gt;

&lt;p&gt;Our love changes the very fabric of Minkowski Space&lt;/p&gt;

&lt;p&gt;Waves of love resonating with all who encounter it ❤️&lt;/p&gt;

&lt;p&gt;It moves with the speed of light&lt;/p&gt;

&lt;p&gt;When we merge, it’s hot. It’s billions of atomic bombs, releasing several suns of energy and pressure&lt;/p&gt;

&lt;p&gt;Infinities and zeroes, the notion of distance shatters as our theory breaks down, there’s no limit to what we can do, there’s no answer to the question, to the equation&lt;/p&gt;

&lt;p&gt;No one can ever hope to understand us&lt;/p&gt;

&lt;p&gt;It’s ephemeral, a blip on the cosmic timescale, but right now, it’s everything, it’s like all of time all the time every time. I’ve forgotten time. The only time I know, the only time I want, the time I need, is with you&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;http://www.napowrimo.net/wp-content/uploads/2020/03/napo2020button1-1.png&quot; alt=&quot;NaPoWriMo&quot; /&gt;&lt;/p&gt;

&lt;h1 id=&quot;national-poetry-writing-month&quot;&gt;National Poetry (Writing) Month&lt;/h1&gt;

&lt;p&gt;This is a poem in a series of poems during April, 2020, for #NationalPoetryMonth and #NaPoWriMo #NaPoWriMo2020&lt;/p&gt;

&lt;p&gt;My &lt;a href=&quot;/2020/03/31/national-poetry-writing-month.html&quot;&gt;NationalPoetryWritingMonth post&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Wed, 01 Apr 2020 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2020/04/01/quarantine-physics.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2020/04/01/quarantine-physics.html</guid>
        
        
      </item>
    
      <item>
        <title>National Poetry (Writing) Month</title>
        <description>&lt;p&gt;Happy #NationalPoetryMonth #NaPoWriMo #NaPoWriMo2020!&lt;/p&gt;

&lt;p&gt;This month I’m going to try to publish &amp;gt;= 1 new poem every day in celebration of National Poetry Month and its lesser-known cousin, National Poetry Writing Month.&lt;/p&gt;

&lt;p&gt;Inspired by &lt;a href=&quot;http://www.napowrimo.net/&quot;&gt;NaPoWriMo&lt;/a&gt; and Twitter ;).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;http://www.napowrimo.net/wp-content/uploads/2020/03/napo2020button1-1.png&quot; alt=&quot;NaPoWriMo&quot; /&gt;&lt;/p&gt;

&lt;h1 id=&quot;poems-written&quot;&gt;Poems Written&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;April 1, 2020: &lt;a href=&quot;/2020/04/01/quarantine-physics.html&quot;&gt;Q(uarantine) P(hysics)&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;April 2, 2020: &lt;a href=&quot;/2020/04/02/go%C3%BBt.html&quot;&gt;Le goût&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;poems-read&quot;&gt;Poems Read&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;April 1, 2020: &lt;a href=&quot;https://poets.org/poem/having-coke-you&quot;&gt;Having a Coke with You, by Frank O’Hara&lt;/a&gt;&lt;sup id=&quot;fnref:0&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:0&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; &amp;lt;– &lt;em&gt;One of my favorite love poems&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;April 2, 2020: &lt;a href=&quot;https://www.poetryfoundation.org/poetrymagazine/poems/49493/i-carry-your-heart-with-mei-carry-it-in&quot;&gt;[i carry your heart with me(i carry it in], by E.E. Cummings&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; &amp;lt;– &lt;em&gt;Another of my favorite love poems&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;hr /&gt;

&lt;h1 id=&quot;bibliography&quot;&gt;Bibliography&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:0&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The Collected Poems of Frank O’Hara by Frank O’Hara, copyright © 1971 by Maureen Granville-Smith, Administratrix of the Estate of Frank O’Hara, copyright renewed 1999 by Maureen O’Hara Granville-Smith and Donald Allen. Used by permission of Alfred A. Knopf, an imprint of the Knopf Doubleday Publishing Group, a division of Penguin Random House LLC. All rights reserved. &lt;a href=&quot;#fnref:0&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“[i carry your heart with me(i carry it in]” Copyright 1952, © 1980, 1991 by the Trustees for the E. E. Cummings Trust, from Complete Poems: 1904-1962 by E. E. Cummings, edited by George J. Firmage. Used by permission of Liveright Publishing Corporation. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Tue, 31 Mar 2020 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2020/03/31/national-poetry-writing-month.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2020/03/31/national-poetry-writing-month.html</guid>
        
        
      </item>
    
      <item>
        <title>2019 Retrospective</title>
        <description>&lt;h2 id=&quot;2019-retrospective&quot;&gt;2019 Retrospective&lt;/h2&gt;

&lt;p&gt;Well, &lt;a href=&quot;http://korbonits.github.io/2019/01/01/New-Years-Resolutions-2019.html&quot;&gt;that was ambitious&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;All in all, I did pretty well. My sailing skill increased dramatically, and I continued to write, albeit with less zeal. I read a lot of books, and traveled to one new country (Japan) and new parts of others (Montréal). I learned how to learn better than I had in a while.&lt;/p&gt;

&lt;p&gt;The MOST important update of 2019 is that I met the love of my life (first week of 2019!), we fell in love, and planned a trip for early 2020 to propose to each other. Wasn’t even on the list. Definitely the best surprise of the year (and of my life). Renders 2019 a success no matter how you look at it. Looking forward to writing about this in a subsequent blog post :).&lt;/p&gt;

&lt;h3 id=&quot;grading-myself&quot;&gt;Grading myself&lt;/h3&gt;

&lt;p&gt;At a high level, for 2019 I had goals in the following categories:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Reading: success/failure&lt;/li&gt;
  &lt;li&gt;Writing: success/failure&lt;/li&gt;
  &lt;li&gt;Travel: success&lt;/li&gt;
  &lt;li&gt;Language: failure&lt;/li&gt;
  &lt;li&gt;Fitness: failure&lt;/li&gt;
  &lt;li&gt;Sailing: success/failure&lt;/li&gt;
  &lt;li&gt;Organization: success&lt;/li&gt;
  &lt;li&gt;Use less technology: success&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;reading-successfailure&quot;&gt;Reading: success/failure&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Read 50 books/15,000+ pages: success&lt;/li&gt;
  &lt;li&gt;Great Books, year one: failure&lt;/li&gt;
  &lt;li&gt;Classic machine learning papers: failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In 2019 I read 16,918 pages of &lt;strong&gt;completed&lt;/strong&gt; books on Goodreads, which is 50 books: first time hitting my goal! Here are &lt;a href=&quot;https://www.goodreads.com/review/list/49504536-alex-korbonits?read_at=2019&quot;&gt;my 2019 books&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Great Books: didn’t make any headway here. Ironically, I picked up a full (pristine!) set of the Harvard Classics, so Hutchins and Adler have some competition for my reading attention.&lt;/p&gt;

&lt;p&gt;Classic machine learning papers: this just wasn’t high on the interest list in 2019. Dropping it until further notice. I did, however, spend real quality time dissecting the latest NLP papers, math and all, on whiteboards or on Google Hangouts. BERT, Attention Is All You Need, ELMo, etc. were great reads, but the best part by far was sharing that knowledge with curious colleagues and friends.&lt;/p&gt;

&lt;h4 id=&quot;writing-successfailure&quot;&gt;Writing: success/failure&lt;/h4&gt;

&lt;p&gt;In early 2019, fresh off the huge NaNoWriMo tailwinds of hitting 50,000 words in late 2018, I started one of two fiction courses at &lt;a href=&quot;https://hugohouse.org/&quot;&gt;Hugo House&lt;/a&gt;: Fiction II and Fiction III.&lt;/p&gt;

&lt;p&gt;I was very invested in Fiction II, but conflicts with sailing and other obligations had me missing much of Fiction III, and I unfortunately withdrew after workshopping a piece of flash fiction derived from one of my NaNoWriMo scenes.&lt;/p&gt;

&lt;p&gt;After reading a lot of Poetry Magazine, I decided I’d like to take the trilogy of poetry courses at Hugo House when I can. Poetry suits the way I think better than the punishing detail and convention that prose requires to be readable. Writing prose is very difficult.&lt;/p&gt;

&lt;h4 id=&quot;travel-success&quot;&gt;Travel: success&lt;/h4&gt;

&lt;p&gt;Started off strong with entering 2019 in Puerto Vallarta, Mexico.&lt;/p&gt;

&lt;p&gt;Here’s a list of destinations I went to in 2019:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Puerto Vallarta, Mexico (January)&lt;/li&gt;
  &lt;li&gt;Vancouver, BC, Canada (April)&lt;/li&gt;
  &lt;li&gt;Japan (Osaka, Kobe, Kyoto, Himeji, Tokyo) (May)&lt;/li&gt;
  &lt;li&gt;Portland, OR (June)&lt;/li&gt;
  &lt;li&gt;Lopez Island, WA (July)&lt;/li&gt;
  &lt;li&gt;Los Angeles, Santa Monica, Malibu (x3 for each) (August, October, November)&lt;/li&gt;
  &lt;li&gt;Newport, OR (August)&lt;/li&gt;
  &lt;li&gt;Chicago (September)&lt;/li&gt;
  &lt;li&gt;New York City (September)&lt;/li&gt;
  &lt;li&gt;Troy, NY; Lake George; Lake Champlain (September)&lt;/li&gt;
  &lt;li&gt;Montréal, QC, Canada (September/October)&lt;/li&gt;
  &lt;li&gt;San Francisco (November)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wow. This is perhaps the most travel I’ve done in a single calendar year. Two continents (Asia (first time!) and North America), four countries (Japan, USA, Canada, Mexico), five states (Washington, Oregon, California, Illinois, New York), two provinces (British Columbia, Quebec), and 16 flights. I traveled for weddings, conferences, romantic adventures, Thanksgiving, family birthdays, and of course sailing.&lt;/p&gt;

&lt;p&gt;By far, Japan was the most memorable trip (unfair competition). Can’t wait to go back. It would have been nice to go for longer, or to have gone without the sailing festivities to focus purely on traveling. I would love to spend more time in Kyoto and Tokyo, specifically. One day in Kyoto and three days in Tokyo were simply not enough, although I did manage to do and see quite a lot in those days, with friends as well as on my own. I could not have done it without one of my college friends acting as the Virgil to my Dante, helping me carve out a semi-optimal path through these complex cities. Food was the main anchor point, followed by Shinto shrines and long walks.&lt;/p&gt;

&lt;h4 id=&quot;language-failure-with-computer-language-success&quot;&gt;Language: failure (with computer language success)&lt;/h4&gt;

&lt;p&gt;Knowing at the outset of 2019 that I’d be in Japan for a fortnight in May, I had a lot of motivation to learn Japanese. I tried Duolingo for a while, but I felt like I was missing a lot of structure and the learning curve was steep. I think lessons or Rosetta Stone would have been a better choice. In retrospect, it was so easy to get around Japan because everything is so LOGICAL there. Train stations in particular. Even in places where English signage was de minimis or nonexistent, it was easy to navigate. Having a lot of wifi to use Google Maps helped too, obviously, but English wasn’t really necessary for that either. Place names and the ability to interpret symbols on maps or signs were all one needed to be successful. Most practically, knowing how to politely greet, thank, and ask simple questions was important. Food names second.&lt;/p&gt;

&lt;p&gt;I tried hard not to take too much advantage of the privilege of being a native English speaker and an American in order to get around and ask for help. In being able to communicate so little, I had this wonderful experience of having to communicate what was absolutely essential in a minimalistic way. It forced me to be thoughtful about information, expression, and voice. It was isolating &lt;strong&gt;and&lt;/strong&gt; liberating. It made me crave learning the language so I could talk to locals and share human connection. It made traveling to Japan (which is obviously a very safe place with best-in-the-world amenities and infrastructure) feel so much more foreign, new, and different, yet still within my comfort zone. I loved it. It made me feel more open to more foreign-feeling places in the future.&lt;/p&gt;

&lt;p&gt;That being said, I &lt;em&gt;did&lt;/em&gt; spend a lot of time studying computer science in 2019 and learned how to write a lot of cool algorithms from scratch. Minute for minute, that was some of the best time I’ve invested in myself. Not all self-investment has to be monetary, but it helped me make a quantum leap.&lt;/p&gt;

&lt;p&gt;Computer languages are much easier than natural languages but I will give myself some partial credit.&lt;/p&gt;

&lt;h4 id=&quot;fitness-failure-w-sailing-success&quot;&gt;Fitness: failure (w/ sailing success)&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Run 365 miles in 2019: failure&lt;/li&gt;
  &lt;li&gt;Stretch goal 1: enter some races, starting with a 5k: failure&lt;/li&gt;
  &lt;li&gt;Stretch goal 2: new PRs for distance, mile, and 5k.: failure&lt;/li&gt;
  &lt;li&gt;Yoga: try it out, see if it sticks: failure&lt;/li&gt;
  &lt;li&gt;Lose 10lbs sustainably: failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fitness was not my strong suit in 2019. I probably focused most on steps/day. My average in 2019 was 7,898 steps/day, which is quite remarkable. Walking was my default form of exercise: I often went on walks after work for an hour or more, listening to books, plus walking to/from work and walking during all that travel. I did not work on aerobic fitness and certainly didn’t push myself in 2019. Work was stressful and walking was relaxing/calming. I also used the sauna quite a bit and regularly completed the executive triathlon (dry sauna + steam room + jacuzzi) with religious zeal.&lt;/p&gt;

&lt;h4 id=&quot;sailing-success&quot;&gt;Sailing: success&lt;/h4&gt;

&lt;p&gt;Racing:&lt;/p&gt;

&lt;p&gt;2019 was a banner year for sailing. It was hard to get onto fast boats as crew (for lack of a sailing résumé, preparation, and networking), but I did some fantastic networking and proved myself as capable rail meat, spinnaker trimmer, and &quot;mast&quot; person on some of those fast-boat races. I really honed my main trim and jib/spinnaker trimming abilities on J22s and J24s, and got a lot of practice in those positions on other boats too. All that being said, I also enjoyed myself a lot. And stretched myself. Did some white-knuckle racing in squalls and near-gale-force winds on a very tippy Tiger 10 – had to hang on to stay on.&lt;/p&gt;

&lt;p&gt;Specifically, I trained for the Takarabune Regatta, a regatta between the Suma Yacht Club of Kobe, Japan, and the Seattle Yacht Club of Seattle, WA (of which I am a member). It was an honor to be chosen to crew on a boat in this regatta: an unforgettable trip to Japan, racing against great sailors (and our hosts) in Kobe. Can’t wait to see them again in 2022 in Seattle.&lt;/p&gt;

&lt;p&gt;Cruising:&lt;/p&gt;

&lt;p&gt;Didn’t really do this. s/v Wind Child is not in good shape for cruising, and frankly, I have neither the skills nor the interest to DIY her into cruising shape. I think I’m destined to outgrow her and find a larger, newer sailboat with more comfortable amenities. The idea of cruising for months in the summer with a WiFi hotspot to do work sounds really great.&lt;/p&gt;

&lt;h4 id=&quot;organization-2019-success&quot;&gt;Organization, 2019: success&lt;/h4&gt;

&lt;p&gt;Throughout the year I became progressively more organized. I had many more demands on my time from a variety of existing and new sources, and had to stay organized to keep it all together. I failed to read Part II of Getting Things Done, but I still kept up the spirit of Part I by using my personal Trello as a kanban board. It’s amazing how much you can get done if you keep your time well-organized and don’t waste it.&lt;/p&gt;

&lt;h4 id=&quot;use-less-technology-2019-success&quot;&gt;Use less technology, 2019: success&lt;/h4&gt;

&lt;p&gt;Speaking of wasting less time, I’d say that’s definitely true of 2019. Screen time was down. I spent more time reading, more time outside, more time traveling (often without internet in foreign places), and more time with others &amp;lt;3. At some point in the year I read Cal Newport’s &lt;a href=&quot;https://www.amazon.com/Digital-Minimalism-Choosing-Focused-Noisy/dp/0525536515&quot;&gt;Digital Minimalism&lt;/a&gt; and deleted Twitter, Instagram, and Facebook from my phone. Though I sometimes continued to log in via Chrome or Safari, the lack of push notifications alone drastically reduced my usage, and the added friction (and worse UX) further cut my time spent doomscrolling. I would highly recommend this approach: remove your access altogether, or make it slightly more cumbersome. Time is the most valuable commodity I have, and choosing to spend less of it on social media has made me feel happier.&lt;/p&gt;

</description>
        <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2020/01/01/Retrospective-on-2019.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2020/01/01/Retrospective-on-2019.html</guid>
        
        
      </item>
    
      <item>
        <title>New Year's Resolutions 2019</title>
        <description>&lt;h2 id=&quot;a-year-back-a-year-ahead&quot;&gt;A year back, a year ahead.&lt;/h2&gt;

&lt;p&gt;Below, you’ll find a 2018 retrospective.&lt;/p&gt;

&lt;p&gt;Here, you’ll find my new goals for 2019. I looked forward to making them all year. It’s fun to look back on the last year and my crazy goals. It’s amazing to see not only what I’ve accomplished, but what I’ve failed to do. That is clarifying. The last year shows what mattered to me and what didn’t. Goals, of course, do not encompass all of life – this is not The Truman Show. These are all measurable, attainable goals mostly related to learning and staying in shape; they are not meant to represent professional or interpersonal goals.&lt;/p&gt;

&lt;p&gt;High-level summary of 2019 resolutions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Reading&lt;/li&gt;
  &lt;li&gt;Writing&lt;/li&gt;
  &lt;li&gt;Travel&lt;/li&gt;
  &lt;li&gt;Language&lt;/li&gt;
  &lt;li&gt;Fitness&lt;/li&gt;
  &lt;li&gt;Sailing&lt;/li&gt;
  &lt;li&gt;Organization&lt;/li&gt;
  &lt;li&gt;Use less technology&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;reading-2019&quot;&gt;Reading, 2019&lt;/h2&gt;

&lt;p&gt;Last year’s reading goals were only &lt;em&gt;slightly&lt;/em&gt; overprescribed. Lol.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/le_bar_bibliotheque.jpg&quot; alt=&quot;St. James, Paris&quot; /&gt;&lt;/p&gt;

&lt;p&gt;A photo of me in the library bar at &lt;a href=&quot;https://www.saint-james-paris.com/&quot;&gt;St. James&lt;/a&gt; in Paris, reading &lt;a href=&quot;https://en.wikipedia.org/wiki/Notes_from_Underground&quot;&gt;&lt;em&gt;Notes from Underground&lt;/em&gt;&lt;/a&gt;, by Dostoevsky.&lt;/p&gt;

&lt;p&gt;Shortlist of goals:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Read 50 books, again (OKR-style)&lt;/li&gt;
  &lt;li&gt;The Great Books, continued&lt;/li&gt;
  &lt;li&gt;Classic machine learning papers&lt;/li&gt;
  &lt;li&gt;Short stories; poetry&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;read-50-books&quot;&gt;Read 50 books&lt;/h3&gt;

&lt;p&gt;By OKR standards, I knocked this one out of the park in 2018. By that logic, I should consider upping my goal for next year. That being said, I don’t need to read any &lt;em&gt;more&lt;/em&gt; books in 2019 – I spent time reading at the expense of other activities (especially fitness). Rather than lowering the goal, I will keep it at 50. In the same way that The Economist publishes 50 issues per year (taking two weeks off at the end of every year), a goal of 50 is a reasonable “one book per week” with some wiggle room for a couple of weeks off.&lt;/p&gt;

&lt;p&gt;By wasting less time in 2019, I think I can still balance fitness and reading goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2019-01-04 update:&lt;/strong&gt; This should go without saying, but, as a privileged white cis-het male, I feel it is my duty to represent the following. In 2019 I will commit to reading books/stories/poetry written by women, POC, nonbinary folks, and other underrepresented author identities or authors from less privileged backgrounds. These works will stand for themselves and I will not review them as conveniently fitting into some sort of diversity category/genre, as if I am checking off a box. I think this is important and I think everyone else should do the same.&lt;/p&gt;

&lt;h3 id=&quot;great-books-year-one-and-two&quot;&gt;Great Books, year one and two&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/great_books.jpg&quot; alt=&quot;The Great Books&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Here is a Great Books &lt;a href=&quot;https://gbwwblog.wordpress.com/reading-plan/&quot;&gt;reading plan&lt;/a&gt;. Year two doesn’t look so bad. One thing I like about this breakdown is that it includes page counts (though it’s really double this count, since these are large double-column pages in tiny print).&lt;/p&gt;

&lt;p&gt;By the end of 2019, I’d like to commit to having read all of year one and year two of the great books. This sounds ridiculous since I only got halfway through year one in 2018, but I’m willing to sacrifice a few books from my Goodreads list this year if it means completing this goal &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#tradeoffs&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&quot;classic-machine-learning-papers&quot;&gt;Classic machine learning papers&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/lstm.png&quot; alt=&quot;LSTM&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is a pictorial representation of an &lt;a href=&quot;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.676.4320&amp;amp;rep=rep1&amp;amp;type=pdf&quot;&gt;LSTM&lt;/a&gt;&lt;sup id=&quot;fnref:0&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:0&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; cell.&lt;/p&gt;

&lt;p&gt;Refer to &lt;a href=&quot;http://korbonits.github.io/2018/01/05/New-Years-Resolutions.html&quot;&gt;last year’s paper list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition to keeping up with the state of the art in machine learning in 2019, I would like to fill in some of the foundational gaps in my knowledge by reading classic papers. Over the years, I have also acquired a fairly comprehensive library of machine learning reference texts and textbooks.&lt;/p&gt;

&lt;p&gt;Stretch goal: read &amp;gt;= 1 machine learning textbook in 2019.&lt;/p&gt;

&lt;h3 id=&quot;short-stories-poetry&quot;&gt;Short stories; poetry&lt;/h3&gt;

&lt;p&gt;This year, I’d like to gain exposure to forms of writing that are newer to me. Instead of focusing on scientific papers (e.g., in &lt;em&gt;Nature&lt;/em&gt;) or timely long-reads on foreign relations (e.g., in &lt;em&gt;Foreign Affairs&lt;/em&gt;), I’d like to read more contemporary short stories and poetry. This will also help with my writing goals – a double win. I’m a subscriber to &lt;a href=&quot;https://www.poetryfoundation.org/poetrymagazine&quot;&gt;&lt;em&gt;Poetry Magazine&lt;/em&gt;&lt;/a&gt; and intend to subscribe to one or more additional (usually quarterly) literary magazines (probably &lt;a href=&quot;https://www.theparisreview.org/&quot;&gt;&lt;em&gt;The Paris Review&lt;/em&gt;&lt;/a&gt;). Staying current with what is being published today will be crucial for my writing career. The measurable goal for 2019 is to read short stories/poetry monthly.&lt;/p&gt;

&lt;h2 id=&quot;writing-2019&quot;&gt;Writing, 2019&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/nanowrimo.png&quot; alt=&quot;nanowrimo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;NaNoWriMo was so much fun in 2018 that I’d like to do it again in 2019. That’s the first goal.&lt;/p&gt;

&lt;p&gt;The second goal is to continue taking writing courses at Hugo House, time permitting. I’m taking a fiction-writing course this winter and would like to take a revision course in the spring.&lt;/p&gt;

&lt;p&gt;The third goal is to keep writing all year. It doesn’t need to take the form of blog posts; it just needs to be a regular (ideally daily) habit.&lt;/p&gt;

&lt;p&gt;Stretch goals for 2019:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Finish the manuscript for the novel based on NaNoWriMo 2018&lt;/li&gt;
  &lt;li&gt;Get published (could be short story, poetry, essay – something less ambitious than a novel)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;travel-2019&quot;&gt;Travel, 2019&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/tokyo.jpg&quot; alt=&quot;Tokyo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Travel in 2019 is off to a good start. I began the new year in Puerto Vallarta, Mexico. Travel goal for 2019 is visiting at least one new country. Right now I’m going to Japan in May to go sailing (double win!).&lt;/p&gt;

&lt;p&gt;Note to friends: get hitched in exotic locations – I’ll be there.&lt;/p&gt;

&lt;h2 id=&quot;language-2019&quot;&gt;Language, 2019&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/hiragana.jpg&quot; alt=&quot;hiragana&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is a new goal. Last year, it was just to “practice French”. I am going to focus on learning a new language in 2019.&lt;/p&gt;

&lt;p&gt;After much consideration (and great conversations on Facebook), I will commit to learning Japanese ahead of my trip in May.&lt;/p&gt;

&lt;p&gt;Following Japan, unless I want to continue learning Japanese, I think I will focus on the &lt;a href=&quot;http://www.un.org/en/sections/about-un/official-languages/&quot;&gt;six official languages&lt;/a&gt; of the United Nations. Perhaps in the future my preferences will change, but there’s already a lifetime of language learning with that list.&lt;/p&gt;

&lt;p&gt;Having just acquired a copy of &lt;a href=&quot;https://www.amazon.com/Babel-Around-World-Twenty-Languages/dp/0802128793/&quot;&gt;&lt;em&gt;Babel: Around the World in Twenty Languages&lt;/em&gt;&lt;/a&gt;, by Gaston Dorren, I’ve learned that I’d need to know all twenty languages Dorren describes just to converse in the mother tongues of half of the world’s population. I’d like to know how many (and which) languages would let me talk to a majority of humanity even if it’s someone’s 2nd, 3rd, or Nth language (that would likely reduce the number from twenty to the single digits).&lt;/p&gt;

&lt;h2 id=&quot;fitness-2019&quot;&gt;Fitness, 2019&lt;/h2&gt;

&lt;p&gt;Going to keep this one simple:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Run 365 miles in 2019&lt;/li&gt;
  &lt;li&gt;Stretch goal 1: enter some races, starting with a 5k&lt;/li&gt;
  &lt;li&gt;Stretch goal 2: new PRs for distance, mile, and 5k&lt;/li&gt;
  &lt;li&gt;Yoga: try it out, see if it sticks&lt;/li&gt;
  &lt;li&gt;Lose 10lbs sustainably&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;sailing-2019&quot;&gt;Sailing, 2019&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/spinnaker.jpg&quot; alt=&quot;spinnaker&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Racing:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Skip/crew for some casual races (e.g., Goosebumps, Kirkland summer sailing series)&lt;/li&gt;
  &lt;li&gt;Crew for some serious races (Tri-Island, Swiftsure, Grand Prix, Round the County)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cruising:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Sail to the San Juans&lt;/li&gt;
  &lt;li&gt;Sail to nearby ports of call (e.g., Poulsbo)&lt;/li&gt;
  &lt;li&gt;Stretch goal: sail to Canada&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Last year I was just aiming to get out on the water. Thanks to the wonderful, welcoming people in Seattle’s sailing community, I blew last year’s goal out of the water, as it were. This year, I am going to do some cruising and do some serious racing.&lt;/p&gt;

&lt;p&gt;Vic/Maui 2020, anyone?&lt;/p&gt;

&lt;h2 id=&quot;organization-2019&quot;&gt;Organization, 2019&lt;/h2&gt;

&lt;p&gt;Implementing Part I of &lt;a href=&quot;https://www.amazon.com/Getting-Things-Done-Stress-Free-Productivity/dp/0143126563&quot;&gt;Getting Things Done&lt;/a&gt; in 2018 went well, but I can do better.&lt;/p&gt;

&lt;p&gt;This year, let’s start by finishing it and using organization as a tool not just to keep track of things, but to save time planning and create more time to achieve my goals. Yesterday I was reading &lt;a href=&quot;https://www.theparisreview.org/&quot;&gt;&lt;em&gt;The Paris Review&lt;/em&gt;&lt;/a&gt;, and in an author interview, the topic of plotting vs. pantsing came up. In her career, the author has been a pantser, but she said that a little bit of planning can save a lot of time during composition (the first draft) and subsequent revisions. Organization to save time. That’s my goal for 2019.&lt;/p&gt;

&lt;h2 id=&quot;use-less-technology-2019&quot;&gt;Use less technology, 2019&lt;/h2&gt;

&lt;p&gt;By use less technology, I really mean waste less time. The Screen Time app is a blessing and a curse, because it creates guilt. Let’s eliminate that in 2019. I want to spend less time watching television and movies, and less time frittering away hours on the internet: scrolling through social media, and so on. Time is something you can’t buy.&lt;/p&gt;

&lt;p&gt;Therefore, I will commit to using technology less in 2019 and to spending my time taking care of my body, exploring the world, reading, writing, and being with people. I can track this with Screen Time and try to create habits that disincentivize using technology.&lt;/p&gt;

&lt;h2 id=&quot;2018-retrospective&quot;&gt;2018 Retrospective&lt;/h2&gt;

&lt;p&gt;Well, &lt;a href=&quot;http://korbonits.github.io/2018/01/05/New-Years-Resolutions.html&quot;&gt;that was ambitious&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All in all, I did pretty well. Two major new hobbies emerged: writing prose and sailing. I read a lot of books, and traveled to two new countries (England and Portugal). I learned how to learn better than I had for a while.&lt;/p&gt;

&lt;h3 id=&quot;grading-myself&quot;&gt;Grading myself&lt;/h3&gt;

&lt;p&gt;At a high level, for 2018 I had goals in the following categories:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Reading: success/failure&lt;/li&gt;
  &lt;li&gt;Writing: success&lt;/li&gt;
  &lt;li&gt;Travel: success&lt;/li&gt;
  &lt;li&gt;Fitness: failure (w/ sailing success)&lt;/li&gt;
  &lt;li&gt;Less television: success&lt;/li&gt;
  &lt;li&gt;Optimize volunteering: failure&lt;/li&gt;
  &lt;li&gt;Themes (organization): success/failure&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;reading-successfailure&quot;&gt;Reading: success/failure&lt;/h4&gt;

&lt;p&gt;This was the toughest goal (other than fitness).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Read 50 books/15,000+ pages: success&lt;/li&gt;
  &lt;li&gt;Great Books, year one: failure&lt;/li&gt;
  &lt;li&gt;Classic machine learning papers: failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In 2018 I read 14,564 pages of &lt;strong&gt;completed&lt;/strong&gt; books on Goodreads, which is 47 books – definitely a success by OKR standards, and including The Great Books I read well over 15,000 pages. Hot damn. I pretty much read things in all of the categories I mentioned, but I mostly stuck to fiction. Here are &lt;a href=&quot;https://www.goodreads.com/review/list/49504536-alex-korbonits?read_at=2018&quot;&gt;my 2018 books&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Great Books: I got really bored in the middle of Saint Augustine and sort of failed to continue. I’ll hold over this goal from last year.&lt;/p&gt;

&lt;p&gt;Classic machine learning papers: I read some of them, but not methodically. I did read a lot of machine learning papers in 2018, in particular the word embedding canon since 2013. I’ll carry this goal over into 2019.&lt;/p&gt;

&lt;h4 id=&quot;writing-success&quot;&gt;Writing: success&lt;/h4&gt;

&lt;p&gt;Can you say NaNoWriMo ten times quickly?&lt;/p&gt;

&lt;p&gt;This is a goal I put off until August. I ostensibly failed at it given the explicit subgoals, but I succeeded beyond my wildest dreams of a year ago. In late August/early September, I started taking a &lt;a href=&quot;https://hugohouse.org/classes/the-writers-welcome-kit/&quot;&gt;Writer’s Welcome Kit&lt;/a&gt; class at &lt;a href=&quot;http://hugohouse.org&quot;&gt;Hugo House&lt;/a&gt;, both a class and an institution I can’t recommend highly enough. This began when I was feeling a bit behind on my writing goal for the year. I’m so glad I took the plunge. I spent almost two months writing about 1,000 words per diem on single-word prompts, and used a website I found via HackerNews called &lt;a href=&quot;http://writingstreak.io&quot;&gt;writingstreak.io&lt;/a&gt; to help track my progress. It was great for establishing a writing habit; however, I’m thinking of switching to Scrivener for larger projects or Evernote for everyday writing. Right now I’m using &lt;a href=&quot;https://www.sublimetext.com/&quot;&gt;Sublime&lt;/a&gt; and the command line for most of my writing. Sublime is the editor in which I write all of my code, including the markdown for this post, so for me it is a natural writing setting.&lt;/p&gt;

&lt;p&gt;In November I took a class on Novel Immersion through Hugo House, which was contemporaneous with &lt;a href=&quot;http://nanowrimo.com&quot;&gt;NaNoWriMo&lt;/a&gt;, short for National Novel Writing Month. A “winner” is someone who, in the month of November, completes (or begins, really) a 50,000-word manuscript of fiction. Anecdotally, that’s around 200-250 pages, depending on the amount of dialogue. In short: I won! I learned a lot about getting words OUT in the months preceding NaNoWriMo (and I’m so glad I practiced), but the class really helped me focus on how to think about story, plot, character, dialogue, and voice in an immersive, accelerated fashion.&lt;/p&gt;

&lt;p&gt;I’m a &lt;a href=&quot;https://www.urbandictionary.com/define.php?term=Pantser&quot;&gt;pantser&lt;/a&gt; – it’s the only thing I know.&lt;/p&gt;

&lt;p&gt;Next year I’m going to learn how to edit, learn more about poetry, and try to complete the novel I began in November. Maybe make some submissions and try out other forms of writing.&lt;/p&gt;

&lt;h4 id=&quot;travel-success&quot;&gt;Travel: success&lt;/h4&gt;

&lt;p&gt;Not only did I spend two weeks in France (with a 3-day jaunt to jolly old England), but I spent another week in Portugal (two days in Lisbon and four in the Algarve). So I went to two new countries. I also practiced French before I went. I was complimented by the French on my French deux fois during my trip there. That’s all I needed to hear.&lt;/p&gt;

&lt;h4 id=&quot;fitness-failure-w-sailing-success&quot;&gt;Fitness: failure (w/ sailing success)&lt;/h4&gt;

&lt;p&gt;As for personal training, lifting, and running, I did pretty well during the first half of 2018. But when I cut personal training to focus on sailing, that’s when my fitness went downhill. For 2019 I’ll cut down on my goals to make them more attainable and focused. It wasn’t all for naught, however.&lt;/p&gt;

&lt;p&gt;Sailing: great success. This really deserves its own heading. Who knew that sailing would become such a passion? I started racing in the bitter cold (and snow) of winter in January and February. I started sailing almost weekly on Lake Union/Lake Washington throughout the spring. I crewed in many races (Frostbite, Goosebumps, Tri-Island, Dock Dodge, Kirkland). I acquired a sailboat! WTF. I learned so much about sailing in 2018, and there’s so much more to learn. My goal of entering one race and sailing 1x/month was blown out of the water before the first half of 2018 was over. Most importantly, I made great friends in the sailing community. Sailors are the best.&lt;/p&gt;

&lt;p&gt;Racing/tracking/autocross: failure, but let’s call it “deferred”. I optimized my weekends (especially in the summer) around sailing instead – it just wasn’t a priority. I can pick this back up in 2019 as a carry-over, though not as an explicit goal.&lt;/p&gt;

&lt;h4 id=&quot;less-television-success&quot;&gt;Less television: success&lt;/h4&gt;

&lt;p&gt;I did watch less television (and fewer movies) in 2018 but it was not as though I eliminated it. Things I watched in 2018:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Seinfeld&lt;/li&gt;
  &lt;li&gt;Frasier (8 of its 11 seasons)&lt;/li&gt;
  &lt;li&gt;Billions&lt;/li&gt;
  &lt;li&gt;Silicon Valley&lt;/li&gt;
  &lt;li&gt;Mozart in the Jungle&lt;/li&gt;
  &lt;li&gt;Vanity Fair (miniseries)&lt;/li&gt;
  &lt;li&gt;Bleak House (BBC miniseries)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many audiobooks were missed during the viewing of these TV shows.&lt;/p&gt;

&lt;p&gt;Two GREAT movies I saw in 2018 (not new) were both at &lt;a href=&quot;https://cinerama.com/&quot;&gt;Cinerama&lt;/a&gt; as part of their 70mm film festival: &lt;em&gt;Lawrence of Arabia&lt;/em&gt; and &lt;em&gt;2001: A Space Odyssey&lt;/em&gt;. Amazing experiences.&lt;/p&gt;

&lt;h4 id=&quot;optimize-volunteering-failure&quot;&gt;Optimize volunteering: failure&lt;/h4&gt;

&lt;p&gt;I failed at this – though not completely. Here’s hope for 2019. I became proficient on the two committees on which I serve, which is great. In 2019 I need to stop saying yes to too much and work with others who can take my place. It’s time to move on from a couple of engagements through which I’ve learned much and made many great friends.&lt;/p&gt;

&lt;h4 id=&quot;themes-successfailure&quot;&gt;Themes: success/failure&lt;/h4&gt;

&lt;p&gt;What the hell is “themes,” you ask? Organization was the main theme. Even though I didn’t finish reading &lt;a href=&quot;https://www.amazon.com/Getting-Things-Done-Stress-Free-Productivity/dp/0143126563&quot;&gt;Getting Things Done&lt;/a&gt;, I got part one down quite well. I used Trello all year to keep track of things. Not perfectly… but that wasn’t the point. I tracked things all year on boards, and used them to track many of my 2018 resolutions. &lt;strong&gt;I made my 2018 New Year’s Resolutions last all year&lt;/strong&gt; – I kept them in mind throughout 2018. That’s a success for organization. It’s not all roses, though; I failed in some respects. At times my Trello boards became bloated and/or outdated. I still procrastinate on things like making weekday appointments – being out of the office feels like a waste of time. In 2019, I will do better.&lt;/p&gt;

&lt;h2 id=&quot;bibliography&quot;&gt;Bibliography&lt;/h2&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:0&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural computation 9.8 (1997): 1735-1780. &lt;a href=&quot;#fnref:0&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2019/01/01/New-Years-Resolutions-2019.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2019/01/01/New-Years-Resolutions-2019.html</guid>
        
        
      </item>
    
      <item>
        <title>New Year's Resolutions 2018</title>
        <description>&lt;h2 id=&quot;hold-me-accountable&quot;&gt;Hold me accountable&lt;/h2&gt;

&lt;p&gt;This year, I propose the following set of New Year’s Resolutions. I am posting this subset of my goals publicly so you can hold me accountable. I’ll do a retrospective at the end of 2018 and try to keep this list updated with progress throughout the year. I’m using Trello to keep track in real time.&lt;/p&gt;

&lt;p&gt;Importantly, in this post I focus on personal, not-strictly-professional goals. Undoubtedly, many of my 2018 goals are professional, and many of my outside-of-work activities are professional in nature and directly or indirectly benefit my professional life. To me, it goes without saying that I intend to craft, measure, track, and achieve professional goals throughout the year. In terms of specifics, most of those are internal to where I work. This post is about “fun stuff outside of work that I can share” :). Furthermore, I’m not going to include things that I’ll do anyway like see new art exhibits, go to concerts, or other cultural events.&lt;/p&gt;

&lt;p&gt;Here is my Goodreads profile if you want to see what I finished &lt;a href=&quot;https://www.goodreads.com/user/year_in_books/2017/49504536&quot;&gt;in 2017&lt;/a&gt; and &lt;a href=&quot;https://www.goodreads.com/user/year_in_books/2016/49504536&quot;&gt;in 2016&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;High level summary of 2018 resolutions:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Reading&lt;/li&gt;
  &lt;li&gt;Writing&lt;/li&gt;
  &lt;li&gt;Travel&lt;/li&gt;
  &lt;li&gt;Fitness&lt;/li&gt;
  &lt;li&gt;Less television&lt;/li&gt;
  &lt;li&gt;Optimize volunteering&lt;/li&gt;
  &lt;li&gt;Themes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;reading&quot;&gt;Reading&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Read 50 books/15,000-20,000 pages (tracking with Goodreads)&lt;/li&gt;
  &lt;li&gt;Read year 1 of the 10 year reading plan of Great Books of the Western World&lt;/li&gt;
  &lt;li&gt;Read all papers in Microsoft Research’s “Paper Legend: Artificial Intelligence Edition” card game&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This year I am going to change it up in terms of my reading goals. Last year I set an ambitious goal of reading 50 books. By Google OKR standards, I hit 76%, which is quite a success! I liked this goal because it was easy to track and it kept me motivated throughout the year. I might have hit 50 had I spent the first half of 2017 on target; I also spent about a month in Q4 without finishing a book as I transitioned from one job to another.&lt;/p&gt;

&lt;p&gt;“Number of books read” is not the best metric, in my opinion. Example: &lt;em&gt;War and Peace&lt;/em&gt; != &lt;em&gt;Hamlet&lt;/em&gt;. This metric is easy to game by reading smaller books. I recall making some tradeoffs this fall where, trying to select a next book to read, I looked for the shortest book in my library I hadn’t read yet. Essentially, this metric penalizes you for reading longer works and rewards you for reading shorter works.&lt;/p&gt;

&lt;p&gt;“Number of pages read”, a metric that Goodreads shows (based on the public information about the books you shelve), is much better for comparing &lt;em&gt;War and Peace&lt;/em&gt; with &lt;em&gt;Hamlet&lt;/em&gt;. In 2017, I read (i.e., finished) 38 books. This turns out to be about 14,500 pages according to Goodreads. That’s about 382 pages/book and a reading pace of about 40 pages/day. This metric is still not perfect, however. It doesn’t take into account the heterogeneity of writing styles and, perhaps more importantly, speed of comprehension. For example, some contemporary writing reads so well it could be a movie screenplay. Many such books are adapted into films. Some books, like &lt;em&gt;Ready Player One&lt;/em&gt;, are captivating page-turners that one can read in a single sitting. However, some books, like a mathematics textbook (with proofs) or philosophy, read much, much slower. One of the reasons I find fiction so easy to read is that, by suspending disbelief, it’s easy to get sucked in and follow along in a very immersive manner. One &lt;em&gt;could&lt;/em&gt; do a close reading of any fiction and spend much more time dissecting it, but for pure entertainment, it’s easy to proceed quickly. Reading math, philosophy, or scientific papers is much more demanding on the reader. Not only is the information contained in these formats much higher in terms of density (e.g., mathematical notation), but following the logic is intrinsically a very demanding task. Reading in these formats is highly non-linear (in terms of progressing from beginning to end). One is constantly looking up terms, searching in indices, reading cited papers, re-reading paragraphs, stopping to think, etc. “Number of pages” is still not the perfect metric for measuring reading.&lt;/p&gt;

&lt;p&gt;What’s a perfect metric? I’m not really sure. I think a metric along the lines of “elapsed time reading in a flow-like state”&lt;sup id=&quot;fnref:0&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:0&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; is probably a fair one. That’s harder to track unless you make tracking it a very strong habit. For 2018, I think I’ll stick to number of books read as a high-level proxy and formulate multiple reading subgoals to round things out.&lt;/p&gt;

&lt;h3 id=&quot;50-books15000-20000-pages&quot;&gt;50 books/15,000-20,000 pages&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/2017-books.png&quot; alt=&quot;Some 2017 books&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Usually this goal is haphazard in its selection (cf. my Goodreads shelves for 2016 and 2017). Should I be more prescriptive this year? Let’s do this: I’ll suggest several focus areas but won’t commit to specific books.&lt;/p&gt;

&lt;p&gt;Areas of particular focus:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;ML textbooks (e.g., &lt;em&gt;ESL&lt;/em&gt;)&lt;/li&gt;
  &lt;li&gt;Programming textbooks (e.g., &lt;em&gt;SICP&lt;/em&gt;)&lt;/li&gt;
  &lt;li&gt;Math textbooks (e.g., &lt;a href=&quot;https://www.amazon.com/dp/B06XHZ82GF&quot;&gt;&lt;em&gt;Category Theory in Context&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Classic literature (e.g., &lt;em&gt;Moby Dick&lt;/em&gt;)&lt;/li&gt;
  &lt;li&gt;Contemporary literary fiction (e.g., &lt;a href=&quot;https://www.amazon.com/Commonwealth-Novel-Ann-Patchett/dp/0062491830&quot;&gt;&lt;em&gt;Commonwealth&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;History (e.g., &lt;a href=&quot;https://press.princeton.edu/titles/10302.html&quot;&gt;&lt;em&gt;The Amazons: Lives and Legends of Warrior Women across the Ancient World&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Economics (e.g., &lt;a href=&quot;https://www.amazon.com/dp/B01CJUV2J6&quot;&gt;&lt;em&gt;The Economic History of China: From Antiquity to the Nineteenth Century&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Philosophy (e.g., &lt;a href=&quot;https://www.amazon.com/Reasons-Persons-Derek-Parfit/dp/019824908X&quot;&gt;&lt;em&gt;Reasons and Persons&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Scientific biographies (e.g., &lt;a href=&quot;https://www.amazon.com/dp/B01M5IJN1P&quot;&gt;&lt;em&gt;A Mind at Play: How Claude Shannon Invented the Information Age&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.nature.com/nature/&quot;&gt;Nature&lt;/a&gt; (print subscription)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.foreignaffairs.com/&quot;&gt;Foreign Affairs&lt;/a&gt; (print subscription)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Non-goals (these go without saying – I will read these to some extent anyway):&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Reading the news&lt;/li&gt;
  &lt;li&gt;Reading ML conference papers (or prominent non-conference papers on arXiv)&lt;/li&gt;
  &lt;li&gt;Reading papers that aren’t ML-related and aren’t published in Nature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additional thoughts: one of the tools that helped me through certain books in 2017 was listening to audiobooks on Audible. While audiobooks represented a small minority of the books I “read” in 2017, they were significant because they were my first audiobook experiences. I was able to “read” more in 2017 than I otherwise would have because I could multi-task and listen to an audiobook, say, while washing my dishes or going for a walk. Reading actively is still more enjoyable to me than listening passively. It depends on the book, but in general I’d say I recall more from books I read than from books I listened to. Audible does have some gems, such as Ian McKellen reading &lt;em&gt;The Odyssey&lt;/em&gt;. You are in for a treat.&lt;/p&gt;

&lt;h3 id=&quot;the-great-books-year-1&quot;&gt;The Great Books, Year 1&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/plato-academy.jpg&quot; alt=&quot;Plato's academy&quot; /&gt;&lt;/p&gt;

&lt;p&gt;One of my reading sub-goals in 2018 is to start systematically attacking my copy of the Great Books. In volume 1, “The Great Conversation”, there’s a 10 year reading plan laid out by the main editors, Robert Maynard Hutchins and Mortimer Adler. I’ve read many of the works suggested for the first year before but I will willingly re-read them.&lt;/p&gt;

&lt;p&gt;Here is the recommended reading list for year one of the Great Books:&lt;/p&gt;

&lt;ul class=&quot;task-list&quot;&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Plato: Apology, Crito&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Aristophanes: Clouds, Lysistrata&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Plato: Republic [Book I, II]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Aristotle: Ethics [Book I]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Aristotle: Politics [Book I]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Plutarch: Lycurgus, Numa Pompilius, Lycurgus and Numa compared, Alexander, Caesar&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;New Testament [Matthew &amp;amp; Acts of Apostles]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;St. Augustine: Confessions [Books I-VIII]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Machiavelli: The Prince&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Rabelais: Gargantua and Pantagruel [Book I-II]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Montaigne: Essays [Of Custom, and That We Should Not Easily Change a Law Received; Of Pedantry; Of the Education of Children; That It Is Folly to Measure Truth and Error by Our Own Capacity; Of Cannibals; That the Relish of Good and Evil Depends in a Great Measure upon the Opinion We Have of Them; Upon Some Verses of Virgil]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Shakespeare: Hamlet&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Locke: Concerning Civil Government [Second Essay]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Rousseau: The Social Contract [Book I-II]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Gibbon: The Decline and Fall of the Roman Empire [Ch. 15-16]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;The Declaration of Independence, the Constitution of the United States, The Federalist [Nos. 1-10, 15, 31, 47, 51, 68-71]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Smith: The Wealth of Nations [Introduction-Book I, Ch. 9]&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;Marx-Engels: Manifesto of the Communist Party&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of these, I’ve read Plato, Aristotle, half of Aristophanes, 2/5 of Plutarch (Caesar and Alexander, obvi.), the New Testament, St. Augustine, Machiavelli, Shakespeare, Locke, Rousseau, the American founding documents (but not the Federalist papers), and the Communist Manifesto. It has been many years since I’ve read these, however, and I’m eagerly looking forward to re-reading all of them as I enter a new decade of life and prepare for year two of The Great Books.&lt;/p&gt;

&lt;h3 id=&quot;paper-legend-artifical-intelligence-edition&quot;&gt;Paper Legend: Artificial Intelligence Edition&lt;/h3&gt;

&lt;p&gt;This resolution is inspired by my attendance at NIPS 2017. Microsoft was a sponsor and gave out these nerdy-looking card packs as swag. I’ve read some of these papers already, but in 2018 I will make a concerted effort to “catch them all”, especially since I’ve learned a lot of ML since I first read some of the papers on this list (cf. AlexNet).&lt;/p&gt;

&lt;p&gt;Each card looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/batch-norm.png&quot; alt=&quot;Batch Norm&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Here’s the list of citations:&lt;/p&gt;

&lt;p&gt;Ioffe, Sergey, and Christian Szegedy. “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” In International Conference on Machine Learning, pp. 448-456. 2015.&lt;/p&gt;

&lt;p&gt;He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.&lt;/p&gt;

&lt;p&gt;Dahl, George E., Dong Yu, Li Deng, and Alex Acero. “Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition.” IEEE Transactions on audio, speech, and language processing 20, no. 1 (2012): 30-42.&lt;/p&gt;

&lt;p&gt;Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. “Sequence to sequence learning with neural networks.” In Advances in neural information processing systems, pp. 3104-3112. 2014.&lt;/p&gt;

&lt;p&gt;Bishop, Chris M. “Training with noise is equivalent to Tikhonov regularization.” Neural computation 7, no. 1 (1995): 108-116.&lt;/p&gt;

&lt;p&gt;Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural computation 9, no. 8 (1997): 1735-1780.&lt;/p&gt;

&lt;p&gt;Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. “Nonlinear component analysis as a kernel eigenvalue problem.” Neural computation 10, no. 5 (1998): 1299-1319.&lt;/p&gt;

&lt;p&gt;Bell, Anthony J., and Terrence J. Sejnowski. “An information-maximization approach to blind separation and blind deconvolution.” Neural computation 7, no. 6 (1995): 1129-1159.&lt;/p&gt;

&lt;p&gt;Bengio, Yoshua, Patrice Simard, and Paolo Frasconi. “Learning long-term dependencies with gradient descent is difficult.” IEEE transactions on neural networks 5, no. 2 (1994): 157-166.&lt;/p&gt;

&lt;p&gt;Srivastava, Nitish, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Dropout: a simple way to prevent neural networks from overfitting.” Journal of machine learning research 15, no. 1 (2014): 1929-1958.&lt;/p&gt;

&lt;p&gt;Long, Jonathan, Evan Shelhamer, and Trevor Darrell. “Fully convolutional networks for semantic segmentation.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440. 2015.&lt;/p&gt;

&lt;p&gt;Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. “Practical bayesian optimization of machine learning algorithms.” In Advances in neural information processing systems, pp. 2951-2959. 2012.&lt;/p&gt;

&lt;p&gt;Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves et al. “Human-level control through deep reinforcement learning.” Nature 518, no. 7540 (2015): 529-533.&lt;/p&gt;

&lt;p&gt;Blei, David M., Andrew Y. Ng, and Michael I. Jordan. “Latent dirichlet allocation.” Journal of machine Learning research 3, no. Jan (2003): 993-1022.&lt;/p&gt;

&lt;p&gt;Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).&lt;/p&gt;

&lt;p&gt;Dechter, Rina, Itay Meiri, and Judea Pearl. “Temporal constraint networks.” Artificial intelligence 49, no. 1-3 (1991): 61-95.&lt;/p&gt;

&lt;p&gt;LeCun, Yann, Léon Bottou, Yoshua Bengio, and Patrick Haffner. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86, no. 11 (1998): 2278-2324.&lt;/p&gt;

&lt;p&gt;Sutton, Richard S. “Learning to predict by the methods of temporal differences.” Machine learning 3, no. 1 (1988): 9-44.&lt;/p&gt;

&lt;p&gt;Boser, Bernhard E., Isabelle M. Guyon, and Vladimir N. Vapnik. “A training algorithm for optimal margin classifiers.” In Proceedings of the fifth annual workshop on Computational learning theory, pp. 144-152. ACM, 1992.&lt;/p&gt;

&lt;p&gt;Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks.” In Advances in neural information processing systems, pp. 1097-1105. 2012.&lt;/p&gt;

&lt;p&gt;Hinton, Geoffrey E., and Drew Van Camp. “Keeping the neural networks simple by minimizing the description length of the weights.” In Proceedings of the sixth annual conference on Computational learning theory, pp. 5-13. ACM, 1993.&lt;/p&gt;

&lt;p&gt;Dayan, Peter, Geoffrey E. Hinton, Radford M. Neal, and Richard S. Zemel. “The helmholtz machine.” Neural computation 7, no. 5 (1995): 889-904.&lt;/p&gt;

&lt;p&gt;Murphy, Kevin P., Yair Weiss, and Michael I. Jordan. “Loopy belief propagation for approximate inference: An empirical study.” In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pp. 467-475. Morgan Kaufmann Publishers Inc., 1999.&lt;/p&gt;

&lt;p&gt;Neal, Radford M., and Geoffrey E. Hinton. “A view of the EM algorithm that justifies incremental, sparse, and other variants.” In Learning in graphical models, pp. 355-368. Springer Netherlands, 1998.&lt;/p&gt;

&lt;p&gt;Roweis, Sam, and Zoubin Ghahramani. “A unifying review of linear Gaussian models.” Neural computation 11, no. 2 (1999): 305-345.&lt;/p&gt;

&lt;p&gt;Minka, Tom. “Divergence measures and message passing.” Technical report, Microsoft Research, 2005.&lt;/p&gt;

&lt;p&gt;Sutton, Richard S., David A. McAllester, Satinder P. Singh, and Yishay Mansour. “Policy gradient methods for reinforcement learning with function approximation.” In Advances in neural information processing systems, pp. 1057-1063. 2000.&lt;/p&gt;

&lt;p&gt;Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine learning 20, no. 3 (1995): 273-297.&lt;/p&gt;

&lt;p&gt;Zeiler, Matthew D., and Rob Fergus. “Visualizing and understanding convolutional networks.” In European conference on computer vision, pp. 818-833. Springer, Cham, 2014.&lt;/p&gt;

&lt;p&gt;MacKay, David JC. “A practical Bayesian framework for backpropagation networks.” Neural computation 4, no. 3 (1992): 448-472.&lt;/p&gt;

&lt;p&gt;Fei-Fei, Li, and Pietro Perona. “A bayesian hierarchical model for learning natural scene categories.” In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, pp. 524-531. IEEE, 2005.&lt;/p&gt;

&lt;p&gt;Tenenbaum, Joshua B., Vin De Silva, and John C. Langford. “A global geometric framework for nonlinear dimensionality reduction.” science 290, no. 5500 (2000): 2319-2323.&lt;/p&gt;

&lt;p&gt;Taskar, Ben, Carlos Guestrin, and Daphne Koller. “Max-margin Markov networks.” In Advances in neural information processing systems, pp. 25-32. 2004.&lt;/p&gt;

&lt;p&gt;Jaakkola, Tommi, and David Haussler. “Exploiting generative models in discriminative classifiers.” In Advances in neural information processing systems, pp. 487-493. 1999.&lt;/p&gt;

&lt;p&gt;Watkins, Christopher JCH, and Peter Dayan. “Q-learning.” Machine learning 8, no. 3-4 (1992): 279-292.&lt;/p&gt;

&lt;h2 id=&quot;writing&quot;&gt;Writing&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/Rosetta_Stone.JPG&quot; alt=&quot;Heterogeneous writing&quot; /&gt;&lt;/p&gt;

&lt;p&gt;In 2018, I am going to take on writing goals. Here’s my list:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Blogging: &amp;gt;= 12 blog posts (1x/month) [1/12]&lt;/li&gt;
  &lt;li&gt;Speaking: &amp;gt;= 1 public speaking event (under writing because it’s putting together a talk) [0/1]&lt;/li&gt;
  &lt;li&gt;NIPS workshop submission (this is pretty ambitious, I need to scope)&lt;/li&gt;
  &lt;li&gt;Music composition (string quartet – likely a single movement), to be premiered for my 30th birthday.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I think that I would like to do some other writing-related things like successfully contribute to an open source software project or begin to write a book, but those need more scoping before I could take such goals on.&lt;/p&gt;

&lt;p&gt;One of the best things I did in 2017 was attend NIPS. I am inspired by all of the brilliant research occurring in machine learning, and I want to contribute. If I had to pick a workshop to contribute to this year, it would be the creativity workshop. I think a workshop submission would be a great gateway to a 2019+ goal of successfully submitting and being chosen as part of the NIPS conference proceedings. Who’s with me?&lt;/p&gt;

&lt;h2 id=&quot;travel&quot;&gt;Travel&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/Monaco_City_001.jpg&quot; alt=&quot;Monaco&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This year I will be heading back to Paris for the first time since I was 17. I’d like to bone up on my French, which is rusty as of 2006, and I’d like to visit a new country… perhaps the one shown in the photo above?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Visit &amp;gt;= 1 new country&lt;/li&gt;
  &lt;li&gt;Practice French&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;fitness&quot;&gt;Fitness&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/Long_Distance_Runners,_Ancient_Greece,_Amphora.png&quot; alt=&quot;Running&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As I enter a new decade of life, I want to add a focus on physical fitness. In 2017 I started regularly working with a personal trainer, and it has been a great start to being fit. That being said, it isn’t enough. I need to set some attainable yet challenging and measurable goals. They’ll be especially challenging if I fall behind (see the running goal).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Personal training 1x/week [1/52]&lt;/li&gt;
  &lt;li&gt;Lifting 1x/week (in addition to training) [1/52]&lt;/li&gt;
  &lt;li&gt;Go sailing &amp;gt;= 1x/month [0/12]&lt;/li&gt;
  &lt;li&gt;Enter &amp;gt;= 1 sailboat race [0/1]&lt;/li&gt;
  &lt;li&gt;Run a 5k race (I have subgoals to get to this) [0/1]&lt;/li&gt;
  &lt;li&gt;Run 365 miles in 2018 (&amp;gt;= 1 mile/day) [1/365]&lt;/li&gt;
  &lt;li&gt;Lose 10 lbs., sustainably&lt;/li&gt;
  &lt;li&gt;Do yoga &amp;gt;= 1x in January; &amp;gt;= 1x/month after if I enjoy it&lt;/li&gt;
  &lt;li&gt;Climb Mt. Pilchuck [0/1]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stretch goals:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Win &amp;gt;= 1 sailboat race&lt;/li&gt;
  &lt;li&gt;Take (car) racing lessons&lt;/li&gt;
  &lt;li&gt;Enter &amp;gt;= 1 autocross&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why include sailing and autocross under fitness? Because they’re both sports, they both require skill, and they frankly require one to be physically fit to perform well. Why put autocross under stretch goals? 2018 is going to be busy, and there are some racing prerequisites I’d like to take care of before I do autocross. Let’s just say I need to take some racing lessons. My high-school obsession, &lt;em&gt;Gran Turismo&lt;/em&gt;, is likely inadequate preparation for tracking my own car.&lt;/p&gt;

&lt;h2 id=&quot;less-television&quot;&gt;Less television&lt;/h2&gt;

&lt;p&gt;Looking back on 2017, perhaps the main hindrance to achieving my full reading goal (truly my only 2017 resolution) was the number of hours I spent streaming television shows on Amazon Prime, Netflix, and Hulu.&lt;/p&gt;

&lt;p&gt;Clearly, this time wasn’t spent &lt;em&gt;too&lt;/em&gt; memorably because I can’t remember all of the shows I watched in 2017. Notably, I caught up with some of my favorites:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Game of Thrones (all 7 seasons, hadn’t seen any as of July 2017)&lt;/li&gt;
  &lt;li&gt;Billions&lt;/li&gt;
  &lt;li&gt;Mozart in the Jungle&lt;/li&gt;
  &lt;li&gt;Silicon Valley&lt;/li&gt;
  &lt;li&gt;Will &amp;amp; Grace (original and new)&lt;/li&gt;
  &lt;li&gt;The Handmaid’s Tale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Game of Thrones&lt;/em&gt; and &lt;em&gt;Will &amp;amp; Grace&lt;/em&gt; are each &amp;gt; 7 seasons, so this is a lot of television in 2017. I don’t plan on binge-watching anything in 2018 except for perhaps new seasons of &lt;em&gt;Billions&lt;/em&gt;, &lt;em&gt;Mozart in the Jungle&lt;/em&gt;, &lt;em&gt;Silicon Valley&lt;/em&gt;, and &lt;em&gt;Stranger Things&lt;/em&gt; if I can be persuaded.&lt;/p&gt;

&lt;h2 id=&quot;optimize-volunteering&quot;&gt;Optimize volunteering&lt;/h2&gt;

&lt;p&gt;Optimize is the key word here. I don’t intend to eliminate, reduce, or increase volunteer activities in 2018; rather, I intend to optimize them. Right now I am stretched fairly thin on volunteer activities, most of which are new as of 2017. They’re all important to me, but some take a disproportionate amount of time to manage well, and I need to reallocate some of that time to others. That will require learning and evolving skills such as handing off responsibilities and relinquishing control. Overall, 2017 was a great year in terms of volunteer opportunities for me. I experienced new kinds of volunteering for 3 different organizations, all of which have been rewarding and unique. For specifics you can refer to &lt;a href=&quot;https://www.linkedin.com/in/korbonits/&quot;&gt;my LinkedIn profile&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;themes&quot;&gt;Themes&lt;/h2&gt;

&lt;p&gt;Dude, you’ve got a lot of goals for 2018. What’s the unifying theme? You’re all over the place!&lt;/p&gt;

&lt;p&gt;One of the unifying themes for my 2018 goals is &lt;strong&gt;organization&lt;/strong&gt;. Merely writing down my main (albeit varied) interests is a good starting point. Second, trying to come up with reasonable but not-attainable-without-effort goals for them is crucial because it forces me to do more than the status quo. Third, plurality: it’s easy to take on a couple of goals that one can game, but singular focus is boring (to me). Fourth, accountability. Fifth, doing/trying new things: I’ve never been much of a runner or a hiker (while I have gone through spurts, they’ve never been long-lived), so these goals (like run &amp;gt;= 365 miles or climb Mt. Pilchuck) are new and designed to create habits rather than stand in as mere bucket-list items.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#gettingthingsdone&lt;/code&gt;
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#thepowerofhabit&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;By the end of 2018, I’d love to sing (in a major key): &lt;strong&gt;QED&lt;/strong&gt;.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:0&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For more on flow, check out https://en.wikipedia.org/wiki/Flow_(psychology) &lt;a href=&quot;#fnref:0&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Fri, 05 Jan 2018 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2018/01/05/New-Years-Resolutions.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2018/01/05/New-Years-Resolutions.html</guid>
        
        
      </item>
    
      <item>
        <title>NIPS 2017 Summary</title>
        <description>&lt;p&gt;&lt;img src=&quot;/assets/nips.png&quot; alt=&quot;NIPS&quot; /&gt;
&lt;!-- {: .center-image } --&gt;&lt;/p&gt;

&lt;h2 id=&quot;tldr&quot;&gt;tl;dr&lt;/h2&gt;

&lt;h4 id=&quot;takeaways&quot;&gt;takeaways&lt;/h4&gt;

&lt;ol&gt;
  &lt;li&gt;Deep learning is influencing bayesian methods and vice versa: deep bayesian learning and bayesian deep learning are two slightly different subfields that saw extremely rapid progress this year. Expect to see a lot more development in this area. There was a great workshop on these topics, and I predict there will soon be benchmark datasets and a growing base of open source software for folks to use.&lt;/li&gt;
  &lt;li&gt;Model interpretability, bias, and fairness are not only on people’s minds; there is also a lot of attention and research in this rapidly developing area right now. Kate Crawford gave an excellent invited talk about bias and fairness in machine learning which framed the conversation for the rest of the week, including in several papers, symposia, and workshops. This is not just an area where the expertise of computer scientists is needed: it’s interdisciplinary, and research would be best served by decades of work by sociologists, anthropologists, psychologists, historians, etc.&lt;/li&gt;
  &lt;li&gt;There is general acknowledgement that more theory is needed to understand the state-of-the-art empirical performance across many areas of deep learning. Rahimi and Recht’s test-of-time award talk called for simple theorems and for simple, easily reproducible experiments.&lt;/li&gt;
  &lt;li&gt;Deep reinforcement learning is all the rage. We’ve beaten humans at poker, a great entree into imperfect-information games, and we’ve beaten humans at perfect-information games (Go, Shogi, and Chess) with no prior knowledge other than the rules (i.e., no training data) – using the exact same set of hyperparameters.&lt;/li&gt;
  &lt;li&gt;GANs are hot – but we’re still not sure how they work or how to make them more usable. Ian Goodfellow suggests we give them a few more years.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4 id=&quot;must-see-papers&quot;&gt;must-see papers&lt;/h4&gt;
&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;http://papers.nips.cc/paper/7213-poincare-embeddings-for-learning-hierarchical-representations&quot;&gt;Poincaré Embeddings for Learning Hierarchical Representations&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://papers.nips.cc/paper/6698-self-normalizing-neural-networks&quot;&gt;Self-Normalizing Neural Networks&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;neural-information-processing-systems-2017&quot;&gt;Neural Information Processing Systems 2017&lt;/h2&gt;

&lt;h3 id=&quot;main-takeaways-from-nips2017&quot;&gt;Main takeaways from #NIPS2017&lt;/h3&gt;

&lt;p&gt;Note: throughout the week I intend to post deeper dives on tutorials, talks, symposia, and workshops.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Deep learning is influencing bayesian methods and vice versa: deep bayesian learning and bayesian deep learning are two slightly different subfields that saw extremely rapid progress this year. Expect to see a lot more development in this area. There was a great workshop on these topics, and I predict there will soon be benchmark datasets and a growing base of open source software for folks to use. Neural networks can make brittle predictions – how can we quantify their uncertainty better? Gaussian processes can be composed together to create richer models – how do we leverage tools used in deep learning to make using GPs easier and more performant?
1.1 See &lt;a href=&quot;https://www.stats.ox.ac.uk/~teh/&quot;&gt;Yee Whye Teh’s&lt;/a&gt; talk, &lt;a href=&quot;https://nips.cc/Conferences/2017/Schedule?showEvent=8726&quot;&gt;On Bayesian Deep Learning and Deep Bayesian Learning&lt;/a&gt; for more info. &lt;a href=&quot;https://www.facebook.com/nipsfoundation/videos/1555493854541848/&quot;&gt;Video link&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Model interpretability, bias, and fairness are not only on people’s minds; there is also a lot of attention and research in this rapidly developing area right now. Kate Crawford gave an excellent invited talk about bias and fairness in machine learning which framed the conversation for the rest of the week, including in several papers, symposia, and workshops. This is not just an area where the expertise of computer scientists is needed: it’s interdisciplinary, and research would be best served by decades of work by sociologists, anthropologists, psychologists, historians, etc. Can we create some benchmark datasets over which we can define methods and problems associated with bias and fairness? Can we develop theoretical guarantees around these concepts as a foundation for further research? How do we nurture interdisciplinary research across fields whose methods and language are almost equally important but almost completely divergent (e.g. mathematical proofs vs. qualitative prose)?
2.1 See &lt;a href=&quot;http://www.katecrawford.net/&quot;&gt;Kate Crawford’s&lt;/a&gt; talk, &lt;a href=&quot;https://nips.cc/Conferences/2017/Schedule?showEvent=8742&quot;&gt;The Trouble with Bias&lt;/a&gt; for more info. &lt;a href=&quot;https://www.facebook.com/nipsfoundation/videos/1553500344741199/&quot;&gt;Video link&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;There is general acknowledgement that more theory is needed to understand the state-of-the-art empirical performance across many areas of deep learning. Rahimi and Recht’s test-of-time award talk called for simple theorems and for simple, easily reproducible experiments. Why does deep learning work so well? How do these models generalize so well when, according to VC theory, they have so many parameters? Can I repeat the results of someone’s experiment from reading their paper?
3.1 See the talk by &lt;a href=&quot;https://keysduplicated.com/~ali/&quot;&gt;Ali Rahimi&lt;/a&gt; and &lt;a href=&quot;http://people.eecs.berkeley.edu/~brecht/&quot;&gt;Benjamin Recht&lt;/a&gt;, the &lt;a href=&quot;https://youtu.be/Qi1Yry33TQE&quot;&gt;test-of-time-award talk&lt;/a&gt;, for more info.&lt;/li&gt;
  &lt;li&gt;Deep reinforcement learning is all the rage. We’ve beaten humans at poker, a great entree into imperfect-information games, and we’ve beaten humans at perfect-information games (Go, Shogi, and Chess) with no prior knowledge other than the rules (i.e., no training data) – using the exact same set of hyperparameters. We’re nowhere near DeepMind’s mission of AGI, but in 2017 we made some undeniably cool breakthroughs. How do we design reward functions? How do we get systems to learn new tasks more efficiently? How do we make RL systems interacting with the real world (e.g., self-driving cars) robust to new/rare states (avoiding an accident with a car flipping in mid-air), outliers (missing stop signs), or adversarial environments (stickers placed on signs to fool self-driving cars)?
4.1 See &lt;a href=&quot;https://people.eecs.berkeley.edu/~pabbeel/&quot;&gt;Pieter Abbeel’s&lt;/a&gt; talk, &lt;a href=&quot;https://www.youtube.com/watch?v=TyOooJC_bLY&quot;&gt;Deep Learning for Robotics&lt;/a&gt; for more info.&lt;/li&gt;
  &lt;li&gt;GANs are hot – but we’re still not sure how they work or how to make them more usable. Ian Goodfellow suggests we can take GANs more seriously in a few years once we work some of the foundational kinks out. In essence, they’re hard to train and the learning process is somewhat unstable. Can we draw on related insights from dynamical systems theory to create more robust training objectives/regimes? How do we avoid situations where we get close to the nash equilibrium and then the parameters diverge?
5.1 See &lt;a href=&quot;http://www.iangoodfellow.com/&quot;&gt;Ian Goodfellow’s&lt;/a&gt; talk, Bridging Theory and Practice of GANs, for more info.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;practical-talks&quot;&gt;Practical talks&lt;/h3&gt;

&lt;p&gt;There were a few talks that stood out to me for immediate application with my own work.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;http://papers.nips.cc/paper/7213-poincare-embeddings-for-learning-hierarchical-representations&quot;&gt;Poincaré Embeddings for Learning Hierarchical Representations&lt;/a&gt;
1.1 This paper seems to implement an elegant solution that vastly outperforms earlier word2vec work and does a good job of characterizing hierarchical language relationships. Hooray for non-Euclidean geometry!&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://papers.nips.cc/paper/6698-self-normalizing-neural-networks&quot;&gt;Self-Normalizing Neural Networks&lt;/a&gt;
2.1 This paper seems to implement a simple way to avoid manually injecting batch norm and being picky about activation functions; it achieves superior results via a new activation function with provably optimal parameter settings (I wouldn’t really call alpha a hyperparameter if the setting is optimal – why would you change it?).&lt;/li&gt;
&lt;/ol&gt;
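&lt;p&gt;For intuition on why hyperbolic space suits hierarchies, here is a minimal sketch (mine, not the paper’s code; the function name is made up) of the Poincaré-ball distance the embeddings are trained under:&lt;/p&gt;

```python
import math

def poincare_distance(u, v):
    """Distance between two points strictly inside the unit ball
    in the Poincare-ball model of hyperbolic space."""
    sq_diff = sum((a - b) ** 2 for a, b in zip(u, v))
    sq_u = sum(a * a for a in u)
    sq_v = sum(b * b for b in v)
    # arcosh form of the Poincare metric: distances blow up near the
    # boundary, which gives the model room to place deep tree levels
    return math.acosh(1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v)))
```

&lt;p&gt;Points near the origin behave almost Euclideanly, while points near the boundary are exponentially far from everything else, which is what lets tree-like data embed with low distortion in few dimensions.&lt;/p&gt;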
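&lt;p&gt;The SELU activation at the heart of the self-normalizing paper is easy to write down. This is my own sketch; the two constants are the fixed values derived in the paper, quoted from memory here, so check the paper for full precision:&lt;/p&gt;

```python
import math

# Fixed constants derived in the SELU paper -- not tunable hyperparameters
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit: lambda * x for positive x,
    lambda * alpha * (exp(x) - 1) otherwise. With these constants,
    activations are pushed toward zero mean and unit variance."""
    # max(...) contributes the positive branch, min(...) the negative one
    return SELU_LAMBDA * (max(x, 0.0) + min(SELU_ALPHA * math.expm1(x), 0.0))
```

&lt;p&gt;That self-normalizing property is what removes the need to inject batch norm between layers.&lt;/p&gt;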

&lt;h3 id=&quot;reflections-and-daily-summaries&quot;&gt;Reflections and daily summaries&lt;/h3&gt;

&lt;p&gt;Overall, NIPS 2017 is easily one of the best conferences I’ve ever attended. It was my first NIPS and my first academic, week-long, machine-learning conference, and therefore hard to compare to other, shorter, more industry-focused conferences. It was overwhelming in terms of content. I think that next year I’ll spend weeks/months preparing by reading more of the proceedings and symposia/workshop submissions so that I can get more out of NIPS. Right now, I’m writing everything down to avoid catastrophic forgetting. Having attended NIPS, I think I’ll set my sights on other similar conferences such as ICML or ICLR.&lt;/p&gt;

&lt;p&gt;At least two aspects of attending NIPS would improve with additional preparation:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Poster sessions.&lt;/li&gt;
  &lt;li&gt;Comprehension of talks/knowing what points are most important thereto.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because the talks are about state-of-the-art results, unless one is intimately familiar with the topic, it’s hard to know what’s important. Further, I think I could learn a lot more about being a practitioner by talking with papers’ authors about their implementations and their advice for using their methods or coding them up: what are some good hyperparameter settings? How many computational resources do I need for a reasonable implementation? What were some of the built-in assumptions about the data that weren’t explicit in the paper (e.g., known aspects of benchmark datasets that would not apply to an industrial, proprietary dataset)?&lt;/p&gt;

&lt;p&gt;Here’s a short overview of each day. I will link to my other posts covering my attendance of the conference. There were so many parallel tracks that I saw far fewer than half of NIPS but I will attempt to convey my experience.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Monday, December 4: tutorials; invited talk; opening remarks&lt;/li&gt;
  &lt;li&gt;Tuesday, December 5: invited talks; first day of talks of main NIPS proceedings&lt;/li&gt;
  &lt;li&gt;Wednesday, December 6: invited talks; second day of talks of main NIPS proceedings&lt;/li&gt;
  &lt;li&gt;Thursday, December 7: invited talks; final talks of main NIPS proceedings; symposia&lt;/li&gt;
  &lt;li&gt;Friday, December 8: workshops&lt;/li&gt;
  &lt;li&gt;Saturday, December 9: workshops and NIPS closing party&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;fun-stuff&quot;&gt;Fun stuff&lt;/h3&gt;

&lt;h4 id=&quot;deep-learning-textbook-anniversary&quot;&gt;Deep Learning textbook anniversary&lt;/h4&gt;

&lt;p&gt;MIT Press held a one-year anniversary party for the print edition of the &lt;a href=&quot;https://mitpress.mit.edu/books/deep-learning&quot;&gt;Deep Learning&lt;/a&gt; textbook. Ian Goodfellow, Aaron Courville, and Yoshua Bengio were all there to cut a cake on whose surface was printed the cover of the textbook – a Google deep-dream photograph from the Strawberry Fields in Central Park. Is this the first example of the output of a neural network being rendered in food? (Edit: Google handed out cookies whose recipes were learned via a neural network).&lt;/p&gt;

&lt;p&gt;Someone made the inevitable joke asking how many “layers” were in the cake. It looked and tasted like a fully connected MLP of about 6-7 layers to me. Not good enough for ImageNet, but good enough for an afternoon snack.&lt;/p&gt;

&lt;p&gt;Lots of people got photos or took selfies with the authors, or brought their books to this event to get them signed.&lt;/p&gt;

&lt;p&gt;NVidia CEO Jensen Huang was excited too and spent some time hanging out with everyone. No wonder: the huge demand for GPUs and NVidia’s sustained success over the last few years have been fueled by the Cambrian explosion of deep learning – in no small part due to the authors and even the crowd at the book signing.&lt;/p&gt;

&lt;h4 id=&quot;nips-closing-party&quot;&gt;NIPS Closing Party&lt;/h4&gt;

&lt;p&gt;&lt;a href=&quot;http://imposteriors.wixsite.com/imposteriors&quot;&gt;The Imposteriors&lt;/a&gt;, an improbably good band, played the closing party. The only band with 50,000 citations, amiright?&lt;/p&gt;

&lt;p&gt;It was a blast to bring everyone together at the end and have the nerdiest rockstars (all rockstar scholars in their own right as well) give a great concert.&lt;/p&gt;

&lt;p&gt;Toward the end they had special guest &lt;a href=&quot;http://www.cs.columbia.edu/~blei/&quot;&gt;David Blei&lt;/a&gt; on accordion playing polka. This led to the audience self-assembling into giant concentric Markov chains: fun was had by all.&lt;/p&gt;

&lt;h3 id=&quot;corporate-presence&quot;&gt;Corporate presence&lt;/h3&gt;

&lt;p&gt;As this was my first NIPS, I’m not much of a stakeholder in terms of what NIPS &lt;em&gt;should&lt;/em&gt; be. Moreover, I’m an engineer and therefore already somewhat in the “enemy” camp for many of the attendees.&lt;/p&gt;

&lt;p&gt;Zeitgeist: it seems like the general feeling is that the success of deep learning since 2012 has led to the corporate takeover of NIPS. Many or most attendees are not scholars but are in industry (would love exact stats on that); corporate sponsors steal a lot of attention from NIPS (both from talks and poster sessions) via expensive, flashy booths, free swag, heavy recruiting, and parties with open bars and other gimmicks.&lt;/p&gt;

&lt;p&gt;This kind of attention is a double-edged sword. It’s great – look how many more people are interested in the field compared to a few years ago. It’s terrible – the hype will lead to the next AI winter, and technology is progressing too quickly to keep track of all of the potentially societally detrimental innovations such as generating realistic fake videos with geopolitical consequences or violating privacy with improved identity detection and automatic surveillance.&lt;/p&gt;

&lt;p&gt;Personally, I felt 100% uninfluenced by the recruiting and generally did not attend the corporate parties. Overall, I thought it was great that all of the poster sessions were sponsored with food and drinks, encouraging folks to stick around late into the evenings.&lt;/p&gt;

&lt;p&gt;Corporate party highlights:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Intel held a Flo Rida concert and announced their latest “AI” chip, derived from their acquisition of Nervana Systems. This is exactly the plot of the pilot episode of HBO’s Silicon Valley, either a perfect self-aware example of life imitating art imitating life, or a cringe-worthy coincidence.&lt;/li&gt;
  &lt;li&gt;NVidia, not to be outdone, had an orchestra play an “AI-generated” piece, unveiled a new GPU, and handed out over a dozen to the audience (MSRP ~$3k).&lt;/li&gt;
  &lt;li&gt;Tesla, who did not sponsor NIPS, held an invite-only event with Elon Musk, Andrej Karpathy, and Jim Keller in an old-school mansion – they let attendees drive brand-new Model S’s to the event.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of the deep-pocketed sponsors held similar events – some open, some invite-only – but those 3 are the top in terms of gimmicks that I know of.&lt;/p&gt;

&lt;p&gt;Other corporate stuff:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;During a lunch event, Apple announced they’d be open-sourcing turi-create, which is a fantastic move on their part. As a former user of graphlab-create, I am pleased that they are open-sourcing it and I’m looking forward to using it again.&lt;/li&gt;
  &lt;li&gt;Intel had a quantum computer at their booth. It was awesome.&lt;/li&gt;
  &lt;li&gt;Renaissance Technologies had a booth with no sign on it. So badass.&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Mon, 11 Dec 2017 00:00:00 +0000</pubDate>
        <link>http://korbonits.github.io/2017/12/11/NIPS-2017-Summary.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2017/12/11/NIPS-2017-Summary.html</guid>
        
        
      </item>
    
      <item>
        <title>Caffe: Brew your first DNN</title>
        <description>&lt;h2 id=&quot;caffe&quot;&gt;Caffe&lt;/h2&gt;

&lt;p&gt;You can find some background for this post here: &lt;a href=&quot;/2015/05/04/Deep-Learning-with-Python.html&quot;&gt;Deep Learning with Python&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Caffe has its strengths and its weaknesses. For example, there are some outstanding issues regarding using multiple GPUs in parallel during training. According to a wonderful write-up by &lt;a href=&quot;https://plus.google.com/+TomaszMalisiewicz/posts&quot;&gt;Tomasz Malisiewicz&lt;/a&gt; titled &lt;a href=&quot;http://www.computervisionblog.com/2015/06/deep-down-rabbit-hole-cvpr-2015-and.html&quot;&gt;Deep down the rabbit hole: CVPR 2015 and beyond&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Caffe is much more popular that Torch, but when talking to some power users of Deep Learning (like &lt;a href=&quot;https://plus.google.com/100209651993563042175&quot;&gt;+Andrej Karpathy&lt;/a&gt; and other DeepMind scientists), a certain group of experts seems to be migrating from Caffe to Torch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I read somewhere else that Caffe : Torch :: Applications : Research. If you want to quickly iterate on datasets with the aim of building applications, Caffe gives you a flexible framework with a lot of built-in tools to do so, with Python bindings to boot. Additionally, one of its great features is that you can essentially specify all of the layers and layer parameters of a neural network with a simple config file. I know that there are some out there (you know who you are! :-P) who do not like this feature.&lt;/p&gt;
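
&lt;p&gt;To make that concrete, here’s a sketch of what one layer in such a config file looks like – modeled on the prototxt layer syntax used in Caffe’s bundled examples; the names and values below are illustrative, not taken from any particular model:&lt;/p&gt;

```protobuf
# One convolutional layer in a Caffe prototxt (values are illustrative)
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"    # input blob
  top: "conv1"      # output blob
  convolution_param {
    num_output: 20  # number of filters
    kernel_size: 5
    stride: 1
  }
}
```

&lt;p&gt;Stack enough of these blocks and you’ve described a whole network, no code required.&lt;/p&gt;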

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Caffe has some prerequisites, which, unless you’ve already got a CUDA driver installed, will prevent you from getting started in just minutes.&lt;/p&gt;

&lt;p&gt;Please go to the Caffe &lt;a href=&quot;http://caffe.berkeleyvision.org/installation.html#prerequisites&quot;&gt;installation page&lt;/a&gt; for more details. Be sure to also follow the &lt;a href=&quot;http://caffe.berkeleyvision.org/install_osx.html&quot;&gt;OS X Installation&lt;/a&gt; page very closely if you’re on a Mac like I am.&lt;/p&gt;

&lt;p&gt;To build and run against your GPU, you’ll need:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://brew.sh/&quot;&gt;Homebrew&lt;/a&gt; if you don’t already have it.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://developer.nvidia.com/cuda-zone&quot;&gt;CUDA&lt;/a&gt; (if you want to use Caffe in GPU mode, which in itself requires an NVIDIA GPU and an NVIDIA Developer login)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://developer.nvidia.com/cuDNN&quot;&gt;cuDNN&lt;/a&gt; (accelerated CUDA, in a nutshell)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brew install boost-python&lt;/code&gt; (simply doing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brew install boost&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brew install boost --with-python&lt;/code&gt; didn’t do the trick)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are plenty of instructions on the caffe site for getting these prerequisites installed.&lt;/p&gt;

&lt;p&gt;Since we’re using Python, pay &lt;strong&gt;EXTRA SPECIAL ATTENTION&lt;/strong&gt; to the Makefile.config instructions. Particularly, know where your Python and Numpy live.&lt;/p&gt;

&lt;p&gt;Mine’s a little complicated since I use Homebrew for lots of things, but here’s what my config info looks like. This will save you from pulling some hair out:&lt;/p&gt;

&lt;p&gt;Add these to your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.bash_profile&lt;/code&gt; or just &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;export&lt;/code&gt; them in your session.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/usr/local/cuda/bin:&lt;span class=&quot;nv&quot;&gt;$PATH&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DYLD_LIBRARY_PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/usr/local/cuda/lib:&lt;span class=&quot;nv&quot;&gt;$DYLD_LIBRARY_PATH&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Cool. Now, in your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Makefile.config&lt;/code&gt; file:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;PYTHON_INCLUDE := /usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/include/python2.7 \
		/usr/local/Cellar/numpy/1.9.2/lib/python2.7/site-packages/numpy/core/include

PYTHON_LIB := /usr/local/lib /usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Moving on!
In your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/caffe/&lt;/code&gt; directory:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make all
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make &lt;span class=&quot;nb&quot;&gt;test&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make runtest
...
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make pycaffe&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;When all of this is done, start Python in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/caffe/python/&lt;/code&gt;.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;caffe&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Awesome! Finally, we can let the real fun begin.&lt;/p&gt;

&lt;h2 id=&quot;model-zoo&quot;&gt;Model Zoo&lt;/h2&gt;

&lt;p&gt;Caffe, having the great ecosystem that it does, has a special place called the &lt;a href=&quot;https://github.com/BVLC/caffe/wiki/Model-Zoo&quot;&gt;“Model Zoo”&lt;/a&gt; where various reference models and variations thereof are curated in one place with code and citations. Be sure to check it out for Caffe implementations of some of the most recent cutting-edge research, such as GoogLeNet and models from the CVPR2015 DeepVision workshop, which occurred after this blog post began.&lt;/p&gt;

&lt;h2 id=&quot;deep-dreaming&quot;&gt;Deep Dreaming&lt;/h2&gt;

&lt;p&gt;If you’re interested in some light background reading regarding GoogLeNet, the deep learning model that Google Deep Dream uses as its default, you should check out the following arXiv preprint: &lt;a href=&quot;http://arxiv.org/abs/1409.4842&quot;&gt;Going Deeper with Convolutions&lt;/a&gt;&lt;sup id=&quot;fnref:0&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:0&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. Note that GoogLeNet is an homage to the pioneering work &lt;a href=&quot;http://yann.lecun.com/exdb/lenet/&quot;&gt;LeNet&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, built by Yann LeCun in the 80’s for handwritten digit recognition.&lt;/p&gt;

&lt;p&gt;In fact, &lt;a href=&quot;http://caffe.berkeleyvision.org/gathered/examples/mnist.html&quot;&gt;here&lt;/a&gt; is a great Caffe tutorial building LeNet by hand and training it on MNIST. But I thought you’d find Google Deep Dream more interesting as a blog post :-).&lt;/p&gt;

&lt;p&gt;In &lt;a href=&quot;http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html&quot;&gt;this&lt;/a&gt; Google Research blog post, Google describes its notion of Inceptionism, and how it visualizes going deeper into neural networks. A couple of weeks later, they published &lt;a href=&quot;http://googleresearch.blogspot.com/2015/07/deepdream-code-example-for-visualizing.html&quot;&gt;this&lt;/a&gt; follow-up blog post, which links you to the DeepDream &lt;a href=&quot;https://github.com/google/deepdream&quot;&gt;repo&lt;/a&gt;, conveniently hosted on GitHub and available as an IPython Notebook &lt;a href=&quot;https://github.com/google/deepdream/blob/master/dream.ipynb&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;deep-dreaming-prerequisites&quot;&gt;Deep Dreaming Prerequisites&lt;/h2&gt;

&lt;p&gt;It’s simple enough to follow the IPython Notebook, but here are some instructions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Make sure you have installed the following Python libraries: NumPy, SciPy, PIL, and IPython (just use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pip&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;Make sure you have installed Caffe (i.e., that you have read the beginning of this blog post and didn’t skip ahead!).&lt;/li&gt;
  &lt;li&gt;Install Google’s &lt;a href=&quot;https://developers.google.com/protocol-buffers/&quot;&gt;protobuf&lt;/a&gt; library (I will go through this since it was a little tricky).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;installing-protobuf&quot;&gt;Installing Protobuf&lt;/h2&gt;

&lt;p&gt;If you’re a nerd like me, you’re insane enough to go straight to the source: Google’s protobuf GitHub repo, found &lt;a href=&quot;https://github.com/google/protobuf&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The installation instructions are fairly simple – unless you are using a Mac. You can follow their instructions using MacPorts, or you can join me in 2015 and use Homebrew :-) (or the package manager of your choice).&lt;/p&gt;

&lt;p&gt;Here’s the flow for a Mac user:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;brew &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;autoconf
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;brew &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;automake
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;brew &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;libtool&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Nice. Now you can install protobuf. Go to a directory where you’d like to put the protobuf repo.&lt;/p&gt;

&lt;p&gt;Then do the following:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;git clone https://github.com/google/protobuf.git
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;protobuf/
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;./autogen.sh
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;./configure
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make check
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;make &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Assuming that works, you’re ready to deep dream!&lt;/p&gt;

&lt;p&gt;Go to a directory where you’d like to put the Deep Dream IPython notebook and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git clone&lt;/code&gt; it!&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;git clone https://github.com/google/deepdream.git
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;deepdream/
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;ipython notebook&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Click on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dream.ipynb&lt;/code&gt;, et voilà, you’re in.&lt;/p&gt;

&lt;p&gt;Suffice it to say, I won’t copy the whole notebook here. That said, let’s take a closer look.&lt;/p&gt;

&lt;p&gt;Did you run into an error when you tried to load in GoogLeNet? I did. Just because you’ve downloaded and installed Caffe doesn’t mean that you’re ready to brew!&lt;/p&gt;

&lt;p&gt;Go to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/caffe/&lt;/code&gt;, and do the following:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;./scripts/download_model_binary.py models/bvlc_googlenet&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Once that has finished, you should be able to run the block of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dream.ipynb&lt;/code&gt; that loads in a pretrained GoogLeNet. Make sure you set your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/caffe/models/bvlc_googlenet&lt;/code&gt; properly. Now you’re ready to deep dream.&lt;/p&gt;

&lt;p&gt;Get down a few lines to where you assign:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;n&quot;&gt;img&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;float32&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;PIL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;your_own_image.jpg&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;seurat&quot;&gt;Seurat&lt;/h2&gt;

&lt;p&gt;I’m picking my favorite work by pointillist Georges Seurat, his epic chef-d’œuvre, &lt;em&gt;Un dimanche après-midi à l’Île de la Grande Jatte&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;You don’t need to be an art history buff to appreciate this work. Perhaps you’re a fan of &lt;a href=&quot;http://www.imdb.com/title/tt0091042/&quot;&gt;Ferris Bueller’s Day Off&lt;/a&gt;? Who can forget the moment when Cameron sees this work from afar and becomes fixated on its perfection? Here’s a clip from their visit to the Art Institute of Chicago (click &lt;a href=&quot;https://www.youtube.com/watch?v=ubpRcZNJAnE&quot;&gt;here&lt;/a&gt; if you want to see the whole thing):&lt;/p&gt;


&lt;p&gt;&lt;a href=&quot;https://youtu.be/ubpRcZNJAnE?t=1m23s&quot;&gt;&lt;img src=&quot;/assets/cameron-staring.png&quot; alt=&quot;Cameron staring at Seurat&quot; /&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Et voilà:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/seurat.jpg&quot; alt=&quot;Un dimanche après-midi à l'Île de la Grande Jatte&quot; /&gt;&lt;/p&gt;


&lt;h2 id=&quot;prepare-for-your-mind-to-be-blown&quot;&gt;Prepare for your mind to be blown&lt;/h2&gt;

&lt;p&gt;Take a deep breath. This is it. This is the moment. It’s so intense.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;deepdream&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;net&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;img&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;


&lt;p&gt;&lt;img src=&quot;/assets/seurat_dream.jpeg&quot; alt=&quot;Un deep dream dimanche après-midi à l'Île de la Grande Jatte&quot; /&gt;&lt;/p&gt;


&lt;p&gt;BOOM. Holy mackerel. What are all of these strange animals and pagodas and faces popping in and out of mes amis Parisiens?&lt;/p&gt;

&lt;p&gt;Let’s take a look at what else we can do:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;deepdream&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;net&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;img&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; 
            &lt;span class=&quot;n&quot;&gt;octave_n&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;12&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;


&lt;p&gt;&lt;img src=&quot;/assets/seurat_dream_12octaves.jpeg&quot; alt=&quot;Un deep dream dimanche après-midi à l'Île de la Grande Jatte&quot; /&gt;&lt;/p&gt;


&lt;p&gt;OK, WOW, THIS IS DEFINITELY WEIRDER.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;octave_n&lt;/code&gt; parameter defaults to 4. When you run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deepdream(net,img)&lt;/code&gt; for the first time, you will see output that ranges from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0 0&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;3 9&lt;/code&gt;. The image is redrawn &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;iter_n&lt;/code&gt; times per octave, so &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;3 9&lt;/code&gt; here indicates octave 4, iteration 10, since we’re counting from 0. Interesting. In music theory, an octave separates notes whose frequencies differ by a factor of two: sing &lt;em&gt;do re mi fa sol la ti do&lt;/em&gt; and the two &lt;em&gt;do&lt;/em&gt; notes are exactly one octave apart, with a frequency ratio of 1:2. Here in the notebook you can follow the code to see that the definition of the octave is subject to your own experimentation!&lt;/p&gt;
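
&lt;p&gt;A quick sketch of that scale schedule: each successive octave shrinks the image by a factor of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;octave_scale&lt;/code&gt; (the notebook’s default is 1.4), and dreaming proceeds from the coarsest scale up to full resolution. The 800×600 input size here is hypothetical:&lt;/p&gt;

```python
# Compute the image size at each octave for a hypothetical 800x600
# input, using octave_n=4 and the notebook's default octave_scale=1.4.
octave_n, octave_scale = 4, 1.4
h, w = 600, 800
sizes = []
for octave in reversed(range(octave_n)):
    scale = octave_scale ** octave  # coarsest octave is shrunk the most
    sizes.append((round(h / scale), round(w / scale)))
print(sizes)  # coarsest first, ending at the original size
```

&lt;p&gt;So with the defaults, the dream is sharpened at four sizes, each 1.4× larger than the last.&lt;/p&gt;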

&lt;p&gt;Let’s do one more:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;deepdream&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;net&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; 
            &lt;span class=&quot;n&quot;&gt;img&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; 
            &lt;span class=&quot;n&quot;&gt;octave_n&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;13&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; 
            &lt;span class=&quot;n&quot;&gt;iter_n&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;13&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; 
            &lt;span class=&quot;n&quot;&gt;octave_scale&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mf&quot;&gt;1.5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; 
            &lt;span class=&quot;n&quot;&gt;end&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'inception_3a/output'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;What is the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;end='inception_3a/output'&lt;/code&gt; parameter? Check it out: run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;net.blobs.keys()&lt;/code&gt; and pick a layer, any layer (except the splits, I think).&lt;/p&gt;

&lt;p&gt;Here’s what we get:&lt;/p&gt;


&lt;p&gt;&lt;img src=&quot;/assets/seurat_dream_13o_13i_3a_output.jpeg&quot; alt=&quot;Un deep dream dimanche après-midi à l'Île de la Grande Jatte&quot; /&gt;&lt;/p&gt;


&lt;p&gt;How cool is that? Does it seem at all Kandinsky-esque, or is it just me?&lt;/p&gt;

&lt;h2 id=&quot;what-to-do-next&quot;&gt;What to do next&lt;/h2&gt;

&lt;p&gt;You’ve got the power of Deep Dreaming in your hands. What do you want to do? There are a couple of interesting helper functions in the notebook that I am intentionally not covering so that you can explore them yourself!&lt;/p&gt;

&lt;p&gt;One loops over and over, feeding the output of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deepdream(net, img)&lt;/code&gt; back in as the input – i.e., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;img = deepdream(net, img)&lt;/code&gt; – for the next iteration. This leads to a lot of interesting compositional weirdness. Definitely what I think of as &lt;em&gt;inception&lt;/em&gt;: dreams within dreams.&lt;/p&gt;
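
&lt;p&gt;The shape of that feedback loop, sketched with a toy stand-in function (the real &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deepdream&lt;/code&gt; needs a loaded network, so &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dream_step&lt;/code&gt; here just nudges the pixels):&lt;/p&gt;

```python
import random

def dream_step(frame):
    # Toy stand-in for deepdream(net, frame): perturb each pixel a bit
    # and clamp to the valid range, just to show the feedback structure.
    return [min(255.0, max(0.0, px + random.uniform(-1, 1))) for px in frame]

frame = [0.0] * 16  # would be your image's pixel data
for i in range(20):
    frame = dream_step(frame)  # img = deepdream(net, img), over and over
```

&lt;p&gt;Because each pass hallucinates on top of the last pass’s hallucinations, the details compound – hence the dreams-within-dreams effect.&lt;/p&gt;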


&lt;p&gt;&lt;img src=&quot;/assets/inception.jpg&quot; alt=&quot;We need to go deeper&quot; /&gt;&lt;/p&gt;


&lt;p&gt;The second gives you the ability to use another image as the &lt;em&gt;objective guide&lt;/em&gt; of your deep dreaming. Put another way, you can kind of think of this helper function as mashing up your deep dreaming with one image that tries to align with the other.&lt;/p&gt;

&lt;p&gt;Last, if you’re feeling particularly full of free time, try building a tree of such deep dreaming. Maybe you want to guide an image with a hierarchy of different images, each one dreamed from the level below. Moreover, don’t just follow my example and deep dream with oil on canvas. Try using natural images instead. Depending on your parameters, your deep dream results could be very far out indeed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WARNING:&lt;/strong&gt; Deep dreaming photos of people whom you love is definitely nightmare-inducing. Don’t say I didn’t warn you.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:0&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. “Going deeper with convolutions.” arXiv preprint arXiv:1409.4842 (2014). &lt;a href=&quot;#fnref:0&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1(4):541–551, December 1989. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Wed, 29 Jul 2015 11:49:40 +0000</pubDate>
        <link>http://korbonits.github.io/2015/07/29/Caffe-brew-your-first-DNN.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2015/07/29/Caffe-brew-your-first-DNN.html</guid>
        
        
      </item>
    
      <item>
        <title>Torch: bleeding edge DNN research</title>
        <description>&lt;h2 id=&quot;torch&quot;&gt;Torch&lt;/h2&gt;

&lt;p&gt;You can find some background for this post here: &lt;a href=&quot;/2015/05/04/Deep-Learning-with-Python.html&quot;&gt;Deep Learning with Python&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Torch has its strengths and its weaknesses. According to a wonderful write-up by &lt;a href=&quot;https://plus.google.com/+TomaszMalisiewicz/posts&quot;&gt;Tomasz Malisiewicz&lt;/a&gt; titled &lt;a href=&quot;http://www.computervisionblog.com/2015/06/deep-down-rabbit-hole-cvpr-2015-and.html&quot;&gt;Deep down the rabbit hole: CVPR 2015 and beyond&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Caffe is much more popular that Torch, but when talking to some power users of Deep Learning (like &lt;a href=&quot;https://plus.google.com/100209651993563042175&quot;&gt;+Andrej Karpathy&lt;/a&gt; and other DeepMind scientists), a certain group of experts seems to be migrating from Caffe to Torch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I read somewhere else that Caffe : Torch :: Applications : Research. If you want to do serious research in deep learning, I would suggest using Torch given the level of current interest in the ecosystem, as well as Torch’s flexibility and platform. Facebook AI and Google DeepMind use Torch.&lt;/p&gt;

&lt;p&gt;The astute reader may be thinking, “wait a second… I thought this was a series of blog posts about doing deep learning with Python… but Torch is all Lua”, and yes, you are right, Torch is not a Python tool (though some parts of Torch have Python bindings): Torch, from a user’s perspective, is mostly Lua.&lt;/p&gt;

&lt;p&gt;For someone well-acquainted with Python, Lua isn’t so different. If doing deep learning is more important to you than what language you use, use Torch. If using Python is more important than using all of the potential horsepower available to you at your fingertips, then don’t use Torch, but just know that – as of this blog post – it’s increasingly the research tool of choice.&lt;/p&gt;

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;The documentation for Torch is great. They make installation as easy as a few lines of bash! Here’s a link to &lt;a href=&quot;http://torch.ch&quot;&gt;Torch&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s get started. Assuming you don’t mind installing torch into &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/torch&lt;/code&gt;, you can just use the following bash commands to get started.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;curl &lt;span class=&quot;nt&quot;&gt;-sk&lt;/span&gt; https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;git clone https://github.com/torch/distro.git ~/torch &lt;span class=&quot;nt&quot;&gt;--recursive&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/torch&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; ./install.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now Torch, LuaJIT, LuaRocks (package manager akin to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pip&lt;/code&gt;), and some packages (installed via LuaRocks) are installed. Type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;th&lt;/code&gt; to use the REPL, and in the REPL, you can type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;os.exit()&lt;/code&gt; to quit.&lt;/p&gt;

&lt;p&gt;At the end you’ll be prompted to add Torch to your PATH environment variable. Type ‘yes’ to complete everything. Just do a quick &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;source ~/.bashrc&lt;/code&gt; to update your environment.&lt;/p&gt;

&lt;p&gt;Head to Torch’s &lt;a href=&quot;http://torch.ch/docs/getting-started.html&quot;&gt;getting started&lt;/a&gt; page for more.&lt;/p&gt;

&lt;p&gt;Now we’re ready to begin. Fasten your seatbelt: some crazy s*** is about to go down.&lt;/p&gt;

&lt;h2 id=&quot;extending-torch-to-generate-joycean-prose&quot;&gt;Extending Torch to generate Joycean prose&lt;/h2&gt;

&lt;p&gt;What the… what does that even mean?&lt;/p&gt;

&lt;p&gt;I’ll show you.&lt;/p&gt;

&lt;p&gt;Andrej Karpathy’s very detailed and extremely interesting blog post, &lt;em&gt;&lt;a href=&quot;http://karpathy.github.io/2015/05/21/rnn-effectiveness/&quot;&gt;The Unreasonable Effectiveness of Recurrent Neural Networks&lt;/a&gt;&lt;/em&gt;, goes through several examples that harness code that he very kindly open-sourced &lt;a href=&quot;https://github.com/karpathy/char-rnn&quot;&gt;here&lt;/a&gt;, to implement a “multi-layer Recurrent Neural Network (RNN, LSTM, and GRU) for training/sampling from character-level language models.”&lt;/p&gt;

&lt;p&gt;To do that, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git clone&lt;/code&gt; the repo wherever you like. Then:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;luarocks &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;nngraph 
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;luarocks &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;optim
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;luarocks &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;cutorch &lt;span class=&quot;c&quot;&gt;# for GPU use&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;luarocks &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;cunn    &lt;span class=&quot;c&quot;&gt;# for GPU use&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;OK. Awesome. Now that we’ve got that out of the way, let’s get some data:&lt;/p&gt;


&lt;p&gt;&lt;img src=&quot;/assets/big_data.png&quot; alt=&quot;Big Data&quot; /&gt;&lt;/p&gt;


&lt;p&gt;Lulz. Let’s skip the big data for now and just start with something small: the full text of my favorite novel, James Joyce’s 1922 masterpiece, Ulysses. The full text is available &lt;a href=&quot;https://www.gutenberg.org/files/4300&quot;&gt;here&lt;/a&gt; in one of several file types (I chose .txt for this project). In a text editor, I removed from the beginning and end of the file what I considered to be unreflective of Joyce, namely, the Project Gutenberg boilerplate :-). My file begins, famously:&lt;/p&gt;
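&lt;p&gt;The boilerplate trimming can also be scripted. Here’s a minimal Python sketch; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;*** START OF&lt;/code&gt; / &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;*** END OF&lt;/code&gt; marker wording varies by Gutenberg edition, so treat the exact patterns as assumptions to adapt:&lt;/p&gt;

```python
def strip_gutenberg_boilerplate(text):
    """Keep only the text between the Gutenberg START/END marker lines.

    The marker wording varies by edition; these prefixes are assumptions.
    """
    lines = text.splitlines()
    start, end = 0, len(lines)
    for i, line in enumerate(lines):
        if line.startswith("*** START OF"):
            start = i + 1  # the body begins after the START marker
        elif line.startswith("*** END OF"):
            end = i        # the body ends before the END marker
            break
    return "\n".join(lines[start:end]).strip()
```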

&lt;blockquote&gt;
  &lt;p&gt;– I –&lt;/p&gt;

  &lt;p&gt;Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of
lather on which a mirror and a razor lay crossed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And ends, famously:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;yes I said yes I will Yes.&lt;/p&gt;

  &lt;p&gt;Trieste-Zurich-Paris 1914-1921&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sweet. Now navigate to your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;char-rnn&lt;/code&gt; directory, and move your Ulysses text file to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/char-rnn/data/ulysses/input.txt&lt;/code&gt; (obviously doing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mkdir &amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/char-rnn/data/ulysses&lt;/code&gt; first).&lt;/p&gt;

&lt;p&gt;If you want to look at some settings, you can type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;th train.lua -help&lt;/code&gt;; otherwise, let’s start training on Ulysses.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th train.lua &lt;span class=&quot;nt&quot;&gt;-data_dir&lt;/span&gt; data/ulysses &lt;span class=&quot;nt&quot;&gt;-gpuid&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-1&lt;/span&gt; &lt;span class=&quot;c&quot;&gt;# this trains on your CPU&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th train.lua &lt;span class=&quot;nt&quot;&gt;-data_dir&lt;/span&gt; data/ulysses 		&lt;span class=&quot;c&quot;&gt;# this trains on your GPU (the default)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If successful, you should see some output like this:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;1897/28850 &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;epoch 3.288&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, train_loss &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 1.76713313, grad/param norm &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 9.8094e-02, &lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;/batch &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 0.21s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Let’s sample from a training checkpoint and see what kind of text we generate.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;## When sampling, use the same device (CPU or GPU) that you used for training.&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th sample.lua cv/some_checkpoint.t7 &lt;span class=&quot;nt&quot;&gt;-gpuid&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-1&lt;/span&gt; &lt;span class=&quot;c&quot;&gt;# if you trained against your CPU&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th sample.lua cv/some_checkpoint.t7 			&lt;span class=&quot;c&quot;&gt;# if you trained against your GPU&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note that there are some markdown-esque characters in Project Gutenberg files, denoting common formatting styles such as underline, &lt;em&gt;italics&lt;/em&gt;, etc.&lt;/p&gt;

&lt;p&gt;Here’s our first sample:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th sample.lua cv/lm_lstm_epoch1.73_1.9188.t7
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;nned posted, his bind. He are so had. Mishing not eumal dy saye
gap, Jesmotition, Hefleston foum his isence, Bloom, the peptlemer and callidant or yame of cersainitien. With Redellosy
Wisco oum for con. Maldrear sailly of exchochened liaty that in throum munders anutetoH icatiped _Koumban of falles aroumedupacelly)_ Jal. Noceping fer scontactrents?&lt;/p&gt;

  &lt;p&gt;–Comanen, felliits.
 Shourd comentlaned
on or whal onverfoul of wappen in that blinking awdactire of like
a bancaserable with m. Joy, E! I,, dlodnce good thet? Stubre he owald few of cloum. THy and more of
the
varss spewing how. What?&lt;/p&gt;

  &lt;p&gt;ut the brom in Bock Murigens, what earte up  vore.
Herrom Goloonhy
crarks of the time he burth for me fleeterelfs him.&lt;/p&gt;

  &lt;p&gt;–Claper I saum. Learked of thit?&lt;/p&gt;

  &lt;p&gt;–On a silthing by smolled in Dra0 Comes beard.&lt;/p&gt;

  &lt;p&gt;Bliest _Ceven te moune, Frambly, sears have the druck, turt, some, Manch Cire u’blow. I house I west and yes? I’res
babladgow. Jneess of combolast and meeye._ Maloraga_.&lt;/p&gt;

  &lt;p&gt;You cipet dought
who
ca
sumper herd claused. Lyformselting tumper. Ithere.&lt;/p&gt;

  &lt;p&gt;He your after urot!
Swort up he siblar cappitites. Quains_ life to a sude her coucting then
feose, it wattersing thinsarding oot
of
Dostle_)_ Who one sporial
sp. Butnen it the sapined by Gulleruust pursan, Muss? Mome Ponain’s. Jesoliy, _10 Excudis amored he’s yel. _Thobe and pricty.&lt;/p&gt;

  &lt;p&gt;I movery’s to.&lt;/p&gt;

  &lt;p&gt;moned her have coman 4Dakit them man her are to yeard took to Detrarn yound more, Woundackel. And the bgoinalius Parman’s bushove bifferly,
larging toost)_ Goine of the nothing any suppencede_ lictedy groveby)_&lt;/p&gt;

  &lt;p&gt;_(Werrical mentovatubaly alking flames of conson
is was diys.
Hat, they’ke of dest jegcises corsay:&lt;/p&gt;

  &lt;p&gt;_Wemstan’s naks 107rearminal gruttell and here to gusrouted or shunonil on that the to in temhord beasing hay Lovely, Mn Purninat.&lt;/p&gt;

  &lt;p&gt;U telloss aster. Dewained. Setherades)_ Hishur hand, Drisim, Hell Twander thack
_Dousfuar prosy, doneson. Mound deatingsed, that pibst it on melughands and smul I make to enel in the comuty he and butterelan
and&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I recognize some words in there! Wow, this almost looks like Finnegans Wake&lt;sup id=&quot;fnref:0&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:0&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and we’re just getting started. Or perhaps some form of primordial English/Anglo-Saxon. Leopold Bloom is in there, “Jesmotition” seems like a play on “Jesuit”, and it’s occasionally formatting italics properly in markdown with two &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_&lt;/code&gt; (though it did not close its parentheses properly… yet).&lt;/p&gt;

&lt;p&gt;Here’s the best sample, after 50 training epochs. Ulysses is about 1.5 MB in size, so fairly small: smaller than Karpathy’s concatenated Shakespeare training/validation file of about 4.4 MB. The lowest validation error captured came after 17 epochs (after which the model started to slightly overfit… the default parameters are small!).&lt;/p&gt;
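&lt;p&gt;As an aside, char-rnn ends each checkpoint filename with its validation loss (e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;lm_lstm_epoch17.33_1.5834.t7&lt;/code&gt;), so picking the best checkpoint can be scripted. A minimal Python sketch, assuming that naming scheme:&lt;/p&gt;

```python
import re

def best_checkpoint(names):
    """Return the checkpoint filename with the lowest validation loss.

    Assumes char-rnn's naming scheme, where the trailing number before
    ".t7" is the validation loss, e.g. "lm_lstm_epoch17.33_1.5834.t7".
    """
    def val_loss(name):
        match = re.search(r"_([0-9.]+)\.t7$", name)
        return float(match.group(1))
    return min(names, key=val_loss)
```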

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th sample.lua data/ulysses/cv/lm_lstm_epoch17.33_1.5834.t7 &lt;span class=&quot;nt&quot;&gt;-temperature&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;Power, Kinch, an, his dead reformed, for the churches hand.&lt;/p&gt;

  &lt;p&gt;They were namebox: a kitchen and perhage his sight on his canes deep any outwas, life
stands. Clesser. A fellows her last firing. And beneather to him,
they give me 39: then he was brilliging bying. A lair unde for paper so
fresh strangy gallous flashing at the crassies and
thit about a son of their God’s kind. His arm.&lt;/p&gt;

  &lt;p&gt;She curaces you much interracied that common of yours. Passenear and he toteher. There
and in I live for near them it spouched hers.&lt;/p&gt;

  &lt;p&gt;Becual left, her wall.&lt;/p&gt;

  &lt;p&gt;He is Lounata, the curtor, white hoaryses that gave Coimband, looked by
a hum, he
wouldn distraction of Drwaphur, the drinken causing out for everybody holy
gloriamed and stone.&lt;/p&gt;

  &lt;p&gt;Died’s patther pleaser, tomberful jung bless that on the door and
grunting for Pletic laudancy, signorian doing to the would. One a hard
he avaded him explaid, music hazebrakes vala oberous inquisition,
and ruges grauts with special pupped letters in which      Buck Poile starts were up to them
upon his great gizzard exchbumminesses:
the ebit passed pounds. Insaces. Molly, fallonerly, box to intertails.&lt;/p&gt;

  &lt;p&gt;Bloom works. Quick! Pollyman. An a lot it was seeming, mide, says, up and the rare borns at
Leopolters! Cilleynan’s face. Childs hell my milk by their
doubt in thy last, unhall sit attracted with source
     The door of Kildan
and the followed their stowabout over that of three constant
trousantly Vinisis Henry Doysed and let up to a man with hands in surresses afraid quarts to here over
someware as cup to a whie yellow accept thicks answer to me.&lt;/p&gt;

  &lt;p&gt;Hopping, solanist, cheying and they all differend and wears, widewpackquellen
cumanstress, greets. Chrails her droken looked musicles reading and reason descorning
for the by Bloomford, swelling a scrarsuit by breed we mouth,
the past much turned by Borne.&lt;/p&gt;

  &lt;p&gt;Makers hear than, Moormar there, the first porter filsions.&lt;/p&gt;

  &lt;p&gt;What player well happened the last. A field stones,
halling shutualar of anylings, Abbo&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Wow, that’s stunning. “Brillig” makes me think of Jabberwocky, and there are some humorous gems in there. Notice how real sentence structures are starting to take shape, and note the references to multiple characters, including their nicknames. And the various portmanteaus and &lt;em&gt;jeux de mots&lt;/em&gt;. Remember that this model trains at the character level and at the start of training didn’t know a thing about English or anything else relating to the structure of language or prose. Fascinating.&lt;/p&gt;

&lt;p&gt;How did we get this?&lt;/p&gt;

&lt;p&gt;One of the command-line arguments used for sampling is called &lt;em&gt;temperature&lt;/em&gt;, whose flag is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-temperature&lt;/code&gt;. It takes a float between 0 and 1 (strictly greater than 0). Intuitively, temperature is the amount of creative license you give your trained RNN while sampling from it. A temperature of 1 allows the most creative license: it gives perhaps the most interesting results, but (depending on the size of your network, the size of your data, and how well the model fits) they may be less readable than samples drawn at a lower temperature. Conversely, lower-temperature samples are more conservative: they are more likely to behave very nicely, yet they can even be &lt;em&gt;boring&lt;/em&gt;. At extremely low temperatures, sampling can collapse into self-repeating loops. Here is a very sad example where I have set the temperature = 0.1 (sad because Ulysses is so characteristically original and unrepetitive).&lt;/p&gt;
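&lt;p&gt;To make that intuition concrete, here is a minimal Python sketch of temperature-scaled sampling (an illustration, not char-rnn’s actual Lua code): the logits are divided by the temperature before the softmax, which sharpens the distribution as the temperature approaches 0 and leaves it unchanged at 1.&lt;/p&gt;

```python
import math
import random

def sample_char(logits, temperature=1.0):
    """Sample an index from unnormalized logits after temperature scaling.

    Low temperature concentrates mass on the argmax (conservative,
    repetitive output); temperature 1.0 samples the model's raw softmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs)[0]
```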

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;th sample.lua data/ulysses/cv/lm_lstm_epoch17.33_1.5834.t7 &lt;span class=&quot;nt&quot;&gt;-temperature&lt;/span&gt; 0.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;nears, the street of the
constinction of the same of the same of the course of the street of the
constinical constinion of the constituting the sun and the priest of the
constinions of the best bearded the stage of the same of the same
bright the state of the stage of the barrels and the barrels and
the street of the constituting the same of the stage of the same of the
constinction of the street of the course of the constituting the stairs
of the course of the street of the barrels and the
states of the street of the same of the course of the course of the
bell discussion of the course of the stage of the street of the
course of the street of the barrels and the street of the stage of the
constinction of the course of the same of the course of the constituting the
second and the same of the course of the same of the course of the
last of the street of the constable of the constituted the stairs of the
course of the course of the last setter of the course of the same thing
and the street of the course of the street of the course of the face of the
constinction of the course of the same of the stage of the street of the
constinction of the course of the barrels and the steps of the same of the
constinions of the course of the constituting the sea and the stage of the
last stranger of the stage of the street of the street and the street of the
constinction of the course of the street of the same of the state of the
constinical sure of the course of the same of the course of the stage of the
constinction of the same breath of the police of the same the same of the
construction of the same of the course of the course of the last the
standance of the course of the street of the same of the darkers
of the constable of the construction of the same of the street of the
bellaman with the street of the course of the same of the course of the stairs
of the course of the street of the course of the beauty of the
construction of the constituting the same states of the course of&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Indeed this isn’t quite as fun. Notice how conservative our RNN has become w.r.t. sampling new words and structures. It’s not perfectly repetitive, but it sure reads like it. Remember, this is being sampled character-by-character.&lt;/p&gt;

&lt;h2 id=&quot;what-to-do-next&quot;&gt;What to do next&lt;/h2&gt;

&lt;p&gt;You may be inspired to mashup your favorite authors. You may be inspired to train an RNN on Finnegans Wake.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; &lt;em&gt;Klikkaklakkaklaskaklopatzklatschabattacreppycrottygraddaghsemmihsammihnouithappluddyappladdypkonpkot&lt;/em&gt;. You may be inspired to mashup texts in more than one language. You may be inspired to train an RNN on code, or perhaps sheet music. Perhaps you may be inspired to train one on audio or video files.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Try training on a larger corpus. Try increasing the size of your network layers, as well as the number of layers. To paraphrase &lt;a href=&quot;http://www.andrewng.org/&quot;&gt;Andrew Ng&lt;/a&gt;, if your training error is too high, then add more rocket fuel (data), and if your test error is high, then add more rockets (i.e., increase the size of your deep neural network).&lt;/p&gt;

&lt;p&gt;In this post, we have only explored a single deep learning model: a recurrent neural network with long short-term memory (LSTM) units. Torch is extremely flexible and can be used (as far as I know) for neural network topologies represented by arbitrary directed acyclic graphs (DAGs), though bidirectional RNNs and other valid DNN architectures seem to violate the DAG requirement; I need to learn more. If you’re a graph theory nerd like I am, that is pretty cool. In fact, that should be its own blog post.&lt;/p&gt;

&lt;p&gt;Another thing you can do is take a deep dive into the literature! Here is an excellent technical &lt;a href=&quot;http://www.cs.toronto.edu/~graves/preprint.pdf&quot;&gt;primer&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:0&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;To be fair, Finnegans Wake is more readable than this, but when I think of Finnegans Wake in the back of my mind, it is almost no different than this first example. Let’s see how successive Epochs bring our RNN closer to Ulysses. Next I’ll train an RNN on Finnegans Wake itself. Yikes! &lt;a href=&quot;#fnref:0&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You are a brave, brave soul. Finnegans Wake itself looks like the output of an RNN with high temperature. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Note: I am attempting to train on audio files right now and am seeing mixed results, which I think is due to the character-level training that this code is performing, and how that syncs up against very specific file types such as MIDI (especially multi-channel). &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Graves, Alex. Supervised sequence labelling with recurrent neural networks. Vol. 385. Heidelberg: Springer, 2012. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Sun, 28 Jun 2015 11:49:40 +0000</pubDate>
        <link>http://korbonits.github.io/2015/06/28/Torch-bleeding-edge-DNN-research.html</link>
        <guid isPermaLink="true">http://korbonits.github.io/2015/06/28/Torch-bleeding-edge-DNN-research.html</guid>
        
        
      </item>
    
  </channel>
</rss>
