Hold me accountable

This year, I propose the following set of New Year’s Resolutions. I am posting this subset of my goals publicly so you can hold me accountable. I’ll do a retrospective at the end of 2018, and try to keep this list updated with progress throughout the year. I’m using Trello to keep track in real time.

Importantly, in this post I focus on personal, not-strictly-professional goals. Undoubtedly, many of my 2018 goals are professional, and many of my outside-of-work activities are professional in nature and directly or indirectly benefit my professional life. To me, it goes without saying that I intend to craft, measure, track, and achieve professional goals throughout the year. In terms of specifics, most of those are internal to where I work. This post is about “fun stuff outside of work that I can share” :). Furthermore, I’m not going to include things that I’ll do anyway, like seeing new art exhibits, going to concerts, or attending other cultural events.

Here is my Goodreads profile if you want to see what I finished in 2017 and in 2016.

High-level summary of 2018 resolutions:

  • Reading
  • Writing
  • Travel
  • Fitness
  • Less television
  • Optimize volunteering
  • Themes

Reading

  • Read 50 books/15,000-20,000 pages (tracking with Goodreads)
  • Read year 1 of the 10-year reading plan of Great Books of the Western World
  • Read all papers in Microsoft Research’s “Paper Legend: Artificial Intelligence Edition” card game

This year I am going to change things up in terms of my reading goals. Last year I set an ambitious goal of reading 50 books. By Google OKR standards, I hit 76%, which is quite a success! I liked this goal because it was easy to track and kept me motivated throughout the year. I might have hit 50 had I spent the first half of 2017 on target; on top of that, I spent about a month in Q4 without finishing a book as I transitioned from one job to another.

“Number of books read” is not the best metric, in my opinion. Example: War and Peace != Hamlet. This metric is easy to game by reading smaller books. I recall making some tradeoffs this fall where, when selecting my next book, I looked for the shortest book in my library I hadn’t read yet. Essentially, this metric penalizes you for reading longer works and rewards you for reading shorter ones.

“Number of pages read”, a metric that Goodreads shows (based on the public information about the books you shelve), is much better for comparing War and Peace with Hamlet. In 2017, I read (i.e., finished) 38 books. This turns out to be about 14,500 pages according to Goodreads. That’s about 382 pages/book and a reading pace of about 40 pages/day.

This metric is still not perfect, however. It doesn’t take into account the heterogeneity of writing styles and, perhaps more importantly, speed of comprehension. For example, some contemporary writing reads so well it could be a movie screenplay. Many such books are adapted into films. Some books, like Ready Player One, are captivating page-turners that one can read in a single sitting. However, some books, like a mathematics textbook (with proofs) or philosophy, read much, much more slowly. One of the reasons I find fiction so easy to read is that, by suspending disbelief, it’s easy to get sucked in and follow along in a very immersive manner. One could do a close reading of any fiction and spend much more time dissecting it, but for pure entertainment, it’s easy to proceed quickly. Reading math, philosophy, or scientific papers is much more demanding on the reader. Not only is the information density much higher (e.g., mathematical notation), but following the logic is intrinsically a very demanding task. Reading in these formats is highly non-linear (in terms of progressing from beginning to end). One is constantly looking up terms, searching in indices, reading cited papers, re-reading paragraphs, stopping to think, etc. “Number of pages” is still not a perfect metric for measuring reading.
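To make the arithmetic concrete, here’s a minimal sketch in Python of how these pace metrics fall out (the totals are my 2017 numbers, hard-coded; in practice the page counts would come from Goodreads):

```python
# Back-of-the-envelope reading-pace metrics for one year of reading.
# Totals are my 2017 numbers, per Goodreads' public page counts.
books_finished = 38
total_pages = 14_500
days_in_year = 365

pages_per_book = total_pages / books_finished  # ~382 pages/book
pages_per_day = total_pages / days_in_year     # ~40 pages/day

print(f"{pages_per_book:.0f} pages/book, {pages_per_day:.0f} pages/day")
```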

What’s a perfect metric? I’m not really sure. I think a metric along the lines of “elapsed time reading in a flow-like state”1 is probably a democratic one. That’s harder to track unless you make tracking it a very strong habit. For 2018, I think I’ll stick to number of books read as a high-level proxy and formulate multiple reading subgoals to round things out.
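If I ever do try tracking reading time directly, a plain session log would get most of the way there. Here’s a minimal sketch, assuming a hypothetical CSV file with one `title,start,end` row per reading session:

```python
import csv
from collections import defaultdict
from datetime import datetime

def hours_read(log_path):
    """Sum hours of logged reading time, per book title."""
    totals = defaultdict(float)
    with open(log_path, newline="") as f:
        for title, start, end in csv.reader(f):
            session = datetime.fromisoformat(end) - datetime.fromisoformat(start)
            totals[title] += session.total_seconds() / 3600
    return dict(totals)

# Example row in the (hypothetical) log:
#   War and Peace,2018-01-05T21:00,2018-01-05T22:30  ->  1.5 hours
```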

50 books/15,000-20,000 pages

Some 2017 books

Usually this goal is haphazard in its selection (cf. my Goodreads for 2016 and 2017). Should I be more prescriptive this year? Let’s do this: I’ll suggest several focus areas but won’t commit to specific books.

Areas of particular focus:

Non-goals (these go without saying – I will read these to some extent anyway):

  • Reading the news
  • Reading ML conference papers (or prominent non-conference papers on arXiv)
  • Reading papers that aren’t ML-related and aren’t published in Nature

Additional thoughts: one of the tools that helped me through certain books in 2017 was listening to audiobooks on Audible. Audible, while representing a small minority of the books I “read” in 2017, was significant because it gave me my first audiobook experiences. I was able to “read” more in 2017 than I otherwise would have because I could multi-task and listen to an audiobook, say, while washing my dishes or going for a walk. Reading actively is still more enjoyable to me than listening passively. It depends on the book, but in general I’d say I recall more from books I read than from books I listened to. Audible does have some gems, such as Ian McKellen reading The Odyssey. You are in for a treat.

The Great Books, Year 1

Plato's academy

One of my reading sub-goals in 2018 is to start systematically attacking my copy of the Great Books. In volume 1, “The Great Conversation”, there’s a 10-year reading plan laid out by the editors, Robert Maynard Hutchins and Mortimer Adler. I’ve read many of the works suggested for the first year before, but I will gladly re-read them.

Here is the recommended reading list for year one of the Great Books:

  • Plato: Apology, Crito
  • Aristophanes: Clouds, Lysistrata
  • Plato: Republic [Book I, II]
  • Aristotle: Ethics [Book I]
  • Aristotle: Politics [Book I]
  • Plutarch: Lycurgus, Numa Pompilius, Lycurgus and Numa compared, Alexander, Caesar
  • New Testament [Matthew & Acts of the Apostles]
  • St. Augustine: Confessions [Books I-VIII]
  • Machiavelli: The Prince
  • Rabelais: Gargantua and Pantagruel [Book I-II]
  • Montaigne: Essays [Of Custom, and That We Should Not Easily Change a Law Received; Of Pedantry; Of the Education of Children; That It Is Folly to Measure Truth and Error by Our Own Capacity; Of Cannibals; That the Relish of Good and Evil Depends in a Great Measure upon the Opinion We Have of Them; Upon Some Verses of Virgil]
  • Shakespeare: Hamlet
  • Locke: Concerning Civil Government [Second Essay]
  • Rousseau: The Social Contract [Book I-II]
  • Gibbon: The Decline and Fall of the Roman Empire [Ch. 15-16]
  • The Declaration of Independence, the Constitution of the United States, The Federalist [Nos. 1-10, 15, 31, 47, 51, 68-71]
  • Smith: The Wealth of Nations [Introduction-Book I, Ch. 9]
  • Marx-Engels: Manifesto of the Communist Party

Of these, I’ve read Plato, Aristotle, half of Aristophanes, 2/5 of Plutarch (Caesar and Alexander, obvi.), the New Testament, St. Augustine, Machiavelli, Shakespeare, Locke, Rousseau, the American founding documents (but not the Federalist papers), and the Communist Manifesto. It has been many years since I read these, however, and I’m eagerly looking forward to re-reading all of them as I enter a new decade of life and prepare for year 2 of The Great Books.

Paper Legend: Artificial Intelligence Edition

This resolution is inspired by my attendance at NIPS 2017. Microsoft was a sponsor and gave out these nerdy-looking card packs as swag. I’ve read some of these papers already, but in 2018 I will make a concerted effort to “catch them all,” especially since I’ve learned a lot of ML since I first read some of the papers on this list (cf. AlexNet).

Each card looks like this:

Batch Norm

Here’s the list of citations:

Ioffe, Sergey, and Christian Szegedy. “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” In International Conference on Machine Learning, pp. 448-456. 2015.

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.

Dahl, George E., Dong Yu, Li Deng, and Alex Acero. “Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition.” IEEE Transactions on audio, speech, and language processing 20, no. 1 (2012): 30-42.

Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. “Sequence to sequence learning with neural networks.” In Advances in neural information processing systems, pp. 3104-3112. 2014.

Bishop, Chris M. “Training with noise is equivalent to Tikhonov regularization.” Neural computation 7, no. 1 (1995): 108-116.

Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural computation 9, no. 8 (1997): 1735-1780.

Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. “Nonlinear component analysis as a kernel eigenvalue problem.” Neural computation 10, no. 5 (1998): 1299-1319.

Bell, Anthony J., and Terrence J. Sejnowski. “An information-maximization approach to blind separation and blind deconvolution.” Neural computation 7, no. 6 (1995): 1129-1159.

Bengio, Yoshua, Patrice Simard, and Paolo Frasconi. “Learning long-term dependencies with gradient descent is difficult.” IEEE transactions on neural networks 5, no. 2 (1994): 157-166.

Srivastava, Nitish, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Dropout: a simple way to prevent neural networks from overfitting.” Journal of machine learning research 15, no. 1 (2014): 1929-1958.

Long, Jonathan, Evan Shelhamer, and Trevor Darrell. “Fully convolutional networks for semantic segmentation.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440. 2015.

Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. “Practical Bayesian optimization of machine learning algorithms.” In Advances in neural information processing systems, pp. 2951-2959. 2012.

Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves et al. “Human-level control through deep reinforcement learning.” Nature 518, no. 7540 (2015): 529-533.

Blei, David M., Andrew Y. Ng, and Michael I. Jordan. “Latent Dirichlet allocation.” Journal of machine learning research 3, no. Jan (2003): 993-1022.

Kingma, Diederik P., and Max Welling. “Auto-encoding variational Bayes.” arXiv preprint arXiv:1312.6114 (2013).

Dechter, Rina, Itay Meiri, and Judea Pearl. “Temporal constraint networks.” Artificial intelligence 49, no. 1-3 (1991): 61-95.

LeCun, Yann, Léon Bottou, Yoshua Bengio, and Patrick Haffner. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86, no. 11 (1998): 2278-2324.

Sutton, Richard S. “Learning to predict by the methods of temporal differences.” Machine learning 3, no. 1 (1988): 9-44.

Boser, Bernhard E., Isabelle M. Guyon, and Vladimir N. Vapnik. “A training algorithm for optimal margin classifiers.” In Proceedings of the fifth annual workshop on Computational learning theory, pp. 144-152. ACM, 1992.

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” In Advances in neural information processing systems, pp. 1097-1105. 2012.

Hinton, Geoffrey E., and Drew Van Camp. “Keeping the neural networks simple by minimizing the description length of the weights.” In Proceedings of the sixth annual conference on Computational learning theory, pp. 5-13. ACM, 1993.

Dayan, Peter, Geoffrey E. Hinton, Radford M. Neal, and Richard S. Zemel. “The Helmholtz machine.” Neural computation 7, no. 5 (1995): 889-904.

Murphy, Kevin P., Yair Weiss, and Michael I. Jordan. “Loopy belief propagation for approximate inference: An empirical study.” In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pp. 467-475. Morgan Kaufmann Publishers Inc., 1999.

Neal, Radford M., and Geoffrey E. Hinton. “A view of the EM algorithm that justifies incremental, sparse, and other variants.” In Learning in graphical models, pp. 355-368. Springer Netherlands, 1998.

Roweis, Sam, and Zoubin Ghahramani. “A unifying review of linear Gaussian models.” Neural computation 11, no. 2 (1999): 305-345.

Minka, Tom. “Divergence measures and message passing.” Technical report, Microsoft Research, 2005.

Sutton, Richard S., David A. McAllester, Satinder P. Singh, and Yishay Mansour. “Policy gradient methods for reinforcement learning with function approximation.” In Advances in neural information processing systems, pp. 1057-1063. 2000.

Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine learning 20, no. 3 (1995): 273-297.

Zeiler, Matthew D., and Rob Fergus. “Visualizing and understanding convolutional networks.” In European conference on computer vision, pp. 818-833. Springer, Cham, 2014.

MacKay, David JC. “A practical Bayesian framework for backpropagation networks.” Neural computation 4, no. 3 (1992): 448-472.

Fei-Fei, Li, and Pietro Perona. “A Bayesian hierarchical model for learning natural scene categories.” In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, pp. 524-531. IEEE, 2005.

Tenenbaum, Joshua B., Vin De Silva, and John C. Langford. “A global geometric framework for nonlinear dimensionality reduction.” Science 290, no. 5500 (2000): 2319-2323.

Taskar, Ben, Carlos Guestrin, and Daphne Koller. “Max-margin Markov networks.” In Advances in neural information processing systems, pp. 25-32. 2004.

Jaakkola, Tommi, and David Haussler. “Exploiting generative models in discriminative classifiers.” In Advances in neural information processing systems, pp. 487-493. 1999.

Watkins, Christopher JCH, and Peter Dayan. “Q-learning.” Machine learning 8, no. 3-4 (1992): 279-292.

Writing

Heterogeneous writing

In 2018, I am going to take on writing goals. Here’s my list:

  • Blogging: >= 12 blog posts (1x/month) [1/12]
  • Speaking: >= 1 public speaking event (listed under writing because it involves putting together a talk) [0/1]
  • NIPS workshop submission (this is pretty ambitious; I need to scope it)
  • Music composition (string quartet – likely a single movement), to be premiered for my 30th birthday.

I think I would also like to do some other writing-related things, like successfully contributing to an open source software project or beginning to write a book, but those need more scoping before I can take them on.

One of the best things I did in 2017 was attend NIPS. I am inspired by all of the brilliant research occurring in machine learning, and I want to contribute. If I had to pick a workshop to contribute to this year, it would be the creativity workshop. I think a workshop submission would be a great gateway to a 2019+ goal of successfully submitting work and being chosen as part of the NIPS conference proceedings. Who’s with me?

Travel

Monaco

This year I will be heading back to Paris for the first time since I was 17. I’d like to bone up on my French, which has been rusty since 2006, and I’d like to visit a new country… perhaps the one shown in the photo above?

  • Visit >= 1 new country
  • Practice French

Fitness

Running

As I enter a new decade of life, I want to add a focus on physical fitness. In 2017 I started regularly working with a personal trainer, and it has been a great start to getting fit. That being said, it isn’t enough. I need to set some attainable yet challenging and measurable goals. They will be especially challenging if I fall behind (see the running goal).

  • Personal training 1x/week [1/52]
  • Lifting 1x/week (in addition to training) [1/52]
  • Go sailing >= 1x/month [0/12]
  • Enter >= 1 sailboat race [0/1]
  • Run a 5k race (I have subgoals to get to this) [0/1]
  • Run 365 miles in 2018 (>= 1 mile/day) [1/365]
  • Lose 10 lbs., sustainably
  • Do yoga >= 1x in January; >= 1x/month after if I enjoy it
  • Climb Mt. Pilchuck [0/1]

Stretch goals:

  • Win >= 1 sailboat race
  • Take (car) racing lessons
  • Enter >= 1 autocross

Why include sailing and autocross under fitness? Because they’re both sports, they both require skill, and they frankly require one to be physically fit to perform well. Why put autocross under stretch goals? 2018 is going to be busy, and there are some racing prerequisites I’d like to take care of before I do autocross. Let’s just say I need to take some racing lessons. My high-school obsession, Gran Turismo, is likely inadequate preparation for tracking my own car.

Less television

Looking back on 2017, perhaps the main hindrance to fully achieving my reading goal (truly my only 2017 resolution) was the number of hours I spent streaming television shows on Amazon Prime, Netflix, and Hulu.

Clearly, this time wasn’t spent too memorably, because I can’t even remember all of the shows I watched in 2017. Notably, I caught up with some of my favorites:

  • Game of Thrones (all 7 seasons, hadn’t seen any as of July 2017)
  • Billions
  • Mozart in the Jungle
  • Silicon Valley
  • Will & Grace (original and new)
  • The Handmaid’s Tale

Game of Thrones and Will & Grace each span at least 7 seasons, so this adds up to a lot of television in 2017. I don’t plan on binge-watching anything in 2018, except perhaps new seasons of Billions, Mozart in the Jungle, Silicon Valley, and Stranger Things if I can be persuaded.

Optimize volunteering

Optimize is the key word here. I don’t intend to eliminate, reduce, or increase volunteer activities in 2018; rather, I intend to optimize them. Right now I am stretched fairly thin across volunteer activities, most of which are new as of 2017. They’re all important to me, but some take a disproportionate amount of time to manage well, and I need to reallocate some of that time to the others. That will require learning and evolving skills such as handing off responsibilities and relinquishing control. Overall, 2017 was a great year in terms of volunteer opportunities for me. I experienced new kinds of volunteering with 3 different organizations, all of which have been rewarding and unique. For specifics, you can refer to my LinkedIn profile.

Themes

Dude, you’ve got a lot of goals for 2018. What’s the unifying theme? You’re all over the place!

One of the unifying themes for my 2018 goals is organization. Merely writing down my main (albeit varied) interests is a good starting point. Second, trying to come up with reasonable but not-attainable-without-effort goals for them is crucial because it forces me to do more than the status quo. Third, plurality. It’s easy to set a couple of goals that one can game, but singular focus is boring (to me). Fourth, accountability. Fifth, doing/trying new things: I’ve never been much of a runner or a hiker (while I have gone through spurts, they’ve never been long-lived), so these goals (like run >= 365 miles or climb Mt. Pilchuck) are new and designed to create habits rather than stand in as mere bucket-list items.

#gettingthingsdone #thepowerofhabit

By the end of 2018, I’d love to sing (in a major key): QED.

  1. For more on flow, check out https://en.wikipedia.org/wiki/Flow_(psychology)