The Independent Researcher

The independent researcher: a thought-provoking short article by @nayafia. “It’s sort of odd that we assume you need someone’s permission to do research. There’s no reason that universities need to be the gatekeepers of exploring and developing new ideas.” https://t.co/WkynPtXd7x

— hardmaru (@hardmaru) June 30, 2018

“You don’t need to publish papers in academic journals to become respected. You just need a curious mind, a bankroll, and a commitment to learning. Producing work that makes other people think, and perhaps change their behavior, is the validation, and it’s enormously satisfying.”

— hardmaru (@hardmaru) June 30, 2018

Sometimes, a lack of formal training can produce even more interesting results: pic.twitter.com/KMs6V7fYPV

— hardmaru (@hardmaru) June 30, 2018

Research

Untrained RNNs Perform Just as Well

Unexpected finding while reproducing World Models: Untrained RNNs perform just as well. https://t.co/eugLm1gb4t pic.twitter.com/Nh8iE7Amdr

— Brandon Rohrer (@_brohrer_) June 28, 2018
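To make the finding concrete, here is a minimal sketch of the general idea: freeze a randomly initialized RNN and train only a small readout on top of its hidden states. This is not Rohrer's reproduction code (the actual World Models controller is evolved with CMA-ES rather than trained by gradients); the module and dimensions below are illustrative assumptions.

```python
# Minimal sketch (not the author's code): a frozen, randomly initialized RNN
# whose hidden states feed a trained linear readout.
import torch
import torch.nn as nn

class FrozenRNNReadout(nn.Module):
    def __init__(self, obs_dim, hidden_dim, action_dim):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        for p in self.rnn.parameters():      # keep the random initialization fixed
            p.requires_grad = False
        self.readout = nn.Linear(hidden_dim, action_dim)  # only this part is trained

    def forward(self, obs_seq):
        h, _ = self.rnn(obs_seq)             # (batch, time, hidden_dim) features
        return self.readout(h)               # predictions from untrained RNN features

model = FrozenRNNReadout(obs_dim=32, hidden_dim=256, action_dim=3)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```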

SampleRNN

Our second paper, “Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands” published @ MuMe 2018 (ICCC 2018)
Paper: https://t.co/WViBdgaVxR
Music: https://t.co/BCqoF1IepC

— dadabots (@dadabots) June 28, 2018

Image generation from scene graphs

Image generation from scene graphs!! Check out our codes here! https://t.co/4BNQngOFyq

— Fei-Fei Li (@drfeifei) June 29, 2018

ResNet with One-neuron Hidden Layers

ResNet with one-neuron hidden layers is a Universal Approximator: The main contribution of this paper is to show that ResNet with one single neuron per hidden layer is enough to provide universal approximation as the depth goes to infinity. https://t.co/dujgHIVrbg pic.twitter.com/G9zOojqhtB

— hardmaru (@hardmaru) June 30, 2018
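For intuition, the construction (as I read the abstract) stacks residual blocks whose hidden layer contains a single unit: each block computes x + v · relu(u·x + b), so the network stays extremely narrow and approximation power comes from depth. The sketch below is an illustrative rendering of that architecture, not the paper's code.

```python
# Sketch of a one-neuron-per-hidden-layer ResNet: width stays 1 inside each
# block, and expressiveness comes from stacking many blocks (depth).
import torch
import torch.nn as nn

class OneNeuronResBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.down = nn.Linear(dim, 1)             # project to a single hidden neuron
        self.up = nn.Linear(1, dim, bias=False)   # project back onto the residual stream

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

dim, depth = 2, 64
net = nn.Sequential(*[OneNeuronResBlock(dim) for _ in range(depth)], nn.Linear(dim, 1))
x = torch.randn(8, dim)
print(net(x).shape)  # torch.Size([8, 1])
```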

A toy example to illustrate the decision boundaries learned using a narrow vanilla fully connected network vs a narrow ResNet. pic.twitter.com/v8hmEYhIpJ

— hardmaru (@hardmaru) June 30, 2018

Visualization

#China’s nitrogen dioxide #pollution levels since 2005. #dataviz https://t.co/nMuNF1vpu8 pic.twitter.com/BMvlE0Akbs

— Randy Olson (@randal_olson) June 30, 2018

📜 Beautiful and powerful
"A visual history of the U.S. Census" https://t.co/3ltLZBzHSr via @CityLab #infovis pic.twitter.com/WXPPoorHs0

— Mara Averick (@dataandme) June 30, 2018

Most Common Neural Net Mistakes

most common neural net mistakes: 1) you didn't try to overfit a single batch first. 2) you forgot to toggle train/eval mode for the net. 3) you forgot to .zero_grad() (in pytorch) before .backward(). 4) you passed softmaxed outputs to a loss that expects raw logits. ; others? :)

— Andrej Karpathy (@karpathy) July 1, 2018
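Mistakes 1 through 4 are easiest to see in a training loop. The toy model and data below are my own illustration (not from the thread); the point is the ordering and placement of the calls.

```python
# Hedged sketch of a PyTorch loop that avoids mistakes 1-4 above.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()          # (4) expects raw logits, not softmaxed outputs

# (1) sanity check: overfit a single fixed batch before training on the full dataset
xb, yb = torch.randn(32, 20), torch.randint(0, 5, (32,))
model.train()                            # (2) training mode (dropout/batchnorm active)
for step in range(200):
    optimizer.zero_grad()                # (3) clear stale gradients before backward
    loss = loss_fn(model(xb), yb)        # raw logits go straight into the loss
    loss.backward()
    optimizer.step()
print(f"single-batch loss after overfitting: {loss.item():.4f}")  # should approach 0

model.eval()                             # (2) again: switch modes before validation
with torch.no_grad():
    val_logits = model(torch.randn(8, 20))
```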

oh: 5) you didn't use bias=False for your Linear/Conv2d layer when using BatchNorm, or conversely forget to include it for the output layer. This one won't make you silently fail, but they are spurious parameters

— Andrej Karpathy (@karpathy) July 1, 2018
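Point 5 in code: BatchNorm already learns its own shift (beta), so a bias in the layer it normalizes is redundant, while a layer with no BatchNorm after it still needs one. A minimal illustration:

```python
# Sketch of point (5): drop the bias where BatchNorm follows, keep it elsewhere.
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # bias absorbed by BN's beta
    nn.BatchNorm2d(16),
    nn.ReLU(),
)
head = nn.Linear(16, 10)  # no BatchNorm follows, so the default bias=True stays
```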

6) thinking view() and permute() are the same thing (& incorrectly using view)

— Andrej Karpathy (@karpathy) July 1, 2018
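Point 6 in code: view() reinterprets the existing memory order, while permute() actually reorders axes, so they produce different tensors even when the shapes match.

```python
# Sketch of point (6): view() re-chunks memory, permute() transposes axes.
import torch

x = torch.arange(6).reshape(2, 3)        # [[0, 1, 2], [3, 4, 5]]
print(x.view(3, 2))                      # [[0, 1], [2, 3], [4, 5]]  (same memory order)
print(x.permute(1, 0))                   # [[0, 3], [1, 4], [2, 5]]  (true transpose)
# A permuted tensor is non-contiguous, so calling .view() on it raises an error;
# use .contiguous().view(...) or .reshape(...) instead.
```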

These are important points! The fastai library handles 2, 3, & 4 for you, and we teach 1 as a key technique in the course: https://t.co/y1OKto7666

— Rachel Thomas (@math_rachel) July 1, 2018

Tutorials / Reviews

Measuring Punctuation in Literature

New blog post: Measuring punctuation ⁉️ use in literature with #rstats https://t.co/FGrwOlvg6F pic.twitter.com/av9wRr9rcQ

— Julia Silge (@juliasilge) June 30, 2018
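The original analysis is in R; purely to illustrate the kind of measurement the post describes, here is a rough Python sketch that profiles punctuation per 1,000 characters of a text (function name and sample text are my own, hypothetical choices).

```python
# Rough sketch: frequency of each punctuation mark per 1,000 characters.
from collections import Counter
import string

def punctuation_profile(text):
    counts = Counter(ch for ch in text if ch in string.punctuation)
    return {ch: 1000 * n / len(text) for ch, n in counts.most_common()}

sample = "Call me Ishmael. Some years ago - never mind how long precisely - ..."
print(punctuation_profile(sample))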

Base R Cheat Sheet

🤝 for the bas[e]ics: “Base R Cheat Sheet” by @mhairihmcneill https://t.co/09s8ZrXhIo via @rstudio #rstats pic.twitter.com/jFN3In4XO0

— Mara Averick (@dataandme) July 1, 2018

Explaining Model Prediction using SHAP

How to tell what your tree classifier is doing? A really nice kernel just showed up on @kaggle https://t.co/ArqRdCDfDp

— Radek (@radekosmulski) June 30, 2018
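The kernel uses SHAP values to explain individual tree-model predictions. Below is a hedged sketch of the typical shap workflow on a stand-in dataset and model (not the kernel's code); note that the shape returned by shap_values has changed across shap versions.

```python
# Hedged sketch of SHAP explanations for a tree ensemble; assumes the `shap`
# and `scikit-learn` packages are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)    # fast explanations specialized for tree models
shap_values = explainer.shap_values(X)   # per-feature contribution to each prediction;
                                         # older shap returns a list (one array per class)
pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(pos, X)                # global view: which features drive the positive class
```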

Multi-task Learning

We have a new post by @MannyMoss up on our blog about supercharging classification with multi-task learning: https://t.co/7WQ54sDCkM

— Fast Forward Labs (@FastForwardLabs) June 29, 2018
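The core pattern behind multi-task classification is a shared encoder with one output head per task, trained jointly. The sketch below is a generic illustration of that idea, not Fast Forward Labs' code; the dimensions and task sizes are assumptions.

```python
# Generic multi-task classifier sketch: shared representation, per-task heads.
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, c) for c in task_classes])

    def forward(self, x):
        z = self.encoder(x)                      # representation shared across tasks
        return [head(z) for head in self.heads]  # one logit vector per task

model = MultiTaskClassifier(in_dim=128, hidden_dim=64, task_classes=[3, 5])
x = torch.randn(16, 128)
losses = [nn.functional.cross_entropy(out, torch.randint(0, c, (16,)))
          for out, c in zip(model(x), [3, 5])]
total_loss = sum(losses)                         # tasks are trained jointly
```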

Tools

Bubble Chart

Type to make a bubble chart! https://t.co/4yXQig3tr4

— Mike Bostock (@mbostock) June 30, 2018

Greadability.js

Announcing Greadability.js, a JavaScript library for computing readability metrics on graph layout visualizations! It has many handy uses, like comparing the quality of different layouts from the @d3js_org force-directed algorithm. https://t.co/dUzFZ4Iclf pic.twitter.com/6TUtDNuDD4

— Robert Gove (@rpgove) July 1, 2018

Miscellaneous

Brush more often, you dirtbag.

I Am the Algorithm https://t.co/R7lOet4d5n

— Cathy O'Neil (@mathbabedotorg) June 30, 2018

Update on our (@sachin_ravi_, @chelseabfinn and me) AI-ON project, on few-shot music generation:
Thanks to our contributors, we now have 2 benchmarks and starter code for training. Want to collaborate on new models? Join our slack and see our github: https://t.co/Cqwx2xSVoa https://t.co/NXY4tm9nm5

— Hugo Larochelle (@hugo_larochelle) June 27, 2018

@ceshine_en

Inspired by @WTFJHT