Research

Top 20 Deep Learning Papers

Top 20 Deep Learning Papers, 2018 Edition https://t.co/xCjXFBc12v pic.twitter.com/6FAc4DLxcw

— KDnuggets (@kdnuggets) July 1, 2018

Gradient Acceleration in Activation Functions

While Dropout has been viewed as preventing co-adaptation among hidden neurons to regularize the model, Dropout actually keeps gradients flowing even when the activation functions are saturated, and helps the optimization converge to flat minima. https://t.co/YEH33wWqRZ

— Daisuke Okanohara (@hillbig) July 2, 2018

Dropout has been a standard technique for years, but we’re still finding new ways to interpret it.

I tend to think of new techniques/architectures as an advancement in either regularization or optimization/gradient flow, but sometimes it’s difficult to tell which one it is. https://t.co/74lt0VtExG

— Denny Britz (@dennybritz) July 2, 2018
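The gradient-flow claim is easy to check numerically. Below is a minimal NumPy sketch (my own illustration, not the paper's experiment): a sigmoid unit driven deep into saturation has a near-zero local gradient, but random dropout masks occasionally knock the pre-activation back toward zero, where the sigmoid still has usable gradient, so the average gradient under dropout is far larger. All sizes and weights here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

# Hypothetical unit: 10 inputs of 1.0 with weight 2.0 each, so the
# pre-activation z = 20 sits deep in sigmoid saturation.
x = np.ones(10)
w = np.full(10, 2.0)
z = w @ x
print(f"no dropout: z = {z:.1f}, grad = {sigmoid_grad(z):.1e}")

# Inverted dropout with keep probability p: E[z] is unchanged, but the
# mask noise spreads z out, and masks that zero most inputs land the
# unit back where the sigmoid actually has gradient.
p = 0.5
masked_z = np.array([w @ (x * (rng.random(10) < p)) / p
                     for _ in range(10_000)])
print(f"dropout:    mean grad = {sigmoid_grad(masked_z).mean():.1e}")
```

Running this, the mean gradient under dropout comes out several orders of magnitude above the saturated no-dropout gradient, which is the intuition behind the paper's "gradient acceleration" framing.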

Tutorials / Reviews

Fast.ai: How to Start?

Asking “How to start with @fastdotai?” may seem incongruous. Just watch the #videos, right? No. Based on my dual experience with #Fastai since 2017 (as both student and instructor in #Brasilia), I am publishing this start-up guide for new learners. https://t.co/9kvs4BPygK @jeremyphoward

— Pierre Guillou (@pierre_guillou) July 1, 2018

Abstract Art with ML

Abstract Art with ML using Compositional Pattern Producing Networks
By @janhuenermann https://t.co/RJAeo50u3w pic.twitter.com/cUcdu6KOqV

— ML Review (@ml_review) July 1, 2018
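The core of a CPPN is tiny: feed each pixel's coordinates (plus a radial term) through a small, smooth, randomly weighted network and read the output as color. Here is a minimal NumPy sketch of the idea — not Jan Hünermann's implementation, which runs interactively in the browser — with all layer sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(42)

def cppn(width=256, height=256, hidden=32, layers=3):
    """Map each pixel coordinate through a random MLP to an RGB value."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, width),
                         np.linspace(-1, 1, height))
    r = np.sqrt(xs**2 + ys**2)                    # radial input feature
    h = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)
    for _ in range(layers):
        w = rng.normal(0.0, 1.0, size=(h.shape[1], hidden))
        h = np.tanh(h @ w)                        # smooth nonlinearity
    w_out = rng.normal(0.0, 1.0, size=(hidden, 3))
    rgb = (np.tanh(h @ w_out) + 1) / 2            # squash to [0, 1]
    return rgb.reshape(height, width, 3)

img = cppn()   # display with e.g. matplotlib's plt.imshow(img)
```

Because every pixel goes through the same smooth function of its coordinates, the output is continuous abstract patterns at any resolution; changing the seed or layer count changes the "style".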

Design Space for #dataviz

👍 Sunday-morning read…
📄 "Using Typography to Expand the Design Space of Data Visualization" by @rkbrath & Ebad Banissihttps://t.co/P6cB0Cz3co #dataviz #infovis pic.twitter.com/v8ac5ds22V

— Mara Averick (@dataandme) July 1, 2018

French Wine Review #NLP #rstats

🍷 × tidytext code-through:
🗺 "Using French wine reviews to understand TF-IDF" ✍️ @aleszubajak https://t.co/EydHUzPSaK via @storybench #rstats #maps pic.twitter.com/Hk5WK74CfH

— Mara Averick (@dataandme) July 1, 2018
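The code-through itself is in R with tidytext; the same idea fits in a few lines of Python with scikit-learn (the toy "reviews" below are invented for illustration), just to show what TF-IDF surfaces: terms that are frequent within one group of documents but rare across the others.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for wine reviews grouped by region; the real
# code-through uses a large French wine-review dataset.
docs = {
    "Bordeaux":  "tannic dark fruit oak oak structured",
    "Burgundy":  "silky red fruit cherry earthy elegant",
    "Champagne": "crisp citrus fruit brioche bubbles toasty",
}

vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs.values()).toarray()
terms = vec.get_feature_names_out()

# The top TF-IDF term per region is a distinctive one: "fruit"
# appears everywhere, so it scores low despite being frequent.
for region, row in zip(docs, tfidf):
    print(f"{region}: {terms[row.argmax()]} ({row.max():.2f})")
```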

Debugging Neural Networks

Nice list! Our ML lab wrote up a few practical tips for debugging neural networks: https://t.co/BIURqZGnAi

— Matt Holt (@mholt6) July 1, 2018
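One tip that shows up in nearly every such checklist (I have not reproduced the linked write-up itself) is to overfit a single small batch before training for real: if the loss will not go to near zero on eight examples, the bug is in the model, loss, or optimizer wiring rather than the data. A PyTorch sketch with made-up shapes:

```python
import torch
from torch import nn

# Sanity check: a correctly wired model should memorize one tiny batch.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 20)                 # one fixed batch, never changed
y = torch.randint(0, 3, (8,))

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")  # should be close to zero
```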

Bayesian inference #dataviz

ICYMI, 💫 explorable:
“Bayesian inference - an interactive visualization” ✍️ @krstoffr https://t.co/ViF060gCGn #bayes #infovis pic.twitter.com/V1hlQxhURg

— Mara Averick (@dataandme) July 1, 2018
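The explorable animates the prior-to-posterior updating that static examples only hint at. For reference, here is the textbook conjugate case — a beta-binomial coin-flip model, with numbers invented for illustration — which is the kind of update such visualizations let you play with:

```python
from scipy import stats

# Beta(1, 1) prior (uniform) on a coin's heads probability, updated
# with 7 heads and 3 tails; conjugacy makes the posterior another
# Beta with the observed counts added to the prior parameters.
a, b = 1, 1
heads, tails = 7, 3
posterior = stats.beta(a + heads, b + tails)

print(f"posterior mean: {posterior.mean():.3f}")   # 8/12 ~= 0.667
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```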

Miscellaneous

Saw this footnote in a paper and was very sympathetic. Then scrolled up to find it was a DeepMind paper. 🤔 pic.twitter.com/Nl5IJAo9uu

— Delip Rao (@deliprao) June 27, 2018

Hey @spacy_io people (@honnibal, @_inesmontani), those speed comparisons on https://t.co/0s2zkJxrRj are not only outdated—as you note—but the speed for the Stanford Tokenizer is just way wrong. Time to take them down? Here are our measurements: https://t.co/XAoVAFUAAJ #NLProc

— Christopher Manning (@chrmanning) July 1, 2018

In the ML research field there have been efforts to follow the OSS model and attract people outside of academic institutions to participate. For example, "AI-ON" hosts serious projects proposed by Google Brain, MILA, etc., looking for collaboration: https://t.co/TxriH0nwXd

— hardmaru (@hardmaru) July 1, 2018

Joshua: Long, long ago, I was also a grad student in statistics at Stanford (BS, MS). Brad Efron and I were in the med school stat consulting seminar. A medical researcher came in seeking statistical significance for n = 4 dogs. My mentor Lincoln Moses said, “Only if one of them turned into a cat.” https://t.co/TI2YgOrzQm

— Edward Tufte (@EdwardTufte) July 2, 2018

Here’s what one #statistician at @datacamp loves about #statistics https://t.co/PABDct0F5x @drob pic.twitter.com/PRx7EJgTTA

— This is Statistics (@ThisisStats) July 1, 2018

Automatic Essay Grading

This is not how AI works, or how writing works, or how education works. https://t.co/KBPnzkRTpU

— Ethan Plaut (@ethanplaut) July 1, 2018

Automated essay grading that takes into account factual accuracy and content coherence is several years away. In the meantime, NLP and AI researchers not paid by Pearson should push back against school systems that rely on this deeply flawed technology for standardized testing. https://t.co/cJnUZRRGwR

— James Bradbury (@jekbradbury) July 1, 2018

@ceshine_en

Inspired by @WTFJHT