The Fast.ai library helped Radek win 1st place in the “iMaterialist Challenge (Fashion) at FGVC5” competition. It also helped Alexandr Kalinin take 4th place in the Google Landmark Recognition Challenge. In addition to an outline of his solution, Radek shared his inspiring story.

Stack Overflow published its 2018 Developer Survey data on Kaggle. Two $1,000 awards will go to the top Kernels on this dataset.

Neural Joking Machine

“Neural Joking Machine”: They trained a captioning model to optimize for a “Funny Score” 🤣 https://t.co/HQrgReCRBo pic.twitter.com/7WNFcYdYSi

— hardmaru (@hardmaru) May 31, 2018

iMaterialist Challenge (Kaggle)

1yr ago I gave up on ML. I didn't know what to learn nor how

After a 5 mths break I decided to give ML one last try. If it would not work out I would need to let it go to not continue to waste my time - maybe I am unable to learn this

I then signed up for the @fastdotai course pic.twitter.com/wjORNbkctx

— Radek (@radekosmulski) May 31, 2018

I wrote a few more words that you can read on the Kaggle forums here https://t.co/kgPMG2uUlk. Happy to answer any questions that you might have!

— Radek (@radekosmulski) May 31, 2018

Landmark Recognition (Kaggle)

(Note: We covered the 1st place solution of the Landmark Retrieval challenge in Day 10.)

4th place solution to Google Landmark Recognition Challenge #CVPR2018 https://t.co/rzjBSmaw3k
- multiple deep nets from @fastdotai
- binary classifiers for: is landmark / is from retrieval challenge
- few-shot learning
- kNN with features from local crops
- ensembling heuristics

— Alexandr Kalinin (@alxndrkalinin) May 30, 2018
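
To make the “kNN with features from local crops” item from the list above concrete, here is a minimal sketch of the general technique (my illustration, not the team’s actual pipeline): extract penultimate-layer features from a pretrained CNN and classify queries with scikit-learn’s KNeighborsClassifier. The random tensors are stand-ins for real landmark crops.

```python
# Illustrative sketch (not the competition pipeline): kNN over deep
# CNN features, with random tensors standing in for real crops.
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()   # keep the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: (N, 3, 224, 224) float tensor, normalized."""
    return backbone(images).numpy()

# Stand-ins for cropped landmark images and their class labels.
train_images = torch.randn(8, 3, 224, 224)
train_labels = [0, 0, 1, 1, 2, 2, 3, 3]
query_images = torch.randn(2, 3, 224, 224)

# Cosine-distance kNN in feature space.
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(extract_features(train_images), train_labels)
print(knn.predict(extract_features(query_images)))
```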

2018 Developer Survey

New post for @StackOverflow on the public release of this year's Developer Survey data! ✨

We are partnering with @kaggle, who is awarding two $1000 awards over the next two weeks to authors of top Kernels on this dataset. https://t.co/Qrs1rh8M6s pic.twitter.com/TO2uKIuBpO

— Julia Silge (@juliasilge) May 30, 2018

We're thrilled to host @StackOverflow's newly released 2018 Developer Survey on @KaggleDatasets! 🎉📊 Analyze industry trends in the software development community using Kernels for a chance to win one of two $1k prizes.

Get started here ➡️ https://t.co/uHAKRPqCWb pic.twitter.com/Ljxc9AkMgc

— Kaggle (@kaggle) May 30, 2018

GAN progress

Two years of GAN progress on class-conditional ImageNet-128 pic.twitter.com/wkkOs7nRfb

— Ian Goodfellow (@goodfellow_ian) May 30, 2018

VQ-VAE Open Sourced

VQ-VAE (https://t.co/PmwpVNevcl and https://t.co/AOk0G57KRP) is now open source in DM-Sonnet! Here's an example iPython notebook on how to use it for images: https://t.co/Ij1CSXXZEk pic.twitter.com/OpeaxCSf0b

— Aäron van den Oord (@avdnoord) May 30, 2018

The code also includes a version of VQ-VAE that updates the codebook with exponential moving averages (mentioned in the paper; we will update the paper with more details). The EMA version often converges much faster and doesn't depend on the choice of optimizer.

— Aäron van den Oord (@avdnoord) May 30, 2018
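
Concretely, the EMA variant sidesteps codebook gradients entirely: it keeps exponential moving averages of how often each codeword is used and of the summed encoder outputs assigned to it, then sets each codeword to their ratio. Here is a NumPy sketch of one update step, following the description in the VQ-VAE paper (variable names are mine, not DM-Sonnet’s):

```python
# One EMA codebook update step for VQ-VAE, following the paper.
# Variable names are illustrative, not DM-Sonnet's API.
import numpy as np

K, D = 512, 64            # codebook size, embedding dimension
gamma, eps = 0.99, 1e-5   # EMA decay, Laplace smoothing constant

rng = np.random.default_rng(0)
codebook = rng.normal(size=(K, D))     # codewords e_k
cluster_size = np.zeros(K)             # EMA of assignment counts N_k
ema_sum = codebook.copy()              # EMA of summed encoder outputs m_k

z_e = rng.normal(size=(32, D))         # a batch of encoder outputs

# Assign each encoder output to its nearest codeword.
dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
onehot = np.eye(K)[dists.argmin(axis=1)]           # (32, K) one-hot

# Exponential moving averages of counts and summed vectors.
cluster_size = gamma * cluster_size + (1 - gamma) * onehot.sum(0)
ema_sum = gamma * ema_sum + (1 - gamma) * onehot.T @ z_e

# Laplace smoothing keeps rarely used codewords from dividing by zero.
n = cluster_size.sum()
smoothed = (cluster_size + eps) / (n + K * eps) * n
codebook = ema_sum / smoothed[:, None]
```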

QRNN in Fast.ai

Just finished adapting the QRNN (https://t.co/WXUNi86wpO) pytorch implementation of @smerity in the fastai library. Two to four times faster than LSTMs with the same results! More details here: https://t.co/8CmXYTfdxS

— Sylvain Gugger (@GuggerSylvain) May 30, 2018

Very excited to see @jekbradbury's QRNN is now available in fastai - enable it by just adding "qrnn=True" to your language model, and enjoy 2-4x the speed of LSTMs! :) https://t.co/5BnIBhnVcl

— Jeremy Howard (@jeremyphoward) May 30, 2018
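
For context: a QRNN computes all of its gates with convolutions across the time dimension, leaving only a cheap element-wise recurrence (“fo-pooling”) as the sequential part, which is where the 2-4x speedup over LSTMs comes from. Here is a toy PyTorch sketch of the idea (simplified from the paper, not fastai’s implementation):

```python
# Toy QRNN layer with fo-pooling (simplified from Bradbury et al.;
# not the fastai implementation).
import torch
import torch.nn as nn

class ToyQRNN(nn.Module):
    def __init__(self, input_size, hidden_size, kernel=2):
        super().__init__()
        # One convolution produces candidate z, forget gate f, and
        # output gate o for every timestep in parallel.
        self.conv = nn.Conv1d(input_size, 3 * hidden_size, kernel,
                              padding=kernel - 1)
        self.hidden_size = hidden_size

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        seq_len = x.size(1)
        g = self.conv(x.transpose(1, 2))[:, :, :seq_len]  # causal trim
        z, f, o = g.chunk(3, dim=1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)

        # fo-pooling: the only sequential part, and it is element-wise,
        # so it is far cheaper than an LSTM's per-step matmuls.
        c = torch.zeros(x.size(0), self.hidden_size, device=x.device)
        hs = []
        for t in range(seq_len):
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            hs.append(o[:, :, t] * c)
        return torch.stack(hs, dim=1)  # (batch, seq_len, hidden)

h = ToyQRNN(16, 32)(torch.randn(4, 10, 16))  # usage: (4, 10, 32) output
```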

(Bonus)

Here's a minimal PyTorch/fastai Serverless API (w/ AWS Lambda) example - could be a useful starting point for your web apps! https://t.co/5QVZbQgJtc

— Jeremy Howard (@jeremyphoward) May 30, 2018
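
The core of such a serverless setup is a handler that loads the serialized model once at import time (so warm Lambda invocations skip the cost) and runs inference per request. A bare-bones sketch, with the model path and payload shape as placeholders rather than the linked repo’s actual conventions:

```python
# Bare-bones PyTorch-on-Lambda handler sketch. The model path and
# request/response shapes are placeholders, not those of the linked repo.
import json
import torch

# Load once at import time so warm Lambda invocations reuse the model.
MODEL = torch.jit.load("model.pt")  # a TorchScript-exported model
MODEL.eval()

def handler(event, context):
    # Expect a JSON body like {"inputs": [[...], ...]} (hypothetical).
    body = json.loads(event["body"])
    x = torch.tensor(body["inputs"], dtype=torch.float32)
    with torch.no_grad():
        scores = MODEL(x)
    preds = scores.argmax(dim=1).tolist()
    return {"statusCode": 200, "body": json.dumps({"predictions": preds})}
```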

Regularization

For the deep learning book, my co-authors and I chose to offer our own definition: "Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error" https://t.co/oPI4e9HJ03

— Ian Goodfellow (@goodfellow_ian) May 30, 2018
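
Under this definition, weight decay, dropout, data augmentation, and early stopping all count as regularization. The most compact example is L2 weight decay, which PyTorch exposes directly as an optimizer argument:

```python
# L2 weight decay as a textbook instance of the definition: a penalty
# that trades a little training fit for lower generalization error.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for any model

# weight_decay adds lambda * w to each gradient, i.e. an L2 penalty.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```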

Notables

New @acl2018 paper by @YenChunChen4 on "Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting" is out on arxiv now: https://t.co/kWlQIOspFY

CODE+MODELS all publicly available at: https://t.co/bWCXt9Kgay #NLProc #UNCNLP #ACL2018 https://t.co/z60Xh0FcxT

— Mohit Bansal (@mohitban47) May 30, 2018

Seems like really simple model-based RL methods can do just as well at convergence as model-free RL, if uncertainty is handled properly, with orders of magnitude fewer samples: https://t.co/jzuMw7b2M3

with Kurtland Chua, @RCalandra, Rowan McAllister

— Sergey Levine (@svlevine) May 31, 2018

Hamiltonian variational auto-encoder: https://t.co/jp8bW59tW5. Using results from SMC samplers, we provide guidelines for Salimans-Welling MCMC/Variational setup. Based on Hamiltonian importance sampling ideas, we then provide a data-informed normalizing flow. Tempering is key.

— Arnaud Doucet (@ArnaudDoucet1) May 30, 2018

This paper by @georgejtucker et al. is a good example of how to do robust experiments in RL (and ML more generally). https://t.co/IyV68gSXoU

Make sure you test whether the signal you're trying to exploit is really there, and whether your algorithm is able to pick it up.

— Roger Grosse (@RogerGrosse) May 27, 2018

Our paper on provably efficient algorithms for topic modeling finally appeared in CACM. https://t.co/rHsS6ZDkTi
Many people use these methods instead of older EM or MCMC approaches.

— Sanjeev Arora (@prfsanjeevarora) May 30, 2018

Here is a neat overview paper in which Judea Pearl outlines the specific tasks which one cannot solve with 'associational' reasoning and learning. https://t.co/zEGxy4WeER

— Ferenc Huszár (@fhuszar) May 29, 2018

What can we learn from 550,000 years of rain data? Read the research: ($) https://t.co/il9Hviw31Q pic.twitter.com/V6ewe4i4vx

— Science Magazine (@sciencemagazine) May 30, 2018

Miscellaneous

Only the traffic collisions of Great Britain. #dataviz https://t.co/pcWusJHH2g pic.twitter.com/2cTvDWSrLV

— Randy Olson (@randal_olson) May 30, 2018

Love this example of the power of #dataviz in machine learning.

Data scientists were 🤔 when image classification model showed high error rates for labelling a certain type of big cat, so they did a cluster plot.

Boom.

(From excellent @petewarden post https://t.co/b6CUT80KYS) pic.twitter.com/qOto4OFIKK

— John Burn-Murdoch (@jburnmurdoch) May 30, 2018

PyCharm users can now run mypy super fast using a plugin written by @ILevkivskyi: https://t.co/qtHuMU8RrB #PyCharm #mypy

— Guido van Rossum (@gvanrossum) May 30, 2018

My universal directory structure, forged from years of bad organization:

/Projects
.../inProgress
....../ProjectName
.........../docs
.........../code
.........../data
.........../figures
.../published
.../submitted

You're welcome.

— Micah Allen (@micahgallen) May 29, 2018
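
If you want to stamp that layout out with code, a few lines of Python’s pathlib reproduce it (ProjectName is the placeholder from the tweet):

```python
# Recreate the tweet's directory layout; ProjectName is a placeholder.
from pathlib import Path

for sub in ("docs", "code", "data", "figures"):
    Path("Projects/inProgress/ProjectName", sub).mkdir(parents=True, exist_ok=True)
for status in ("published", "submitted"):
    Path("Projects", status).mkdir(parents=True, exist_ok=True)
```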

The real scandal: why do all illustrations of articles about AI have to be horrible clichés?
At least they lost the bad habit of slapping a Terminator picture on every single article.... https://t.co/CtIRISGe4o

— Yann LeCun (@ylecun) May 31, 2018

Are CNNs like the brain?
"Several studies have directly compared the ability of CNNs and previous models of the visual system to capture neural activity. CNNs come out on top" https://t.co/tkpD2Zmfd6

— Jeremy Howard (@jeremyphoward) May 30, 2018

As I read through @chrisalbon's new book, I feel like I'm scrolling through my coding bookmarks folder. But more simplified and organized.

In other words, great reference! https://t.co/opo5Kgxuwk

— Data Science Renee (@BecomingDataSci) April 27, 2018

On AI Winter

Excellent post by @filippie509 on saturation of the deep learning revolution. The only thing I’d add is that user-facing AI inside the big companies is already failing us at scale (recommendations, ads, engagement). More depth won't fix these problems. https://t.co/TIs29IZgLw

— Ben Recht (@beenwrekt) May 30, 2018

Surprised normally rigorous @beenwrekt calls this blog post excellent—I’d say poorly argued. 1st argument for deep learning stalling: @AndrewYNg tweeting less. 🤔 I put his data in a chart—because #infovis. Anyway, does rate correlate with AI or Ng’s jobs? https://t.co/6QTh0pBxmI pic.twitter.com/cq8oCD7cDf

— Christopher Manning (@chrmanning) May 30, 2018

@ceshine_en

Inspired by @WTFJHT