Generative design
Generative design — this could be promising, and certainly fun https://t.co/mFoUT1LA1P
— Nando de Freitas (@NandoDF) June 21, 2018
Visualization
ImageNet Class Hierarchy
ImageNet is hierarchical! When classifying images, we can show not just the leaves but the relevant subset of the entire tree. https://t.co/sCkJG9RaS1
— Mike Bostock (@mbostock) June 21, 2018
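The idea of surfacing the relevant subtree, not just the leaf classes, can be sketched in a few lines. This is a toy illustration (the two-node hierarchy below is hypothetical, not the real ImageNet/WordNet tree, and this is not Bostock's code): aggregate leaf-class probabilities up to their parents so internal nodes can be scored and displayed too.

```python
# Hypothetical fragment of a class hierarchy: parent -> leaf children.
HIERARCHY = {
    "animal": ["dog", "cat"],
    "vehicle": ["car", "truck"],
}

def rollup(leaf_probs):
    """Score internal nodes as the sum of their leaves' probabilities."""
    scores = dict(leaf_probs)
    for parent, children in HIERARCHY.items():
        scores[parent] = sum(leaf_probs.get(c, 0.0) for c in children)
    return scores

probs = {"dog": 0.5, "cat": 0.3, "car": 0.15, "truck": 0.05}
scores = rollup(probs)
# "animal" now scores 0.8, so a UI can highlight the subtree, not just "dog"
```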
What Makes People Happy
I analyzed 100k happy moments to find out what makes people the most happy, and it made me happy https://t.co/gZ1RUTk5KW pic.twitter.com/BRKbp9oy8u
— Nathan Yau (@flowingdata) June 21, 2018
Notable Research
How can you teach a machine learning system with human language rather than “labels”? With a semantic parser & labeling functions! New #ACL2018 paper by @bradenjhancock @paroma_varma @stephtwang @bringmartino @percyliang & Chris Ré @HazyResearch https://t.co/HMTdY5i5TZ #NLProc pic.twitter.com/VUsUdF2RjM
— Stanford NLP Group (@stanfordnlp) June 21, 2018
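The core mechanism is worth a sketch. In the paper, a semantic parser turns natural-language explanations (e.g. "label positive if the word 'wife' appears") into labeling functions; the toy below skips the parser and hand-writes two such functions, combining their votes. The example sentences and rules are invented for illustration, not taken from the paper.

```python
# Labeling functions return +1 / -1, or 0 to abstain.
def lf_spouse_word(x):
    return 1 if "wife" in x or "husband" in x else 0

def lf_org_word(x):
    return -1 if "company" in x else 0

def majority_label(x, lfs):
    """Combine labeling-function votes; 0 means no function fired."""
    s = sum(lf(x) for lf in lfs)
    return 1 if s > 0 else (-1 if s < 0 else 0)

LFS = [lf_spouse_word, lf_org_word]
label = majority_label("he thanked his wife", LFS)   # -> 1
```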
New #ACL2018 paper on StructVAE, semi-supervised learning with structured latent variables: https://t.co/4vQieQk91p
— Graham Neubig (@gneubig) June 21, 2018
A new shift-reduce method for seq2tree neural models, semantic parsing results robust to small data, and nice analysis of why semi-supervised learning works! pic.twitter.com/9TKSkctOzC
GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations—learning a dependency graph to do deep transfer learning. Jake Zhao: “Perhaps this can also be seen as encoding some relational inductive bias into the machinery” 🤔 https://t.co/TkN7y3XDQb HT @ylecun
— Stanford NLP Group (@stanfordnlp) June 22, 2018
We won the Best Paper Award at the #CVPR2018 Deep Learning for Visual SLAM workshop (https://t.co/QhujldJKMY) for our paper on
— Devendra Chaplot (@dchaplot) June 21, 2018
Global Pose Estimation with an Attention-based Recurrent Network: https://t.co/H1lyIZahbb!! Congratulations to my co-authors Emilio, Jian and @rsalakhu. https://t.co/4OSny3lX05
Non-local Neural Networks: https://t.co/pu49vycxp9 #deeplearning #computervision pic.twitter.com/AUPZOc4qSV
— Learn PyTorch (@learnpytorch) June 21, 2018
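A non-local block is compact enough to sketch in numpy. This follows the embedded-Gaussian form from the paper, y = softmax(θ(x)·φ(x)ᵀ)·g(x) added residually to x, but the shapes and random projection matrices here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 16, 8                       # positions x channels (flattened features)
x = rng.standard_normal((N, C))
W_theta, W_phi, W_g = (rng.standard_normal((C, C)) for _ in range(3))

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Pairwise affinity between every position and every other position.
attn = softmax((x @ W_theta) @ (x @ W_phi).T)   # (N, N), rows sum to 1
y = x + attn @ (x @ W_g)                        # residual non-local response
```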
Learning Intrinsic Image Decomposition from Watching Videos https://t.co/CfAKDumeo5
— Nando de Freitas (@NandoDF) June 21, 2018
Similarity Between NN Representations
In our most recent collaboration with Google Brain, we measure the similarity between neural network representations to provide insights into generalisation and the training dynamics of RNNs. @arimorcos @maithra_raghu
— DeepMind (@DeepMindAI) June 21, 2018
Blog: https://t.co/7qXQFkBMKI
Paper: https://t.co/G9ErnfJruG
Very excited about our latest preprint: https://t.co/ViXaxMq8RZ, joint work with @arimorcos and Samy Bengio. We apply Canonical Correlation (CCA) to study the representational similarity between memorizing and generalizing networks, and also examine the training dynamics of RNNs. https://t.co/5QmRxml0in
— Maithra Raghu (@maithra_raghu) June 21, 2018
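The CCA-based similarity measure has a short linear-algebra core: whiten each activation matrix, then the singular values of the cross-product of the whitened bases are the canonical correlations. The sketch below is a minimal numpy version of that idea, not the authors' SVCCA code (which also does an SVD-based truncation step first).

```python
import numpy as np

def cca_similarity(A, B, eps=1e-8):
    """Mean canonical correlation between two (n_examples x d) activation
    matrices; near 1.0 when the representations span the same subspace."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    def basis(X):
        U, s, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, s > eps]       # orthonormal basis of the column space
    rho = np.linalg.svd(basis(A).T @ basis(B), compute_uv=False)
    return float(rho.mean())

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 5))
same = cca_similarity(Z, Z @ rng.standard_normal((5, 5)))   # ~1: same subspace
diff = cca_similarity(Z, rng.standard_normal((200, 5)))     # much lower
```

A linear remapping of the same representation scores ~1, which is exactly the invariance that makes CCA useful for comparing layers with different bases.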
EHR
The recent paper out from Google, "Scalable and accurate deep learning with electronic health records", has a notable result in the supplement: regularized logistic regression essentially performs just as well as Deep Nets https://t.co/2vYzZiBoWR https://t.co/IStdZQOAe0 pic.twitter.com/U2qWwCb63p
— Uri Shalit (@ShalitUri) June 20, 2018
Tutorials
“Predicting the next Fibonacci number with Linear Regression in TensorFlow.js” by @curiousily https://t.co/FjrKK3UMph #tensorflow #deeplearning #machinelearning
— TensorFlow Practices (@TFBestPractices) June 21, 2018
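The tutorial uses TensorFlow.js; the same trick fits in a few lines of numpy. Successive Fibonacci numbers are nearly linearly related (F[n+1] ≈ φ·F[n]), so a plain least-squares fit recovers the golden ratio. This is a hedged re-sketch of the idea, not the tutorial's code.

```python
import numpy as np

# Build a short Fibonacci sequence.
fib = [1.0, 1.0]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

# Regress the next number on the current one (no intercept).
x = np.array(fib[:-1])
y = np.array(fib[1:])
slope = float(np.linalg.lstsq(np.c_[x], y, rcond=None)[0][0])
# slope ~ 1.618..., the golden ratio
```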
#TransferLearning is crucial for general #AI, and understanding what transfers to what is crucial for #TransferLearning. Taskonomy (#CVPR18 oral) is one step towards understanding transferability among #perception tasks. Live demo and more: https://t.co/jGJSA4oIzM pic.twitter.com/Fl2UxyHYbl
— Berkeley AI Research (@berkeley_ai) June 6, 2018
Here is how we @ToyotaResearch do awesome large scale distributed #DeepLearning with @PyTorch in the Cloud for #AutomatedDriving with @awscloud 🤖🚘⛅️(we're hiring 😉): https://t.co/JvA0yPioyH
— Adrien Gaidon (@adnothing) June 21, 2018
Must-read for anyone who:
— Mara Averick (@dataandme) June 21, 2018
a. is living in reality
b. has no Scrooge McDuckian data-analysis vault…
"The Role of Resources in Data Analysis" by @rdpeng https://t.co/a2OsmOqXtQ [🦆💰 animation by tedjohanssonsweden] pic.twitter.com/HvvXmxkfGE
Learning Structural Node Embeddings via Diffusion Wavelets. https://t.co/sjHyn4FJMu pic.twitter.com/Y3sNa6L3wB
— Jure Leskovec (@jure) June 14, 2018
How do you plot 3000 time series? Extract their features and plot the distribution! @robjhyndman #nyhackr pic.twitter.com/j4gcmE6eJe
— Emily Robinson (@robinson_es) June 21, 2018
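Hyndman's approach scales because each series collapses to a small feature vector, and you plot the feature distributions instead of 3000 overlapping lines. A minimal sketch of the reduction step (the four features below are illustrative picks, not his tsfeatures set):

```python
import numpy as np

def features(ts):
    """Summarise one series as a few scalar features."""
    ts = np.asarray(ts, dtype=float)
    d = ts - ts.mean()
    acf1 = (d[:-1] @ d[1:]) / (d @ d)            # lag-1 autocorrelation
    trend = np.polyfit(np.arange(len(ts)), ts, 1)[0]
    return {"mean": ts.mean(), "std": ts.std(), "acf1": acf1, "trend": trend}

rng = np.random.default_rng(1)
series = [np.cumsum(rng.standard_normal(100)) for _ in range(3000)]
table = [features(s) for s in series]            # 3000 rows x 4 features
acf1s = [row["acf1"] for row in table]           # histogram this, not the lines
```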
Reproducibility
Reproducibility tip of the day: If you're sharing code and data for people to reproduce your work, please release them both under a license that allows reuse. (I'm personally partial to Apache 2.0 and CC-BY-SA.) Unlicensed code + data is protected by copyright & not reusable. 😢
— Dr. Rachael Tatman (@rctatman) June 21, 2018
rstats
😍 sketchy *and* magnified, my #dataviz cup runneth over!
— Mara Averick (@dataandme) June 21, 2018
"ggrough: Convert ggplot2 charts to roughjs" 🖍 @xvrdm https://t.co/CJz6J5ryqk #rstats #ggplot2 pic.twitter.com/r4B2kVhpld
🚀 your team ⇨ R (+ good learning advice)...
— Mara Averick (@dataandme) June 22, 2018
"Teaching the tidyverse to co-workers" ✍️ @astroeringrand https://t.co/UA9YGdvWIt #rstats pic.twitter.com/IsZlBKDQr4
🌟 post for understanding Imports vs Depends:
— Mara Averick (@dataandme) June 21, 2018
🔭 "How R Searches and Finds Stuff" by @surajgupta
🔗 https://t.co/TIpb9OiJZy #rstats pic.twitter.com/rrrTiJLteD
Statistics
Answering the question "What predictors are more important?", going beyond p-value thresholding and ranking https://t.co/IOr29G1T8a
— Andrew Gelman (@StatModeling) June 21, 2018
Resources
Here's a sneak peek at our new Federated Learning interfaces using @PyTorch and PySyft.
— OpenMined (@openminedorg) June 21, 2018
Try Federated Learning Using OpenMined: https://t.co/bSMgUZjgL1
Come Join our Slack: https://t.co/e1avrEWtIo pic.twitter.com/sHeQ2568gs
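The core loop of federated learning is small enough to sketch. This is a hedged FedAvg-style toy in plain numpy, not the PySyft API: each client computes a gradient for a shared linear model on its private data, and only the updates (never the raw data) are averaged by the server.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Four clients, each with private (X, y) that never leaves the client.
Xs = [rng.standard_normal((50, 2)) for _ in range(4)]
clients = [(X, X @ true_w + 0.01 * rng.standard_normal(50)) for X in Xs]

w = np.zeros(2)
for _ in range(200):                              # communication rounds
    grads = [2 * X.T @ (X @ w - y) / len(y) for X, y in clients]
    w -= 0.05 * np.mean(grads, axis=0)            # server averages the updates
# w converges to true_w although no client shared its data
```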
Check out Distiller, our @PyTorch based package for neural network compression research at https://t.co/px4g8yCrjS https://t.co/Fews6mK3X6
— Gal Novik (@gal_novik) June 22, 2018
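One of the simplest compression techniques in this space is magnitude pruning: zero out the smallest-magnitude weights. The sketch below is generic numpy to show the idea, not Distiller's actual API.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |w|."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([[0.5, -0.01, 0.2], [-0.03, 0.9, 0.002]])
pruned = magnitude_prune(w, 0.5)   # half the weights become exactly zero
```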
Check out the #rstats fable package from @robjhyndman, a replacement for forecast! Many improvements including integrating with tidyverse packages #nyhackr pic.twitter.com/59rHAbRX78
— Emily Robinson (@robinson_es) June 21, 2018
Gensim Doc2Vec Bug Fixed
A serious bug in #doc2vec fixed, after 3 years :O https://t.co/KBlA0XbLru
— Gensim (@gensim_py) June 20, 2018
New release will have faster convergence & better vectors. Huge thanks to Umang!
Microsoft Research Open Data
We're excited to announce the launch of Microsoft Research Open Data! This single, cloud-hosted location offers datasets representing many years of data curation and research efforts by Microsoft. Learn how it works: https://t.co/f4LhMxWlKF
— Microsoft Research (@MSFTResearch) June 21, 2018
Miscellaneous
On our blog, @stanford research engineer, @vsoch, makes a strong case for incentivizing public sharing and collaboration amongst academic researchers + lays out the challenges to making this happen. We hope Kaggle can contribute. Read the article here: https://t.co/U6ZOoCujcH
— Kaggle (@kaggle) June 21, 2018
New blog post in which I share my thoughts on the ICRA conference & state of robotics research! https://t.co/Qde92HjXF0
— Eric Jang (@ericjang11) June 22, 2018
Some say ML today is like alchemy more than science. That might be too harsh, but out of all the sciences, ML today is a lot like protein crystallography: you know when it works because you can measure it, but you have absolutely no idea what you need to do to make it work. https://t.co/UbFqgGJDHN
— David Pfau (@pfau) June 21, 2018
Investigating Human Priors for Playing Video Games https://t.co/OOPlV70fkR
— Nando de Freitas (@NandoDF) June 21, 2018
AI
This article echoes my thoughts on AI - deep learning alone will not get us to AGI. Also what @GaryMarcus has been saying all along! https://t.co/hVlfz3XMZL
— Arun Shroff (@arunshroff) June 22, 2018
AI - the story so far:
— Pedro Domingos (@pmddomingos) June 21, 2018
1. Manually encoding all the knowledge you need for AI is hopeless.
2. Purely empirical learning methods keep exceeding expectations, but have limitations.
3. Solution: A little knowledge + A lot of data. How to do this: https://t.co/YaZ0pmwA60
Employee Activism
Employee activism keeps spreading in tech—now Amazon employees are calling on the company to stop selling Rekognition to law enforcement and to boot Palantir from AWS https://t.co/6iJAQv00ub
— kate conger (@kateconger) June 22, 2018
Rating Your Servers
I had no idea the tablets you sometimes find at chain restaurants are causing employees great trouble and threatening their livelihoods. Great @ceodonovan read here: https://t.co/62d2MRRlpr
— Tony Romm (@TonyRomm) June 21, 2018