Graph Neural Networks
Graph Neural Networks / Relational Networks are models worth studying. We wrote a pretty comprehensive review about them which I hope you will find helpful (code forthcoming!). https://t.co/D46XCkUIeb pic.twitter.com/Shw0FOhdIh
— Oriol Vinyals (@OriolVinyalsML) June 12, 2018
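The review itself is high level, so as a hedged illustration of the basic idea behind graph networks (one round of neighbourhood message passing), here is a minimal NumPy sketch. The graph, features, and weight matrices below are invented placeholders, not anything taken from the paper.

```python
import numpy as np

def message_passing_layer(adjacency, node_feats, w_self, w_neigh):
    """One round of message passing: each node aggregates its
    neighbours' features, then mixes them with its own state.
    All names here are illustrative, not from the review."""
    messages = adjacency @ node_feats          # sum of neighbour features
    updated = node_feats @ w_self + messages @ w_neigh
    return np.maximum(updated, 0.0)            # ReLU non-linearity

# Tiny toy graph: 3 nodes in a line (0-1-2), 4-dimensional features.
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(3, 4))
w_self, w_neigh = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

h1 = message_passing_layer(adjacency, node_feats, w_self, w_neigh)
print(h1.shape)  # (3, 4)
```

Stacking a few such layers lets information propagate across multi-hop neighbourhoods, which is the kind of relational inductive bias the review is about.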
World Cup 2018 Prediction
"Prediction of the FIFA World Cup 2018 - A random forest approach with an emphasis on estimated team ability parameters" -- The mandatory big-event forecast article. That's a pretty good one though (or we will see in hindsight) https://t.co/EjUv2YGKHF
— Sebastian Raschka (@rasbt) June 12, 2018
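The paper above fits a random forest on team-level covariates to simulate the tournament. Purely as a hedged sketch of that kind of pipeline (the features and data below are invented, not the authors'), a scikit-learn version looks roughly like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented placeholder features per match: difference in FIFA rank,
# difference in estimated team "ability", home-continent advantage flag.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)  # 1 = team A wins

model = RandomForestClassifier(n_estimators=500, max_depth=5, random_state=0)
model.fit(X, y)

# Predicted win probability for a hypothetical fixture.
print(model.predict_proba([[0.5, 1.2, 1.0]])[0, 1])
```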
Machine learning predicts World Cup winner https://t.co/JhVtFZI6Lu
— MIT Tech Review (@techreview) June 12, 2018
SQuAD 2.0
Since 2016, SQuAD has been the key textual question answering benchmark, used by top AI groups & featured in AI Index—https://t.co/Or5UT7zQtD—Today @pranavrajpurkar, Robin Jia & @percyliang release SQuAD2.0 with 50K unanswerable Qs to test understanding: https://t.co/VnCqDhBwLB pic.twitter.com/gTCvvFVcsm
— Stanford NLP Group (@stanfordnlp) June 12, 2018
Excited to release SQuAD2.0 today. SQuAD2.0 is an effort to test the ability of question answering systems to know what they don't know.
— Pranav Rajpurkar (@pranavrajpurkar) June 13, 2018
Incredibly grateful to have worked with Robin Jia and Prof. Percy Liang (@percyliang)! https://t.co/QA2fCWUInj
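The new twist in SQuAD 2.0 is that many questions have no answer in the passage, so a system has to abstain rather than always extract a span. One common way to operationalize "knowing what you don't know" (shown here only as an illustrative sketch, not anything mandated by the benchmark) is to compare the best span score against a null score with a tuned threshold:

```python
def predict_with_abstention(span_scores, null_score, threshold=0.0):
    """span_scores: dict mapping candidate answer spans to model scores.
    null_score: the model's score for 'no answer'.
    Returns the best span, or None (abstain) if the null option wins
    by more than the tuned threshold. Illustrative only."""
    best_span, best_score = max(span_scores.items(), key=lambda kv: kv[1])
    if null_score - best_score > threshold:
        return None  # the question is judged unanswerable
    return best_span

print(predict_with_abstention({"in 1912": 2.1, "Norway": 0.4}, null_score=1.0))  # "in 1912"
print(predict_with_abstention({"in 1912": 0.3}, null_score=2.5))                 # None
```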
Generating Memes
"Dank Learning: Generating Memes Using Deep Neural Networks," Peirson and Tolunay: https://t.co/dUdA0VDaYo
— Miles Brundage (@Miles_Brundage) June 13, 2018
Stanford CS 224n project... not sure what @RichardSocher is doing here pic.twitter.com/0UgWKvgaMF
Notable Research
This is incredible. "Neural Best-Buddies: Sparse Cross-Domain Correspondence" https://t.co/RvZmflafAJ https://t.co/FAbRzwRao0
— hardmaru (@hardmaru) June 13, 2018
Excited at #SIGMOD2018, where @tim_kraska just presented our paper on Learned Indexes after debate from research community. Paper was updated recently w/ appendices that address comments over the past several months, including alternative hashmaps. https://t.co/9prgOGe1bc https://t.co/aWx9wYEtnZ
— Ed H. Chi (@edchi) June 12, 2018
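For readers new to the paper: the learned-index idea is, roughly, to replace a B-tree traversal with a model that predicts where a key sits in a sorted array and then corrects itself with a bounded local search. The sketch below is a cartoon of that predict-then-search structure using a plain linear fit and made-up data, not the authors' implementation, which studies richer models (and the hashmap variants mentioned above).

```python
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(1).integers(0, 1_000_000, size=10_000))
positions = np.arange(len(keys))

# "Learn" the key -> position mapping (here just a linear fit of the CDF).
slope, intercept = np.polyfit(keys, positions, deg=1)
max_err = int(np.max(np.abs(slope * keys + intercept - positions))) + 1

def lookup(key):
    """Predict a position, then binary-search only inside the error window."""
    guess = int(slope * key + intercept)
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), key)
    return i if i < len(keys) and keys[i] == key else None

print(lookup(int(keys[1234])))  # recovers a stored position for that key
```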
AI could get 100 times more energy-efficient with IBM’s new artificial synapses https://t.co/sMYPhCfeJF
— MIT Tech Review (@techreview) June 12, 2018
Meta-learning enables fast learning, but needs hand-engineered meta-training tasks. Can we get the tasks themselves automatically? Our first attempt at this for RL: unsupervised meta-reinforcement learning: https://t.co/ePkIsMcMjZ
— Sergey Levine (@svlevine) June 13, 2018
w/ Abhishek Gupta, Ben Eysenbach, @chelseabfinn
New paper: "Reconstructing networks with unknown and heterogeneous errors"https://t.co/nGfIHZngVK
— Tiago Peixoto (@tiagopeixoto) June 12, 2018
Did you know you can make error estimates from networks, by making only a single measurement? pic.twitter.com/1yeYLC93Vn
Backdrop: Stochastic Backpropagation
— ML Review (@ml_review) June 12, 2018
By @KyleCranmer
Intuitively: a dropout that acts only along the backpropagation pipeline, which significantly improves generalization. https://t.co/avByBt9CSA pic.twitter.com/vPwI3HW4nY
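Since the tweet only gestures at the mechanism, here is a hedged PyTorch sketch of the idea of masking gradients rather than activations: an autograd function whose forward pass is the identity and whose backward pass applies a Bernoulli mask. This is an illustration of the concept, not the authors' released code.

```python
import torch

class BackdropMask(torch.autograd.Function):
    """Identity in the forward pass; dropout-style masking of the
    gradient in the backward pass. A sketch of the idea only."""

    @staticmethod
    def forward(ctx, x, keep_prob=0.5):
        ctx.keep_prob = keep_prob
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        mask = (torch.rand_like(grad_output) < ctx.keep_prob).float()
        return grad_output * mask / ctx.keep_prob, None

x = torch.randn(4, 3, requires_grad=True)
y = BackdropMask.apply(x, 0.8).sum()
y.backward()
print(x.grad)  # roughly 20% of entries zeroed, the rest rescaled
```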
Tutorials and Resources
Excited to announce our first guide for a deep understanding of InfoGAN: https://t.co/yd2ublEqfu https://t.co/OfkMzYbZvl
— Avital Oliver (@avitaloliver) June 6, 2018
Top-2 winning solution for the Adversarial Attacks on Black Box Face Recognition competition: https://t.co/14MlLbCWsP
— Alexandr Kalinin (@alxndrkalinin) June 12, 2018
- Fast Gradient Sign/Value methods + heuristics
- genetic differential evolution
- stack ensembling
#PyTorch code: https://t.co/GBCxFKQOrU pic.twitter.com/gyafvD0YP4
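For the first bullet, the Fast Gradient Sign Method perturbs the input in the direction of the sign of the loss gradient. A minimal PyTorch sketch follows; the toy model and epsilon are placeholders, not the competition code linked above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, target, epsilon=0.03):
    """One-step Fast Gradient Sign Method: move each input pixel by
    +/- epsilon in the direction that increases the loss. Illustrative
    sketch only; the winning solution combines this with heuristics,
    differential evolution, and ensembling."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep the image in a valid range
    return x_adv.detach()

# Toy usage with a throwaway linear "model" over flattened 8x8 images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x = torch.rand(2, 1, 8, 8)
target = torch.tensor([3, 7])
x_adv = fgsm_attack(model, x, target)
print((x_adv - x).abs().max())  # <= epsilon
```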
"Why are some probability distributions studied more than others?" - Great answer by @stat110 on @quora https://t.co/lfzRMGYwKc
— William Chen (@wzchen) June 12, 2018
rstats
fpeek, an #rstats package to help check the content of text files: count the total number of lines, view the first and last lines. Performance is looking good... feedback more than welcome: https://t.co/za3zeUNOyK
— David Gohel (@DavidGohel) June 12, 2018
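fpeek itself is an R package, so the snippet below is not its API; it is just a hedged Python illustration of the same three tasks the tweet describes (count lines, peek at the head, peek at the tail) without loading the whole file. The file path is a placeholder.

```python
from collections import deque

def peek_file(path, n=5):
    """Count total lines and return the first and last n lines,
    streaming so that large files are never fully loaded."""
    head, tail, total = [], deque(maxlen=n), 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for total, line in enumerate(f, start=1):
            if total <= n:
                head.append(line.rstrip("\n"))
            tail.append(line.rstrip("\n"))
    return total, head, list(tail)

total, head, tail = peek_file("some_big_file.txt")  # placeholder path
print(total, head[:2], tail[-2:])
```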
The cheat-sheet cheat sheet by @StatGarrett = 🥇
— Mara Averick (@dataandme) June 12, 2018
📝 "How to Contribute a Cheatsheet"
🔗 https://t.co/mmylqk749g #rstats
[Also peep Tips & Tricks: https://t.co/9glwV8GW8c] pic.twitter.com/c1dyHOAF24
ICYMI, 😻 @kjhealy's 📖 just keeps getting better:
— Mara Averick (@dataandme) June 12, 2018
"Data Visualization for Social Science: A practical intro w/ R & #ggplot2" https://t.co/Kt2duEqQ9L #rstats #dataviz (🌟 #SoDS18 resource) pic.twitter.com/o1xr8v1nCG
ICYMI, 📉 those posteriors...
— Mara Averick (@dataandme) June 12, 2018
"Plotting Posterior Distributions w/ ggdistribute" by Joseph M. Burlinghttps://t.co/y3xSQdrqyL #rstats #dataviz pic.twitter.com/XTlAQ4R05N
Miscellaneous
a new breed of deep learning tools is so easy to use, even an incompetent like me can train his own AI https://t.co/9GZelFxjOP pic.twitter.com/7peberHsi4
— James Vincent (@jjvincent) June 12, 2018
Why Most Published Research Findings Are False, by J. Ioannidis, will tell you why
— Fermat's Library (@fermatslibrary) June 12, 2018
false positive findings, non-reproducible research and biased research plague academia and represent the majority of published research.
PAPER: https://t.co/4Usa1KpmjM
Here are Dr. Ioannidis's 6 corollaries pic.twitter.com/U89rHcB4CS
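The corollaries follow from a simple positive-predictive-value argument: with significance level alpha, power 1 - beta, and pre-study odds R that a probed relationship is true, the expected share of "significant" findings that are actually true is PPV = (1 - beta) * R / (R - beta * R + alpha). The numbers below are assumptions chosen for illustration, not figures from the paper:

```python
def ppv(alpha, power, prior_odds):
    """Ioannidis-style positive predictive value: the fraction of
    statistically significant findings that reflect true effects."""
    beta = 1.0 - power
    return (power * prior_odds) / (prior_odds - beta * prior_odds + alpha)

# Assumed, illustrative numbers: alpha = 0.05, 80% power,
# and 1-in-20 pre-study odds that the hypothesis is true.
print(round(ppv(alpha=0.05, power=0.8, prior_odds=1 / 20), 3))  # ~0.444
```

Under those assumed numbers, fewer than half of the "significant" results would be true, before even accounting for bias.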
Nice thread debating the concise academic style of writing versus a longer, reader-friendly scientific writing style. https://t.co/uflBCPmosP
— hardmaru (@hardmaru) June 12, 2018
The world has 7.6 billion people. We can work on more than one problem at a time. Those problems are important and should be worked on, AGI should be worked on too. https://t.co/dCH1Yijjo9
— Geoffrey Irving (@geoffreyirving) June 11, 2018
Clickagy, a data harvesting firm that claims to track "behavioral data on 91% of online devices in the US", sells data to target people who "may have recently gotten into a car accident" or are suffering from other injuries. https://t.co/koMH6U1tdm /cc @BobbyAllyn pic.twitter.com/4wdAsdG5Tp
— Wolfie Christl (@WolfieChristl) June 12, 2018
release notes for humans pic.twitter.com/KAIP0B2mtl
— 👩💻 DynamicWebPaige @ 🏡📚✨ (@DynamicWebPaige) June 10, 2018
People often ask me how they can create some open source, and the only answer–as far as I’m concerned–is to make something you *need*. You make software every day, there is something missing, some library, some tool, make it encapsulated, make it generally useful and release it.
— Max Howell (@mxcl) June 12, 2018
“Soft Skills”
(Long thread. Click the tweet to read the full thread.)
i told a friend in photography what tech teams consider "soft skills."
— Michael Chan (@chantastic) June 10, 2018
he laughed and said "your industry is so fucked. you're saying that being human is a 'nice to have'." https://t.co/LpiA82UYLu
I'd like to start today with an apology. Today I've learned how insensitive (and loaded) it is to imply that people without developed social and interpersonal skills are not "human". Even in hyperbolic jest, this is an ignorant thing to do. [1/3]
— Michael Chan (@chantastic) June 11, 2018