Visualization
What can cause cycles in predator–prey 🐈🐁 populations? Let’s use visualization to explore a dynamical system! https://t.co/HnaBRVedm5
— Mike Bostock (@mbostock) June 8, 2018
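The cycles in Bostock's visualization are the classic behavior of the Lotka–Volterra predator–prey equations. A minimal sketch of such a system, integrated with simple Euler steps (the parameter values here are illustrative, not taken from the linked visualization):

```python
# Lotka-Volterra predator-prey model:
#   dx/dt = a*x - b*x*y   (prey grow, get eaten)
#   dy/dt = d*x*y - c*y   (predators grow by eating, die off)
def simulate(x0=10.0, y0=5.0, a=1.1, b=0.4, c=0.4, d=0.1,
             dt=0.001, steps=50_000):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (d * x * y - c * y) * dt
        x, y = x + dx, y + dy
        xs.append(x)
        ys.append(y)
    return xs, ys

prey, predators = simulate()
# Populations oscillate: prey dips below its start, then overshoots it,
# with predator peaks lagging prey peaks.
print(min(prey), max(prey))
```

Plotting `predators` against `prey` traces the closed orbit that produces the cycles in the visualization.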
DeepMind Patents
These patents seem very broad. If @DeepMindAI is going to start claiming exclusive intellectual ownership of fundamental machine learning methods, this will be terrible for open AI research. I hope this is not the intent. https://t.co/3KWcECqPo2
— Julian Togelius (@togelius) June 8, 2018
It doesn't matter that they haven't been enforced. At some point they may well be packaged and sold, as has happened many times before. That's why we need defensive patent pledges that are enforceable
— Jeremy Howard (@jeremyphoward) June 8, 2018
I googled your question. Apparently, it's a serious issue in genetics: https://t.co/RtfEmavJ9I
— Julian Togelius (@togelius) June 8, 2018
And patent trolls (not the same as patents in general) have been shown to hinder innovation: https://t.co/XVOiHDFDG0
Notable Research
Few-shot learning problems can be ambiguous. Now MAML can handle the ambiguity by sampling multiple classifiers via a Bayesian formulation of meta-learning. We call it PLATIPUS: https://t.co/oOHQl1tuj4
— Sergey Levine (@svlevine) June 8, 2018
w/ @chelseabfinn and @imkelvinxu
some ambiguous celeba task: pic.twitter.com/oGakQW1Tr5
Interesting talk at the @NetflixResearch workshop on the use of deep and shallow latent models and the relations among MF, LDA, and autoencoders. #WWW_2018 [PAPER] here: https://t.co/XT37RPAMLc pic.twitter.com/zQEpFLkt92
— Xavier 🎗🤖🏃 (@xamat) June 8, 2018
Tutorials and Resources
This week's #KernelAwards winner uses the Stack Overflow 2018 Developer Survey and an extra-trees classifier to predict whether a given developer prefers Python or R: https://t.co/ZASWYDSKFK pic.twitter.com/S2MqEKjWHY
— Kaggle (@kaggle) June 8, 2018
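The winning kernel's setup is a standard supervised-classification recipe. A hypothetical sketch of the approach, with synthetic data standing in for the Stack Overflow survey features (the real kernel's features and parameters may differ):

```python
# Extremely randomized trees (extra-trees) predicting a binary label,
# analogous to "prefers Python vs. R" from survey responses.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for encoded survey answers.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy
```

Extra-trees differ from random forests in that split thresholds are drawn at random rather than optimized, which reduces variance at the cost of a little bias.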
Kaggle submissions are an awesome way to start as a data scientist: check out @gebanks90's exploration of the classic Titanic survival dataset, looking at whether being in a nuclear family helped your chances 🚢 https://t.co/mDD47VdV6c #datablog #rstats pic.twitter.com/AeSjlT5iKL
— David Robinson (@drob) June 8, 2018
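The family-size angle of that exploration is easy to reproduce. A hypothetical pandas sketch with made-up rows (the Titanic columns `SibSp` and `Parch` are real; the "nuclear family" cutoff of 2–4 is an assumed definition, not necessarily the post's):

```python
import pandas as pd

# Made-up passengers; SibSp = siblings/spouses aboard, Parch = parents/children.
df = pd.DataFrame({
    "SibSp":    [1, 0, 3, 1, 0],
    "Parch":    [2, 0, 2, 0, 0],
    "Survived": [1, 0, 0, 1, 0],
})
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1
df["Nuclear"] = df["FamilySize"].between(2, 4)  # assumed definition

# Compare survival rates for nuclear-family passengers vs. the rest.
print(df.groupby("Nuclear")["Survived"].mean())
```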
Slides of my talk at École Normale Supérieure de Paris "Smoothing/Regularization Techniques for Probabilistic and Structured Classification" https://t.co/O4MI6vK3Ke I give the big picture on regularized prediction functions, Fenchel-Young losses, SparseMAP and differentiable DP. pic.twitter.com/4C8Cv3TYNh
— Mathieu Blondel (@mblondel_ml) June 8, 2018
NeuralCoref v3.0 is out✨!
— Thomas Wolf (@Thom_Wolf) June 8, 2018
- up to 100x faster than v2.0 (thanks Cython) 🚀
- Integrated in spaCy models and pipeline 🤗 + 💫 = 💙
- Based on the fast neural net model by @stanfordnlp, trained in @PyTorch
Check it out: https://t.co/CzuJZi9TuV
Cc @spacy_io pic.twitter.com/z8OjrzEOAU
Here’s a PyMC3 example using Gelman’s rat tumors dataset: https://t.co/jAIfNqYftK https://t.co/iW3JwHjjeu
— Chris Fonnesbeck (@fonnesbeck) June 8, 2018
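Gelman's rat-tumor example is a hierarchical beta-binomial model. A minimal conjugate sketch of the shrinkage idea without PyMC3, using made-up counts and illustrative hyperparameters (the linked notebook fits the real dataset and infers the hyperparameters):

```python
# Each group's tumor rate gets a Beta(alpha, beta) prior; with binomial
# data the posterior mean shrinks the raw rate toward the prior mean.
tumors = [0, 1, 2, 4, 5]        # tumor counts per group (made up)
rats   = [20, 20, 20, 20, 20]   # rats per group (made up)
alpha, beta = 1.4, 8.6          # illustrative prior pseudo-counts

raw_rates = [y / n for y, n in zip(tumors, rats)]
posterior_means = [(alpha + y) / (alpha + beta + n)
                   for y, n in zip(tumors, rats)]
# Each posterior mean lies between the raw rate and the prior mean
# alpha / (alpha + beta): small groups are pulled toward the pooled estimate.
print(posterior_means)
```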
Datasets
This is a great collection of 1,000+ datasets available through popular #rstats packages. Thanks for putting it together, @VincentAB! https://t.co/z854BWlWGU pic.twitter.com/HaOIBkWEvz
— Mike Freeman (@mf_viz) June 8, 2018
Short Jokes dataset: 231,657 short jokes scraped from various websites: https://t.co/C0kvbwJUkk
— Soumith Chintala (@soumithchintala) June 8, 2018
Are there any consolidated references on the types of humor and what makes something funny?
Miscellaneous
Finally got around to finishing @mchorowitz's thoughtful article, "Artificial Intelligence, International Competition, and the Balance of Power": https://t.co/fYvKq9GZmw
— Miles Brundage (@Miles_Brundage) June 9, 2018
Worth checking out for those interested in such things!
If you are interested in X, jumping straight into X is the best way to learn. Just pick up some of the Y that is needed while doing X, and fake it until you make it. https://t.co/CPf7cAVcES
— hardmaru (@hardmaru) June 9, 2018
My latest on the role of trust in data analysis and how it can affect the likelihood of success. https://t.co/eDVGXRV16a
— Roger D. Peng (@rdpeng) June 8, 2018
Even with a sample size greater than what is typical in the field (n = 100), the replicability of fMRI studies is still undesirably low 😟 https://t.co/LmuYEwUnA4 pic.twitter.com/GKsKHEL0CT
— Indrajeet Patil (@patilindrajeets) June 7, 2018
"Experimental evidence for tipping points in social convention" is out! https://t.co/yOnXoHIRjR
— Andrea Baronchelli (@a_baronca) June 8, 2018
Committed minorities can overturn established social conventions when a critical size is reached.
with @NDGannenberg, @joshua_a_becker, @devonbrackbill. pic.twitter.com/lSCdUskWVj
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” —Edsger W. Dijkstra
— hardmaru (@hardmaru) June 8, 2018