Visualization
Funding probabilities on Shark Tank, grouped by gender. #dataviz https://t.co/GYwvyYEE5I pic.twitter.com/yYkv5dHiaU
— Randy Olson (@randal_olson) July 7, 2018
Estimated Steam game player counts (Source: Steam Spy) pic.twitter.com/4Oztsw1uhZ
— Mike Bostock (@mbostock) July 7, 2018
What does it mean to say "there's a real possibility that football's coming home"? Lovely visualisation: https://t.co/WYYuTMx3mj
— Tim Harford (@TimHarford) July 7, 2018
Research
SwitchNorm
The quest for optimal normalization in neural nets continues. SwitchNorm: add BatchNorm + InstanceNorm + LayerNorm with a learnable blend at each layer https://t.co/9hOQXnkk8T fun plots; + code https://t.co/34r96BStCS
— Andrej Karpathy (@karpathy) July 7, 2018
Interesting! Adding that to the reading list. Is there a particular reason they didn't include the recent GroupNorm by Yuxin Wu & Kaiming He? https://t.co/UMSiG3kxXU
— Sebastian Raschka (@rasbt) July 8, 2018
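The learnable blend the tweet describes can be sketched in a few lines of numpy. This is a hand-rolled illustration, not the paper's code: `switch_norm`, `w_mean`, and `w_var` are hypothetical names, and the blend weights are passed in fixed rather than learned by backprop.

```python
import numpy as np

def switch_norm(x, w_mean, w_var, eps=1e-5):
    """Blend BatchNorm, InstanceNorm, and LayerNorm statistics for an
    NCHW tensor using softmax-normalized blend weights."""
    # InstanceNorm stats: per sample, per channel (over H, W)
    mu_in = x.mean(axis=(2, 3), keepdims=True)
    var_in = x.var(axis=(2, 3), keepdims=True)
    # LayerNorm stats: per sample (over C, H, W)
    mu_ln = x.mean(axis=(1, 2, 3), keepdims=True)
    var_ln = x.var(axis=(1, 2, 3), keepdims=True)
    # BatchNorm stats: per channel (over N, H, W)
    mu_bn = x.mean(axis=(0, 2, 3), keepdims=True)
    var_bn = x.var(axis=(0, 2, 3), keepdims=True)

    def softmax(w):
        e = np.exp(w - np.max(w))
        return e / e.sum()

    wm, wv = softmax(w_mean), softmax(w_var)
    mu = wm[0] * mu_bn + wm[1] * mu_in + wm[2] * mu_ln
    var = wv[0] * var_bn + wv[1] * var_in + wv[2] * var_ln
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))
# Zero blend weights give an equal 1/3 mix of the three normalizers
y = switch_norm(x, w_mean=np.zeros(3), w_var=np.zeros(3))
```

In the paper the blend weights are trained per layer, so each layer can drift toward whichever normalizer suits it.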
Train CNN with Megapixel Images
Will present my #midl2018 poster next session. Learn how to train a normal CNN with 8192x8192 input sizes and a single label on one GPU! (from 235 GB to 7 GB memory required)
— Hans Pinckaers (@hanspinckaers) July 6, 2018
Code: https://t.co/KzDPFnbArl pic.twitter.com/Vcfco5fBPk
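The memory reduction comes from streaming the image through the network tile by tile instead of all at once. A minimal numpy sketch of the tiled-forward idea for a single convolution; the real method also reconstructs gradients tile-wise in the backward pass, and `conv2d_valid` / `tiled_conv` are illustrative names, not the author's API.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid 2-D correlation, for illustration only."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def tiled_conv(x, k, tile=64):
    """Compute the same output tile by tile: only one overlapping
    (tile + kernel - 1)^2 patch is resident at a time, which is what
    bounds peak memory for huge inputs."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(0, oh, tile):
        for j in range(0, ow, tile):
            h, w = min(tile, oh - i), min(tile, ow - j)
            patch = x[i:i + h + kh - 1, j:j + w + kw - 1]
            out[i:i + h, j:j + w] = conv2d_valid(patch, k)
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(100, 100))   # stand-in for a gigantic image
kern = rng.normal(size=(3, 3))
full = conv2d_valid(img, kern)
tiled = tiled_conv(img, kern, tile=32)  # same result, small working set
```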
How to Backdoor Federated Learning
How to Backdoor Federated Learning, Bagdasaryan et al. — attacks federated learning scenarios where many users contribute to a single shared model: https://t.co/7liNzrenBa pic.twitter.com/c8f7xEds04
— Brendan Dolan-Gavitt (@moyix) July 7, 2018
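The paper's model-replacement attack exploits the averaging step: a single attacker scales its update by the number of clients so it dominates the round. A toy numpy simulation of that scaling (client counts, weight shapes, and the plain unweighted average are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients = 10
global_w = np.zeros(5)          # current shared model
backdoor_w = np.full(5, 3.0)    # model the attacker wants to install

# Nine honest clients submit small, benign updates
honest = [global_w + rng.normal(scale=0.01, size=5)
          for _ in range(n_clients - 1)]

# Model replacement: scale the malicious update by the number of
# clients so averaging cancels the scaling and the round ends at
# (approximately) the attacker's model.
malicious = global_w + n_clients * (backdoor_w - global_w)

new_global = np.mean(honest + [malicious], axis=0)
# new_global is ~ backdoor_w, since the honest updates are near zero
```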
Tutorials
Qualitative Data Science: Using RQDA to analyse interviews https://t.co/2hMIcscAMD #rstats #DataScience
— R-bloggers (@Rbloggers) July 7, 2018
The official repository for the Deep Reinforcement Learning Nanodegree program at @udacity is now public! Check it out to see many implementations in @PyTorch, including DQN, DDPG, and much more! https://t.co/umOUxLdwTj pic.twitter.com/Be5UAILQ6l
— Alexis Cook (@alexis_b_cook) July 6, 2018
How many random seeds are needed to compare #DeepRL algorithms?
— Pierre-Yves Oudeyer (@pyoudeyer) July 6, 2018
Our new tutorial to address this key issue of #reproducibility in #reinforcementlearning
PDF: https://t.co/7eHOzhtLuC
Code: https://t.co/0CRRM8RYYr
Blog: https://t.co/rYWM5zPYZB #machinelearning #neuralnetworks
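The tutorial's core recommendation is to compare per-seed performance with a proper statistical test such as Welch's t-test rather than eyeballing two curves. A minimal sketch of that comparison; the Gaussian "final returns" and seed counts here are made up for illustration.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples, e.g. final returns of two algorithms over random seeds."""
    va, vb = np.var(a, ddof=1) / len(a), np.var(b, ddof=1) / len(b)
    t = (np.mean(a) - np.mean(b)) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

rng = np.random.default_rng(0)
algo_a = rng.normal(100.0, 15.0, size=20)   # final return, 20 seeds each
algo_b = rng.normal(90.0, 15.0, size=20)
t, df = welch_t(algo_a, algo_b)
# Compare t against the t-distribution with df degrees of freedom to
# decide whether the gap exceeds seed-to-seed noise.
```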
ICYMI, great material — code, slides, & more!
— Mara Averick (@dataandme) July 7, 2018
"Code for Workshop: Intro to Machine Learning w/ R" by @ShirinGlander https://t.co/An6MvGx4TH #rstats #MachineLearning pic.twitter.com/tpigxcH4vT
Tools
Pandas on Ray
"Pandas on Ray — Early Lessons from Parallelizing Pandas" - almost forgot about this neat project! https://t.co/3CKXWlPHo7
— Sebastian Raschka (@rasbt) July 8, 2018
Horovod
Horovod — distributed training framework for TensorFlow, Keras, and PyTorch
— ML Review (@ml_review) July 8, 2018
By @UberEng
Requires far fewer code changes than Distributed TensorFlow https://t.co/ljrWzTJZ4b #MachineLearning pic.twitter.com/gbLTRUVtWc
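Under the hood, Horovod synchronizes workers with an allreduce that averages gradients every step (in Horovod itself you call `hvd.init()` and wrap your optimizer in `hvd.DistributedOptimizer`). A numpy sketch of just the averaging semantics, not the ring-allreduce implementation; `allreduce_mean` and the toy shards are illustrative.

```python
import numpy as np

def allreduce_mean(worker_grads):
    """Semantics of the gradient allreduce: afterwards every worker
    holds the mean of all workers' gradients."""
    avg = np.mean(worker_grads, axis=0)
    return [avg.copy() for _ in worker_grads]

rng = np.random.default_rng(0)
w = np.zeros(3)                                       # replicated weights
shard_grads = [rng.normal(size=3) for _ in range(4)]  # one grad per worker
synced = allreduce_mean(shard_grads)

# Every worker applies the identical averaged gradient, so the model
# replicas stay in lockstep across steps.
lr = 0.1
w_new = w - lr * synced[0]
```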
This tutorial will show you how to convert a neural style transfer model exported from @PyTorch into the #CoreML format using ONNX. #AI #MachineLearning #Developers https://t.co/AXVuTolAbP
— ONNX (@onnxai) July 7, 2018
Miscellaneous
How did I only just find out about this package??? https://t.co/aXJe2zLRC3. You have all let me down!
— Hadley Wickham (@hadleywickham) July 6, 2018
probably shouldn't share it publicly, but I have too many projects to work on at the moment anyway: had a great idea to improve ELMs further, i.e., drop the hidden layer(s) and run n ELMs with n different random seeds to construct a majority-vote ensemble. Someone should try this!
— Sebastian Raschka (@rasbt) July 8, 2018
A useful paradox for data scientists to keep in mind:
— David Robinson (@drob) July 5, 2018
Most cities are small, but most people live in large cities
This relates to analyses of e.g. user engagement: most of your users probably don't do much, but most of your engagement is from frequent users
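The paradox is easy to reproduce with a heavy-tailed toy model of per-user activity; the lognormal parameters here are arbitrary, chosen only to make the skew visible.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical heavy-tailed activity: events per user
events = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

# Most users sit below the average activity level...
share_users_below_mean = (events < events.mean()).mean()
# ...yet the top 10% of users generate the majority of all events.
top10 = np.sort(events)[-10_000:]
share_events_from_top10 = top10.sum() / events.sum()
```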
An astonishing paper that may explain why it's so difficult to patch.
— Gene Kim (@RealGeneKim) July 5, 2018
They monitored 400 libraries. In 116 days, they saw 282 breaking changes!
Each day, there's a 6.1% chance of a breaking chg, for each lib you use! @topopal @mtnygard @mik_kersten @ctxt https://t.co/qdRswgAwm2 pic.twitter.com/sns2IOoK0L
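One back-of-envelope reading of the tweet's raw numbers, assuming breaking changes are independent across days and libraries (the tweet's own 6.1% figure may define the rate differently):

```python
# Raw numbers from the tweet: 282 breaking changes observed across
# 400 monitored libraries in 116 days.
changes, libs, days = 282, 400, 116
p_daily = changes / (libs * days)   # per-library, per-day rate (~0.6%)

# Chance a project with 20 dependencies sees at least one breaking
# change in a 30-day month, under the independence assumption:
deps, month = 20, 30
p_month = 1 - (1 - p_daily) ** (deps * month)
```

Even a sub-1% daily rate per library compounds quickly once a project depends on dozens of them.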