See (sort of) Through Walls
Through-Wall Human Pose Estimation Using Radio Signals https://t.co/9IXsdpgJnM "wireless signals in the WiFi frequencies traverse walls and reflect off the human body. It uses a deep neural network approach that parses such radio signals to estimate 2D poses."
— Andrej Karpathy (@karpathy) June 13, 2018
super interesting technology & I can see many use-cases for that, but I don't think that "more productive lives" is one of them. I.e., if someone agrees to be monitored in a medical context, why does it have to be through a wall vs using a regular camera? https://t.co/dkzkrDPXDI
— Sebastian Raschka (@rasbt) June 13, 2018
Through-Wall Human Pose Estimation Using Radio Signals
— ML Review (@ml_review) June 13, 2018
By MIT (@mingmin_zhao et al.)
WiFi frequencies traverse walls and reflect off the human body.
A deep neural network parses such radio signals to estimate 2D poses https://t.co/8yC9CBgg1g #MachineLearning pic.twitter.com/gBqzvK3yDL
On Open Source Projects
(Tweets from a long thread. Click on them to read the full thread.)
So often in open source we see the finished product rather than the process... and that can be discouraging to people starting out who see all the apparent perfection around them. 1/
— Jake VanderPlas (@jakevdp) June 13, 2018
But I would wager that behind the polished veneer of any successful open source project, there's plenty of pain, agony, and self-doubt. 2/
— Jake VanderPlas (@jakevdp) June 13, 2018
But just six months ago, I'd begun to believe I couldn't do it. I had started down a particular implementation path and was getting stuck daily... I was putting in hours of development and getting nowhere. 4/
— Jake VanderPlas (@jakevdp) June 13, 2018
A Flawed Study?
ConvNet outperforms human dermatologists for melanoma detection.
— Yann LeCun (@ylecun) June 12, 2018
Dermatologists in level-II protocol:
- sensitivity: 88.9% (±9.6%, P = 0.19)
- specificity: 75.7% (±11.7%, P < 0.05).... https://t.co/L3kmXgQufa
I wanted to finish talking to the journal, but the media articles keep coming and even @ylecun is repeating the claims... time for some #openpeerreview!
— Luke Oakden-Rayner (@DrLukeOR) June 13, 2018
This paper is flawed. Their comparison underestimates human performance systematically. See whole thread for explanation: https://t.co/RaHWho2bY9
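For context, the sensitivity and specificity quoted above are simple ratios over a confusion matrix. A minimal sketch (the counts below are invented for illustration and are not from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Made-up counts, chosen only to land near the quoted percentages:
sens, spec = sensitivity_specificity(tp=80, fn=10, tn=75, fp=24)
print(round(sens, 3), round(spec, 3))  # 0.889 0.758
```

The trade-off between the two is exactly what a fair human-vs-model comparison has to hold fixed, which is the crux of the critique in the thread.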
Commercials written by Bots (Fake)
Some very informative discussions. (Click on the tweet to read the full thread.)
Ah yes, the rare "Reverse Turing". (It's interesting that when people pretend to be bots, they tend to maximize for word-level semantic incoherence, rather than something like weird co-reference resolution or disjointed discourse structures, which is what I tend to notice more.) https://t.co/r3E3IIIpEo
— Dr. Rachael Tatman (@rctatman) June 13, 2018
These "I forced a bot to watch X" posts are almost certainly 100% human-written with no bot involved. Here's how you can tell. 1/12 https://t.co/4wVxfraqZS
— Janelle Shane (@JanelleCShane) June 14, 2018
I tried to make a text-GAN and it wasn't pretty. All papers on text-GANs basically conclude with a ¯\_(ツ)_/¯
— Max Woolf (@minimaxir) June 14, 2018
Ethics
If you’re looking for work in AI, machine learning, or any kind of tech, here is a company to AVOID.
— Daniel Lowd (@dlowd) June 14, 2018
If you already work for @ThompsonReuters, now is the time to speak up for ethics and humanity.
Don’t be like pre-WWII IBM, building counting machines for Nazis. https://t.co/yo0AWfvjNf
Last week I wanted to ask "how long will it take for morality and ethicality to be dismissed by using the shield of patriotism":
— Smerity (@Smerity) June 13, 2018
"[T]he goal for our contribution to Project Maven — to save the lives of soldiers and civilians alike— is unequivocally aligned with our mission."
A while back an engineer tweeted in terror that they found their code deployed in a medical application when they had never architected it for that level of reliability.
— Smerity (@Smerity) June 13, 2018
Imagine that situation but finding your code was part of a pipeline with a bullet or missile at the end of it.
Bias
My @QConAI keynote on "Analyzing and Addressing Bias in AI" is now online: https://t.co/dYnGt6OihT
— Rachel Thomas (@math_rachel) June 13, 2018
“If a data set is biased, #AI will likely be biased as well”, @RichardSocher calls for a conscious approach to leveraging artificial intelligence #trailblazer @cebit @SalesforceDE #salesforceTour pic.twitter.com/HCxojaMIXR
— Nina Keim (@nkeim) June 13, 2018
Algorithms can have disproportionate impacts on those already socially disadvantaged, particularly people of color. https://t.co/SFx3L792BJ
— FiveThirtyEight (@FiveThirtyEight) June 14, 2018
Gender
Attempts to force ML models to give equal positive outcomes to e.g. men/women, end up ‘making up the numbers’ with weird effects; e.g. hurting women who are similar to men and men who are similar to women. (paper from @zacharylipton @achould & McAuley https://t.co/HnMeAM4IGo) pic.twitter.com/3PAGKQpx7b
— Reuben Binns (@RDBinns) June 13, 2018
Good morning! Here is a nice scatterplot about gender gaps in education. https://t.co/XgOtlEHfGx pic.twitter.com/Z8XA6UklRv
— Kevin Quealy (@KevinQ) June 13, 2018
this is by far the best summary of research about #diversity vs #meritocracy and unconscious bias and #genderblind policies. Also why: "teaching more girls and women to code is not enough to solve this problem". @math_rachel just nailed it! https://t.co/dtKagibceF
— Anna Kuliberda (@adrebiluka) June 13, 2018
Are men “naturally” better at spatial navigation? Turns out this relationship is highly correlated with the @WHO gender gap index - negligible in countries with low/no gender gap, like Finland 🇫🇮; driven by countries with a high gender gap. More @hugospiers at #BeOnline2018
— Dr Camilla Nord (@camillalnord) June 13, 2018
Model Interpretation
The different goals ppl have in mind when they say "to interpret a model". 1)What can I change to alter outcome?, 2)what part of this data-point influenced this decision (most)?, 3)which historical (training) data-points influenced this decision?, 4)show me typical failure modes
— Mert Sabuncu (@mertrory) June 14, 2018
5) what (about present data-point and/or historical data) caused this failure mode?, 6) what else can I do to avoid (such) errors?, 7) how would the decision change if model had this additional piece of info, 8) how confident is the model in its decision?
— Mert Sabuncu (@mertrory) June 14, 2018
9)where does the model confidence come from (historical data or inductive bias)?, 10)how can I make it less/more confident in these cases, 11)how would the model's decision change if we collected more training data, 12)what happens when nature of data changes (distribution shift)
— Mert Sabuncu (@mertrory) June 14, 2018
I compiled this list in about 10 minutes. So it is probably redundant to some extent and most likely non-exhaustive. Bottom line is, let's avoid clichés such as "sparse/linear/etc. models are more interpretable." Let's make precise statements about our goals.
— Mert Sabuncu (@mertrory) June 14, 2018
Abstract Art with ML
ABSTRACT ART WITH ML. Excited to say that today I'm launching my ML blog. First post is up and more will be coming soon. Would be honoured if you check it out!! https://t.co/50Y2X2Wvd0
— Jan Hünermann (@janhuenermann) June 13, 2018
Autoreject
A library to automatically reject bad trials and repair bad sensors in magneto-/electroencephalography (M/EEG) data.
Harder faster... AutoReject-er 🎶🚀🛸! We proudly present our first official pip release: https://t.co/01SHYYPKzR It's never been easier: from autoreject import AutoReject; epochs_clean = AutoReject(n_jobs=8).fit_transform(epochs) #EEG #MEG #Python #MachineLearning #neuroscience pic.twitter.com/RHNheeVKic
— Denis A. Engemann (@dngman) June 13, 2018
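The one-liner in the tweet follows the scikit-learn fit/transform pattern on MNE epochs. AutoReject itself cross-validates per-sensor rejection thresholds; the sketch below, with a single fixed threshold, made-up data, and a hypothetical `reject_bad_trials` helper, only illustrates the underlying peak-to-peak rejection idea:

```python
import numpy as np

def reject_bad_trials(epochs, threshold):
    """Keep only trials whose peak-to-peak amplitude stays below threshold.

    epochs:    array of shape (n_trials, n_sensors, n_times)
    threshold: scalar peak-to-peak cutoff, in the same units as the data
    """
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # (n_trials, n_sensors)
    keep = (ptp < threshold).all(axis=-1)            # good trial: every sensor passes
    return epochs[keep]

# Three toy "trials"; the second contains a large simulated artifact.
rng = np.random.default_rng(0)
epochs = rng.normal(scale=1e-6, size=(3, 4, 100))
epochs[1, 2, 50] = 1e-3  # sensor jump
clean = reject_bad_trials(epochs, threshold=1e-4)
print(clean.shape[0])  # 2 trials survive
```

The real library additionally repairs (interpolates) bad sensors rather than discarding whole trials whenever possible.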
Notable Research
Playing around with cutout (https://t.co/qI4FtcWJlh), which, mixed with a bit of super-convergence, gets us to training cifar10 to 95% accuracy in just short of 30 minutes on a single GPU. pic.twitter.com/Rb2Sr0cmL0
— Sylvain Gugger (@GuggerSylvain) June 14, 2018
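The tweet doesn't show the training setup, but the cutout operation from the linked paper (DeVries & Taylor, 2017) is simple: zero out a random square patch of the input image. A minimal numpy sketch, with illustrative function name and sizes:

```python
import numpy as np

def cutout(image, size, rng):
    """Return a copy of image with a random square patch zeroed out.

    image: (H, W, C) array
    size:  side length of the square mask
    """
    h, w = image.shape[:2]
    # The mask center can fall anywhere, so the patch may be clipped at the border.
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y0:y1, x0:x1] = 0
    return out

img = np.ones((32, 32, 3))
aug = cutout(img, size=8, rng=np.random.default_rng(42))
print(aug.sum() < img.sum())  # True: some pixels were masked
```

Letting the patch clip at the image border (rather than forcing it fully inside) is what the paper found to work best on CIFAR-10.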
"Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam," Khan et al.: https://t.co/cOOinpU3Fo
— Miles Brundage (@Miles_Brundage) June 14, 2018
The Dockerized @PyTorch implementation of our @acl2018 paper Global-Locally Self-Attentive Dialogue State Tracker (https://t.co/WGMKyZwPPK) is now available on Github (https://t.co/4D0Iry2F5i). #nlproc
— Victor Zhong (@hllo_wrld) June 12, 2018
Tutorials and Resources
Just discovered this excellent and very consumable overview of deep learning, AI and where things may be headed by @GaryMarcus at NYU https://t.co/3Lo4BHxCbi
— Serkan Piantino (@spiantino) June 13, 2018
Slides of my talk at École Normale Supérieure de Paris "Smoothing/Regularization Techniques for Probabilistic and Structured Classification" https://t.co/O4MI6vK3Ke I give the big picture on regularized prediction functions, Fenchel-Young losses, SparseMAP and differentiable DP. pic.twitter.com/4C8Cv3TYNh
— Mathieu Blondel (@mblondel_ml) June 8, 2018
The Scalable Neural Architecture behind #Alexa’s Ability to Select Skills https://t.co/lJLnvvwznS #AI #DeepLearning
— Julien Simon (@julsimon) June 14, 2018
Did you know you can go from raw audio to spectrogram with just six lines of Python? Check it out! https://t.co/VGoQXgycHL pic.twitter.com/FvizMv25Zk
— Dr. Rachael Tatman (@rctatman) June 13, 2018
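The linked kernel presumably uses an audio library such as librosa; the core idea is just short-time Fourier transform magnitudes. A dependency-light numpy sketch, with made-up FFT and hop sizes:

```python
import numpy as np

def spectrogram(audio, n_fft=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    # rfft keeps only non-negative frequencies: n_fft // 2 + 1 bins per frame
    return np.abs(np.fft.rfft(frames, axis=-1)).T

# One second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (129, 61): frequency bins x time frames
```

In practice you would plot `20 * np.log10(spec + eps)` to get the familiar dB-scaled picture.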
Some ongoing work with @Calclavia on generating NES music using deep learning 🤖🎮🔊. In this example, we train a neural network on machine code written for the NES audio chip and then use it to generate new examples. ⭐️https://t.co/M0Bs3355xA 📜https://t.co/Rm0IWSSnkJ pic.twitter.com/H1knp15Nh6
— Chris Donahue (@chrisdonahuey) June 13, 2018
Blog post detailing the approach that obtained 5th place in OpenAI Retro Contest. @flyyufelix trained an agent to play previously unseen custom levels of Sonic the Hedgehog using transfer learning. https://t.co/ZlppLXg0TR https://t.co/VcDQDaa8CS
— hardmaru (@hardmaru) June 13, 2018
Someone quickly forked this and added walls to the environment to see who can evolve to escape the crowded room first. Watching them evolve makes me cringe ... https://t.co/ks28oCPfL6 pic.twitter.com/GGyRIwxBQ7
— hardmaru (@hardmaru) June 14, 2018
rstats
🌟 Since @alexpghayes is hitting broom this summer...
— Mara Averick (@dataandme) June 13, 2018
Why use broom 📦 for tidying model objects?
📽s “Slides from [@drob's] talk on the broom package” https://t.co/3zgBvupStA #rstats #broom pic.twitter.com/2zvpklLA7m
ICYMI…
— Mara Averick (@dataandme) June 13, 2018
🔎 https://t.co/CKyAGLSKjq - rstats search engine https://t.co/qL5CunruhB #rstats #SoDS18
/* 🙈 I didn't know before, but… */ pic.twitter.com/69kwNujqkM
#rstats users can now use rtweet to manage Twitter lists! Code I used to create this list: https://t.co/BUajioSey5 included below. pic.twitter.com/Limq0WsQHG
— Mike Kearney📊 (@kearneymw) June 13, 2018
New post! How to create LEGO mosaics from images using #rstats & #tidyverse. https://t.co/nKmWWw8X5A pic.twitter.com/LS7W23Tti3
— Ryan Timpe 🦖📉 (@ryantimpe) April 23, 2018
Hooray, #rstats pkg naniar 0.3.1 "Strawberry's Adventure" is out on CRAN!
— Nicholas Tierney (@nj_tierney) June 12, 2018
naniar is a tidyverse friendly approach to missing data that makes it easier to wrangle, clean, explore, visualise + impute missing values.
Read more about what's new in 0.3.1 here: https://t.co/cZI2Pm2iM0
Charts à la @EdwardTufte in base, lattice & ggplot2 w/ code:
— Mara Averick (@dataandme) June 14, 2018
"Tufte in R" by @lukaszpiwek https://t.co/wfcSNAPsL2 #rstats #dataviz #infovis pic.twitter.com/5eyKpYapup
Coffee with a Googler (colah)
In this episode of #CoffeeWithAGoogler, @lmoroney sits down with @ch402 from the Google Brain team to chat about @distillpub, a platform for interactive research and peer review from the machine learning community.
— Google Developers (@googledevs) June 13, 2018
Watch here → https://t.co/e10VmBzeDL pic.twitter.com/SKU1y6tMcO
Miscellaneous
I have the same exact conclusion. I saw again and again (at least at Airbnb), massive upfront investment in data engineering enables development of new ML applications down the line. Sometimes you only appreciate this after the fact. https://t.co/OWdtKXnh5X
— Robert Chang (@_rchang) June 13, 2018
What will Justin Timberlake look like in 20 years?
— News from Science (@NewsfromScience) June 13, 2018
Artificial intelligence is making artificial aging more accurate than ever: https://t.co/xXKbZqkX4V
Dog: Data scientist
— Randy Olson (@randal_olson) June 13, 2018
Food: Solution to #DataScience problem
Closed door: Exciting new #MachineLearning algorithm
Open door: New data source pic.twitter.com/aemSIlb0uO
.@Kubota_Yoko on the latest tool in China's surveillance arsenal: RFID chips installed in cars to help authorities track movements. Adding them will be mandatory by next year for new cars https://t.co/2w9dVlZtPc
— Te-Ping Chen (@tepingchen) June 13, 2018