Today we have a few very creative LeBron James memes (you're welcome to submit more!), and an interesting discussion thread about symbolic and deep learning systems.

Microsoft Buys GitHub

Jason Fried predicted it years ago (2014):

Prediction: If Github ends up selling itself one day, Microsoft will be the buyer.

— Jason Fried (@jasonfried) February 5, 2014

If Microsoft actually buys GitHub. pic.twitter.com/GcicxrJVYW

— Daryl Ginn (@darylginn) June 2, 2018

A spokesman for Microsoft said "It was cheaper than paying for all our private repositories." https://t.co/hA0R52si0z

— Mark Rendle ❄ (@markrendle) June 3, 2018

Endless great jokes aside, I actually think Microsoft would be a great owner of Github. More and more the company is embracing the developer, open source communities rather than fighting them.

— Chris Albon (@chrisalbon) June 3, 2018

[Ethics] Facebook Again

NEW: After Cambridge Analytica, Facebook said it had long ago closed off the kind of broad data-sharing Cambridge used to get users' personal information.
But Apple, Samsung, & dozens of other manufacturers kept their access.
Our exclusive @nytimes story: https://t.co/fJbS3o01dS

— Nick Confessore (@nickconfessore) June 4, 2018

When an ad is targeted to you based on multiple attributes, Facebook's ad explanation shows the attributes that match the most users. That means you get the least useful explanation possible. Paper: https://t.co/EfsUsz7iS1 https://t.co/81lZbI1Dkn

— Arvind Narayanan (@random_walker) June 3, 2018

truly-ism (cont.)

Continuing the topic from yesterday.

I like this post - there's some truth in it - w caveat that we've failed in any of our big wins to go far beyond associative learning. Even in RL, which deals w effects of actions on rewards, estimated reward function depends heavily on current policy & its state dist

— Zachary Lipton (@zacharylipton) June 3, 2018

My hypothesis is that the function of consciousness is self-monitoring and forward projection for safety and robustness. We are creating AI systems with these functions out of engineering necessity. We will look back and say "it was easy".

— Thomas G. Dietterich (@tdietterich) June 3, 2018

Visualization

The evolution of U.S. household sizes, 1960-2017. #dataviz https://t.co/u4QVonQj4Q pic.twitter.com/69b7CKnpYU

— Randy Olson (@randal_olson) June 3, 2018

Tutorials and Resources

👍 read esp. for the "neat tricks"
"To purrr or not to purrr" via 😸 @MangoTheCat https://t.co/da1lR3a5it #rstats #purrr #tidyverse pic.twitter.com/zUgEU2IWxS

— Mara Averick (@dataandme) June 4, 2018
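
If you haven't tried purrr yet, here's a minimal sketch of the core idiom the post weighs in on (the examples are mine, not from the post):

```r
library(purrr)

# map() applies a function to each element and returns a list
squares <- map(1:5, ~ .x^2)

# Typed variants return atomic vectors instead of lists
squares_dbl <- map_dbl(1:5, ~ .x^2)

# map2() iterates over two inputs in parallel
sums <- map2_dbl(1:3, 4:6, `+`)  # 5 7 9
```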

Super excited to see that @jurafsky is posting chapters of the new edition of his NLP textbook online as he completes them- lots of new stuff on neural nets/embeddings-- highly recommended. https://t.co/VfbhaWb9pH

— Chris Bail (@chris_bail) June 3, 2018

🗺️ Just released v0.3.0 of usmap for #rstats

Updated the shape files to the 2017 versions and added a few convenient features, like being able to specify the data for plotting using state names/abbreviations instead of just FIPS codes. https://t.co/pbym7nBizT pic.twitter.com/v373sC7ehN

— Paolo Di Lorenzo (@dilorenzopl) June 4, 2018
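
A quick sketch of what the new interface looks like; the data frame here is made up for illustration:

```r
library(usmap)
library(ggplot2)

# Hypothetical data keyed by state abbreviation rather than FIPS code
df <- data.frame(
  state = c("CA", "TX", "NY"),
  value = c(10, 20, 30)
)

plot_usmap(data = df, values = "value") +
  scale_fill_continuous(na.value = "grey90") +
  theme(legend.position = "right")
```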

rqdatatable: rquery Powered by data.table https://t.co/lNlHRGXEpC #rstats #DataScience

— R-bloggers (@Rbloggers) June 3, 2018

New R package xplain: Providing interactive interpretations and explanations of statistical https://t.co/GxNqMFfboS #rstats #DataScience

— R-bloggers (@Rbloggers) June 4, 2018

Notable Research

"Learning a Latent Space of Multitrack Measures," Simon et al.: https://t.co/tlN2UPJdr4

Samples: https://t.co/MwT4mMRfEp

The interpolation from Billie Jean to Don't Stop Believin' is quite something, makes me think a competent/"creative" AI DJ/remix isn't long off...

— Miles Brundage (@Miles_Brundage) June 4, 2018
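
For intuition, latent-space interpolation amounts to decoding points along a line between two encodings. A generic sketch (not the paper's code; the latent size is a made-up stand-in):

```r
# z1 and z2 stand in for the latent codes of two encoded measures;
# the model's decoder would turn each intermediate point back into music.
interpolate <- function(z1, z2, steps = 8) {
  alphas <- seq(0, 1, length.out = steps)
  lapply(alphas, function(a) (1 - a) * z1 + a * z2)
}

z_a <- rnorm(512)  # hypothetical latent dimensionality
z_b <- rnorm(512)
path <- interpolate(z_a, z_b)
```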

Symbolic Cognitive Systems

When the software world wants to build stuff that works, it still mostly uses stuff that innately incorporates symbol-manipulation. Can someone please show me an operating system, a word processor, a spreadsheet or scaleable DB that is built with neural networks and no symbols? https://t.co/SJ3k224mEs

— Gary Marcus (@GaryMarcus) June 3, 2018

Self-driving cars a great example, because in this case there are two competing approaches -- the symbolic one, mostly consisting of handcrafted software encoding human abstractions, and the deep learning one, learned end-to-end. One will get to L4--even L5, the other never will. https://t.co/ku1NwwWi3u

— François Chollet (@fchollet) June 4, 2018

It's not that deep learning is intrinsically incapable of driving -- it's that the situation space is extremely high-dimensional (due to edge cases), and that a deep learning system requires to be trained on a *dense sampling* of the same space that the system will operate in.

— François Chollet (@fchollet) June 4, 2018

Because such a representative, dense sampling is impossible to obtain, even when heavily leveraging simulated environments, the symbolic approach will prevail (specifically, an approach that is mostly symbolic but blends human abstractions with learned perceptual primitives)

— François Chollet (@fchollet) June 4, 2018
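
A back-of-the-envelope way to see why dense sampling fails here: at a fixed resolution of k cells per axis, covering a d-dimensional space takes k^d samples, which blows up long before you reach the dimensionality of real driving scenarios.

```r
# Samples needed to cover [0,1]^d at k cells per axis grow as k^d
coverage_cost <- function(d, k = 10) k^d

sapply(c(1, 2, 5, 10, 20), coverage_cost)
# [1] 1e+01 1e+02 1e+05 1e+10 1e+20
```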

almost always agree w @fchollet but don’t see why you think either driving approach here will succeed on its own, nor why symbolic systems necessarily need to be mostly handcrafted. please explain... https://t.co/nHVZk0sBh3

— Gary Marcus (@GaryMarcus) June 4, 2018

1) I say at the end of the thread that the successful approach (currently and for the foreseeable future) consists of systems that blend symbolic world models with deep learning perception modules. The core of such systems is symbolic though, the ML is peripheral.

— François Chollet (@fchollet) June 4, 2018

2) Symbolic cognitive systems (and most software in general) doesn't *have* to be handcrafted. In the future most software will be generated. When our ML algorithms start getting good at abstraction. For now, our models just aren't conducive to abstraction.

— François Chollet (@fchollet) June 4, 2018
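
To make the hybrid pattern concrete, a toy sketch (every name here is invented, not anyone's actual stack): a learned perception module emits symbols, and a handcrafted symbolic policy acts on them.

```r
perceive <- function(camera_frame) {
  # stand-in for a neural net's output: perceptual symbols
  list(object = "pedestrian", distance_m = 12)
}

decide <- function(percept) {
  # handcrafted symbolic policy over those symbols
  if (percept$object == "pedestrian" && percept$distance_m < 20) "brake"
  else "proceed"
}

decide(perceive(NULL))  # "brake"
```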

Miscellaneous

I'd love to see more research into how machine learning can shrink codebases. Google has a couple *billion* lines of code. Can systems be systematically replaced with learned equivalents?

— Bharath Ramsundar (@rbhar90) June 3, 2018

BTW if you ever need copies of @RStudio hex stickers you can find them all at https://t.co/VGNBCo6kq1 #rstats

— Hadley Wickham (@hadleywickham) June 3, 2018

LeBron Memes

pic.twitter.com/teeIk4oh5r

— Miles Brundage (@Miles_Brundage) June 3, 2018

pic.twitter.com/F5NiSp6RQ3

— Miles Brundage (@Miles_Brundage) June 3, 2018

had to be done pic.twitter.com/RXM2QUvRsz

— Seva (@SevaUT) June 3, 2018

Presenting a new idea for the first time vs presenting the published paper 2 years, 9 drafts, and 4 rounds of peer review later pic.twitter.com/i3ja935zoM

— Brooke Watson (@brookLYNevery1) June 2, 2018

@ceshine_en

Inspired by @WTFJHT