Intro – Training a neural network to play a game with TensorFlow and OpenAI

This tutorial mini-series is focused on training a neural network to play the OpenAI Gym environment called CartPole.

The idea of CartPole is that a pole stands upright on top of a cart. The goal is to keep the pole balanced by moving the cart from side to side.
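To make the task concrete, here is a small self-contained sketch of the cart-pole dynamics and a random policy, loosely following the classic formulation that the Gym environment is based on. The physical constants, the starting state, and the 12-degree failure threshold are assumptions for illustration; the real tutorial uses the Gym environment directly rather than hand-rolled physics.

```python
import math
import random

# Assumed constants, roughly matching the classic cart-pole task.
GRAVITY = 9.8
CART_MASS = 1.0
POLE_MASS = 0.1
POLE_HALF_LENGTH = 0.5
FORCE_MAG = 10.0
DT = 0.02  # seconds per simulation step

def step(state, action):
    """Advance one timestep with Euler integration. action: 0 = push left, 1 = push right."""
    x, x_dot, theta, theta_dot = state
    force = FORCE_MAG if action == 1 else -FORCE_MAG
    total_mass = CART_MASS + POLE_MASS
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    temp = (force + POLE_MASS * POLE_HALF_LENGTH * theta_dot ** 2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_HALF_LENGTH * (4.0 / 3.0 - POLE_MASS * cos_t ** 2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_HALF_LENGTH * theta_acc * cos_t / total_mass
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def run_episode(policy, max_steps=200):
    """Count how many steps the pole stays within ~12 degrees of vertical."""
    state = (0.0, 0.0, 0.01, 0.0)  # start almost upright
    for t in range(max_steps):
        state = step(state, policy(state))
        if abs(state[2]) > 12 * math.pi / 180 or abs(state[0]) > 2.4:
            return t + 1  # pole fell or cart ran off the track
    return max_steps

random.seed(0)
random_score = run_episode(lambda s: random.randint(0, 1))
print("random policy survived", random_score, "steps")
```

A random policy drops the pole quickly; the point of the series is to train a network that does much better.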

Sample code: …


How to Prevent an AI Apocalypse

I traveled to Amsterdam for a week to speak at The Next Web Conference on AI safety. While roaming the streets of the city, I decided to shoot some footage and put together a video on the same topic for you guys. In the battle of good vs. evil, it’s up to our community to ensure good wins. I’ll resume the coding videos next week when I get back to San Francisco.

Please Subscribe! And like. And comment. That’s what keeps me going.

I’ll post a link to the talk once it’s up; here’s an article in the meantime: …

More Learning resources: …

Join us in the Wizards Slack channel:

And please support me on Patreon:



How to Make a Language Translator – Intro to Deep Learning

Let’s build our own language translator using TensorFlow! We’ll go over several translation methods and talk about how Google Translate achieves state-of-the-art performance.
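The core idea behind neural translation is the encoder-decoder architecture: an encoder network reads the source sentence into a fixed-size state, and a decoder network emits target tokens from that state. Here is a minimal, untrained numpy sketch of that forward pass. The vocabulary sizes, hidden size, and random weights are toy assumptions; a real system trains these (typically with LSTMs and attention) on millions of sentence pairs in TensorFlow.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_src, vocab_tgt, hidden = 10, 12, 8  # toy sizes (assumptions)

# Random, untrained parameters for a vanilla RNN encoder and decoder.
E_src = rng.normal(size=(vocab_src, hidden))       # source embeddings
E_tgt = rng.normal(size=(vocab_tgt, hidden))       # target embeddings
W_enc = rng.normal(size=(hidden, hidden)) * 0.1    # encoder recurrence
W_dec = rng.normal(size=(hidden, hidden)) * 0.1    # decoder recurrence
W_out = rng.normal(size=(hidden, vocab_tgt)) * 0.1 # output projection

def encode(src_ids):
    """Compress the whole source sentence into one fixed-size state vector."""
    h = np.zeros(hidden)
    for i in src_ids:
        h = np.tanh(E_src[i] + W_enc @ h)
    return h

def decode(h, max_len=5, bos=0):
    """Greedily emit target token ids, conditioned on the encoder state."""
    out, tok = [], bos
    for _ in range(max_len):
        h = np.tanh(E_tgt[tok] + W_dec @ h)
        tok = int(np.argmax(h @ W_out))  # greedy decoding: take the best token
        out.append(tok)
    return out

translation = decode(encode([1, 4, 2]))
print(translation)  # arbitrary tokens, since nothing is trained
```

With training, the same loop structure is what maps a sentence in one language to a sentence in another.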

Code for this video: …

Ryan’s Winning Code: …

Sarah’s Runner-up Code:

More Learning Resources: …

Please Subscribe! And like. And comment. That’s what keeps me going.

Join us in the Wizards Slack channel:

And please support me on Patreon:


Free Machine Learning eBooks – March 2017

Here are three eBooks available for free.


Edited by Abdelhamid Mellouk and Abdennacer Chebira

Machine Learning can be defined in various ways, but it is broadly a scientific domain concerned with the design and development of theoretical and practical tools for building systems that exhibit some human-like intelligent behaviour.

More specifically, Machine Learning addresses the ability of such systems to improve automatically through experience.


by Shai Ben-David and Shai Shalev-Shwartz

Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way.

The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds.

Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering.


by D. Kriesel

The purpose of this book is to help you master the core concepts of neural networks, including modern techniques for deep learning.

After working through the book you will have written code that uses neural networks and deep learning to solve complex pattern recognition problems.

And you will have a foundation to use neural networks and deep learning to attack problems of your own devising.

To check those books and receive announcements when new free eBooks are published, click here.

Top DSC Resources

Follow us on Twitter: @DataScienceCtrl | @AnalyticBridge

Original post here

Posted by Emmanuelle Rieuf on March 20, 2017 at 4:00pm


How to Make a Text Summarizer – Intro to Deep Learning #10

I’ll show you how to turn an article into a one-sentence summary in Python with the Keras machine learning library. We’ll go over word embeddings, the encoder-decoder architecture, and the role of attention in sequence learning.
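Attention is the piece that lets the decoder look back at every position of the article instead of relying on a single compressed state. Here is a small numpy sketch of (scaled) dot-product attention; the one-hot keys, the toy sizes, and the hand-built query are assumptions chosen to make the behaviour obvious, whereas in the Keras model all of these would be learned projections.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    scores = keys @ query / np.sqrt(query.shape[0])  # similarity per position
    weights = softmax(scores)                        # weights sum to 1
    return weights @ values, weights

rng = np.random.default_rng(1)
seq_len, dim = 4, 6
keys = np.eye(seq_len, dim)          # one-hot keys, for clarity
values = rng.normal(size=(seq_len, dim))
query = 5.0 * keys[2]                # a query built to match position 2

context, weights = attention(query, keys, values)
print(np.round(weights, 3))          # most of the weight lands on position 2
```

The `context` vector is what the decoder consumes at each step: a blend of the input positions, weighted by relevance to what it is currently generating.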

Code for this video (Challenge included): …

Jie’s Winning Code: …

More Learning resources: …

Please subscribe! And like. And comment. That’s what keeps me going.

Join us in the Wizards Slack channel:

And please support me on Patreon:


What is an Autoencoder? | Two Minute Papers

Autoencoders are neural networks capable of creating sparse representations of the input data, and can therefore be used for image compression. Denoising autoencoders, after learning these representations, can be presented with noisy images and recover clean ones. Even better is a variant called the variational autoencoder, which not only learns these representations but can also generate new images. We can, for instance, ask it to create new handwritten digits, and we can actually expect the results to make sense!
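The compression idea can be shown in a few lines of numpy: squeeze 8-dimensional inputs through a 3-dimensional bottleneck and train the network to reconstruct them. The sizes, the purely linear layers, and the learning rate are toy assumptions; real (and variational) autoencoders are deeper, nonlinear, and trained on images.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))           # toy "images" of 8 pixels each
W_enc = rng.normal(size=(8, 3)) * 0.1   # encoder weights (8 -> 3 code)
W_dec = rng.normal(size=(3, 8)) * 0.1   # decoder weights (3 -> 8 output)

def reconstruction_loss():
    code = X @ W_enc                    # compressed representation
    recon = code @ W_dec                # attempted reconstruction
    return ((recon - X) ** 2).mean()

lr = 0.1
before = reconstruction_loss()
for _ in range(1000):                   # plain gradient descent
    code = X @ W_enc
    grad_recon = 2 * (code @ W_dec - X) / X.size
    grad_dec = code.T @ grad_recon
    grad_enc = X.T @ (grad_recon @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
after = reconstruction_loss()
print(f"loss before: {before:.3f}, after: {after:.3f}")  # the error drops
```

Because the bottleneck is narrower than the input, the network is forced to learn a compact code, which is exactly what makes autoencoders useful for compression.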


The paper “Auto-Encoding Variational Bayes” is available here:

Recommended for you:
Recurrent Neural Network Writes Sentences About Images – …

Andrej Karpathy’s convolutional neural network that you can train in your browser: …

Sentdex’s Youtube channel is available here:

Francois Chollet’s blog post on autoencoders: …

More reading on autoencoders: …

We would like to thank our generous supporters: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang.

We also thank Experiment for sponsoring our series. –

Subscribe if you would like to see more of these! – …

Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (…)

Thumbnail background image source (we have edited the colors and made further adjustments): …
Splash screen/thumbnail design: Felícia Fehér –

Károly Zsolnai-Fehér’s links:
Facebook →…
Twitter →
Web →