How to Prevent an AI Apocalypse

I traveled to Amsterdam for a week to speak at The Next Web Conference on AI safety. While roaming the streets of the city, I decided to film some footage and put together a video on the same topic for you guys. In the battle of good vs. evil, it’s up to our community to ensure good wins. I’ll resume the coding videos next week when I get back to San Francisco.


Please Subscribe! And like. And comment. That’s what keeps me going.

I’ll post a link to the talk once it’s up; here’s an article in the meantime:
https://thenextweb.com/artificial-int…

More Learning resources:
https://futureoflife.org/ai-safety-re…
https://iamtrask.github.io/2017/03/17…
https://blog.openai.com/concrete-ai-s…
https://intelligence.org/why-ai-safety/
https://80000hours.org/career-reviews…
https://foundational-research.org/fil…

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

And please support me on Patreon: https://www.patreon.com/user?u=3191693


How to Make an Evolutionary Tetris AI

Let’s use an evolutionary algorithm to improve a Tetris AI! We’ll be coding this in JavaScript (gasp) because I want to try something different. Through the process of selection, crossover, and mutation, our AI will eventually be able to reach a high score of 500 in record time.
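The selection/crossover/mutation loop can be sketched independently of Tetris. Below is a minimal genetic algorithm in Python (the video’s version is in JavaScript): each genome is a vector of heuristic weights, and the fitness function here is a stand-in for "average Tetris score with these weights" — the target vector and all parameter values are illustrative, not the video’s.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def fitness(genome):
    # Stand-in for "average Tetris score with these weights":
    # rewards genomes close to a hypothetical optimum (made-up values).
    target = [0.5, -0.7, 0.3, -0.2]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent at random.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.1, scale=0.2):
    # Each gene has a small chance of being nudged by Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=30, genes=4, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Breed children from random parent pairs, then mutate them.
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Because the fitter half survives unchanged each generation, the best fitness never decreases; the population steadily drifts toward the optimum.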


Code for this video:
https://github.com/llSourcell/How_to_…

Please Subscribe! And like. And comment. That’s what keeps me going.

More Learning resources:
https://www.youtube.com/watch?v=L–Ix…
https://luckytoilet.wordpress.com/201…
https://codemyroad.wordpress.com/2013…
http://www.cs.uml.edu/ecg/uploads/AIf…
http://cs229.stanford.edu/proj2015/23…

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

And please support me on Patreon:
https://www.patreon.com/user?u=3191693


How to Generate Images – Intro to Deep Learning

We’re going to build a variational autoencoder capable of generating novel images after being trained on a collection of images. We’ll use handwritten digit images as training data, then generate new digits and plot the learned embeddings. I also introduce Bayesian theory for the first time in this series 🙂


Code for this video:
https://github.com/llSourcell/how_to_…

Mike’s Winning Code:
https://github.com/xkortex/how_to_win…

SG’s Runner up Code:
https://github.com/esha-sg/Intro-Deep…

Please subscribe! And like. And comment. That’s what keeps me going.

Two things:
-The embedding visualization at the end would be more spread out if I had trained for more epochs (50 is recommended), but I used just 5.
-The code in the video doesn’t fully implement the reparameterization trick (to save space), but check the GitHub repo for details on that.
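Since the reparameterization trick is only sketched in the video, here is its core idea in plain Python (the repo’s actual code uses a deep learning framework): instead of sampling z ~ N(mu, sigma²) directly, which blocks backpropagation through the sampling step, we sample eps ~ N(0, 1) and compute z deterministically from mu and log-variance, so gradients can flow through mu and log_var.

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    # z = mu + sigma * eps, where sigma = exp(0.5 * log_var)
    # and eps is standard Gaussian noise.
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    sigma = [math.exp(0.5 * lv) for lv in log_var]
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]

# With eps fixed to zero, z collapses to the mean mu.
z = reparameterize([0.1, -0.3], [0.0, 0.0], eps=[0.0, 0.0])
```

The randomness is now an input (eps) rather than part of the computation graph, which is exactly what lets the encoder’s mu and log_var layers receive gradients.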

More Learning resources:
https://jaan.io/what-is-variational-a…
http://kvfrans.com/variational-autoen…
http://blog.fastforwardlabs.com/2016/…
http://blog.fastforwardlabs.com/2016/…
http://blog.evjang.com/2016/11/tutori…
https://jmetzen.github.io/2015-11-27/…

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

And please support me on Patreon:
https://www.patreon.com/user?u=3191693


How to Make a Text Summarizer – Intro to Deep Learning #10

I’ll show you how to turn an article into a one-sentence summary in Python with the Keras machine learning library. We’ll go over word embeddings, the encoder-decoder architecture, and the role of attention in learning.
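The attention step mentioned above can be illustrated without any framework. In this toy sketch (plain Python lists standing in for tensors; the video’s code uses Keras), the decoder scores each encoder hidden state against its current state, softmaxes the scores, and takes the weighted sum as the context vector:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(decoder_state, encoder_states):
    # Score each encoder state by dot product with the decoder state,
    # normalize into attention weights, then build the context vector.
    scores = [dot(decoder_state, h) for h in encoder_states]
    weights = softmax(scores)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(len(decoder_state))]
    return weights, context

weights, context = attend([1.0, 0.0],
                          [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

The encoder state most aligned with the decoder state gets the largest weight, which is what lets the summarizer focus on the relevant parts of the source article at each output step.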


Code for this video (Challenge included):
https://github.com/llSourcell/How_to_…

Jie’s Winning Code:
https://github.com/jiexunsee/rudiment…

More Learning resources:
https://www.quora.com/Has-Deep-Learni…
https://research.googleblog.com/2016/…
https://en.wikipedia.org/wiki/Automat…
http://deeplearning.net/tutorial/rnns…
http://machinelearningmastery.com/tex…

Please subscribe! And like. And comment. That’s what keeps me going.

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

And please support me on Patreon:
https://www.patreon.com/user?u=3191693


Intro – Training a Neural Network to Play a Game with TensorFlow and OpenAI

This tutorial mini-series is focused on training a neural network to play the OpenAI Gym environment called CartPole.

The idea of CartPole is that a pole stands upright on top of a cart. The goal is to balance the pole by moving the cart from side to side to keep it upright.
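To make the setup concrete, here is a framework-free sketch of the kind of policy network the series builds: CartPole’s observation is 4 numbers (cart position, cart velocity, pole angle, pole tip velocity) and the action is one of two pushes (left/right). The weights below are random placeholders; in the series they are learned with TensorFlow, and the layer sizes here are illustrative.

```python
import math
import random

random.seed(42)

HIDDEN = 8  # illustrative hidden-layer size

# Random (untrained) weights: 4 inputs -> HIDDEN -> 2 action scores.
W1 = [[random.gauss(0, 0.5) for _ in range(4)] for _ in range(HIDDEN)]
W2 = [[random.gauss(0, 0.5) for _ in range(HIDDEN)] for _ in range(2)]

def relu(x):
    return max(0.0, x)

def policy(observation):
    # Forward pass: hidden layer with ReLU, then linear action scores.
    hidden = [relu(sum(w * o for w, o in zip(row, observation)))
              for row in W1]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    # Greedy action: 0 = push left, 1 = push right.
    return scores.index(max(scores))

action = policy([0.02, -0.1, 0.03, 0.15])
```

Training amounts to adjusting W1 and W2 so that the chosen actions keep the pole balanced for as many steps as possible.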


How to Generate Art – Intro to Deep Learning #8

We’re going to learn how to use deep learning to convert an image into the style of an artist of our choosing. We’ll go over the history of computer-generated art, then dive into the details of how this process works and why deep learning does it so well.
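A key detail of how this works: "style" is captured by the Gram matrix of a layer’s feature maps — entry G[i][j] is the inner product of feature map i with feature map j, measuring how strongly the two features co-occur. Below is a pure-Python sketch with made-up toy feature maps; real implementations compute this over CNN activations.

```python
def gram_matrix(feature_maps):
    # feature_maps: list of flattened feature maps (lists of floats).
    # G[i][j] = <feature_maps[i], feature_maps[j]>
    n = len(feature_maps)
    return [[sum(a * b for a, b in zip(feature_maps[i], feature_maps[j]))
             for j in range(n)]
            for i in range(n)]

# Two toy "feature maps" over a 4-pixel layer:
G = gram_matrix([[1.0, 2.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0, 1.0]])
```

Style transfer then optimizes the generated image so its Gram matrices match the style image’s while its raw activations match the content image’s.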

Coding challenge for this video:
https://github.com/llSourcell/How-to-…

Itai’s winning code:
https://github.com/etai83/lstm_stock_…

Andreas’ runner up code:
https://github.com/AndysDeepAbstracti…

More learning resources:
https://harishnarayanan.org/writing/a…
https://ml4a.github.io/ml4a/style_tra…
http://genekogan.com/works/style-tran…
https://arxiv.org/abs/1508.06576
https://jvns.ca/blog/2017/02/12/neura…

Style transfer apps:
http://www.pikazoapp.com/
http://deepart.io/
https://artisto.my.com/
https://prisma-ai.com/

Please subscribe! And like. And comment. That’s what keeps me going.

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

And please support me on Patreon:
https://www.patreon.com/user?u=3191693

Song at the beginning is called Everyday by Carly Comando


Deep Learning Libraries by Language

Python

  1. Theano is a Python library for defining and evaluating mathematical expressions with numerical arrays. It makes it easy to write deep learning algorithms in Python, and many other libraries are built on top of it.

    1. Keras is a minimalist, highly modular neural network library in the spirit of Torch, written in Python, that uses Theano under the hood for optimized tensor manipulation on GPU and CPU.

    2. Pylearn2 is a library that wraps a lot of models and training algorithms such as Stochastic Gradient Descent that are commonly used in Deep Learning. Its functional libraries are built on top of Theano.

    3. Lasagne is a lightweight library to build and train neural networks in Theano. It is governed by the principles of simplicity, transparency, modularity, pragmatism, focus, and restraint.

    4. Blocks is a framework that helps you build neural network models on top of Theano.

  2. Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Google’s DeepDream is based on the Caffe framework. Caffe is a BSD-licensed C++ library with a Python interface.

  3. nolearn contains a number of wrappers and abstractions around existing neural network libraries, most notably Lasagne, along with a few machine learning utility modules.

  4. Gensim is a toolkit implemented in Python for handling large text collections using efficient algorithms.

  5. Chainer bridges the gap between algorithms and implementations of deep learning. It is powerful, flexible, and intuitive, and is considered a notably flexible framework for deep learning.

  6. deepnet is a GPU-based python implementation of deep learning algorithms like Feed-forward Neural Nets, Restricted Boltzmann Machines, Deep Belief Nets, Autoencoders, Deep Boltzmann Machines and Convolutional Neural Nets.

  7. Hebel is a library for deep learning with neural networks in Python using GPU acceleration with CUDA through PyCUDA. It implements the most important types of neural network models and offers a variety of different activation functions and training methods such as momentum, Nesterov momentum, dropout, and early stopping.

  8. CXXNET is a fast, concise, distributed deep learning framework based on MShadow. It is a lightweight, easily extensible C++/CUDA neural network toolkit with a friendly Python/MATLAB interface for training and prediction.

  9. DeepPy is a Pythonic deep learning framework built on top of NumPy.

  10. DeepLearning is a deep learning library developed in C++ and Python.

  11. Neon is Nervana’s Python-based deep learning framework.

Matlab

  1. ConvNet implements convolutional neural networks, a type of deep learning classification algorithm that can learn useful features from raw data by tuning its weights.

  2. DeepLearnToolBox is a MATLAB/Octave toolbox for deep learning that includes Deep Belief Nets, stacked autoencoders, and convolutional neural nets.

  3. cuda-convnet is a fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the backpropagation algorithm.

  4. MatConvNet is a MATLAB toolbox implementing Convolutional Neural Networks (CNNs) for computer vision applications. It is simple and efficient, and can run and learn state-of-the-art CNNs.

C++

  1. eblearn is an open-source C++ machine learning library from New York University’s machine learning lab, led by Yann LeCun. In particular, it implements convolutional neural networks with energy-based models, along with a GUI, demos, and tutorials.

  2. SINGA is designed to be general enough to implement the distributed training algorithms of existing systems. It is supported by the Apache Software Foundation.

  3. NVIDIA DIGITS is a new system for developing, training and visualizing deep neural networks. It puts the power of deep learning into an intuitive browser-based interface, so that data scientists and researchers can quickly design the best DNN for their data using real-time network behavior visualization.

  4. Intel® Deep Learning Framework provides a unified framework for Intel® platforms accelerating Deep Convolutional Neural Networks.

Java

  1. N-Dimensional Arrays for Java (ND4J) is a scientific computing library for the JVM. It is meant to be used in production environments, which means routines are designed to run fast with minimal RAM requirements.

  2. Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. It is designed to be used in business environments, rather than as a research tool.

  3. Encog is an advanced machine learning framework that supports Support Vector Machines, Artificial Neural Networks, Bayesian Networks, Hidden Markov Models, Genetic Programming, and Genetic Algorithms.

JavaScript

  1. Convnet.js is a JavaScript library for training deep learning models (mainly neural networks) entirely in the browser. No software requirements, no compilers, no installations, no GPUs, no sweat.

Lua

  1. Torch is a scientific computing framework with wide support for machine learning algorithms. It combines an easy-to-use, efficient, fast scripting language, LuaJIT, with an underlying C/CUDA implementation. Torch is based on the Lua programming language.

Julia

  1. Mocha is a deep learning framework for Julia, inspired by the C++ framework Caffe. Its efficient implementations of general stochastic gradient solvers and common layers can be used to train deep/shallow (convolutional) neural networks, with optional unsupervised pre-training via (stacked) autoencoders. Its best features include a modular architecture, a high-level interface, portability with speed, and broad compatibility.

Lisp

  1. Lush (Lisp Universal Shell) is an object-oriented programming language designed for researchers, experimenters, and engineers interested in large-scale numerical and graphics applications. It comes with a rich set of deep learning libraries as part of its machine learning libraries.

Haskell

  1. DNNGraph is a deep neural network model generation DSL in Haskell.

.NET

  1. Accord.NET is a .NET machine learning framework combined with audio and image processing libraries, completely written in C#. It is a complete framework for building production-grade computer vision, computer audition, signal processing, and statistics applications.

R

  1. The darch package can be used for generating neural networks with many layers (deep architectures). Training methods include pre-training with the contrastive divergence method and fine-tuning with well-known training algorithms like backpropagation or conjugate gradient.
  2. deepnet implements some deep learning architectures and neural network algorithms, including BP, RBM, DBN, deep autoencoders, and so on.

Source: teglor


Stanford training: CONVOLUTIONAL NEURAL NETWORKS FOR VISUAL RECOGNITION

OVERVIEW

Computer Vision is a dynamic and rapidly growing field with countless high-profile applications that have been developed in recent years. The potential uses are diverse, and its integration with cutting-edge research has already been validated with self-driving cars, facial recognition, 3D reconstruction, photo search, and augmented reality. Artificial Intelligence has become a fundamental component of everyday technology, and visual recognition is a key aspect of that. It is a valuable tool for interpreting the wealth of visual data that surrounds us, on a scale impossible with natural vision.

This course covers the tasks and systems at the core of visual recognition with a detailed exploration of deep learning architectures. While there will be a brief introduction to computer vision and frameworks such as Caffe, Torch, Theano, and TensorFlow, the focus will be on learning end-to-end models, particularly for image classification. Students will learn to implement, train, and debug their own neural networks, as well as gain a detailed understanding of cutting-edge research in computer vision.

The final assignment will include training a multi-million parameter convolutional neural network and applying it on the largest image classification dataset (ImageNet).

INSTRUCTORS

  • Justin Johnson, Instructor, Computer Science

TOPICS INCLUDE

  • End-to-end models
  • Image classification, localization and detection
  • Implementation, training and debugging
  • Learning algorithms, such as backpropagation
  • Long Short Term Memory (LSTM)
  • Recurrent Neural Networks (RNN)
  • Supervised and unsupervised learning

UNITS

3.0 – 4.0

Students enrolling under the non degree option are required to take the course for 4.0 units.

PREREQUISITES

Proficiency in Python; familiarity with C/C++; CS131 and CS229 or equivalents; Math21 or equivalent, linear algebra.

More information
