I traveled to Amsterdam for a week to speak at The Next Web Conference on AI Safety. While roaming the streets of the city, I decided to shoot some footage and put together a video on the same topic for you guys. In the battle of good vs. evil, it’s up to our community to ensure good wins. I’ll resume the coding videos next week when I get back to San Francisco.
Please Subscribe! And like. And comment. That’s what keeps me going.
We’re going to build a variational autoencoder capable of generating novel images after being trained on a collection of images. We’ll be using handwritten digit images as training data. Then we’ll both generate new digits and plot out the learned embeddings. I’ll also introduce Bayesian theory for the first time in this series 🙂
-The embedding visualization at the end would be more spread out if I had trained for more epochs (50 is recommended), but I used only 5.
-The code in the video doesn’t fully implement the reparameterization trick (to save space), but check the GitHub repo for the details.
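For anyone curious what the reparameterization trick actually looks like, here’s a minimal NumPy sketch (not the exact code from the repo). The idea: instead of sampling the latent vector z directly from N(mu, sigma²), you sample noise eps from N(0, 1) and compute z = mu + sigma * eps, so the path from the encoder outputs to z stays differentiable. The function names and shapes here are illustrative, not from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    Moving the randomness into eps keeps the mapping from the
    encoder outputs (mu, log_var) to z deterministic, which is
    what lets gradients flow through the sampling step.
    """
    eps = rng.standard_normal(mu.shape)
    sigma = np.exp(0.5 * log_var)
    return mu + sigma * eps

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)) per sample -- the regularizer
    in the VAE loss that ties into the Bayesian view."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

# Hypothetical encoder outputs: batch of 2 samples, latent dimension 3.
mu = np.zeros((2, 3))
log_var = np.zeros((2, 3))

z = reparameterize(mu, log_var)
print(z.shape)                     # (2, 3)
print(kl_divergence(mu, log_var))  # [0. 0.] -- standard normal matches the prior exactly
```

With mu = 0 and log_var = 0 the encoder’s distribution equals the prior, so the KL term is zero; during training, the decoder’s reconstruction loss pulls the latents away from that while the KL term pulls them back.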
Deep learning is the fastest-growing field in artificial intelligence (AI), helping computers make sense of vast amounts of data in the form of images, sound, and text. Using multiple layers of neural networks, computers now have the capacity to see, learn, and react to complex situations as well as or better than humans. Today’s deep learning solutions rely almost exclusively on NVIDIA GPU-accelerated computing to train and speed up challenging applications such as image, handwriting, and voice recognition.