Irene Chen’s Beginner’s Guide to Deep Learning

Deep learning, the endeavor to teach computers tasks once thought to require human intelligence, and even its broader parent field, machine learning, can be incredibly overwhelming and daunting to learn: there’s either too much code or too much math. Luckily, Irene Chen wrote this talk to teach beginners the fundamentals of deep learning.

If these Talk Notes are useful to you, become a patron!

Deep Learning: Why Now?

Neural networks have been around since the 1970s. Why the resurgence now? Three factors provide a new foundation for modern deep learning.

  1. Big data (aka the “fuel” of the rocket ship)
  2. Big processing power
  3. Robust neural networks (aka the “engine” of the rocket ship)

Because of this “perfect storm,” we are seeing a tremendous number of breakthroughs in ML / DL / AI (e.g., AlphaGo).

Neural Networks: The Avocado Classifier

Neurons are the cells that make up the human brain, and synapses connect neurons together. Computer scientists have modeled this with a simplified graph called a neural network. In this graph, neurons are modeled as nodes, and synapses are modeled as edges.

Simple graph of a neural network, a critical data structure in deep learning


Note that some arrows are thicker than others: each edge has a weight, a measure of how much the data passing through it influences the next node. Each node, in turn, combines its weighted inputs and passes the sum through an activation function, commonly the sigmoid function.
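To make the weighted-sum-plus-sigmoid idea concrete, here is a minimal sketch of what a single node computes. The input values and edge weights below are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

# A node's output: the weighted sum of its inputs, passed through the activation.
inputs = np.array([0.5, -1.2, 3.0])   # hypothetical values from upstream nodes
weights = np.array([0.8, 0.1, -0.4])  # hypothetical edge weights
activation = sigmoid(np.dot(weights, inputs))
print(activation)  # a value strictly between 0 and 1
```

The sigmoid keeps every node's output in a bounded range, which is what lets layers be stacked without values blowing up.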

The hidden layers are everything between the input and output nodes.

Very simple example: Given an avocado and its height, “squishiness”, and the color of its skin, can you determine whether or not it is perfectly ripe? 
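The avocado example can be sketched with scikit-learn's `MLPClassifier` (a small neural network). The data, the feature scales, and the "ripeness rule" used to label it are all invented here for illustration; the talk does not specify them:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Invented features: height (cm), squishiness (0-10), skin darkness (0-1)
X = np.column_stack([
    rng.uniform(6, 12, n),
    rng.uniform(0, 10, n),
    rng.uniform(0, 1, n),
])
# Invented labeling rule: ripe if moderately squishy and dark-skinned
y = ((X[:, 1] > 4) & (X[:, 1] < 8) & (X[:, 2] > 0.5)).astype(int)

# Scale the features, then fit a small one-hidden-layer network.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0),
)
clf.fit(X, y)
print(clf.predict([[9.0, 6.0, 0.8]]))  # 1 = ripe, 0 = not ripe
```

The point is the shape of the problem: three input nodes (one per feature), a small hidden layer, and one output node answering "ripe or not."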

Forward propagation is the standard execution model of the neural network, from input to output. Likewise, backward propagation (or “backpropagation”) works from output to input, computing how much each edge contributed to the error and adjusting the weights to improve your model.
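Both passes can be shown on a tiny network by hand. This is a minimal NumPy sketch with made-up weights: one forward pass, then one backpropagation step, after which the prediction should sit closer to the target:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network: 3 inputs -> 2 hidden nodes -> 1 output (all weights invented).
x  = np.array([0.9, 0.4, 0.7])           # one example's features, scaled
W1 = np.array([[0.2, -0.5, 0.1],
               [0.7,  0.3, -0.2]])        # input -> hidden edge weights
W2 = np.array([0.6, -0.4])                # hidden -> output edge weights
target = 1.0                              # desired output ("ripe")

# Forward propagation: data flows from input to output.
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# Backpropagation: push the error back and nudge each weight downhill.
lr = 0.5
err = y - target                          # gradient of squared error / 2
delta_out = err * y * (1 - y)             # sigmoid'(z) = y * (1 - y)
delta_hid = delta_out * W2 * h * (1 - h)  # error attributed to hidden nodes
W2 -= lr * delta_out * h
W1 -= lr * np.outer(delta_hid, x)

y_new = sigmoid(W2 @ sigmoid(W1 @ x))
print(abs(target - y), "->", abs(target - y_new))  # error shrinks
```

One gradient step only nudges the weights; training repeats this loop over many examples until the error stops improving.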

Mathematical constructs used in deep learning


To reduce errors, “tune” your parameters experimentally or by using the above math. Convergence is the point at which, after enough iterations, your error rate is “good enough” and further training no longer helps.
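A stripped-down picture of convergence, with the target value, learning rate, and tolerance all chosen arbitrarily for illustration: gradient descent on a single parameter that stops once the error is "good enough."

```python
w, lr, tol = 0.0, 0.1, 1e-4
target = 3.0               # the value we want w to learn

for iteration in range(1000):
    error = (w - target) ** 2
    if error < tol:            # converged: error is "good enough"
        break
    w -= lr * 2 * (w - target) # step down the error gradient

print(iteration, w)  # stops well before 1000 iterations, with w near 3
```

In a real network the same stopping logic applies; there are just many more weights being updated on each iteration.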

Deep Learning Tools and Communities

  • Scikit-learn – very beginner friendly, contains a number of classic ML algorithms
  • Caffe – UC Berkeley’s computer vision library. Its “Model Zoo” is a collection of pre-trained models
  • Theano – efficient GPU-powered math
  • IPython Notebook (now Jupyter) – great for interactive coding
  • Kaggle – machine learning community that hosts competitions and shared datasets
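As a taste of why scikit-learn is called beginner friendly, here is its uniform fit/score workflow on the bundled iris dataset (a standard toy example, not one from the talk):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load a built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every scikit-learn model follows the same fit/predict/score pattern.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data
```

Swapping in a different algorithm usually means changing only the one constructor line, which makes the library a good sandbox for experimenting.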

