TensorFlow-LiveLessons - "Deep Learning with TensorFlow" LiveLessons

Overview

Note that the second edition of this video series is now available here. The second edition contains all of the content from this (first) edition plus quite a bit more, as well as updated library versions.

This repository is home to the code that accompanies Jon Krohn's:

  1. Deep Learning with TensorFlow LiveLessons (summary blog post here)
  2. Deep Learning for Natural Language Processing LiveLessons (summary blog post here)
  3. Deep Reinforcement Learning and GANs LiveLessons (summary blog post here)

The above order is the recommended sequence in which to undertake these LiveLessons. That said, Deep Learning with TensorFlow provides a sufficient theoretical and practical background for the other LiveLessons.

Prerequisites

Command Line

Working through these LiveLessons will be easiest if you are familiar with Unix command-line basics. A tutorial covering these fundamentals can be found here.

Python for Data Analysis

In addition, if you're unfamiliar with using Python for data analysis (e.g., the pandas, scikit-learn, matplotlib packages), the data analyst path of DataQuest will quickly get you up to speed -- steps one (Introduction to Python) and two (Intermediate Python and Pandas) provide the bulk of the essentials.

Installation

Step-by-step guides for running the code in this repository can be found in the installation directory.

Notebooks

All of the code that I cover in the LiveLessons can be found in this directory as Jupyter notebooks.

Below is the lesson-by-lesson sequence in which I covered them:

Deep Learning with TensorFlow LiveLessons

Lesson One: Introduction to Deep Learning

1.1 Neural Networks and Deep Learning
  • via analogy to their biological inspirations, this section introduces Artificial Neural Networks and how they developed into the predominantly deep architectures of today
1.2 Running the Code in These LiveLessons
1.3 An Introductory Artificial Neural Network
  • get your hands dirty with a simple-as-possible neural network (shallow_net_in_keras.ipynb; see the sketch after this list) for classifying handwritten digits
  • introduces Jupyter notebooks and their most useful hot keys
  • introduces a gentle quantity of deep learning terminology by whiteboarding through:
    • the MNIST digit data set
    • the preprocessing of images for analysis with a neural network
    • a shallow network architecture
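For orientation, here is a minimal sketch of the kind of shallow architecture the notebook covers, assuming Keras with a TensorFlow backend; the layer sizes and hyperparameters are illustrative rather than the lesson's exact choices:

```python
# A minimal shallow MNIST classifier in Keras; hyperparameters are illustrative.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Preprocess: flatten the 28x28 images to 784-element vectors, scale to [0, 1],
# and one-hot encode the ten digit classes.
X_train = X_train.reshape(60000, 784).astype('float32') / 255
X_test = X_test.reshape(10000, 784).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# "Shallow": a single hidden layer between input and output.
model = Sequential()
model.add(Dense(64, activation='sigmoid', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=128, epochs=10,
          validation_data=(X_test, y_test))
```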

Lesson Two: How Deep Learning Works

2.1 The Families of Deep Neural Nets and their Applications
  • talk through the function and popular applications of the predominant modern families of deep neural nets:
    • Dense / Fully-Connected
    • Convolutional Networks (ConvNets)
    • Recurrent Neural Networks (RNNs) / Long Short-Term Memory units (LSTMs)
    • Reinforcement Learning
    • Generative Adversarial Networks
2.2 Essential Theory I -- Neural Units
  • the following essential deep learning concepts are explained with intuitive, graphical explanations:
    • neural units and activation functions
2.3 Essential Theory II -- Cost Functions, Gradient Descent, and Backpropagation
2.4 TensorFlow Playground -- Visualizing a Deep Net in Action
2.5 Data Sets for Deep Learning
  • overview of canonical data sets for image classification and meta-resources for data sets ideally suited to deep learning
2.6 Applying Deep Net Theory to Code I
  • apply the theory learned throughout Lesson Two to create an intermediate-depth image classifier (intermediate_net_in_keras.ipynb)
  • builds on, and greatly outperforms, the shallow architecture from Section 1.3

Lesson Three: Convolutional Networks

3.1 Essential Theory III -- Mini-Batches, Unstable Gradients, and Avoiding Overfitting
  • add to our state-of-the-art deep learning toolkit by delving further into essential theory (see the Keras sketch after this list), specifically:
    • weight initialization
      • uniform
      • normal
      • Xavier Glorot
    • stochastic gradient descent
      • learning rate
      • batch size
      • optimizers that extend vanilla SGD
        • momentum
        • Adam
    • unstable gradients
      • vanishing
      • exploding
    • avoiding overfitting / model generalization
      • L1/L2 regularization
      • dropout
      • artificial data set expansion
    • batch normalization
    • more layers
      • max-pooling
      • flatten
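Several items in this list map directly onto Keras arguments. A hedged sketch (layer sizes and rates are illustrative, not the lesson's exact configuration) showing Glorot weight initialization, batch normalization, dropout, and the Adam optimizer in code:

```python
# Illustrative only: Glorot initialization, batch norm, dropout, and Adam in Keras.
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization

model = Sequential()
# kernel_initializer selects the weight-initialization scheme (Xavier Glorot here).
model.add(Dense(64, activation='relu', kernel_initializer='glorot_normal',
                input_shape=(784,)))
model.add(BatchNormalization())  # normalize each layer's activations per batch
model.add(Dropout(0.5))          # randomly silence half the units to curb overfitting
model.add(Dense(64, activation='relu', kernel_initializer='glorot_normal'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

# Adam augments stochastic gradient descent with momentum-like terms;
# the learning rate remains its key hyperparameter.
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```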
3.2 Applying Deep Net Theory to Code II
  • apply the theory learned in the previous section to create a deep, dense net for image classification (deep_net_in_keras.ipynb)
  • builds on, and outperforms, the intermediate architecture from Section 2.6
3.3 Introduction to Convolutional Neural Networks for Visual Recognition
  • whiteboard through an intuitive explanation of what convolutional layers are and why they're so effective
3.4 Classic ConvNet Architectures -- LeNet-5
  • apply the theory learned in the previous section to create a deep convolutional net for image classification (lenet_in_keras.ipynb) that is inspired by the classic LeNet-5 neural network introduced in Section 1.1
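A hedged sketch of a LeNet-inspired stack in Keras (filter counts and layer sizes are illustrative): convolutional layers extract local features, max-pooling downsamples the resulting feature maps, and Flatten hands them off to dense layers:

```python
# A LeNet-5-inspired convolutional classifier in Keras; sizes are illustrative.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=(28, 28, 1)))   # 28x28 single-channel MNIST digits
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))    # downsample the feature maps
model.add(Dropout(0.25))
model.add(Flatten())                         # 2-D feature maps -> 1-D vector
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```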
3.5 Classic ConvNet Architectures -- AlexNet and VGGNet
3.6 TensorBoard and the Interpretation of Model Outputs
  • return to the networks from the previous section, adding code to output results to the TensorBoard deep learning results-visualization tool
  • explore TensorBoard and explain how to interpret model results within it
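In Keras, routing results to TensorBoard is a one-line callback. Continuing from the LeNet sketch above (the log directory name here is arbitrary):

```python
# Log training metrics for TensorBoard via a Keras callback.
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='logs/lenet')  # arbitrary output directory
model.fit(X_train, y_train, batch_size=128, epochs=10,
          validation_data=(X_test, y_test),
          callbacks=[tensorboard])
# Then launch the visualization tool from the shell:  tensorboard --logdir=logs
```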

Lesson Four: Introduction to TensorFlow

4.1 Comparison of the Leading Deep Learning Libraries
  • discuss the relative strengths, weaknesses, and common applications of the leading deep learning libraries:
    • Caffe
    • Torch
    • Theano
    • TensorFlow
    • and the high-level APIs TFLearn and Keras
  • conclude that, for the broadest set of applications, TensorFlow is the best option
4.2 Introduction to TensorFlow
4.3 Fitting Models in TensorFlow
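As a taste of the lower-level workflow these sections cover, here is a minimal sketch of TensorFlow 1.x's graph-and-session idiom (the TensorFlow version current when these LiveLessons were recorded; all sizes are illustrative):

```python
# TensorFlow 1.x idiom: define a static graph, then execute it in a session.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])  # batch of flattened images
y = tf.placeholder(tf.float32, [None, 10])   # one-hot labels

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b                 # a single dense layer

cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Each optimization step feeds one mini-batch into the graph, e.g.:
    # sess.run(train_op, feed_dict={x: batch_xs, y: batch_ys})
```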
4.4 Dense Nets in TensorFlow
4.5 Deep Convolutional Nets in TensorFlow
  • create a deep convolutional neural net (lenet_in_tensorflow.ipynb) in TensorFlow with an architecture identical to the LeNet-inspired one built in Keras in Section 3.4

Lesson Five: Improving Deep Networks

5.1 Improving Performance and Tuning Hyperparameters
  • detail systematic steps for improving the performance of deep neural nets, including by tuning hyperparameters
5.2 How to Build Your Own Deep Learning Project
  • specific steps for designing and evaluating your own deep learning project
5.3 Resources for Self-Study
  • topics worth investing time in to become an expert deployer of deep learning models


Deep Learning for Natural Language Processing

Lesson One: The Power and Elegance of Deep Learning for NLP

1.1 Introduction to Deep Learning for Natural Language Processing
  • high-level overview of deep learning as it pertains to Natural Language Processing (NLP)
  • influential examples of industrial applications of NLP
  • timeline of contemporary breakthroughs that have brought Deep Learning approaches to the forefront of NLP research and development
1.2 Computational Representations of Natural Language Elements
  • introduce the elements of natural language
  • contrast how these elements are represented by traditional machine-learning models and emergent deep-learning models
1.3 NLP Applications
  • specify common NLP applications and bucket them into three tiers of relative complexity
1.4 Installation, Including GPU Considerations
1.5 Review of Prerequisite Deep Learning Theory
1.6 A Sneak Peek
  • take a tantalising look ahead at the capabilities developed over the course of these LiveLessons

Lesson Two: Word Vectors

2.1 Vector-Space Embedding
  • leverage interactive demos to enable an intuitive understanding of vector-space embeddings of words, nuanced quantitative representations of word meaning
2.2 word2vec
  • key papers that led to the development of word2vec, a technique for transforming natural language into vector representations
  • essential word2vec theory introduced:
    • architectures:
      1. Skip-Gram
      2. Continuous Bag of Words
    • training algorithms:
      1. hierarchical softmax
      2. negative sampling
    • evaluation perspectives:
      1. intrinsic
      2. extrinsic
    • hyperparameters:
      1. number of dimensions
      2. context-word window size
      3. number of iterations
      4. size of data set
  • contrast word2vec with its leading alternative, GloVe
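A hedged sketch of training word vectors with gensim's word2vec implementation, mapping the hyperparameters listed above onto code (argument names follow gensim 4.x; the two-sentence corpus is a stand-in for real data):

```python
# Training word2vec with gensim; the toy corpus and hyperparameters are illustrative.
from gensim.models import Word2Vec

sentences = [['the', 'quick', 'brown', 'fox'],
             ['jumped', 'over', 'the', 'lazy', 'dog']]  # stand-in corpus

model = Word2Vec(sentences,
                 vector_size=64,  # number of dimensions per word vector
                 window=5,        # context-word window size
                 sg=1,            # 1 = Skip-Gram; 0 = Continuous Bag of Words
                 negative=5,      # negative sampling (hs=1 for hierarchical softmax)
                 epochs=5,        # number of iterations over the corpus
                 min_count=1)     # keep every word in this tiny corpus

# Intrinsic evaluation: inspect nearest neighbours in the vector space.
print(model.wv.most_similar('fox'))
```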
2.3 Data Sets for NLP
2.4 Creating Word Vectors with word2vec

Lesson Three: Modeling Natural Language Data

3.1 Best Practices for Preprocessing Natural Language Data
  • in natural_language_preprocessing_best_practices.ipynb, apply the following recommended best practices to clean up a corpus of natural language data prior to modeling (see the sketch after this list):
    • tokenize
    • convert all characters to lowercase
    • remove stopwords
    • remove punctuation
    • stem words
    • handle bigram (and trigram) word collocations
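The notebook has its own pipeline; as a generic illustration, the steps above might look like this with NLTK and gensim (toy sentences; collocation detection needs a realistic corpus to actually fire):

```python
# A generic sketch of the preprocessing steps above, using NLTK and gensim.
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from gensim.models.phrases import Phrases, Phraser

nltk.download('punkt')
nltk.download('stopwords')

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(sentence):
    tokens = nltk.word_tokenize(sentence.lower())       # tokenize + lowercase
    tokens = [t for t in tokens if t not in stop_words  # remove stopwords
              and t not in string.punctuation]          # remove punctuation
    return [stemmer.stem(t) for t in tokens]            # stem words

corpus = [preprocess(s) for s in ['The quick brown fox jumped.',
                                  'New York is a big city.']]

# Learn bigram collocations from co-occurrence statistics across the corpus,
# so that frequent pairs like "new york" become single "new_york" tokens.
bigram = Phraser(Phrases(corpus, min_count=1, threshold=1))
corpus = [bigram[doc] for doc in corpus]
```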
3.2 The Area Under the ROC Curve
  • detail the calculation and functionality of the area under the Receiver Operating Characteristic curve summary metric, which is used throughout the remainder of the LiveLessons for evaluating model performance
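For reference, scikit-learn computes this metric directly (labels and scores below are toy values):

```python
# Area under the ROC curve with scikit-learn; labels and scores are toy values.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]                  # ground-truth binary labels
y_score = [0.1, 0.4, 0.35, 0.8]        # model-predicted probabilities
print(roc_auc_score(y_true, y_score))  # 0.75: most positives rank above negatives
```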
3.3 Dense Neural Network Classification
3.4 Convolutional Neural Network Classification

Lesson Four: Recurrent Neural Networks

4.1 Essential Theory of RNNs
  • provide an intuitive understanding of Recurrent Neural Networks (RNNs), which permit backpropagation through time over sequential data, such as natural language and financial time series data
4.2 RNNs in Practice
  • incorporate simple RNN layers into a model that classifies documents by their sentiment (rnn_in_keras.ipynb)
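A minimal sketch of such a model, assuming Keras with illustrative vocabulary, sequence-length, and layer sizes (not necessarily those of the notebook):

```python
# A simple-RNN sentiment classifier in Keras; all sizes are illustrative.
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

model = Sequential()
model.add(Embedding(5000, 64, input_length=100))  # 5000-word vocab, 100-token docs
model.add(SimpleRNN(256))                         # recurrent layer over the sequence
model.add(Dense(1, activation='sigmoid'))         # binary sentiment output
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```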
4.3 Essential Theory of LSTMs and GRUs
  • develop familiarity with the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) varieties of RNNs, which provide markedly more productive modeling of sequential data with deep learning models
4.4 LSTMs and GRUs in Practice

Lesson Five: Advanced Models

5.1 Bi-Directional LSTMs
  • Bi-directional LSTMs are an especially potent variant of the LSTM
  • high-level theory on Bi-LSTMs before leveraging them in practice (bidirectional_lstm.ipynb)
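In Keras, the change from the simple RNN sketched above is a one-line wrapper (sizes again illustrative):

```python
# Wrapping an LSTM in Bidirectional() runs it over each document in both directions.
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dense

model = Sequential()
model.add(Embedding(5000, 64, input_length=100))  # illustrative sizes
model.add(Bidirectional(LSTM(256)))               # forward and backward passes
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```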
5.2 Stacked LSTMs
5.3 Parallel Network Architectures
  • advanced data modeling capabilities are possible with non-sequential architectures, e.g., parallel convolutional layers, each with unique hyperparameters (multi_convnet_architectures.ipynb)
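Non-sequential architectures call for the Keras functional API rather than Sequential. A hedged sketch of parallel convolutional branches (filter lengths and sizes are illustrative):

```python
# Parallel convolutional branches via the Keras functional API; sizes illustrative.
from keras.models import Model
from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          Dense, concatenate)

inputs = Input(shape=(100,))
embedded = Embedding(5000, 64)(inputs)

# Three branches with different filter lengths, all reading the same embedding.
branches = []
for kernel_size in (2, 3, 4):
    conv = Conv1D(64, kernel_size, activation='relu')(embedded)
    branches.append(GlobalMaxPooling1D()(conv))

merged = concatenate(branches)  # rejoin the parallel branches
outputs = Dense(1, activation='sigmoid')(merged)

model = Model(inputs, outputs)
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```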


Deep Reinforcement Learning and GANs

Lesson One: The Foundations of Artificial Intelligence

1.1 The Contemporary State of AI
  • examine what the term "Artificial Intelligence" means and how it relates to deep learning
  • define narrow, general, and super intelligence
1.2 Applications of Generative Adversarial Networks
  • uncover the rapidly improving quality of Generative Adversarial Networks for creating compelling novel imagery in the style of humans
  • involves the fun, interactive pix2pix tool
1.3 Applications of Deep Reinforcement Learning
  • distinguish supervised and unsupervised learning from reinforcement learning
  • provide an overview of the seminal contemporary deep reinforcement learning breakthroughs, including:
    • the Deep Q-Learning algorithm
    • AlphaGo
    • AlphaGo Zero
    • AlphaZero
    • robotics advances
  • introduce the most popular deep reinforcement learning environments
1.4 Running the Code in these LiveLessons
1.5 Review of Prerequisite Deep Learning Theory

Lesson Two: Generative Adversarial Networks (GANs)

2.1 Essential GAN Theory
  • cover the high-level theory of what GANs are and how they are able to generate realistic-looking images
2.2 The “Quick, Draw!” Game Dataset
  • show the Quick, Draw! game, which we use as the source of hundreds of thousands of hand-drawn images from a single class for a GAN to learn to imitate
2.3 A Discriminator Network
2.4 A Generator Network
2.5 Training an Adversarial Network
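In outline, adversarial training alternates between the two networks. This sketch assumes a compiled `discriminator`, a `generator` mapping 100-dimensional noise vectors to images, and a `sample_real_images()` helper; all three are hypothetical stand-ins for what Sections 2.3-2.4 build:

```python
# The adversarial training idea in outline; `discriminator`, `generator`, and
# `sample_real_images` are hypothetical stand-ins, not the lesson's exact code.
import numpy as np
from keras.models import Sequential

# Freeze the discriminator inside the stacked model so that training the
# combined network updates only the generator's weights.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer='adam')

for step in range(10000):
    noise = np.random.normal(size=(128, 100))  # 100-d latent vectors
    fake = generator.predict(noise)
    real = sample_real_images(128)             # hypothetical data helper

    # 1) Train the discriminator to separate real from generated images...
    discriminator.train_on_batch(real, np.ones(128))
    discriminator.train_on_batch(fake, np.zeros(128))

    # 2) ...then train the generator (through the stacked model) to fool it.
    gan.train_on_batch(noise, np.ones(128))
```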

Lesson Three: Deep Q-Learning Networks (DQNs)

3.1 The Cartpole Game
  • introduce the Cartpole Game, an environment provided by OpenAI and used throughout the remainder of these LiveLessons to train deep reinforcement learning algorithms
3.2 Essential Deep RL Theory
  • delve into the essential theory of deep reinforcement learning in general
3.3 Essential DQN Theory
  • delve into the essential theory of Deep Q-Learning networks, a popular, particular type of deep reinforcement learning algorithm
3.4 Defining a DQN Agent
3.5 Interacting with an OpenAI Gym Environment
  • leverage OpenAI Gym to enable our Deep Q-Learning agent to master the Cartpole Game (completing cartpole_dqn.ipynb)
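The heart of that notebook is the agent-environment loop. A hedged sketch using the classic Gym API of the era (four-tuple `step()` return); the `agent` object with `act`, `remember`, and `replay` methods is a hypothetical stand-in for the DQN agent of Section 3.4:

```python
# The core agent-environment loop with OpenAI Gym; `agent` is a hypothetical
# stand-in for the Deep Q-Learning agent defined in Section 3.4.
import gym

env = gym.make('CartPole-v0')

for episode in range(200):
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)  # epsilon-greedy action choice
        next_state, reward, done, info = env.step(action)
        agent.remember(state, action, reward, next_state, done)  # replay memory
        state = next_state
    agent.replay()  # train the Q-network on a minibatch of remembered experience
```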

Lesson Four: OpenAI Lab

4.1 Visualizing Agent Performance
  • use the OpenAI Lab to visualise our Deep Q-Learning agent's performance in real-time
4.2 Modifying Agent Hyperparameters
  • learn to straightforwardly optimise a deep reinforcement learning agent's hyperparameters
4.3 Automated Hyperparameter Experimentation and Optimization
  • automate the search through hyperparameters to optimize our agent’s performance
4.4 Fitness
  • calculate summary metrics to gauge our agent’s overall fitness

Lesson Five: Advanced Deep Reinforcement Learning Agents

5.1 Policy Gradients and the REINFORCE Algorithm
  • at a high level, discover Policy Gradient algorithms in general and the classic REINFORCE implementation in particular
5.2 The Actor-Critic Algorithm
  • cover how Policy Gradients can be combined with Deep Q-Learning to facilitate Actor-Critic algorithms
5.3 Software 2.0
  • discuss how deep learning is ushering in a new era of software development driven by data in place of hard-coded rules
5.4 Approaching Artificial General Intelligence
  • return to our discussion of Artificial Intelligence, specifically addressing the limitations of modern deep learning approaches

