How to use TensorLayer

Overview


While research in Deep Learning continues to improve the world, we use a number of tricks day to day when implementing algorithms with TensorLayer.

Here is a summary of the tricks for using TensorLayer. If you find a trick that is particularly useful in practice, please open a Pull Request to add it to this document. If we find it reasonable and verified, we will merge it in.

1. Installation

  • To keep a fixed TL version and edit the source code easily, you can download the whole repository by executing git clone https://github.com/zsdonghao/tensorlayer.git in your terminal, then copy the tensorlayer folder into your project.
  • As TL is growing very fast, if you want to use pip install we suggest installing the master version, e.g. pip install git+https://github.com/zsdonghao/tensorlayer.git.
  • For NLP applications, you will need to install NLTK and the NLTK data.

2. Interaction between TF and TL
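The key point is that a TL layer wraps an ordinary TF tensor: the input is a TF placeholder or tensor, and net.outputs of any layer is again a plain TF tensor, so TF ops and TL layers can be mixed freely. A minimal sketch (the toy network below is illustrative, not from the original document):

import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import InputLayer, DenseLayer

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
net = InputLayer(x, name='in')  # wrap a TF tensor into a TL network
net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
# net.outputs is an ordinary TF tensor, so any TF op can be applied to it ...
net.outputs = tf.nn.dropout(net.outputs, keep_prob=0.8, name='tf_drop')
# ... and the modified network can still be stacked with further TL layers
net = DenseLayer(net, n_units=10, act=tf.identity, name='out')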

3. Training/Testing switching

Build the same network twice with a shared variable scope: once for training (is_train=True) and once for testing (is_train=False, reuse=True).

import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import InputLayer, DropoutLayer, DenseLayer

def mlp(x, is_train=True, reuse=False):
    with tf.variable_scope("MLP", reuse=reuse):
        net = InputLayer(x, name='in')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop1')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop2')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense2')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop3')
        net = DenseLayer(net, n_units=10, act=tf.identity, name='out')
        logits = net.outputs
        net.outputs = tf.nn.sigmoid(net.outputs)
        return net, logits

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
# one graph for training and one for testing, sharing the same parameters
net_train, logits = mlp(x, is_train=True, reuse=False)
net_test, _ = mlp(x, is_train=False, reuse=True)
cost = tl.cost.cross_entropy(logits, y_, name='cost')

More details can be found in the TensorLayer documentation and examples.

4. Get variables and outputs

# get all trainable parameters whose names contain "MLP" (the last flag prints them)
train_vars = tl.layers.get_variables_with_name('MLP', True, True)
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_vars)
# get the output tensors of all layers whose names contain "MLP"
layers = tl.layers.get_layers_with_name(net_train, "MLP", True)
  • This method is usually used for activation regularization, e.g. as sketched below.
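For instance, the activation tensors returned by get_layers_with_name can be added to the cost as an extra penalty. A minimal sketch (the 0.001 coefficient and the squared-mean penalty are illustrative choices, not from the original):

# add a small L2 penalty on every collected activation (do this before building the train op)
for activation in layers:
    cost = cost + 0.001 * tf.reduce_mean(tf.square(activation))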

5. Data augmentation for large dataset

If your dataset is large, data loading and data augmentation will become the bottleneck and slow down training. To speed up the data processing, you can for example do the following:
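One common option (a sketch under assumed names: the TFRecord file name, image shape and feature keys below are illustrative) is to store the data as TFRecord files and let the TF Dataset API decode, augment and prefetch batches in parallel while the GPU is training:

import tensorflow as tf

def _parse_and_augment(example_proto):
    # the feature keys and image shape are assumptions for illustration
    features = tf.parse_single_example(example_proto, features={
        'image': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64)})
    image = tf.decode_raw(features['image'], tf.uint8)
    image = tf.cast(tf.reshape(image, [32, 32, 3]), tf.float32) / 255.
    image = tf.image.random_flip_left_right(image)  # augmentation runs inside the TF graph
    return image, features['label']

dataset = tf.data.TFRecordDataset('train.tfrecord')
dataset = dataset.map(_parse_and_augment, num_parallel_calls=4)
dataset = dataset.shuffle(10000).batch(128).prefetch(2)
images, labels = dataset.make_one_shot_iterator().get_next()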

6. Data augmentation for small dataset

If your dataset is small enough to fit into the memory of your machine and the data augmentation is simple, you can keep everything in Python so that it is easy to debug, for example:
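tl.prepro.threading_data can apply a NumPy/tl.prepro augmentation function to a whole batch using Python threads. A minimal sketch (the random batch and the particular augmentations are illustrative):

import numpy as np
import tensorlayer as tl

def distort_img(x):
    # flip horizontally and take a random 28x28 crop
    x = tl.prepro.flip_axis(x, axis=1, is_random=True)
    x = tl.prepro.crop(x, wrg=28, hrg=28, is_random=True)
    return x

X_batch = np.random.uniform(size=(32, 32, 32, 3)).astype(np.float32)  # a fake batch of 32x32 RGB images
X_batch = tl.prepro.threading_data(X_batch, fn=distort_img)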

7. Pre-trained CNN and ResNet

For ready-to-use pre-trained CNNs such as VGG16, see tl.models in the next section; other pre-trained networks (e.g. from TF-Slim) can be connected via SlimNetsLayer, see section 13.

8. Using tl.models

  • Use pre-trained VGG16 for ImageNet classification
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get the whole model
vgg = tl.models.VGG16(x)
# restore pre-trained VGG parameters
sess = tf.InteractiveSession()
vgg.restore_params(sess)
# use it for inference
probs = tf.nn.softmax(vgg.outputs)
  • Extract features with VGG16 and retrain a classifier with 100 classes
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg = tl.models.VGG16(x, end_with='fc2_relu')
# add one more layer
net = tl.layers.DenseLayer(vgg, 100, name='out')
# initialize all parameters
sess = tf.InteractiveSession()
tl.layers.initialize_global_variables(sess)
# restore pre-trained VGG parameters
vgg.restore_params(sess)
# train your own classifier (only update the last layer)
train_params = tl.layers.get_variables_with_name('out')
# pass train_params to your optimizer, e.g. minimize(cost, var_list=train_params)
  • Reuse model
x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg1 = tl.models.VGG16(x1, end_with='fc2_relu')
# reuse the parameters of vgg1 with different input
vgg2 = tl.models.VGG16(x2, end_with='fc2_relu', reuse=True)
# restore pre-trained VGG parameters (as they share parameters, we don’t need to restore vgg2)
sess = tf.InteractiveSession()
vgg1.restore_params(sess)

9. Customized layer

    1. Write a TL layer directly (a minimal sketch is given after the LambdaLayer example below).
    2. Use LambdaLayer; it can also accept functions that create new variables. With this layer you can connect any third-party TF library or your own customized function to TL. Here is an example of using Keras and TL together.
import tensorflow as tf
import tensorlayer as tl
from keras.layers import *
from tensorlayer.layers import *

def my_fn(x):
    x = Dropout(0.8)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    logits = Dense(10, activation='linear')(x)
    return logits

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
network = InputLayer(x, name='input')
network = LambdaLayer(network, my_fn, name='keras')
...
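For option 1 (writing a TL layer directly), the toy layer below simply doubles its input and follows the subclassing pattern of TL 1.x; the exact bookkeeping attributes may differ slightly between TL releases, so treat this as a sketch rather than the canonical API:

import tensorflow as tf
from tensorlayer.layers import Layer

class DoubleLayer(Layer):
    def __init__(self, layer=None, name='double'):
        Layer.__init__(self, name=name)
        self.inputs = layer.outputs
        with tf.variable_scope(name):
            self.outputs = self.inputs * 2  # no trainable parameters in this toy layer
        # copy the bookkeeping lists from the previous layer and register our output
        self.all_layers = list(layer.all_layers)
        self.all_params = list(layer.all_params)
        self.all_drop = dict(layer.all_drop)
        self.all_layers.extend([self.outputs])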

10. Sentences tokenization

  • Use tl.nlp.process_sentence to tokenize the sentences and add start/end tokens, then tl.nlp.create_vocab to build a vocabulary file:
>>> captions = ["one two , three", "four five five"] # two sentences
>>> processed_capts = []
>>> for c in captions:
>>>    c = tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>")
>>>    processed_capts.append(c)
>>> print(processed_capts)
... [['<S>', 'one', 'two', ',', 'three', '</S>'],
... ['<S>', 'four', 'five', 'five', '</S>']]
>>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)
... [TL] Creating vocabulary.
... Total words: 8
... Words in vocabulary: 8
... Wrote vocabulary file: vocab.txt
  • Finally, use tl.nlp.Vocabulary to create a vocabulary object from the txt vocabulary file created by tl.nlp.create_vocab:
>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
... INFO:tensorflow:Initializing vocabulary from file: vocab.txt
... [TL] Vocabulary from vocab.txt : <S> </S> <UNK>
... vocabulary with 10 words (includes start_word, end_word, unk_word)
...   start_id: 2
...   end_id: 3
...   unk_id: 9
...   pad_id: 0

Then you can map words to IDs, or vice versa, as follows:

>>> vocab.id_to_word(2)
... 'one'
>>> vocab.word_to_id('one')
... 2
>>> vocab.id_to_word(100)
... '<UNK>'
>>> vocab.word_to_id('hahahaha')
... 9

11. Dynamic RNN and sequence length

  • Apply zero padding to a batch of tokenized sentences as follows:
>>> sequences = [[1,1,1,1,1],[2,2,2],[3,3]]
>>> sequences = tl.prepro.pad_sequences(sequences, maxlen=None, 
...         dtype='int32', padding='post', truncating='pre', value=0.)
... [[1 1 1 1 1]
...  [2 2 2 0 0]
...  [3 3 0 0 0]]
  • Then compute the actual length of each padded sequence with tl.layers.retrieve_seq_length_op2:
>>> data = [[1,2,0,0,0], [1,2,3,0,0], [1,2,6,1,0]]
>>> o = tl.layers.retrieve_seq_length_op2(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> print(o.eval())
... [2 3 4]
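The computed lengths are typically passed to the sequence_length argument of DynamicRNNLayer. A minimal sketch (vocabulary size, embedding size and hidden size are illustrative assumptions):

import tensorflow as tf
import tensorlayer as tl

input_seqs = tf.placeholder(tf.int64, shape=[None, None], name='input_seqs')
net = tl.layers.EmbeddingInputlayer(inputs=input_seqs, vocabulary_size=10000,
                                    embedding_size=200, name='embed')
net = tl.layers.DynamicRNNLayer(net,
        cell_fn=tf.contrib.rnn.BasicLSTMCell,
        n_hidden=200,
        sequence_length=tl.layers.retrieve_seq_length_op2(input_seqs),
        return_last=False,
        name='dynamic_rnn')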

12. Save models

    1. tl.files.save_npz saves all model parameters (weights) into a list of arrays; restore with tl.files.load_and_assign_npz (a short sketch follows this list).
    2. tl.files.save_npz_dict saves all model parameters (weights) into a dictionary of arrays keyed by parameter name; restore with tl.files.load_and_assign_npz_dict.
    3. tl.files.save_ckpt saves all model parameters (weights) into a TensorFlow ckpt file; restore with tl.files.load_ckpt.
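A minimal sketch of option 1, assuming a built network net and a session sess as in the earlier examples:

tl.files.save_npz(net.all_params, name='model.npz', sess=sess)
# later: rebuild the same architecture, then load the saved weights into it
tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=net)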

13. Compatibility with other TF wrappers

TL can interact with other TF wrappers, which means that if you find code or models implemented with other wrappers, you can simply reuse them.

  • Other TensorFlow layer implementations can be connected to TensorLayer via LambdaLayer (see the Keras example in section 9).
  • TF-Slim to TL: SlimNetsLayer (you can use all of Google's pre-trained convolutional models with this layer), e.g. as sketched below.
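A minimal sketch of wrapping a slim-defined network with SlimNetsLayer, loosely following TL's Inception-V3 example (the input shape and slim_args are illustrative, and restoring the pre-trained slim checkpoint is omitted):

import tensorflow as tf
import tensorlayer as tl
from tensorflow.contrib.slim.nets import inception
slim = tf.contrib.slim

x = tf.placeholder(tf.float32, shape=[None, 299, 299, 3])
net_in = tl.layers.InputLayer(x, name='input')
with slim.arg_scope(inception.inception_v3_arg_scope()):
    # SlimNetsLayer calls slim_layer(inputs, **slim_args) and wraps the result as a TL layer
    network = tl.layers.SlimNetsLayer(net_in, slim_layer=inception.inception_v3,
                                      slim_args={'num_classes': 1001, 'is_training': False},
                                      name='InceptionV3')
probs = tf.nn.softmax(network.outputs)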

14. Others

  • BatchNormLayer's default decay is 0.9; set it to 0.999 for large datasets.
  • If a Matplotlib issue arises when importing TensorLayer, see the FAQ.

Useful links

Author

  • Zhang Rui
  • Hao Dong