How to use TensorLayer

Overview


While research in deep learning continues to improve the world, we rely on a number of practical tricks when implementing algorithms with TensorLayer day to day.

Here is a summary of those tricks. If you find a trick that is particularly useful in practice, please open a Pull Request to add it to this document. If we find it reasonable and verified, we will merge it in.

1. Installation

  • To keep your TL version fixed and edit the source code easily, you can download the whole repository by executing git clone https://github.com/zsdonghao/tensorlayer.git in your terminal, then copy the tensorlayer folder into your project.
  • As TL is growing very fast, if you want to use pip install, we suggest installing the master version, as shown below.
  • For NLP applications, you will need to install NLTK and the NLTK data.
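
For example (the repository URL is as given above; the exact pip command for installing from master is an assumption based on the usual pip-from-GitHub pattern):

git clone https://github.com/zsdonghao/tensorlayer.git
pip install git+https://github.com/zsdonghao/tensorlayer.git   # install the master version
python -m nltk.downloader all                                  # NLTK data, for NLP applications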

2. Interaction between TF and TL
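
TL layers are thin wrappers around TensorFlow: every layer exposes its output tensor via .outputs (as the examples below also show), so TF ops and TL layers can be mixed freely. A minimal sketch, assuming the standard TL 1.x API:

import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
net = tl.layers.InputLayer(x, name='in')
net = tl.layers.DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
# net.outputs is a plain TF tensor, so any TF op can be applied to it
net.outputs = tf.nn.dropout(net.outputs, keep_prob=0.5)
# and the result keeps flowing through subsequent TL layers
net = tl.layers.DenseLayer(net, n_units=10, act=tf.identity, name='out')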

3. Training/Testing switching

import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import *

def mlp(x, is_train=True, reuse=False):
    with tf.variable_scope("MLP", reuse=reuse):
        net = InputLayer(x, name='in')
        # is_fix=True: dropout is switched on/off by is_train instead of feed_dict
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop1')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop2')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense2')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop3')
        net = DenseLayer(net, n_units=10, act=tf.identity, name='out')
        logits = net.outputs
        net.outputs = tf.nn.sigmoid(net.outputs)
        return net, logits

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None], name='y_')
# build one network for training and one for testing; they share the same weights
net_train, logits = mlp(x, is_train=True, reuse=False)
net_test, _ = mlp(x, is_train=False, reuse=True)
cost = tl.cost.cross_entropy(logits, y_, name='cost')
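
A typical follow-up (illustrative, not part of the original example): evaluate with the test network, which has dropout disabled:

# predictions come from net_test, whose dropout layers are inactive
correct = tf.equal(tf.argmax(net_test.outputs, 1), y_)
acc = tf.reduce_mean(tf.cast(correct, tf.float32))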

More details can be found in the TensorLayer documentation and examples.

4. Get variables and outputs

# get all trainable variables whose names contain "MLP" (train_only=True, printable=True)
train_vars = tl.layers.get_variables_with_name('MLP', True, True)
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_vars)
# get the output tensors of all layers whose names contain "MLP" (printable=True)
layers = tl.layers.get_layers_with_name(net_train, "MLP", True)
  • These layer outputs are usually used for activation regularization, for example:
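
A minimal sketch of activation (L2) regularization on those outputs; the 0.001 scale is an arbitrary illustrative value, and the terms should be added to cost before creating train_op:

for a in layers:
    # penalize large activations of every matched layer
    cost = cost + 0.001 * tf.reduce_mean(tf.square(a))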

5. Data augmentation for large dataset

If your dataset is large, data loading and data augmentation will become the bottleneck and slow down training. To speed up data processing, you can run the augmentation in parallel and prefetch batches while the GPU trains.
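
A minimal sketch using the plain TensorFlow tf.data API (not a TL-specific recipe; distort_fn and the stand-in arrays are illustrative):

import numpy as np
import tensorflow as tf

images = np.random.rand(1000, 224, 224, 3).astype('float32')  # stand-in data
labels = np.random.randint(0, 10, size=1000)

def distort_fn(image, label):
    # your augmentation, built from TF ops so it runs inside the input pipeline
    image = tf.image.random_flip_left_right(image)
    return image, label

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.map(distort_fn, num_parallel_calls=4)   # augment in parallel threads
dataset = dataset.shuffle(10000).batch(64).prefetch(2)    # prepare batches ahead of the GPU
iterator = dataset.make_one_shot_iterator()
next_images, next_labels = iterator.get_next()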

6. Data augmentation for small dataset

If your dataset is small enough to fit into the memory of your machine and the augmentation is simple, you can augment the data in Python for easier debugging.
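
A minimal sketch using TL's threaded preprocessing (assuming the tl.prepro API; the particular transforms and the stand-in batch are illustrative):

import numpy as np
import tensorlayer as tl

def distort_img(x):
    # x is a single image; apply random augmentations from tl.prepro
    x = tl.prepro.flip_axis(x, axis=1, is_random=True)
    x = tl.prepro.rotation(x, rg=15, is_random=True)
    return x

X_batch = np.random.rand(32, 28, 28, 1)  # stand-in batch of images
# augment the whole batch with multiple threads
X_batch = tl.prepro.threading_data(X_batch, fn=distort_img)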

7. Pre-trained CNN and ResNet

8. Using tl.models

  • Use pretrained VGG16 for ImageNet classification
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get the whole model
vgg = tl.models.VGG16(x)
# restore pre-trained VGG parameters
sess = tf.InteractiveSession()
vgg.restore_params(sess)
# use for inference
probs = tf.nn.softmax(vgg.outputs)
  • Extract features with VGG16 and retrain a classifier with 100 classes
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg = tl.models.VGG16(x, end_with='fc2_relu')
# add one more layer
net = tl.layers.DenseLayer(vgg, 100, name='out')
# initialize all parameters
sess = tf.InteractiveSession()
tl.layers.initialize_global_variables(sess)
# restore pre-trained VGG parameters
vgg.restore_params(sess)
# train your own classifier (only update the last layer)
train_params = tl.layers.get_variables_with_name('out')
  • Reuse model
x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg1 = tl.models.VGG16(x1, end_with='fc2_relu')
# reuse the parameters of vgg1 with different input
vgg2 = tl.models.VGG16(x2, end_with='fc2_relu', reuse=True)
# restore pre-trained VGG parameters (as they share parameters, we don’t need to restore vgg2)
sess = tf.InteractiveSession()
vgg1.restore_params(sess)

9. Customized layer

    1. Write a TL layer directly (by inheriting the Layer class).
    2. Use LambdaLayer, which also accepts functions that create new variables. With this layer you can connect third-party TF libraries and your own functions to TL. Here is an example of using Keras and TL together.
import tensorflow as tf
import tensorlayer as tl
from keras.layers import *
from tensorlayer.layers import *

def my_fn(x):
    # a plain Keras stack; it simply maps a tensor to a tensor
    x = Dropout(0.8)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    logits = Dense(10, activation='linear')(x)
    return logits

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
network = InputLayer(x, name='input')
# wrap the Keras function as a single TL layer
network = LambdaLayer(network, my_fn, name='keras')
...

10. Sentences tokenization

  • Tokenize sentences with tl.nlp.process_sentence, then build a vocabulary with tl.nlp.create_vocab:
>>> captions = ["one two , three", "four five five"]  # two sentences
>>> processed_capts = []
>>> for c in captions:
>>>    c = tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>")
>>>    processed_capts.append(c)
>>> print(processed_capts)
... [['<S>', 'one', 'two', ',', 'three', '</S>'],
... ['<S>', 'four', 'five', 'five', '</S>']]
>>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)
... [TL] Creating vocabulary.
... Total words: 8
... Words in vocabulary: 8
... Wrote vocabulary file: vocab.txt
  • Finally use tl.nlp.Vocabulary to create a vocabulary object from the txt vocabulary file created by tl.nlp.create_vocab
>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
... INFO:tensorflow:Initializing vocabulary from file: vocab.txt
... [TL] Vocabulary from vocab.txt : <S> </S> <UNK>
... vocabulary with 10 words (includes start_word, end_word, unk_word)
...   start_id: 2
...   end_id: 3
...   unk_id: 9
...   pad_id: 0

Then you can map words to IDs, or vice versa, as follows:

>>> vocab.id_to_word(2)
... 'one'
>>> vocab.word_to_id('one')
... 2
>>> vocab.id_to_word(100)
... '<UNK>'
>>> vocab.word_to_id('hahahaha')
... 9

11. Dynamic RNN and sequence length

  • Apply zero padding to a batch of tokenized sentences as follows:
>>> sequences = [[1,1,1,1,1],[2,2,2],[3,3]]
>>> sequences = tl.prepro.pad_sequences(sequences, maxlen=None, 
...         dtype='int32', padding='post', truncating='pre', value=0.)
... [[1 1 1 1 1]
...  [2 2 2 0 0]
...  [3 3 0 0 0]]
  • Then compute the real length of each zero-padded sequence with tl.layers.retrieve_seq_length_op2:
>>> data = [[1,2,0,0,0], [1,2,3,0,0], [1,2,6,1,0]]
>>> o = tl.layers.retrieve_seq_length_op2(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> print(o.eval())
... [2 3 4]
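
These lengths are what a dynamic RNN consumes. A minimal sketch, assuming TL's DynamicRNNLayer API; the vocabulary size, embedding size, and hidden units are illustrative:

import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.int64, [None, None], name='x')  # [batch, max_seq_len]
net = tl.layers.EmbeddingInputlayer(x, vocabulary_size=10000, embedding_size=128, name='embed')
net = tl.layers.DynamicRNNLayer(net,
        cell_fn=tf.contrib.rnn.BasicLSTMCell,
        n_hidden=256,
        sequence_length=tl.layers.retrieve_seq_length_op2(x),  # ignore zero padding
        return_last=True,  # use the last valid output of each sequence
        name='dynamic_rnn')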

12. Save models

    1. tl.files.save_npz saves all model parameters (weights) into a list of arrays; restore with tl.files.load_and_assign_npz.
    2. tl.files.save_npz_dict saves all model parameters (weights) into a dictionary of arrays keyed by parameter name; restore with tl.files.load_and_assign_npz_dict.
    3. tl.files.save_ckpt saves all model parameters (weights) into a TensorFlow ckpt file; restore with tl.files.load_ckpt.
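
A minimal sketch of the npz variant, continuing the MLP example above and assuming an active session sess:

# save the weights of the training network
tl.files.save_npz(net_train.all_params, name='model.npz', sess=sess)
# later: rebuild the same graph, then load the weights back
tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=net_train)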

13. Compatibility with other TF wrappers

TL can interoperate with other TF wrappers, which means that if you find code or models implemented with another wrapper, you can simply reuse them.

  • Other TensorFlow layer implementations can be connected to TensorLayer via LambdaLayer (see the Keras example in section 9 above).
  • TF-Slim to TL: SlimNetsLayer (you can use all of Google's pre-trained convolutional models with this layer!)
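
A minimal sketch following the pattern of TL's tutorial_inceptionV3_tfslim.py; the slim import path matches TF 1.x contrib, and the exact arguments should be treated as illustrative:

import tensorflow as tf
import tensorlayer as tl
from tensorflow.contrib.slim.python.slim.nets.inception_v3 import (
    inception_v3, inception_v3_arg_scope)

slim = tf.contrib.slim
x = tf.placeholder(tf.float32, [None, 299, 299, 3])
net_in = tl.layers.InputLayer(x, name='input_layer')
with slim.arg_scope(inception_v3_arg_scope()):
    # wrap the whole slim network function as a single TL layer
    network = tl.layers.SlimNetsLayer(net_in, slim_layer=inception_v3,
                                      slim_args={'num_classes': 1001,
                                                 'is_training': False},
                                      name='InceptionV3')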

14. Others

  • The default decay of BatchNormLayer is 0.9; set it to 0.999 for large datasets (see the one-liner below).
  • A Matplotlib issue can arise when importing TensorLayer; see the FAQ.
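
For example (a one-line sketch; the other arguments follow the usual BatchNormLayer defaults):

net = tl.layers.BatchNormLayer(net, decay=0.999, is_train=True, name='bn')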

Useful links

Author

  • Zhang Rui
  • Hao Dong