Band-Adaptive Spectral-Spatial Feature Learning Neural Network for Hyperspectral Image Classification

Overview

This repository has been ⛔️ DEPRECATED. Please refer to our more recent work:

Band-Adaptive Spectral-Spatial Feature Learning Neural Network for Hyperspectral Image Classification [paper] [Code]

Deep Learning for Land-cover Classification in Hyperspectral Images

Hyperspectral images are images captured in multiple bands of the electromagnetic spectrum. This project focuses on the development of deep artificial neural networks for robust land-cover classification in hyperspectral images. Land-cover classification is the task of assigning to every pixel a class label that represents the type of land cover present at the pixel's location. It is an image segmentation/scene-labeling task. The following diagram describes the task.



This page describes our explorations of the performance of Multi-Layer Perceptrons and Convolutional Neural Networks on the task of land-cover classification in hyperspectral images. Currently we perform pixel-wise classification.


Dataset

We performed our experiments on the Indian Pines dataset. Its particulars are as follows:

  • Source: AVIRIS sensor
  • Region: Indian Pines test site over north-western Indiana
  • Time of the year: June
  • Wavelength range: 0.4–2.5 µm
  • Number of spectral bands: 220
  • Size of image: 145×145 pixels
  • Number of land-cover classes: 16
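
For reference, the dataset can be loaded directly from the distributed .mat files with SciPy. The sketch below assumes the conventional variable keys 'indian_pines' and 'indian_pines_gt' inside the files; if loading fails, inspect the actual keys with scipy.io.whosmat.

```python
# Minimal loading sketch; the .mat keys below are assumptions based on the
# commonly distributed Indian Pines files.
import scipy.io as sio

data = sio.loadmat('Data/Indian_pines.mat')['indian_pines']          # (145, 145, 220) cube
labels = sio.loadmat('Data/Indian_pines_gt.mat')['indian_pines_gt']  # (145, 145) ground truth
print(data.shape, labels.shape)
```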

Input data format

Each pixel is described by an N×N patch centered at that pixel. N denotes the size of the spatial context used for making the inference about a given pixel.

The input data was divided into a training set (75%) and a test set (25%).
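
The sketch below illustrates one way to implement this patch extraction and split; the function and variable names are ours, not the repository's, and `data`/`labels` are the arrays loaded above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def extract_patch(image, row, col, n):
    """Return the n x n spatial neighborhood centered at (row, col),
    zero-padding the image so border pixels also get full patches."""
    pad = n // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='constant')
    return padded[row:row + n, col:col + n, :]

# One patch per labeled pixel (label 0 marks unlabeled pixels in Indian Pines).
patches, targets = [], []
for r in range(labels.shape[0]):
    for c in range(labels.shape[1]):
        if labels[r, c] != 0:
            patches.append(extract_patch(data, r, c, 11))  # e.g. N = 11
            targets.append(labels[r, c] - 1)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(patches), np.array(targets), test_size=0.25, random_state=0)
```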

Hardware used

The neural networks were trained on a machine with dual Intel Xeon E5-2630 v2 CPUs, 32 GB of RAM, and an NVIDIA Tesla K20c GPU.


Multi-Layer Perceptron

A Multi-Layer Perceptron (MLP) is an artificial neural network with one or more hidden layers of neurons. An MLP is capable of modelling highly non-linear functions between the input and output, and forms the basis of deep neural network (DNN) models.

Architecture of Multi-Layer Perceptron used

input - [affine - relu] x 3 - affine - softmax

(Schematic representation below)

N denotes the size of the input patch.
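
A minimal tf.keras sketch of this architecture is given below. The repository itself builds the graph with lower-level TensorFlow ops, and the layer widths here are placeholders, not the original values.

```python
import tensorflow as tf

n, bands, num_classes = 11, 220, 16  # e.g. 11 x 11 patches over 220 bands

# input - [affine - relu] x 3 - affine - softmax
mlp = tf.keras.Sequential([
    tf.keras.Input(shape=(n * n * bands,)),  # flattened N x N patch
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
```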


Specifics of the learning algorithm

The following are the details of the learning algorithm used:

  • Parameter update algorithm: Adagrad
  • Batch size: 200
  • Learning rate: 0.01
  • Number of steps: until best validation performance
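
In tf.keras terms, this training setup looks roughly like the sketch below. "Until best validation performance" is approximated here with early stopping on a held-out split, which is our assumption about the stopping criterion.

```python
mlp.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

# Batch size 200 for the MLP (the CNN below uses 100); keep the weights from
# the epoch with the best validation accuracy.
mlp.fit(X_train.reshape(len(X_train), -1), y_train,
        batch_size=200, epochs=100, validation_split=0.1,
        callbacks=[tf.keras.callbacks.EarlyStopping(
            monitor='val_accuracy', patience=10, restore_best_weights=True)])
```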


Performance

Classification maps (decodings) generated for different input patch sizes:


Convolutional Neural Network

Convolutional Neural Networks (CNNs or ConvNets) are a special category of artificial neural networks designed for processing data with a grid-like structure. The ConvNet architecture is based on sparse interactions and parameter sharing, and is highly effective for the efficient learning of spatial invariances in images. There are four kinds of layers in a typical ConvNet architecture: convolutional (conv), pooling (pool), fully-connected (affine) and rectified linear unit (ReLU). Each convolutional layer transforms one set of feature maps into another by convolving them with a set of filters.

Architecture of Convolutional Neural Network used

input - [conv - relu - maxpool] x 2 - [affine - relu] x 2 - affine - softmax

(Schematic representation below)

N denotes the size of the input patch.
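
As with the MLP, here is a minimal tf.keras sketch of the architecture; the filter counts, kernel sizes, and fully-connected widths are placeholders rather than the repository's exact values.

```python
# input - [conv - relu - maxpool] x 2 - [affine - relu] x 2 - affine - softmax
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(n, n, bands)),  # N x N patch over all 220 bands
    tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(200, activation='relu'),
    tf.keras.layers.Dense(84, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
```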


Specifics of the learning algorithm

The following are the details of the learning algorithm used:

  • Parameter update algorithm: Adagrad
  • Batch size: 100
  • Learning rate: 0.01
  • Number of steps: until best validation performance


Performance

Classification maps (decodings) generated for different input patch sizes:



Description of the repository

  • IndianPines_DataSet_Preparation_Without_Augmentation.ipynb - performs the following operations:

    • Loads the Indian Pines dataset
    • Scales the input to the range [0, 1]
    • Mean-normalizes the channels
    • Makes the training and test splits
    • Extracts patches of the given size
    • Oversamples the training set to balance the classes
  • Spatial_dataset.py - provides a highly flexible Dataset class for handling the Indian Pines data.

  • patch_size.py - specify the required patch-size here.

  • IndianPinesCNN.ipynb - builds the TensorFlow Convolutional Neural Network and defines the training and evaluation ops:

    • inference() - builds the model as far as is required for running the network forward to make predictions.
    • loss() - adds to the inference model the layers required to generate loss.
    • training() - adds to the loss model the Ops required to generate and apply gradients.
    • evaluation() - calculates the classification accuracy.
  • CNN_feed.ipynb - trains and evaluates the Neural Network using a feed dictionary

  • Decoder_Spatial_CNN.ipynb - generates the landcover classification of an input hyperspectral image for a given trained network

  • IndianPinesMLP.py - builds the TensorFlow Multi-layer Perceptron and defines the training and evaluation ops:

    • inference() - builds the model as far as is required for running the network forward to make predictions.
    • loss() - adds to the inference model the layers required to generate loss.
    • training() - adds to the loss model the Ops required to generate and apply gradients.
    • evaluation() - calculates the classification accuracy.
  • MLP_feed.ipynb - trains and evaluates the MLP using a feed dictionary

  • Decoder_Spatial_MLP.ipynb - generates the landcover classification of an input hyperspectral image for a given trained network

  • credibility.ipynb - summarizes the predictions of an ensemble and produces the land-cover classification and class-wise confusion matrix.
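
For orientation, the four ops defined in IndianPinesCNN.ipynb and IndianPinesMLP.py compose in a TensorFlow 1.x feed-dictionary loop roughly as sketched below; the placeholder and batch variable names are illustrative, not the repository's exact code.

```python
logits = inference(images_placeholder)            # forward pass -> class scores
loss_op = loss(logits, labels_placeholder)        # softmax cross-entropy
train_op = training(loss_op, learning_rate=0.01)  # Adagrad gradient step
eval_op = evaluation(logits, labels_placeholder)  # correct-prediction count

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(max_steps):
        feed = {images_placeholder: batch_images, labels_placeholder: batch_labels}
        _, loss_value = sess.run([train_op, loss_op], feed_dict=feed)
```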


Setting up the experiment

  • Download the Indian Pines data-set from here.
  • Make a directory named Data within the current working directory and copy the downloaded .mat files, Indian_pines.mat and Indian_pines_gt.mat, into this directory.

To make sure all the code runs smoothly, your current working directory should contain the following directory subtree:

|-- IndianPines_DataSet_Preparation_Without_Augmentation.ipynb
|-- Decoder_Spatial_CNN.ipynb
|-- Decoder_Spatial_MLP.ipynb
|-- IndianPinesCNN.ipynb
|-- CNN_feed.ipynb
|-- MLP_feed.ipynb
|-- credibility.ipynb
|-- IndianPinesCNN.py
|-- IndianPinesMLP.py
|-- Spatial_dataset.py
|-- patch_size.py
|-- Data
|   |-- Indian_pines_gt.mat
|   |-- Indian_pines.mat


  • Set the required patch-size value (e.g. 11, 21, etc.) in patch_size.py (a minimal sketch of this file appears after this list) and run the following notebooks in order:
    1. IndianPines_DataSet_Preparation_Without_Augmentation.ipynb
    2. CNN_feed.ipynb OR MLP_feed.ipynb (specify the number of fragments in the training and test data in the variables TRAIN_FILES and TEST_FILES)
    3. Decoder_Spatial_CNN.ipynb OR Decoder_Spatial_MLP.ipynb (set the required checkpoint to be used for decoding in the model_name variable)
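
For reference, patch_size.py only needs to expose the chosen patch size; a minimal sketch follows (the exact variable name used inside the repository is an assumption):

```python
# patch_size.py - imported by the data-preparation and training notebooks.
patch_size = 11  # spatial context N; use 21, 31, ... for larger patches
```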

Outputs will be displayed in the notebooks.


Acknowledgement

This repository was developed by Anirban Santara, Ankit Singh, Pranoot Hatwar and Kaustubh Mani under the supervision of Prof. Pabitra Mitra during June–July 2016 at the Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, India. The project was funded by the Satellite Applications Centre, Indian Space Research Organisation (SAC-ISRO).
