[ICCV 2021 Oral] Deep Evidential Action Recognition

Overview

DEAR (Deep Evidential Action Recognition)

Project | Paper & Supp

Wentao Bao, Qi Yu, Yu Kong

International Conference on Computer Vision (ICCV Oral), 2021.

Table of Contents

  1. Introduction
  2. Installation
  3. Datasets
  4. Testing
  5. Training
  6. Model Zoo
  7. Citation

Introduction

We propose the Deep Evidential Action Recognition (DEAR) method to recognize actions in an open world. Specifically, we formulate the action recognition problem from the evidential deep learning (EDL) perspective and propose a novel model calibration method to regularize the EDL training. In addition, to mitigate the static bias of video representation, we propose a plug-and-play module to debias the learned representation through contrastive learning. Our DEAR models trained on the UCF-101 dataset achieve significant and consistent performance gains on multiple action recognition backbones, i.e., I3D, TSM, SlowFast, and TPN, with the HMDB-51 or MiT-v2 dataset as the unknown.
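
For intuition, below is a minimal PyTorch sketch of the core EDL computation: non-negative class evidence, Dirichlet concentration parameters, and a vacuity-based uncertainty score. This is our own illustration of the general EDL recipe, not the repo's exact implementation.

import torch
import torch.nn.functional as F

def edl_uncertainty(logits):
    # Non-negative evidence per class (ReLU is one common choice in EDL)
    evidence = F.relu(logits)
    # Dirichlet concentration parameters
    alpha = evidence + 1.0
    # Total Dirichlet strength per sample
    strength = alpha.sum(dim=-1, keepdim=True)
    # Expected class probabilities under the Dirichlet
    prob = alpha / strength
    # Vacuity uncertainty u = K / S: high when total evidence is low
    uncertainty = logits.shape[-1] / strength
    return prob, uncertainty

# example: prob, u = edl_uncertainty(torch.randn(4, 101))

A test clip is rejected as unknown when its uncertainty exceeds a threshold (see the Testing section below).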

Demo

The following figures show the inference results of the SlowFast + DEAR model trained on the UCF-101 dataset.

UCF-101 (Known): [demo clips]

HMDB-51 (Unknown): [demo clips]

Installation

This repo is developed from the MMAction2 codebase. Since MMAction2 is updated at a fast pace, most of the requirements and installation steps are the same as for MMAction2 v0.9.0.

Requirements and Dependencies

Here we only list the requirements and dependencies we used. Newer versions of the listed software and hardware may also work with the latest MMAction2 codebase, but we have only verified the versions below.

  • Linux: Ubuntu 18.04 LTS
  • GPU: GeForce RTX 3090, A100-SXM4
  • CUDA: 11.0
  • GCC: 7.5
  • Python: 3.7.9
  • Anaconda: 4.9.2
  • PyTorch: 1.7.1+cu110
  • TorchVision: 0.8.2+cu110
  • OpenCV: 4.4.0
  • MMCV: 1.2.1
  • MMAction2: 0.9.0

Installation Steps

The following steps are modified from the MMAction2 (v0.9.0) installation document. If you encounter problems, refer to the official document for more details, or raise an issue in this repo.

a. Create a conda virtual environment of this repo, and activate it:

conda create -n mmaction python=3.7 -y
conda activate mmaction

b. Install PyTorch and TorchVision following the official instructions, e.g.,

conda install pytorch=1.7.1 cudatoolkit=11.0 torchvision=0.8.2 -c pytorch

c. Install mmcv. We recommend installing the pre-built mmcv-full as below:

pip install mmcv-full==1.2.1 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.1/index.html

Important: If you have already installed mmcv and try to install mmcv-full, you have to uninstall mmcv first by running pip uninstall mmcv. Otherwise, you will get a ModuleNotFoundError.

d. Clone the source code of this repo:

git clone https://github.com/Cogito2012/DEAR.git mmaction2
cd mmaction2

e. Install build requirements and then install DEAR.

pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"

If no errors appear during these steps, you are all set!
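
As an optional sanity check (our own suggestion, not an official installation step), you can verify the key packages from a Python shell:

# run inside the activated conda environment
import torch
import mmcv
import mmaction

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("MMCV:", mmcv.__version__)
print("MMAction2:", mmaction.__version__)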

Datasets

This repo uses standard video action datasets: UCF-101 for closed set training, and the HMDB-51 and MiT-v2 test sets as two different unknowns. Please refer to the default MMAction2 dataset setup steps to set up these three datasets correctly.

Note: You can skip Step 3 (Extract RGB and Flow) in the referred setup steps, since none of the code related to our paper relies on extracted frames or optical flow. This will save you a large amount of disk space!
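
If you want to double-check the result, the snippet below is a hypothetical sanity check of the data layout; the directory names are assumptions based on a standard MMAction2 setup and may differ from yours:

import os

# Hypothetical MMAction2-style layout -- adjust to your actual setup
data_root = "data"
expected_dirs = [
    "ucf101",   # closed set videos
    "hmdb51",   # unknown test videos
    "mit",      # unknown test videos (MiT-v2)
]
for name in expected_dirs:
    path = os.path.join(data_root, name)
    print(f"{path}: {'found' if os.path.isdir(path) else 'MISSING'}")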

Testing

To test our pre-trained models (see the Model Zoo), you need to download a model file and unzip it under work_dirs. Let's take the I3D-based DEAR model as an example. First, download the pre-trained I3D-based models; the full DEAR model is saved in the folder finetune_ucf101_i3d_edlnokl_avuc_debias. The following directory tree is for your reference when placing the downloaded files.

work_dirs    
├── i3d
│    ├── finetune_ucf101_i3d_bnn
│    │   └── latest.pth
│    ├── finetune_ucf101_i3d_dnn
│    │   └── latest.pth
│    ├── finetune_ucf101_i3d_edlnokl
│    │   └── latest.pth
│    ├── finetune_ucf101_i3d_edlnokl_avuc_ced
│    │   └── latest.pth
│    ├── finetune_ucf101_i3d_edlnokl_avuc_debias
│    │   └── latest.pth
│    └── finetune_ucf101_i3d_rpl
│        └── latest.pth
├── slowfast
├── tpn_slowonly
└── tsm

a. Closed Set Evaluation.

Top-K accuracy and mean class accuracy will be reported.

cd experiments/i3d
bash evaluate_i3d_edlnokl_avuc_debias_ucf101.sh
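
For reference, mean class accuracy is the average of per-class recalls, which complements top-K accuracy when classes are imbalanced. The NumPy sketch below is our own illustration, not the repo's evaluation code:

import numpy as np

def mean_class_accuracy(pred, gt, num_classes):
    # Average the recall of each class that appears in the ground truth
    per_class = []
    for c in range(num_classes):
        mask = gt == c
        if mask.any():
            per_class.append((pred[mask] == c).mean())
    return float(np.mean(per_class))

pred = np.array([0, 1, 1, 2])
gt = np.array([0, 1, 2, 2])
print(mean_class_accuracy(pred, gt, num_classes=3))  # (1.0 + 1.0 + 0.5) / 3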

b. Get Uncertainty Threshold.

The uncertainty threshold value of the specified model will be reported.

cd experiments/i3d
# run the thresholding with BATCH_SIZE=2 on GPU_ID=0
bash run_get_threshold.sh 0 edlnokl_avuc_debias 2
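
To sketch the general idea behind this step (not necessarily the exact rule implemented in run_get_threshold.sh), a rejection threshold can be chosen as a high percentile of the uncertainty scores on known training data:

import numpy as np

def get_threshold(train_uncertainties, percentile=95.0):
    # Keep `percentile`% of known training clips below the threshold;
    # test clips above it are rejected as unknown.
    return float(np.percentile(train_uncertainties, percentile))

u_train = np.random.default_rng(0).random(10000)  # dummy uncertainties
print(get_threshold(u_train))  # ~0.95 for uniform dummy data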

c. Open Set Evaluation and Comparison.

The open set evaluation metrics and openness curves will be reported.

Note: Make sure the threshold values used for the different models come from the results reported in step b.

cd experiments/i3d
bash run_openness.sh HMDB  # use HMDB-51 test set as the Unknown
bash run_openness.sh MiT  # use MiT-v2 test set as the Unknown
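
To illustrate what the Open Set AUC measures, here is a self-contained sketch with dummy uncertainty scores (illustration only; the evaluation scripts compute this from real model outputs):

import numpy as np
from sklearn.metrics import roc_auc_score

# Dummy scores: known clips (UCF-101) should get low uncertainty,
# unknown clips (HMDB-51 or MiT-v2) high uncertainty.
rng = np.random.default_rng(0)
u_known = rng.beta(2, 8, size=1000)
u_unknown = rng.beta(8, 2, size=1000)

labels = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = unknown
scores = np.concatenate([u_known, u_unknown])
print("Open Set AUC: %.2f%%" % (100.0 * roc_auc_score(labels, scores)))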

d. Out-of-Distribution Detection.

The uncertainty distribution figure of the specified model will be generated.

cd experiments/i3d
bash run_ood_detection.sh 0 HMDB edlnokl_avuc_debias
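
The figure is essentially a histogram of predictive uncertainty for known versus unknown test clips; below is a matplotlib sketch with dummy data (illustration only):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
u_ind = rng.beta(2, 8, size=1000)   # dummy: in-distribution (UCF-101)
u_ood = rng.beta(8, 2, size=1000)   # dummy: out-of-distribution (HMDB-51)

plt.hist(u_ind, bins=50, alpha=0.6, label="UCF-101 (known)")
plt.hist(u_ood, bins=50, alpha=0.6, label="HMDB-51 (unknown)")
plt.xlabel("predictive uncertainty")
plt.ylabel("number of test clips")
plt.legend()
plt.savefig("uncertainty_distribution.png")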

e. Draw Open Set Confusion Matrix.

The confusion matrix for the specified unknown dataset will be reported.

cd experiments/i3d
bash run_draw_confmat.sh HMDB  # or MiT
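
In an open set confusion matrix, every unknown test clip is mapped to one extra "unknown" label. The sklearn sketch below uses dummy labels and is not the repo's plotting code:

import numpy as np
from sklearn.metrics import confusion_matrix

NUM_KNOWN = 101  # UCF-101 classes; index NUM_KNOWN is the extra "unknown" label

# Dummy labels: clips whose uncertainty exceeds the threshold are assigned
# the "unknown" label before the matrix is built.
y_true = np.array([0, 1, 1, NUM_KNOWN, NUM_KNOWN])
y_pred = np.array([0, 1, NUM_KNOWN, NUM_KNOWN, 2])

cm = confusion_matrix(y_true, y_pred, labels=list(range(NUM_KNOWN + 1)))
print(cm.shape)  # (102, 102): known classes plus one unknown row/column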

Training

Let's still take the I3D-based DEAR model as an example.

cd experiments/i3d
bash finetune_i3d_edlnokl_avuc_debias_ucf101.sh 0

Since model training is time consuming, we strongly recommend running the above training script in the background if you are using an SSH remote connection.

nohup bash finetune_i3d_edlnokl_avuc_debias_ucf101.sh 0 >train.log 2>&1 &
# monitor the training status whenever you open a new terminal
tail -f train.log

Visualizing the training curves (losses, accuracies, etc.) on TensorBoard:

cd work_dirs/i3d/finetune_ucf101_i3d_edlnokl_avuc_debias/tf_logs
tensorboard --logdir=./ --port 6008

Then open the generated URL http://localhost:6008 in your web browser (such as Chrome) to monitor the training status.

If you are using an SSH connection to a remote server without a monitor, TensorBoard visualization can be done on your local machine by forwarding the SSH port:

ssh -L 16008:localhost:6008 {your_remote_name}@{your_remote_ip}

Then you can monitor TensorBoard on port 16008 by opening http://localhost:16008 in your local browser.

Model Zoo

The pre-trained weights (checkpoints) are available below.

| Model | Checkpoint | Train Config | Test Config | Open maF1 (%) | Open Set AUC (%) | Closed Set ACC (%) |
|:--|:--|:--|:--|:--|:--|:--|
| I3D + DEAR | ckpt | train | test | 77.24 / 69.98 | 77.08 / 81.54 | 93.89 |
| TSM + DEAR | ckpt | train | test | 84.69 / 70.15 | 78.65 / 83.92 | 94.48 |
| TPN + DEAR | ckpt | train | test | 81.79 / 71.18 | 79.23 / 81.80 | 96.30 |
| SlowFast + DEAR | ckpt | train | test | 85.48 / 77.28 | 82.94 / 86.99 | 96.48 |

For the open set metrics, the two numbers in each cell are the results with HMDB-51 / MiT-v2 as the unknown, respectively.

For the checkpoints of the other compared baseline models, please download them from the Google Drive.

Citation

If you find the code useful in your research, please cite:

@inproceedings{BaoICCV2021DEAR,
  author = "Bao, Wentao and Yu, Qi and Kong, Yu",
  title = "Evidential Deep Learning for Open Set Action Recognition",
  booktitle = "International Conference on Computer Vision (ICCV)",
  year = "2021"
}

License

See Apache-2.0 License

Acknowledgement

In addition to the MMAction2 codebase, this repo contains modified code from several other open source repositories.

We sincerely thank the owners of all these great repos!
