《Fast Learning of Temporal Action Proposal via Dense Boundary Generator》(AAAI 2020)

Overview

Update

  • 2020.03.13: Release the complete TensorFlow and PyTorch versions of the DBG code.
  • 2019.11.12: Release the TensorFlow version of the DBG inference code.
  • 2019.11.11: DBG is accepted by AAAI 2020.
  • 2019.11.08: Our ensemble DBG ranks No. 1 on ActivityNet.

Introduction

In this repo, we propose a novel and unified action detection framework, named DBG, with superior performance over the state-of-the-art proposal generators BSN and BMN. You can use the code to evaluate DBG for action proposal generation or action detection. For more details, please refer to our paper Fast Learning of Temporal Action Proposal via Dense Boundary Generator!

Contents

Paper Introduction

[Figure: overview of the DBG framework]

This paper introduces a novel and unified temporal action proposal generator named Dense Boundary Generator (DBG). In this work, we propose a dual-stream BaseNet to generate two different levels of more discriminative features. We then adopt a temporal boundary classification module to predict precise temporal boundaries, and an action-aware completeness regression module to provide reliable action completeness confidence.
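To make the module layout concrete, below is a minimal PyTorch sketch of how the three pieces could be wired together. All channel sizes, the dual-stream fusion, and the toy stand-in for the custom proposal feature generation layer are our own simplifying assumptions for illustration, not the paper's exact design.

# Illustrative sketch only: channel sizes and the proposal feature generation
# step are simplified assumptions, not the exact DBG implementation.
import torch
import torch.nn as nn

class DualStreamBaseNet(nn.Module):
    """Two 1D-conv streams over snippet-level video features."""
    def __init__(self, in_dim=400, hid_dim=256):
        super().__init__()
        self.stream_low = nn.Sequential(
            nn.Conv1d(in_dim, hid_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hid_dim, hid_dim, 3, padding=1), nn.ReLU())
        self.stream_high = nn.Sequential(
            nn.Conv1d(hid_dim, hid_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hid_dim, hid_dim, 3, padding=1), nn.ReLU())

    def forward(self, x):                 # x: (B, in_dim, T)
        low = self.stream_low(x)          # lower-level features
        high = self.stream_high(low)      # higher-level features
        return low, high

class BoundaryClassifier(nn.Module):
    """Predicts per-snippet start/end probabilities."""
    def __init__(self, hid_dim=256):
        super().__init__()
        self.head = nn.Conv1d(hid_dim, 2, 1)

    def forward(self, feat):              # feat: (B, hid_dim, T)
        return torch.sigmoid(self.head(feat))   # (B, 2, T): start, end

class CompletenessRegressor(nn.Module):
    """Scores every (start, end) pair on a T x T confidence map."""
    def __init__(self, hid_dim=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(hid_dim, 128, 1), nn.ReLU(),
            nn.Conv2d(128, 1, 1))

    def forward(self, prop_feat):         # prop_feat: (B, hid_dim, T, T)
        return torch.sigmoid(self.head(prop_feat)).squeeze(1)  # (B, T, T)

def dense_proposal_features(feat):
    """Toy stand-in for the custom proposal feature generation layer:
    represent proposal (i, j) by the mean of its two endpoint features."""
    B, C, T = feat.shape
    start = feat.unsqueeze(3).expand(B, C, T, T)   # start index i
    end = feat.unsqueeze(2).expand(B, C, T, T)     # end index j
    return 0.5 * (start + end)

if __name__ == "__main__":
    x = torch.randn(2, 400, 100)                     # 2 videos, 100 snippets
    low, high = DualStreamBaseNet()(x)
    boundaries = BoundaryClassifier()(low)           # (2, 2, 100)
    confidence = CompletenessRegressor()(dense_proposal_features(high))  # (2, 100, 100)
    print(boundaries.shape, confidence.shape)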

ActivityNet-1.3 Results

[Figure: ActivityNet-1.3 results]

THUMOS14 Results

[Figure: THUMOS14 results]

Qualitative Results

Prerequisites

  • TensorFlow == 1.9.0 or PyTorch == 1.1
  • Python == 3.6
  • NVIDIA GPU (tested on Tesla P40)
  • Linux, CUDA 9.0, cuDNN
  • gcc 5

Getting Started

Installation

Clone the GitHub repository. We refer to the cloned directory as $DBG_ROOT.

cd $DBG_ROOT

First, compile the proposal feature generation layers for the framework you plan to use.

Compile tensorflow-version proposal feature generation layers:

cd tensorflow/custom_op
make

Compile pytorch-version proposal feature generation layers:

cd pytorch/custom_op
python setup.py install

Download Datasets

Prepare the ActivityNet 1.3 dataset. You can use the official ActivityNet downloader to download videos from YouTube. Since some videos have been deleted from YouTube, you can also request the whole dataset by email.

To extract visual features, we adopt a TSN model pretrained on the training set of ActivityNet. Please refer to the repo TSN-yjxiong to extract frames and optical flow, and to the repo anet2016-cuhk for the pretrained TSN model.

For convenient training and testing, we rescale the features of all videos to the same temporal length of 100, and we provide the 19,993 rescaled feature files on Google Cloud or Weiyun (微云). Put the features into the data/tsn_anet200 directory.
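If you extract your own snippet-level features, a common way to rescale a variable-length feature sequence to a fixed length of 100 is linear interpolation along the temporal axis. The snippet below is a sketch of that idea, not necessarily the exact rescaling used to produce the provided features.

# Sketch: rescale a (T, C) feature array to a fixed temporal length of 100
# via per-channel linear interpolation (assumed method, for illustration).
import numpy as np

def rescale_feature(feat, target_len=100):
    T, C = feat.shape
    src = np.linspace(0, 1, num=T)
    dst = np.linspace(0, 1, num=target_len)
    return np.stack([np.interp(dst, src, feat[:, c]) for c in range(C)], axis=1)

feat = np.random.rand(240, 400)          # e.g. 240 snippets x 400-dim TSN feature
print(rescale_feature(feat).shape)       # (100, 400)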

To generate the video features yourself, the scripts in ./tools will help you start from scratch.

Testing of DBG

If you don't want to train the model, you can run the testing code directly using the pretrained model.

The pretrained model is included in output/pretrained_model, and its parameters are set in config/config_pretrained.yaml. Please check feat_dir in config/config_pretrained.yaml, then use the scripts below to run DBG.
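Before running, a quick sanity check that feat_dir points at an existing directory can save a failed run. This sketch assumes the config is plain YAML with a top-level feat_dir key; adjust the key path if the config is nested.

# Sketch: confirm feat_dir in the config resolves to an existing directory.
import os, yaml

with open("config/config_pretrained.yaml") as f:
    cfg = yaml.safe_load(f)
feat_dir = cfg.get("feat_dir", "")       # adjust the key path if the config is nested
print("feat_dir =", feat_dir, "| exists:", os.path.isdir(feat_dir))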

# TensorFlow version (AUC result = 68.37%):
python tensorflow/test.py config/config_pretrained.yaml
python post_processing.py output/result/ results/result_proposals.json
python eval.py results/result_proposals.json

# PyTorch version (AUC result = 68.26%):
python pytorch/test.py config/config_pretrained.yaml
python post_processing.py output/result/ results/result_proposals.json
python eval.py results/result_proposals.json
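To spot-check the generated proposals, you can load results/result_proposals.json and print the top-scoring segments per video. This sketch assumes the file follows the standard ActivityNet submission layout ({"results": {video_id: [{"segment": [start, end], "score": ...}, ...]}}); check your actual output if the keys differ.

# Sketch: print the top-3 proposals of the first few videos from the output JSON.
import json

with open("results/result_proposals.json") as f:
    results = json.load(f)["results"]
for vid, props in list(results.items())[:3]:
    top = sorted(props, key=lambda p: p["score"], reverse=True)[:3]
    print(vid, [(round(p["segment"][0], 1), round(p["segment"][1], 1),
                 round(p["score"], 3)) for p in top])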

Training of DBG

We also provide training code for both the TensorFlow and PyTorch versions. Please check feat_dir in config/config.yaml and follow these steps to train your model:

1. Training

# TensorFlow version:
python tensorflow/train.py config/config.yaml

# PyTorch version:
python pytorch/train.py config/config.yaml

2. Testing

# TensorFlow version:
python tensorflow/test.py config/config.yaml

# PyTorch version:
python pytorch/test.py config/config.yaml

3. Postprocessing

python post_processing.py output/result/ results/result_proposals.json

4. Evaluation

python eval.py results/result_proposals.json
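Post-processing for dense proposal maps typically fuses boundary and completeness scores and then applies Soft-NMS to suppress redundant segments. The following is a generic Soft-NMS sketch for temporal proposals, included for illustration; the repo's post_processing.py may use different score fusion and decay parameters.

# Generic Soft-NMS sketch for (start, end, score) proposals (illustrative only).
import math

def temporal_iou(a, b):
    # Intersection-over-union of two (start, end, ...) intervals.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def soft_nms(proposals, sigma=0.4, top_k=100):
    # proposals: list of (start, end, score); returns re-scored, sorted proposals.
    props = sorted(proposals, key=lambda p: p[2], reverse=True)
    kept = []
    while props and len(kept) < top_k:
        best = props.pop(0)
        kept.append(best)
        # Gaussian decay of scores for proposals overlapping the one just kept.
        props = [(s, e, sc * math.exp(-temporal_iou((s, e), best) ** 2 / sigma))
                 for s, e, sc in props]
        props.sort(key=lambda p: p[2], reverse=True)
    return kept

print(soft_nms([(0.10, 0.50, 0.90), (0.12, 0.52, 0.85), (0.60, 0.90, 0.70)]))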

Citation

If you find DBG useful in your research, please consider citing:

@inproceedings{DBG2020arXiv,
  author    = {Chuming Lin* and Jian Li* and Yabiao Wang and Ying Tai and Donghao Luo and Zhipeng Cui and Chengjie Wang and Jilin Li and Feiyue Huang and Rongrong Ji},
  title     = {Fast Learning of Temporal Action Proposal via Dense Boundary Generator},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
}

Contact

For any questions, please file an issue or contact:

Jian Li: [email protected]
Chuming Lin: [email protected]