TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks
This repository holds the source code, pretrained models, and pre-extracted features for the TSP method.
Please cite this work if you find TSP useful for your research.
@article{alwassel2020tsp,
  title={TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks},
  author={Alwassel, Humam and Giancola, Silvio and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2011.11479},
  year={2020}
}
Pre-extracted TSP Features
We provide pre-extracted features for ActivityNet v1.3 and THUMOS14 videos. The feature files are saved in H5 format, where each video name is mapped to a feature tensor of size N x 512, where N is the number of features and 512 is the feature dimension. Use the h5py Python package to read the feature files. Not familiar with H5 files or h5py? Here is a quick start guide.
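For example, here is a minimal sketch of reading one video's features with h5py (the file name below is a placeholder for whichever feature file you downloaded):

import h5py

# Open a downloaded feature file (placeholder name) and list the video names it contains.
with h5py.File('tsp_features.h5', 'r') as f:
    video_names = list(f.keys())
    # Each entry is a feature tensor of shape (N, 512) for that video.
    features = f[video_names[0]][:]
    print(video_names[0], features.shape)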
For ActivityNet v1.3 dataset
Download: [train subset] [valid subset] [test subset]
Details: The features are extracted from the R(2+1)D-34 encoder pretrained with TSP on ActivityNet (released model) using clips of 16 frames at a frame rate of 15 fps and a stride of 16 frames (i.e., non-overlapping clips). This gives one feature vector per 16/15 ~= 1.067 seconds.
For THUMOS14 dataset
Download: [valid subset] [test subset]
Details: The features are extracted from the R(2+1)D-34 encoder pretrained with TSP on THUMOS14 (released model) using clips of 16 frames at a frame rate of 15 fps and a stride of 1 frame (i.e., dense overlapping clips). This gives one feature vector per 1/15 ~= 0.067 seconds.
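For either dataset, the start time of the i-th feature vector follows from the clip stride and the frame rate. Here is a small sketch of that arithmetic (the function name is ours, for illustration only):

# Start time (in seconds) of the clip that produced the i-th feature vector.
def feature_start_time(i, stride_frames, fps=15):
    return i * stride_frames / fps

print(feature_start_time(10, stride_frames=16))  # ActivityNet features: 10 * 16/15 ~= 10.67 s
print(feature_start_time(10, stride_frames=1))   # THUMOS14 features: 10 * 1/15 ~= 0.67 s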
Setup
Clone this repository and create the conda environment.
git clone https://github.com/HumamAlwassel/TSP.git
cd TSP
conda env create -f environment.yml
conda activate tsp
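As an optional sanity check (our suggestion, not part of the repository), you can verify that the activated environment imports PyTorch and detects your GPU:

# Quick sanity check from within the activated tsp environment (versions will vary).
import torch
print(torch.__version__, torch.cuda.is_available())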
Data Preprocessing
Follow the instructions here to download and preprocess the input data.
Training
We provide training scripts for the TSP models and the TAC baselines here.
Feature Extraction
You can extract features from released pretrained models or from local checkpoints using the scripts here.
Acknowledgment: Our source code borrows implementation ideas from the pytorch/vision and facebookresearch/VMZ repositories.

