SwinTrack: A Simple and Strong Baseline for Transformer Tracking

Overview

SwinTrack

This is the official repo for SwinTrack.

A Simple and Strong Baseline

(figure: performance comparison)

Prerequisites

Environment

conda (recommended)

conda create -y -n SwinTrack
conda activate SwinTrack
conda install -y anaconda
conda install -y pytorch torchvision cudatoolkit -c pytorch
conda install -y -c fvcore -c iopath -c conda-forge fvcore
pip install wandb
pip install timm
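
Optionally, a quick sanity check (our addition, not part of the original setup) to confirm that PyTorch was installed with CUDA support:

# optional: print the PyTorch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"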

pip

pip install -r requirements.txt

Dataset

Download

Download LaSOT, LaSOT Extension, GOT-10k, TrackingNet, and COCO 2017.

Unzip

Unzip the downloaded archives.

The paths should be organized as follows:

lasot
├── airplane
├── basketball
...
├── training_set.txt
└── testing_set.txt

lasot_extension
├── atv
├── badminton
...
└── wingsuit

got-10k
├── train
│   ├── GOT-10k_Train_000001
│   ...
├── val
│   ├── GOT-10k_Val_000001
│   ...
└── test
    ├── GOT-10k_Test_000001
    ...
    
trackingnet
├── TEST
├── TRAIN_0
...
└── TRAIN_11

coco2017
├── annotations
│   ├── instances_train2017.json
│   └── instances_val2017.json
└── images
    ├── train2017
    │   ├── 000000000009.jpg
    │   ├── 000000000025.jpg
    │   ...
    └── val2017
        ├── 000000000139.jpg
        ├── 000000000285.jpg
        ...

Prepare path.yaml

Copy path.template.yaml to path.yaml and fill in the paths.

LaSOT_PATH: '/path/to/lasot'
LaSOT_Extension_PATH: '/path/to/lasot_ext'
GOT10k_PATH: '/path/to/got10k'
TrackingNet_PATH: '/path/to/trackingnet'
COCO_2017_PATH: '/path/to/coco2017'
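
For example, from the repository root (assuming path.template.yaml sits at the top level of the repo):

# create your local config from the template, then edit the dataset paths in it
cp path.template.yaml path.yaml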

Prepare dataset metadata cache (optional)

Download the metadata cache from Google Drive, then unzip it into datasets/cache/:

datasets
└── cache
    ├── SingleObjectTrackingDataset_MemoryMapped
    │   └── filtered
    │       ├── got-10k-got10k_vot_train_split-train-3c1ffeb0c530522f0345d088b2f72168.np
    │       ...
    └── DetectionDataset_MemoryMapped
        └── filtered
            └── coco2017-nocrowd-train-bcd5bf68d4b87619ab451fe293098401.np
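
A minimal sketch of the unpacking step, assuming the downloaded archive is named metadata_cache.zip (the actual file name may differ):

# hypothetical archive name; adjust to match the file downloaded from Google Drive
mkdir -p datasets/cache
unzip metadata_cache.zip -d datasets/cache/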

Login to wandb

Register an account at wandb, then log in with the command:

wandb login

Training & Evaluation

Train and evaluate on a single GPU

# Tiny
python main.py SwinTrack Tiny --output_dir /path/to/output -W $num_dataloader_workers

# Base
python main.py SwinTrack Base --output_dir /path/to/output -W $num_dataloader_workers

# Base-384
python main.py SwinTrack Base-384 --output_dir /path/to/output -W $num_dataloader_workers

--output_dir is optional; -W defaults to 4.

Note: our code runs evaluation automatically when training finishes; the output is saved in /path/to/output/test_metrics.
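
For example, a concrete invocation with hypothetical values filled in for the placeholders:

# example: train and evaluate the Tiny variant with 8 dataloader workers
python main.py SwinTrack Tiny --output_dir ./output/swintrack-tiny -W 8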

Train and evaluate on multiple GPUs using DDP

# Tiny
python main.py SwinTrack Tiny --distributed_nproc_per_node $num_gpus --distributed_do_spawn_workers --output_dir /path/to/output -W $num_dataloader_workers

Train and evaluate on multiple nodes with multiple GPUs using DDP

# Tiny
python main.py SwinTrack Tiny --master_address $master_address --distributed_node_rank $node_rank --distributed_nnodes $num_nodes --distributed_nproc_per_node $num_gpus --distributed_do_spawn_workers --output_dir /path/to/output -W $num_dataloader_workers
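
For instance, a hypothetical two-node run with 4 GPUs per node (the same command is launched on every node, changing only $node_rank; the IP address and worker count are illustrative):

# on node 0 (the master)
python main.py SwinTrack Tiny --master_address 192.168.1.10 --distributed_node_rank 0 --distributed_nnodes 2 --distributed_nproc_per_node 4 --distributed_do_spawn_workers --output_dir /path/to/output -W 4
# on node 1
python main.py SwinTrack Tiny --master_address 192.168.1.10 --distributed_node_rank 1 --distributed_nnodes 2 --distributed_nproc_per_node 4 --distributed_do_spawn_workers --output_dir /path/to/output -W 4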

Train and evaluate with the run.sh helper script

# Train and evaluate on all GPUs
./run.sh SwinTrack Tiny --output_dir /path/to/output -W $num_dataloader_workers
# Train and evaluate on multiple nodes
NODE_RANK=$NODE_INDEX NUM_NODES=$NUM_NODES MASTER_ADDRESS=$MASTER_ADDRESS DATE_WITH_TIME=$DATE_WITH_TIME ./run.sh SwinTrack Tiny --output_dir /path/to/output -W $num_dataloader_workers 
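
For example, a hypothetical two-node launch via run.sh (set NODE_RANK accordingly on each node; the DATE_WITH_TIME format used here is an assumption, and the IP address is illustrative):

# run on every node, changing only NODE_RANK; DATE_WITH_TIME must be identical on all nodes
NODE_RANK=0 NUM_NODES=2 MASTER_ADDRESS=192.168.1.10 DATE_WITH_TIME=2021-12-01-12-00-00 ./run.sh SwinTrack Tiny --output_dir /path/to/output -W 4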

Ablation study

Ablation studies can be done by applying a small patch (mixin) to the main config file.

Take the ResNet-50 backbone as an example; the remaining parameters are the same as above.

# Train and evaluate with resnet50 backbone
python main.py SwinTrack Tiny --mixin_config resnet.yaml
# or with run.sh
./run.sh SwinTrack Tiny --mixin resnet.yaml

All available config patches are listed in config/SwinTrack/Tiny/mixin.
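
To see what is available on your checkout:

# list the available config patches for the Tiny variant
ls config/SwinTrack/Tiny/mixin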

Train and evaluate with GOT-10k dataset

python main.py SwinTrack Tiny --mixin_config got10k.yaml

Submit $output_dir/test_metrics/got10k/submit/*.zip to the GOT-10k evaluation server to get the results on the GOT-10k test split.

Evaluate Existing Model

Download the pretrained model from Google Drive, then run:

python main.py SwinTrack Tiny --weight_path /path/to/weight_file.pth --mixin_config evaluation.yaml --output_dir /path/to/output

Our code can evaluate the model on multiple GPUs in parallel, so all of the distributed-training parameters above are also available here.
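
For example, a hypothetical evaluation run on 4 GPUs of a single machine, combining the flags shown earlier:

# evaluate an existing checkpoint on 4 GPUs in parallel
python main.py SwinTrack Tiny --weight_path /path/to/weight_file.pth --mixin_config evaluation.yaml --output_dir /path/to/output --distributed_nproc_per_node 4 --distributed_do_spawn_workers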

Tracking results

The raw tracking results are available on Google Drive.

Citation

@misc{lin2021swintrack,
      title={SwinTrack: A Simple and Strong Baseline for Transformer Tracking}, 
      author={Liting Lin and Heng Fan and Yong Xu and Haibin Ling},
      year={2021},
      eprint={2112.00995},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}