A flexible and extensible framework for gait recognition.


Note: This code is for academic purposes only; it may not be used for anything that might be considered commercial use.

OpenGait

OpenGait is a flexible and extensible gait recognition project provided by the Shiqi Yu Group and supported in part by WATRIX.AI. Only the pre-beta version has been released so far; more documentation and reproduced methods will be provided as soon as possible.

Highlighted features:

  • Multiple Models Support: We reproduced several SOTA methods and reached the same or even better performance.
  • DDP Support: The officially recommended Distributed Data Parallel (DDP) mode is used during both the training and testing phases (see the sketch after this list).
  • AMP Support: The Automatic Mixed Precision (AMP) option is available.
  • Readable Logs: We use tensorboard and Python's logging module to record everything in a clean, readable format.
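
The snippet below is a minimal, generic sketch of what DDP plus AMP training looks like in PyTorch 1.6+; it is illustrative only and is not OpenGait's actual training loop (the toy model, data, and hyperparameters are assumptions).

    import torch
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    # Generic DDP + AMP sketch; launch with:
    #   python -m torch.distributed.launch --nproc_per_node=N this_script.py
    torch.distributed.init_process_group(backend='nccl', init_method='env://')
    local_rank = torch.distributed.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(64, 10).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler()              # AMP loss scaler

    for _ in range(10):                               # dummy training steps
        x = torch.randn(8, 64, device=local_rank)
        y = torch.randint(0, 10, (8,), device=local_rank)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():               # mixed-precision forward pass
            loss = nn.functional.cross_entropy(model(x), y)
        scaler.scale(loss).backward()                 # scaled backward for fp16 stability
        scaler.step(optimizer)
        scaler.update()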

Model Zoo

| Model | NM | BG | CL | Configuration | Input Size | Inference Time | Model Size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 96.3 | 92.2 | 77.6 | baseline.yaml | 64x44 | 12s | 3.78M |
| GaitSet (AAAI 2019) | 95.8 (95.0) | 90.0 (87.2) | 75.4 (70.4) | gaitset.yaml | 64x44 | 11s | 2.59M |
| GaitPart (CVPR 2020) | 96.1 (96.2) | 90.7 (91.5) | 78.7 (78.7) | gaitpart.yaml | 64x44 | 22s | 1.20M |
| GLN* (ECCV 2020) | 96.1 (95.6) | 92.5 (92.0) | 80.4 (77.2) | gln_phase1.yaml, gln_phase2.yaml | 128x88 | 14s | 9.46M / 15.6214M |
| GaitGL (ICCV 2021) | 97.5 (97.4) | 95.1 (94.5) | 83.5 (83.6) | gaitgl.yaml | 64x44 | 31s | 3.10M |

The results in parentheses are those reported in the original papers.

Note:

  • All the models were tested on CASIA-B (Rank@1, excluding identical-view cases).
  • The reported GLN results are obtained without the compact block.
  • Only 2 RTX 6000 GPUs are used during the inference phase.
  • The results on OUMVLP will be released soon; its inference takes only about 90 seconds (Baseline, 8 RTX 6000 GPUs).

Get Started

Installation

  1. Clone this repo:

    git clone https://github.com/ShiqiYu/OpenGait.git
    
  2. Install dependencies:

    • pytorch >= 1.6
    • torchvision
    • pyyaml
    • tensorboard
    • opencv-python
    • tqdm

    Install dependencies with Anaconda:

    conda install tqdm pyyaml tensorboard opencv
    conda install pytorch==1.6.0 torchvision -c pytorch
    

    Or install dependencies with pip:

    pip install tqdm pyyaml tensorboard opencv-python
    pip install torch==1.6.0 torchvision==0.7.0
    
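
    Optionally, verify the installation with a quick sanity check (an illustrative sketch, not part of the repo):

        # check_env.py -- hypothetical helper, run with: python check_env.py
        import torch, torchvision, yaml, cv2, tqdm   # imports also confirm the packages are installed

        print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
        print("CUDA available:", torch.cuda.is_available(), "| GPUs:", torch.cuda.device_count())
        print("torch.distributed available:", torch.distributed.is_available())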

Prepare dataset

See prepare dataset.

Train

Train a model by

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train
  • python -m torch.distributed.launch: our implementation uses DistributedDataParallel, so training is launched through torch.distributed.launch.
  • --nproc_per_node: the number of GPUs to use; it must equal the number of devices listed in CUDA_VISIBLE_DEVICES.
  • --cfgs: the path to the config file.
  • --phase: set to train.
  • --iter: specify an iteration number to resume training from, or use restore_hint in the configuration file.
  • --log_to_file: if specified, the log is also written to disk.

You can run commands in train.sh for training different models.

Test

Evaluate with the trained model by

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase test
  • --phase: set to test.
  • --iter: specify an iteration number to restore the model from, or use restore_hint in the configuration file.

Tip: The other arguments are the same as in the train phase.

You can run commands in test.sh for testing different models.
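
For reference, the Rank@1 numbers in the Model Zoo come from an identification protocol like the one sketched below; this is an illustrative simplification, not the repo's evaluation.py (the function name and array shapes are assumptions).

    import numpy as np

    def rank1(probe_feat, probe_lab, probe_view, gallery_feat, gallery_lab, gallery_view):
        # Euclidean distances between every probe and gallery feature: [P, G]
        dist = np.linalg.norm(probe_feat[:, None] - gallery_feat[None], axis=-1)
        # Exclude identical-view probe/gallery pairs, as in the CASIA-B protocol
        dist[probe_view[:, None] == gallery_view[None]] = np.inf
        # Rank-1 prediction: label of the nearest remaining gallery sample
        pred = gallery_lab[dist.argmin(axis=1)]
        return (pred == probe_lab).mean() * 100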

Customize

If you want to customize your own model, see here.

Warning

  • Some models may not be compatible with AMP; you can disable it by setting enable_float16 to False.
  • In DDP mode, zombie processes may be left behind when the program terminates abnormally. You can clear them with kill $(ps aux | grep main.py | grep -v grep | awk '{print $2}').
  • We implemented testing while training, but it slightly affects the results. None of our published models use this functionality; you can disable it by setting with_test to False.

Authors:

Open Gait Team (OGT)

Acknowledgement

Comments
  • Implementation details of the GLN network

    Implementation details of the GLN network

    Detailed description

    In the paper, the lateral connections use 1×1 convolutions, but the code implements them with a 3×3 kernel. Does this implementation give better performance? Just asking out of curiosity.

    Then at each stage, a 1 × 1 convolutional layer is taken to rearrange the features and adjust the channel dimension

    lateral_layer code:

    https://github.com/ShiqiYu/OpenGait/blob/6a469fcd8d0fbe23262ec3d751cf74ec341f17b6/lib/modeling/models/gln.py#L54-L59
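
    For context, the two alternatives being compared are simply the following (the channel sizes here are illustrative assumptions, not the repo's values):

        import torch.nn as nn

        lateral_1x1 = nn.Conv2d(128, 256, kernel_size=1)             # 1x1 lateral layer, as described in the paper
        lateral_3x3 = nn.Conv2d(128, 256, kernel_size=3, padding=1)  # 3x3 lateral layer, as in the linked gln.py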

    opened by luwanglin 12
  • Unable to find a valid cuDNN algorithm to run convolution

    Unable to find a valid cuDNN algorithm to run convolution

    I tested on a laptop with only one GPU, so I ran CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/gaitset.yaml --phase train. It fails with:

        Traceback (most recent call last):
          File "lib/main.py", line 66, in <module>
            run_model(cfgs, training)
          File "lib/main.py", line 49, in run_model
            Model.run_train(model)
          File "/home/yq/OpenGait/lib/modeling/base_model.py", line 371, in run_train
            model.train_step(loss_sum)
          File "/home/yq/OpenGait/lib/modeling/base_model.py", line 307, in train_step
            self.Scaler.scale(loss_sum).backward()
          File "/home/yq/anaconda3/envs/pt14/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
            torch.autograd.backward(self, gradient, retain_graph, create_graph)
          File "/home/yq/anaconda3/envs/pt14/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
            allow_unreachable=True)  # allow_unreachable flag
        RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
        Exception raised from try_all at /pytorch/aten/src/ATen/native/cudnn/Conv.cpp:692 (most recent call first)

    What is going on here?

    documentation 
    opened by scyzero 10
  • Some confusion about GaitEdge forward

    Some confusion about GaitEdge forward

    In the inference of GaitEdge, why does it still need the silhouette as input? Can't it get the silhouette from the output of the segmentation U-Net? Also, I don't understand why it needs the ratios as input. Isn't the end-to-end network supposed to take just an RGB image as input?

    def forward(self, inputs):
            ipts, labs, _, _, seqL = inputs
    
            ratios = ipts[0]
            rgbs = ipts[1]
            sils = ipts[2]
    
            n, s, c, h, w = rgbs.size()
            rgbs = rgbs.view(n*s, c, h, w)
            sils = sils.view(n*s, 1, h, w)
            logis = self.Backbone(rgbs)  # [n, s, c, h, w]
            logits = torch.sigmoid(logis)
            mask = torch.round(logits).float()
            if self.is_edge:
                edge_mask, eroded_mask = self.preprocess(sils)
    
                # Gait Synthesis
                new_logits = edge_mask*logits+eroded_mask*sils
    
                if self.align:
                    cropped_logits = self.gait_align(
                        new_logits, sils, ratios)
                else:
                    cropped_logits = self.resize(new_logits)
            else:
                if self.align:
                    cropped_logits = self.gait_align(
                        logits, mask, ratios)
                else:
                    cropped_logits = self.resize(logits)
            _, c, H, W = cropped_logits.size()
            cropped_logits = cropped_logits.view(n, s, H, W)
            retval = super(GaitEdge, self).forward(
                [[cropped_logits], labs, None, None, seqL])
            retval['training_feat']['bce'] = {'logits': logits, 'labels': sils}
            retval['visual_summary']['image/roi'] = cropped_logits.view(
                n*s, 1, H, W)
    
            return retval
    
    opened by enemy1205 9
  • Can the code run in a Windows 11 Anaconda environment?

    Can the code run in a Windows 11 Anaconda environment?

    System information (version)

    • Pytorch = 1.6.0
    • Operating System / Platform = Windows 11
    • Cuda = 10.1.243

    Detailed description

    The problem occurs when I run the code on my computer with the above system information. I created a new conda env and cloned this code into it, but I cannot run the code; the output is below: (opengait) C:\Users\Lenovo\OpenGait>set CUDA_VISIBLE_DEVICES=1 & python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train


    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


    Traceback (most recent call last):
    Traceback (most recent call last):
      File "lib/main.py", line 58, in <module>
      File "lib/main.py", line 58, in <module>
        torch.distributed.init_process_group('nccl', init_method='env://')
        torch.distributed.init_process_group('nccl', init_method='env://')
    AttributeError: module 'torch.distributed' has no attribute 'init_process_group'
    AttributeError: module 'torch.distributed' has no attribute 'init_process_group'
    Traceback (most recent call last):
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\site-packages\torch\distributed\launch.py", line 261, in <module>
        main()
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\site-packages\torch\distributed\launch.py", line 256, in main
        raise subprocess.CalledProcessError(returncode=process.returncode,
    subprocess.CalledProcessError: Command '['C:\Users\Lenovo\anaconda3\envs\opengait\python.exe', '-u', 'lib/main.py', '--local_rank=1', '--cfgs', './config/baseline.yaml', '--phase', 'train']' returned non-zero exit status 1.

    opened by robot-droid 9
  • Error when testing with the baseline model

    Error when testing with the baseline model

    System information (version)

    Detailed description

    [2022-02-17 12:42:57] [INFO]: {'dataset_name': 'CASIA-B', 'dataset_root': './dataset_output', 'num_workers': 1, 'dataset_partition': './misc/partitions/CASIA-B_include_005.json', 'remove_no_gallery': False, 'cache': False, 'test_dataset_name': 'CASIA-B'}
    [2022-02-17 12:42:57] [INFO]: -------- Test Pid List --------
    [2022-02-17 12:42:57] [INFO]: ['075']
    [2022-02-17 12:43:00] [INFO]: Restore Parameters from output/CASIA-B/Baseline/Baseline/checkpoints/Baseline-60000.pt !!!
    [2022-02-17 12:43:00] [INFO]: Parameters Count: 3.77914M
    [2022-02-17 12:43:00] [INFO]: Model Initialization Finished!
    Transforming: 100%|██████████| 110/110 [00:01<00:00, 76.67it/s]
    Traceback (most recent call last):
      File "lib/main.py", line 70, in <module>
        run_model(cfgs, training)
      File "lib/main.py", line 55, in run_model
        Model.run_test(model)
      File "/home/lalaland/PycharmProjects/OpenGait_/lib/modeling/base_model.py", line 470, in run_test
        return eval_func(info_dict, dataset_name, **valid_args)
      File "/home/lalaland/PycharmProjects/OpenGait_/lib/utils/evaluation.py", line 79, in identification
        0) * 100 / dist.shape[0], 2)
    ValueError: could not broadcast input array from shape (4,) into shape (5,)
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 10590) of binary: /home/lalaland/anaconda3/envs/torch/bin/python
    Traceback (most recent call last):
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
        )(*cmd_args)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    lib/main.py FAILED

    Failures: <NO_OTHER_FAILURES>

    Root Cause (first observed failure):
    [0]:
      time       : 2022-02-17_12:43:06
      host       : lalaland-Predator-PH317-55
      rank       : 0 (local_rank: 0)
      exitcode   : 1 (pid: 10590)
      error_file : <N/A>
      traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

    question 
    opened by SerJamie 8
  • What does this function do?

    What does this function do?

    https://github.com/ShiqiYu/OpenGait/blob/49cbc44069878210e11117697be507921db1e3fa/lib/modeling/modules.py#L34 At the end of this function, isn't return x.view(*_) equivalent to return x.view(n, s, c, h, w)? If so, the function doesn't seem to change anything, so what does it actually do? I don't quite understand it; any guidance would be appreciated.
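
    As a rough illustration, a set-wise wrapper of this kind usually flattens the sequence dimension before a per-frame 2D module and restores it afterwards, so the final view is not a no-op once the module has changed the channel or spatial sizes. The sketch below is only an assumption about the pattern (names and shapes are illustrative, not the repo's exact code):

        import torch
        import torch.nn as nn

        class SetBlockWrapper(nn.Module):
            """Apply a per-frame 2D module to a 5-D sequence tensor [n, s, c, h, w]."""
            def __init__(self, frame_module):
                super().__init__()
                self.frame_module = frame_module

            def forward(self, x):
                n, s, c, h, w = x.size()
                x = x.view(n * s, c, h, w)         # merge batch and sequence dims
                x = self.frame_module(x)           # the 2D module may change c, h, w
                _, c2, h2, w2 = x.size()
                return x.view(n, s, c2, h2, w2)    # restore the sequence dim with the new sizes

        wrapper = SetBlockWrapper(nn.Conv2d(1, 32, kernel_size=3, padding=1))
        out = wrapper(torch.randn(2, 10, 1, 64, 44))   # -> [2, 10, 32, 64, 44]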

    question 
    opened by HiAleeYang 8
  • Training GaitGL on OUMVLP with the default config is too slow to tolerate...

    Training GaitGL on OUMVLP with the default config is too slow to tolerate...

    I tried GaitGL on the OUMVLP dataset, with 6000 IDs for training and the rest for testing, using the default config on 8 Tesla V100s; it costs 165 seconds per 100 iterations. If I want 10 epochs, time = 10 (epochs) x 130000 (seqs) / 8 (batch size) / 100 x 165 s ≈ 3.1 days.

    I guess computing the triplet loss might cost a lot of time, so I'm trying to modify the training procedure so that the triplet loss only takes effect in later epochs, while early epochs compute only the ID loss, namely the softmax loss (a sketch of this idea follows below).

    Are there any other ways to speed it up?

    Thanks.
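
    A minimal sketch of the loss schedule described above might look like this (the function, the triplet_fn signature, and the threshold are hypothetical, not the repo's code):

        import torch.nn as nn

        def combined_loss(embeddings, logits, labels, iteration,
                          triplet_start_iter=50000, triplet_fn=None,
                          ce_fn=nn.CrossEntropyLoss()):
            # Always compute the cheap ID (softmax) loss
            loss = ce_fn(logits, labels)
            # Only add the triplet loss once training reaches the threshold iteration
            if triplet_fn is not None and iteration >= triplet_start_iter:
                loss = loss + triplet_fn(embeddings, labels)
            return loss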

    opened by silverlining21 8
  • GREW pretreatment `to_pickle` has size 0

    GREW pretreatment `to_pickle` has size 0

    System information (version)

    • Pytorch => 1.11
    • Operating System / Platform => Ubuntu 20.04
    • Cuda => 11.3

    Detailed description

    I'm trying to run the GREW pretreatment code, but it generates no GREW-pkl folder at the end of the process. I debugged it myself and checked whether the --dataset flag is set properly, and checked the size of the to_pickle list before the pickle file is saved. The flag is set correctly, but the size of the list is always 0.

    I downloaded the GREW dataset from the link you sent me and made the GREW-rearranged folder using the code provided. I'll keep investigating what is causing this error, and if I find the cause I'll submit a fixing PR.

    Issue submission checklist

    • [x] I checked the problem with documentation, FAQ, issues, and have not found solution.
    no-issue-activity 
    opened by gosiqueira 7
  • What does 'SeqL' mean?

    What does 'SeqL' mean?

    Hi, https://github.com/ShiqiYu/OpenGait/blob/49cbc44069878210e11117697be507921db1e3fa/lib/modeling/modules.py#L61 I guess it means the length of each sequence. What does seqL[0] mean? Also, what is the shape of seqL, and what does each of its dimensions correspond to?
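
    One plausible reading (an assumption about the convention, not a statement of the repo's code) is that seqL stores the number of frames of each sequence in the batch, so a packed frame-level tensor can be split back into per-sequence chunks:

        import torch

        feats = torch.randn(25, 256)          # 25 frames packed from 3 sequences
        seqL = torch.tensor([[10, 8, 7]])     # shape [1, n]; seqL[0] is the per-sequence length list
        chunks = torch.split(feats, seqL[0].tolist(), dim=0)
        print([c.shape for c in chunks])      # [torch.Size([10, 256]), torch.Size([8, 256]), torch.Size([7, 256])]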

    opened by aleeyang 7
  • How to test with Baseline-60000.pt

    How to test with Baseline-60000.pt

    My command produces no response: CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/baseline.yaml --phase train. How should baseline.yaml be configured?

    question 
    opened by w229850 7
  • How to fix this problem when I want to test

    How to fix this problem when I want to test

    System information (version)

    • Pytorch => :grey_question:
    • Operating System / Platform => :grey_question:
    • Cuda => :grey_question:

    Detailed description

    [screenshot]

    Steps to reproduce

    Issue submission checklist

    • [ ] I checked the problem with documentation, FAQ, issues, and have not found solution.
    opened by Flyooofly 6
  • Reconstruct LossAggregator and fix some typos in config files

    Reconstruct LossAggregator and fix some typos in config files

    1. The new config files brought by Gait3D have some typos.
    2. The old LossAggregator is a plain Python class, so it cannot register parameters to the base_model; this makes it inconvenient to train models with losses that have their own parameters, such as center loss.
       Therefore, it seems better to make LossAggregator an nn.Module and use an nn.ModuleDict to organize all losses. In this new version, LossAggregator can still be indexed like a regular Python dictionary (the same as the current version), but the modules it contains are properly registered and are visible to all Module methods. All parameters registered in the losses can be accessed via self.parameters(), so they can be trained properly.
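
    A minimal sketch of that idea (illustrative only, not the exact PR code) could be:

        import torch.nn as nn

        class LossAggregator(nn.Module):
            """Holds the losses in an nn.ModuleDict so their parameters are registered."""
            def __init__(self, losses):
                super().__init__()
                self.losses = nn.ModuleDict(losses)   # e.g. {'triplet': ..., 'softmax': ...}

            def __getitem__(self, key):               # keep dict-style indexing
                return self.losses[key]

            def forward(self, training_feats):
                # training_feats: {loss_name: kwargs for that loss}
                total, info = 0.0, {}
                for name, feats in training_feats.items():
                    loss = self.losses[name](**feats)
                    total = total + loss
                    info[name] = loss.detach()
                return total, info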
    opened by wj1tr0y 1
  • ValueError: The input size of each GPU must be 1 in testing mode

    ValueError: The input size of each GPU must be 1 in testing mode

    After training GaitGL for 80000 iterations, I test it by running "CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=1 opengait/main.py --cfgs ./config/gaitgl/gaitgl.yaml --phase test", but get "ValueError: The input size of each GPU must be 1 in testing mode, but got 4!" [screenshot]

    opened by zhang123-sys 5
  • What is the amount of computation and parameters of the GaitEdge?

    What is the amount of computation and parameters of the GaitEdge?

    System information (version)

    • Pytorch => :grey_question:
    • Operating System / Platform => :grey_question:
    • Cuda => :grey_question:

    Detailed description

    Steps to reproduce

    Issue submission checklist

    • [ ] I checked the problem with documentation, FAQ, issues, and have not found solution.
    opened by puyiwen 0