Unified learning approach for egocentric hand gesture recognition and fingertip detection

Overview

Unified Gesture Recognition and Fingertip Detection

This repository provides a unified convolutional neural network (CNN) algorithm that performs hand gesture recognition and fingertip detection at the same time. The proposed algorithm uses a single network to predict both the finger class probabilities (classification) and the fingertip positions (regression) in one evaluation. The gesture is recognized from the finger class probabilities, and the fingertips are then localized using both pieces of information. Instead of directly regressing the fingertip positions from the fully connected (FC) layer of the CNN, we regress an ensemble of fingertip positions from a fully convolutional network (FCN) and subsequently take the ensemble average to obtain the final fingertip positions.
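
As a rough illustration of how the two outputs combine (an interpretation of the description above, not the authors' exact post-processing), thresholding the finger class probabilities selects which fingers are visible, and only those fingertip coordinates are kept:

# Minimal sketch: combine class probabilities and positional output.
# The probability values and coordinates below are hypothetical placeholders.
import numpy as np

prob = np.array([0.9, 0.8, 0.1, 0.05, 0.92])   # hypothetical per-finger probabilities
pos = np.random.rand(5, 2)                      # hypothetical (x, y) per fingertip

raised = prob > 0.5        # the pattern of raised fingers gives the gesture
fingertips = pos[raised]   # localize only the fingertips of detected fingers
print(raised, fingertips)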

Update

Robust real-time hand detection using YOLO has been added to the first stage of the detection system for smoother performance, and most of the code has been cleaned and restructured for ease of use. To get the previous versions, please visit the release section.


Requirements

  • TensorFlow-GPU==2.2.0
  • OpenCV==4.2.0
  • ImgAug==0.2.6
  • Weights: Download the pre-trained weights files of the unified gesture recognition and fingertip detection model and put the weights folder in the working directory.


The weights folder contains three weights files. fingertip.h5 is for unified gesture recognition and fingertip detection. yolo.h5 and solo.h5 are for the YOLO and SOLO (single object localization) methods of hand detection, respectively.

Paper


To get more information about the proposed method and experiments, please go through the paper. Cite the paper as:

@article{alam2021unified,
  title={Unified learning approach for egocentric hand gesture recognition and fingertip detection},
  author={Alam, Mohammad Mahmudul and Islam, Mohammad Tariqul and Rahman, SM Mahbubur},
  journal={Pattern Recognition},
  volume={121},
  pages={108200},
  year={2021},
  publisher={Elsevier}
}

Dataset

The proposed gesture recognition and fingertip detection model is trained on the SCUT-Ego-Gesture dataset, which comprises a total of eleven different single-hand gesture datasets. Of these eleven, eight are used for experimentation. A detailed explanation of the dataset partition, along with the lists of images used in the training, validation, and test sets, is provided in the dataset/ folder.
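
Assuming the split files in the dataset/ folder are plain-text lists of image names (the file name below is a hypothetical placeholder; check the folder for the actual names and format), a split could be read like this:

# Hedged sketch: read a train/validation/test split list from the dataset/ folder.
# 'train.txt' is an assumed placeholder name, not confirmed from the repository.
with open('dataset/train.txt') as f:
    train_images = [line.strip() for line in f if line.strip()]
print(len(train_images), 'training images listed')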

Network Architecture

To implement the algorithm, the following network architecture is proposed, in which a single CNN is utilized for both hand gesture recognition and fingertip detection.
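
As a hedged sketch of this design, not the exact published architecture, a single backbone can feed a classification head and a fully convolutional regression head whose spatial predictions are ensemble-averaged. The input size, layer widths, and backbone below are illustrative assumptions:

# Hedged sketch of a single CNN with two outputs: finger class probabilities
# (classification) and fingertip positions (regression). Layer choices are
# illustrative, not the exact architecture from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(128, 128, 3))   # assumed input size
x = layers.Conv2D(32, 3, activation='relu', padding='same')(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
x = layers.MaxPooling2D()(x)

# Classification head: probability of each of the five fingers being visible
f = layers.Flatten()(x)
f = layers.Dense(128, activation='relu')(f)
probability = layers.Dense(5, activation='sigmoid', name='probability')(f)

# Regression head: a fully convolutional ensemble of fingertip predictions
# (10 channels = 5 fingertips x 2 normalized coordinates), averaged over the
# spatial grid to produce the final positional output
p = layers.Conv2D(10, 3, padding='same', activation='sigmoid', name='position_ensemble')(x)
position = layers.GlobalAveragePooling2D(name='position')(p)  # ensemble average

model = Model(inputs=inputs, outputs=[probability, position])
model.summary()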

Prediction

To get a prediction on a single image, run the predict.py file. It will run prediction on the sample image stored in the data/ folder. Here is the output for the sample.jpg image.
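
If you prefer to script this step yourself, the following is a minimal sketch of what predict.py does conceptually; the input size, scaling, and drawing threshold are assumptions, and model stands for the unified network already built and loaded with weights/fingertip.h5:

# Hedged sketch of running the two-output model on one image and drawing the
# detected fingertips. 'model' is passed in; see predict.py for the real pipeline.
import cv2
import numpy as np

def predict_on_image(model, path='data/sample.jpg', thresh=0.5):
    image = cv2.imread(path)
    h, w = image.shape[:2]
    crop = cv2.resize(image, (128, 128)) / 255.0        # assumed input size/scaling
    prob, pos = model.predict(np.expand_dims(crop, 0))  # class probs + positions
    prob, pos = prob[0], pos[0].reshape(-1, 2)
    for keep, (x, y) in zip(prob > thresh, pos):
        if keep:                                        # draw only detected fingertips
            cv2.circle(image, (int(x * w), int(y * h)), 8, (0, 255, 0), -1)
    return image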

Real-Time!

To run in real time, simply clone the repository, download the weights files, and then run the real-time.py file.

directory > python real-time.py

In real-time execution, there are two stages. In the first stage, the hand is detected using either the you only look once (YOLO) or the single object localization (SOLO) algorithm; YOLO is used by default. The detected hand region is then cropped and fed to the second stage for gesture recognition and fingertip detection.
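
Conceptually, the real-time loop chains the two stages as in the sketch below; detect_hand and classify_fingertips are hypothetical callables standing in for the repository's YOLO/SOLO detector and the unified network:

# Hedged sketch of the two-stage real-time loop (not the repository's code).
# 'detect_hand' and 'classify_fingertips' are hypothetical stand-ins.
import cv2

def run_realtime(detect_hand, classify_fingertips, cam_index=0):
    cam = cv2.VideoCapture(cam_index)
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        box = detect_hand(frame)                   # stage 1: hand bounding box or None
        if box is not None:
            x1, y1, x2, y2 = box
            hand = frame[y1:y2, x1:x2]             # crop the detected hand region
            prob, pos = classify_fingertips(hand)  # stage 2: gesture + fingertips
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow('Unified Gesture and Fingertip Detection', frame)
        if cv2.waitKey(1) & 0xFF == 27:            # press Esc to quit
            break
    cam.release()
    cv2.destroyAllWindows()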

Output

Here is the output of the unified gesture recognition and fingertip detection model for all 8 classes of the dataset, where not only is each fingertip detected but each finger is also classified.

Comments
  • Datasets

    Hello, I have a question about the dataset in your readme: I can't download the Scut-Ego-Gesture Dataset because that website is blocked in China. Can you share it with me another way, for example via Google or QQ email: [email protected]?

    opened by CVUsers 10
  • How to download the weights? The code does not contain them.

    The weights folder contains three weights files. comparison.h5 is for the first five classes and performance.h5 is for the first eight classes. solo.h5 is for hand detection. But there is no download link.

    opened by mmxuan18 6
  • OSError: Unable to open file (unable to open file: name = 'yolo.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    I use macOS to run the real-time.py file and get the OSError. I also searched on Google and found others with the same problem. It is probably a Keras problem, but I do not know how to solve it.

    opened by Hanswanglin 4
  • OSError: Unable to open file (unable to open file: name = 'weights/performance.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "h5py/h5f.pyx", line 88, in h5py.h5f.open OSError: Unable to open file (unable to open file: name = 'weights/performance.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    opened by Jasonmes 2
  • left hand?

    Hi, first it's really cool work!

    Is the left hand included in the training images? I have been playing around with some of my own images and it seems that it doesn't really recognize the left hand in a palm-down position...

    If I want to include the left hand, do you think it would be possible if I train the network with the image flipped?

    opened by myhjiang 1
  • Why are there two hand detection methods provided?

    A wonderful work! As mentioned above, both the YOLO and SOLO detection models are provided. I wonder what the advantage of each model is compared to the other, and what dataset was used to train the detector.

    opened by DanielMao2015 1
  • Difference of classes5.h5 and classes8.h5

    Hi, may I know the difference when training classes5 and classes8? Does the difference come only from the dataset used for training, by excluding SingleSix, SingleSeven, and SingleEight, or are there other modifications such as changes to the model structure or parameters?

    Thanks

    opened by danieltanimanuel 1
  • Using old versions of tensorflow, can't install the dependencies on my MacBook, and with newer versions it's constantly failing.

    When trying to install the required version of tensorflow:

    pip3 install tensorflow==1.15.0
    ERROR: Could not find a version that satisfies the requirement tensorflow==1.15.0 (from versions: 2.2.0rc3, 2.2.0rc4, 2.2.0, 2.2.1, 2.2.2, 2.3.0rc0, 2.3.0rc1, 2.3.0rc2, 2.3.0, 2.3.1, 2.3.2, 2.4.0rc0, 2.4.0rc1, 2.4.0rc2, 2.4.0rc3, 2.4.0rc4, 2.4.0, 2.4.1)
    ERROR: No matching distribution found for tensorflow==1.15.0
    

    I even tried downloading the .whl file from PyPI and installing it manually, but that didn't work either:

    pip3 install ~/Downloads/tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl
    ERROR: tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl is not a supported wheel on this platform.
    

    Tried with both python3.6 and python3.8

    So it would be great to update the dependencies :)

    opened by KoStard 1
  • Custom Model keyword arguments Error

    Change model = Model(input=model.input, outputs=[probability, position]) to model = Model(inputs=model.input, outputs=[probability, position]) on line 22 of net/network.py

    opened by Rohit-Jain-2801 1
  • Problem of weights

    Hi, when loading solo.h5 (in solo.py line 14: "self.model.load_weights(weights)"), it reports an error: Process finished with exit code -1073741819 (0xC0000005). Environment: Keras 2.2.5 + TensorFlow 1.14.0 + CUDA 10.0.

    opened by MC-E 1
Releases (v2.0)

Owner
Mohammad
Machine Learning | Graduate Research Assistant at CORAL Lab