Face recognition. Redefined.





FaceFinder

Use a powerful CNN to identify faces in images!

TABLE OF CONTENTS
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgements

About The Project


There is plenty of face recognition software on GitHub, but most of it trades accuracy for speed by relying on lightweight models such as HOG. FaceFinder takes the opposite approach: it uses a large CNN to make more accurate predictions.

Here's why:

  • Several modern technologies make use of face recognition and its importance in the world is constantly increasing.
  • You shouldn't have to train a full neural net of your own every time you want to perform face recognition.
  • FaceFinder contains code which runs approximately 3.7 times faster than average.

If you're making an app of your own and want it to perform face recognition, this is your go-to option.
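
The mention of 'hog' above suggests the dlib / face_recognition stack. As a rough illustration of the speed-versus-accuracy trade-off (a sketch only, not FaceFinder's actual code, and assuming the face_recognition library is installed; group_photo.jpg is a hypothetical input file):

    # Sketch: HOG vs. CNN face detection with the face_recognition library.
    import face_recognition

    image = face_recognition.load_image_file("group_photo.jpg")

    # HOG: fast, CPU-only, less accurate on difficult poses and lighting.
    hog_boxes = face_recognition.face_locations(image, model="hog")

    # CNN: slower but more accurate; can use a GPU if dlib was built with CUDA.
    cnn_boxes = face_recognition.face_locations(image, model="cnn")

    print(f"HOG found {len(hog_boxes)} face(s); CNN found {len(cnn_boxes)} face(s)")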

Commonly used resources that I find helpful are listed in the acknowledgements.

Built With

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

  • Latest versions of pip and setuptools
    pip install --upgrade pip setuptools
  • Conda (install it via Miniconda or Anaconda; installing Conda itself through pip is not recommended)
  • Dlib (see the sanity check after this list)
    conda install -c conda-forge dlib
  • Required packages
    pip install -r requirements.txt
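
After installing the prerequisites, a quick sanity check (assuming a standard dlib build) confirms that dlib imports and tells you whether it was compiled with CUDA support:

    # Sanity check for the dlib installation; save as e.g. check_dlib.py (the name is arbitrary).
    import dlib

    print("dlib version:", dlib.__version__)
    # True only if dlib was built with CUDA; pip/conda builds are usually CPU-only.
    print("Compiled with CUDA:", dlib.DLIB_USE_CUDA)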

Installation

  1. Ensure you're in your home directory:

    cd ~

    When you clone the repository, it will show up as a subfolder of your home folder. You can move it somewhere else later if you want.

  2. Clone the repo:

    git clone https://github.com/BleepLogger/FaceFinder

    Clone the repository by its URL.

  3. Navigate to cloned repository:

    cd FaceFinder

    The commands that follow should be run from inside the cloned repository.

  4. To run the program, execute tasks.py with command line arguments:

    python Scripts/tasks.py --data-dir '<data folder path>' --input_image '<path to image>'

    Replace <data folder path> and <path to image> with the real paths; they're just placeholders.

Usage

To run it from the command line, you will need to pass two arguments.

python Scripts/tasks.py --data-dir '<data folder path>' --input_image '<path to image>'

Replace <data folder path> and <path to image> with the real paths.

The program needs a data directory of images grouped into subdirectories, each labelled with the name of the person whose face appears in its images, plus an input image to classify.

So if you want to check whether an image shows your mom or your dad, this is how you could do it:

  1. Create a directory called dataset/ in the FaceFinder directory in ~.
  2. Create two subdirectories, dataset/mom and dataset/dad.
  3. Add images of your mother to the mom subdir and images of your father to the dad subdir.
  4. Take a picture of either your mom or your dad (whichever one you want to classify), title it 2bclassified.jpg, and put it in the FaceFinder directory.
  5. Run this command:
    python Scripts/tasks.py --data-dir 'dataset/' --input_image '2bclassified.jpg'

Then, after about 20 minutes of processing (6-7 minutes if you have a GPU), a window will open displaying your image, with a box highlighting the detected face and a text label reading either "Mom" or "Dad".
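
Scripts/tasks.py is the real entry point; as a rough sketch of what a pipeline like this typically does (assuming the face_recognition library and the dataset/ layout described above, not FaceFinder's actual implementation):

    # Sketch only: encode every labelled face in dataset/ and pick the closest
    # match for the input image. The real logic lives in Scripts/tasks.py and may differ.
    import os
    import face_recognition

    def load_known_faces(data_dir):
        """Return one encoding per image, labelled by its subdirectory (e.g. 'mom', 'dad')."""
        encodings, labels = [], []
        for label in os.listdir(data_dir):
            person_dir = os.path.join(data_dir, label)
            if not os.path.isdir(person_dir):
                continue
            for filename in os.listdir(person_dir):
                image = face_recognition.load_image_file(os.path.join(person_dir, filename))
                found = face_recognition.face_encodings(image)
                if found:  # skip images where no face was detected
                    encodings.append(found[0])
                    labels.append(label)
        return encodings, labels

    known_encodings, known_labels = load_known_faces("dataset/")

    unknown = face_recognition.load_image_file("2bclassified.jpg")
    unknown_encodings = face_recognition.face_encodings(unknown)

    if unknown_encodings and known_encodings:
        # The known face with the smallest distance to the input face wins.
        distances = face_recognition.face_distance(known_encodings, unknown_encodings[0])
        best = int(distances.argmin())
        print("Best match:", known_labels[best])
    else:
        print("No face found to compare.")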

You will have to build dlib from source with CUDA enabled if you want your GPU to be used; see dlib's documentation for instructions.

Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Kanav Bhasin - @kanav_bhasin - [email protected]

Project Link: https://github.com/BleepLogger/FaceFinder


Thank you!