camloop

Forget the boilerplate from OpenCV camera loops and get to coding the interesting stuff

This is a simple project developed to reduce the complexity and time spent writing boilerplate code when prototyping computer vision applications. Stop worrying about opening and closing video captures, handling key presses, etc., and just focus on doing the cool stuff!

The project was developed in Python 3.8 and tested with physical local webcams. If you end up using it in any other context, please consider letting me know if it worked or not for whatever use case you had :)

Install

The project is distributed via PyPI, so just:

$ pip install pycamloop

As usual, conda or venv are recommended to manage your local environments.

Quickstart

To run a webcam loop and process each frame, just define a function that takes the frame (a np.ndarray, as obtained from cv2.VideoCapture's read() method) as its first argument and wrap it with the @camloop decorator. Make sure your function returns the frame so the loop can display it:

import cv2

from camloop import camloop

@camloop()
def grayscale_example(frame):
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return frame

# calling the function will start the loop and show the results with the cv2.imshow method
grayscale_example()

The window can be exited at any time by pressing "q" on the keyboard. You can also take a screenshot at any time by pressing the "s" key; by default, screenshots are saved to the current directory (see Configuring the loop for information on how to customize this and other options).

More advanced use cases

Now, let's say that instead of just converting the frame to grayscale and visualizing it, you want to pass other arguments, perform more complex operations, and/or persist information on every iteration. All of this can be done inside the function wrapped by the camloop decorator, and external dependencies can be passed as arguments to your function. For example, let's say we want to run a face detector and save the results to a file called "face-detection-results.txt":

import cv2
import datetime

from camloop import camloop

# for simplicity, we use cv2's own haar face detector
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

@camloop()
def face_detection_example(frame, face_cascade, results_fp=None):
    grayscale_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(grayscale_frame, 1.2, 5)
    for bbox in faces:
        x1, y1 = bbox[:2]
        x2 = x1 + bbox[2]
        y2 = y1 + bbox[3]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (180, 0, 180), 5)

    if results_fp is not None:
        with open(results_fp, 'a+') as f:
            f.write(f"{datetime.datetime.now().isoformat()} - {len(faces)} face(s) found: {faces}\n")
    return frame

face_detection_example(face_cascade, results_fp="face-detection-results.txt")

camloop can handle any positional and keyword arguments you define in your function, as long as the frame is the first one. When calling the wrapped function, pass every extra argument except the frame, which is handled implicitly by the loop.
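
For instance, here is a quick hypothetical sketch passing one extra positional argument and one keyword argument (the annotate_example function and its box_color/label parameters are made up for illustration and are not part of camloop's API):

import cv2

from camloop import camloop

@camloop()
def annotate_example(frame, box_color, label="camloop"):
    # draw a filled box and a text label in the top-left corner of every frame
    cv2.rectangle(frame, (10, 10), (230, 55), box_color, -1)
    cv2.putText(frame, label, (20, 45), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    return frame

# the frame is injected by the loop, so only the extra arguments are passed here
annotate_example((180, 0, 180), label="hello camloop")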

Configuring the loop

Since most of the boilerplate is now hidden, camloop exposes a configuration object that allows the user to modify several aspects of its behavior. The options are:

source (int, default 0): Index of the camera to use as the source for the loop (passed to cv2.VideoCapture()).
mirror (bool, default False): Whether to flip the frames horizontally.
resolution (tuple[int, int], default None): Desired resolution (H, W) of the frames, passed to the cv2.VideoCapture.set method. Default values and acceptance of custom ones depend on the webcam.
output (string, default '.'): Directory where artifacts (e.g. captured screenshots) are saved by default.
sequence_format (string, default None): Format for rendering the sequence of frames. Acceptable formats are "gif" and "mp4". If specified, a video/GIF of the recording is saved to the output folder.
fps (float, default None): FPS value used when rendering the sequence of frames. If unspecified, the program tries to estimate it from the length of the recording and the number of frames.
exit_key (string, default 'q'): Keyboard key used to exit the loop.
screenshot_key (string, default 's'): Keyboard key used to capture a screenshot.

If you want to use something other than the defaults, define a dictionary object with the desired configuration and pass it to the camloop decorator.

For example, here we want to mirror the frames horizontally, and save an MP4 video of the recording at 23.7 FPS to the test directory:

import cv2

from camloop import camloop

config = {
    'mirror': True,
    'output': "test/",
    'fps': 23.7,
    'sequence_format': "mp4",
}

@camloop(config)
def grayscale_example(frame):
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return frame

grayscale_example()
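
As one more sketch, here the remaining options from the list above are used to read from a second camera (assuming one is attached at index 1), request a custom resolution, and remap the keyboard controls (the function name below is arbitrary):

import cv2

from camloop import camloop

config = {
    'source': 1,                 # second webcam attached to the machine
    'resolution': (720, 1280),   # desired (H, W); acceptance depends on the camera
    'exit_key': 'x',             # exit the loop with "x" instead of "q"
    'screenshot_key': 'c',       # capture screenshots with "c" instead of "s"
}

@camloop(config)
def second_camera_example(frame):
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return frame

second_camera_example()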

Demo

Included in the repo is a demonstration script that can be run out-of-the-box to verify camloop and see its main functionalities. There are a few different samples you can check out, including the grayscale and face detection examples seen in this README.

To run the demo, install camloop and clone the repo:

$ pip install pycamloop
$ git clone https://github.com/glefundes/pycamloop.git
$ cd pycamloop/

Then run it by specifying which demo you want and passing any of the optional arguments (python3 demo.py -h for more info on them). In this case, we're mirroring the frames from the "face detection" demo and saving a video of the recording in the "demo-videos" directory:

$ mkdir demo-videos
$ python3 demo.py face-detection --mirror --save-sequence mp4 -o demo-videos/

About The Project

I work as a computer vision engineer and often find myself having to prototype or debug projects locally using my own webcam as a source. This, of course, means I frequently have to code the same boilerplate OpenCV camera loop in multiple places. Eventually I got tired of copy-pasting the same 20 lines from file to file and decided to write a 100-ish-line package to make my work a little more efficient and less boring, and my code overall less bloated. That's pretty much it. Also, it was a nice chance to practice playing with decorators.

TODO

  • Verify functionality with other types of video sources (video files, streams, etc)

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Gabriel Lefundes Vieira - [email protected]
