Free like Freedom

Overview

This is all very much a work in progress! More to come!

(We're working on it though! Stay tuned!)

Installation

  • Open an Anaconda Prompt (in Windows, or any terminal on Mac/Linux) and enter the following commands:

conda create -n freemocap-env python=3.7

conda activate freemocap-env

pip install freemocap -v

ipython

import freemocap as fmc
fmc.RunMe() #this is where the magic happens.
[Demo video: 2021-06-12_FreeMoCap_Clips_16MB.mp4]

Prerequisites

Required

  • A Python 3.7 environment: We recommend installing Anaconda from here (https://www.anaconda.com/products/individual#Downloads) to create your Python environment.

  • Two or more USB webcams attached to viable USB ports

    • (USB hubs typically don't work)
  • Each recording must (for now) start with an unobstructed view of a Charuco board, generated with the following Python commands (or equivalent):

     import cv2  # note: `cv2.aruco` can be installed via `pip install opencv-contrib-python`

     aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250)
     board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)

     # `(2000, 2000)` is the resolution of the resulting image. Increase it if
     # printing a large board (bigger is better, especially for large spaces!)
     charuco_board_image = board.draw((2000, 2000))

     cv2.imwrite('charuco_board_image.png', charuco_board_image)
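
    To check that the board is detectable before recording, the same legacy `cv2.aruco` API can be run on an image of it. A minimal sketch, assuming the board parameters above (the file name is the one written by the snippet; a photo of the printed board works the same way):

     import cv2

     aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250)
     board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)

     image = cv2.imread('charuco_board_image.png', cv2.IMREAD_GRAYSCALE)
     corners, ids, _rejected = cv2.aruco.detectMarkers(image, aruco_dict)
     if ids is not None and len(ids) > 0:
         num_corners, _, _ = cv2.aruco.interpolateCornersCharuco(corners, ids, image, board)
         print(f'{num_corners} Charuco corners detected')
     else:
         print('No markers detected - check focus, lighting, and board visibility')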
     
    

Optional: If you would like to use OpenPose for body tracking, install CUDA and the Windows Portable Demo of OpenPose.

Follow the GitHub repository and/or join the Discord (https://discord.gg/HX7MTprYsK) for updates!

Stay Tuned for more soon!

Comments
  • Ubuntu Support?

    The following diffs were needed to make it work under Linux

    +++ b/freemocap/webcam/camsetup.py
    @@ -21,7 +21,7 @@ class VideoSetup(threading.Thread):
             camName = "Camera" + str(self.camID)
     
             cv2.namedWindow(camName)
    -        cap = cv2.VideoCapture(self.camID, cv2.CAP_DSHOW)
    +        cap = cv2.VideoCapture(self.camID, cv2.CAP_ANY)
             cap.set(cv2.CAP_PROP_FRAME_WIDTH, resWidth)
             cap.set(cv2.CAP_PROP_FRAME_HEIGHT, resHeight)
             cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    diff --git a/freemocap/webcam/checkcams.py b/freemocap/webcam/checkcams.py
    index dda1348..fb4a6a0 100644
    --- a/freemocap/webcam/checkcams.py
    +++ b/freemocap/webcam/checkcams.py
    @@ -3,7 +3,7 @@ import cv2
     
     
     def TestDevice(source):
    -    cap = cv2.VideoCapture(source, cv2.CAP_DSHOW)
    +    cap = cv2.VideoCapture(source, cv2.CAP_ANY)
         # if cap is None or not cap.isOpened():
         # print('Warning: unable to open video source: ', source)
     
    diff --git a/freemocap/webcam/startcamrecording.py b/freemocap/webcam/startcamrecording.py
    index a0cdec1..011d853 100644
    --- a/freemocap/webcam/startcamrecording.py
    +++ b/freemocap/webcam/startcamrecording.py
    @@ -44,7 +44,7 @@ def CamRecording(
         flag = False
     
         cv2.namedWindow(camID)  # name the preview window for the camera its showing
    -    cam = cv2.VideoCapture(camInput, cv2.CAP_DSHOW)  # create the video capture object
    +    cam = cv2.VideoCapture(camInput, cv2.CAP_ANY)  # create the video capture object
         # if not cam.isOpened():
         #         raise RuntimeError('No camera found at input '+ str(camID))
         # pulling out all the dictionary paramters
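
    A platform-conditional backend would make these patches unnecessary; a minimal sketch of that idea (helper name hypothetical, not from the codebase):

    import platform

    import cv2

    def open_capture(cam_id):
        # CAP_DSHOW exists only on Windows; let OpenCV auto-detect elsewhere
        backend = cv2.CAP_DSHOW if platform.system() == 'Windows' else cv2.CAP_ANY
        return cv2.VideoCapture(cam_id, backend)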
    
    

    But while everything works with one camera at a time, it seems to block with two identical cameras.

    The error manifests itself outside of your code.

    I get:

    [  806.191512] uvcvideo: Failed to query (SET_CUR) UVC control 4 on unit 1: -32 (exp. 4).
    

    Here is the output of `lsusb`:

    $ lsusb
    Bus 001 Device 005: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 004: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 002: ID 046d:c542 Logitech, Inc. Wireless Receiver
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 004: ID 060b:7a16 Solid Year MD800
    Bus 002 Device 003: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Hub
    Bus 002 Device 002: ID 0409:55aa NEC Corp. Hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    

    do each of the cameras need to be on a separate bus?
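
    Two uncompressed UVC streams can saturate a single USB 2.0 bus, and uvcvideo errors like the one above are often bandwidth-related; requesting a compressed MJPG stream sometimes helps. A hedged sketch of that workaround (an idea, not from this thread):

    import cv2

    def open_mjpg(cam_id, width=640, height=480):
        cap = cv2.VideoCapture(cam_id, cv2.CAP_ANY)
        # MJPG is compressed on-camera, cutting per-stream USB bandwidth
        cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        return cap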

    enhancement question 
    opened by kognat-docs 27
  • Multi Camera Calibration Failure

    My team and I are attempting to use FreeMoCap for a system that records eight views in synchrony. We are currently having issues calibrating all eight cameras, which face opposing directions in our hallway set-up, making it difficult for them to see one CharUco board at once. To resolve this, we printed a double-sided CharUco board, allowing us to create a 3D skeleton output; however, the skeleton seems to be both inverted and translated compared to the subject from certain camera angles. We then tried to calibrate our system using hallway cameras on one side, which all face each other and can observe a charUco board simultaneously. Still, we saw the output skeleton as translated and inverted, ruling out the double-sided board as the reason for the translation. Any suggestions on how to calibrate a system set up like this or why the skeletons seem to be inverted/displaced?

    Attached is a link with the videos and data array/calibration files for the 8 camera set-up. https://drive.google.com/drive/folders/1wv4RHzPFDLeIXr9xONC72fhbW3MKQT_q?usp=sharing

    opened by DestroytheCity 18
  • Using Blender from Wrong Session

    I started a new FMC session, yet when running Stage 5, FMC keeps using the .blend from my original project instead of the new one I made for it in the correct session folder. It then fails, saying that the file doesn't exist. Any thoughts? *replaced login with X

    Running sesh 2 from /home/X/bac_mj_ar/FreeMocap_Data
    ────────────────────────────────────────────────────────────────────────────────
    Using blender executable located at: /home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend
    Skipping Video Recording
    Skipping Video Syncing
    Skipping Calibration
    Skipping 2d point tracking
    ────────────────────────────────────────────────────────────────────────────────
    ────────────────────────────── EXPORTING FILES... ──────────────────────────────
    ─ Hijacking Blender's file format converters to export FreeMoCap data as vari… ─
    ────────────────────────────────────────────────────────────────────────────────
    ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
    │ /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/fmc_runme.py:335 in RunMe
    │
    │    332                                     command_str,
    │    333                                     shell=False,
    │    334                                     stdout=subprocess.PIPE,
    │ ❱  335                                     stderr=subprocess.PIPE
    │    336                                     )
    │    337             while True:
    │    338                 output = blender_process.stdout.readline()
    │
    │ /home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py:800 in __init__
    │
    │    797                             p2cread, p2cwrite,
    │    798                             c2pread, c2pwrite,
    │    799                             errread, errwrite,
    │ ❱  800                             restore_signals, start_new_session)
    │    801     except:
    │    802         # Cleanup if the child failed starting.
    │    803         for f in filter(None, (self.stdin, self.stdout, self.stde
    │
    │ /home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py:1551 in _execute_child
    │
    │    1548                     err_msg = os.strerror(errno_num)
    │    1549                     if errno_num == errno.ENOENT:
    │    1550                         err_msg += ': ' + repr(err_filename)
    │ ❱  1551                 raise child_exception_type(errno_num, err_msg, er
    │    1552             raise child_exception_type(err_msg)
    ╰───────────────────────────────────────────────────────────────────────────────╯
    FileNotFoundError: [Errno 2] No such file or directory: '/home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend --background --python /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/freemocap_blender_megascript.py -- /home/X/bac_mj_ar/FreeMocap_Data/sesh 2 0': '/home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend --background --python /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/freemocap_blender_megascript.py -- /home/X/bac_mj_ar/FreeMocap_Data/sesh 2 0'
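
    The FileNotFoundError treats the entire command line as one path, which is what happens when a single command string (rather than an argument list) is passed to subprocess with shell=False. A minimal sketch of the list form (paths illustrative, not FreeMoCap's actual code):

    import subprocess

    blender_exe = '/path/to/blender'  # should point at the Blender binary itself
    cmd = [blender_exe, '--background', '--python', 'freemocap_blender_megascript.py',
           '--', '/home/X/bac_mj_ar/FreeMocap_Data/sesh', '2', '0']
    blender_process = subprocess.Popen(cmd, shell=False,
                                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)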

    opened by DestroytheCity 13
  • macOS BigSur Support

    Seems like the threading is the cause of the issue here.

    (freemocap) MacBook-Pro:freemocap sam$ python runme_FreeMoCap.py 
    Starting initialization for stage 1
      0%|                                                    | 0/20 [00:00<?, ?it/s]Oct  2 15:40:31  ThetaUVC_blender[33664] <Notice>: ------------ ThetaUVC_blender plugin start (version:2.0.1 built:Fri Aug 19 15:54:46 JST 2016 pid=33664 RELEASE). ------------ #thetauvc
      5%|██▏                                         | 1/20 [00:01<00:37,  1.98s/it]OpenCV: out device of bound (0-0): 1
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 2
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 3
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 4
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 5
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 6
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 7
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 8
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 9
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 10
    OpenCV: camera failed to properly initialize!
     55%|███████████████████████▋                   | 11/20 [00:02<00:01,  7.18it/s]OpenCV: out device of bound (0-0): 11
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 12
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 13
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 14
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 15
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 16
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 17
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 18
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 19
    OpenCV: camera failed to properly initialize!
    100%|███████████████████████████████████████████| 20/20 [00:02<00:00,  9.17it/s]
    2021-10-02 15:40:41.571 python[33664:261625353] WARNING: NSWindow drag regions should only be invalidated on the Main Thread! This will throw an exception in the future. Called from (
    	0   AppKit                              0x00007fff22b6ded1 -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 352
    	1   AppKit                              0x00007fff22b58aa2 -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1296
    	2   AppKit                              0x00007fff22b5858b -[NSWindow initWithContentRect:styleMask:backing:defer:] + 42
    	3   AppKit                              0x00007fff22e6283c -[NSWindow initWithContentRect:styleMask:backing:defer:screen:] + 52
    	4   cv2.cpython-37m-darwin.so           0x000000010ead0c94 cvNamedWindow + 564
    	5   cv2.cpython-37m-darwin.so           0x000000010eacdd1a _ZN2cv11namedWindowERKNS_6StringEi + 58
    	6   cv2.cpython-37m-darwin.so           0x000000010dc4a487 _ZL23pyopencv_cv_namedWindowP7_objectS0_S0_ + 231
    	7   python                              0x0000000104cd3f2b _PyMethodDef_RawFastCallKeywords + 395
    	8   python                              0x0000000104e0b9bb call_function + 251
    	9   python                              0x0000000104e034eb _PyEval_EvalFrameDefault + 20171
    	10  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	11  python                              0x0000000104e0b977 call_function + 183
    	12  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	13  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	14  python                              0x0000000104e0b977 call_function + 183
    	15  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	16  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	17  python                              0x0000000104e0b977 call_function + 183
    	18  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	19  python                              0x0000000104cd1fea _PyFunction_FastCallDict + 234
    	20  python                              0x0000000104cd66ba method_call + 122
    	21  python                              0x0000000104cd445f PyObject_Call + 127
    	22  python                              0x0000000104ef7e3a t_bootstrap + 122
    	23  python                              0x0000000104e7c764 pythread_wrapper + 36
    	24  libsystem_pthread.dylib             0x00007fff2031a8fc _pthread_start + 224
    	25  libsystem_pthread.dylib             0x00007fff20316443 thread_start + 15
    )
    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '1280.0'
    CV_CAP_PROP_FRAME_HEIGHT : '720.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.0'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '0.0'
    CAP_PROP_BRIGHTNESS : '0.0'
    CAP_PROP_CONTRAST : '0.0'
    CAP_PROP_SATURATION : '0.0'
    CAP_PROP_HUE : '0.0'
    CAP_PROP_GAIN  : '0.0'
    CAP_PROP_CONVERT_RGB : '0.0'
    __________________________________________
    2021-10-02 15:40:42.898 python[33664:261625353] WARNING: nextEventMatchingMask should only be called from the Main Thread! This will throw an exception in the future.
    
    opened by kognat-docs 13
  • installation requirements or instructions

    I have some undergrad students using this for a class project, and they ran into two installation issues that were easily solved and should be preventable by either modifying the installation requirements or just editing the instructions.

    All on Windows.

    The first is that ipython is not automatically installed, so the start instructions fail. For those with new conda installations, running `conda install ipython` after activating the env is a simple fix.

    The second error comes after recording, with the end of the Traceback being:

    ~\Miniconda3\envs\freemocap\lib\site-packages\moviepy\video\io\ffmpeg_writer.py in __init__(self, filename, size, fps, codec, audiofile, preset, bitrate, withmask, logfile, threads, ffmpeg_params)
         86             '-s', '%dx%d' % (size[0], size[1]),
         87             '-pix_fmt', 'rgba' if withmask else 'rgb24',
    ---> 88             '-r', '%.02f' % fps,
         89             '-an', '-i', '-'
         90         ]
    
    TypeError: must be real number, not NoneType
    

    This is a problem with moviepy not finding ffmpeg. It's possible to install ffmpeg on Windows and add it to the PATH independently, but I prefer to install it in conda with `conda install ffmpeg`. However, moviepy can't find it, so after installing ffmpeg, running

    pip uninstall moviepy
    pip install moviepy
    

    works. I didn't have time to try to install ffmpeg first, but I think just adding it to the env creation line should fix the problem.
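
    A hedged sketch of that amended env creation line (untested, as noted above):

    conda create -n freemocap-env python=3.7 ipython ffmpeg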

    opened by backyardbiomech 8
  • not enough values to unpack (expected 2, got 0)

    Hello,

    Could someone help me solve this problem, please! Thank you in advance for your help.

    Here is what is displayed (I am using two webcam-type cameras). [Screenshots attached in the original issue.]

    opened by Ramdane-HACHOUR 7
  • FMC on Greyscale Videos

    Hello! We are running into an issue in which FMC seems to be swapping the skeletons generated from one camera onto the view of another camera (see video). Is this a potential result of using greyscale cameras? The calibration seems to work fine, as the video generated reflects accurate skeletons, just placed on the wrong camera (i.e., camera 1 generates skeleton 1, but skeleton 1 is placed onto the view of camera 3). Has anyone encountered similar issues/have any suggestions on how to resolve this issue? Thank you in advance!

    https://user-images.githubusercontent.com/114196168/201419033-18be83ad-e852-4a82-b116-9fbfe470bec2.mp4

    opened by DestroytheCity 6
  • Need to raise a more informative Exception/Error when Charuco points not detected (and allow to continue if only 1 camera is selected)

    Currently, if no charuco points are detected, the code fails in this way -

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\__init__.py", line 124, in RunMe
        sesh, board
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\calibrate.py", line 84, in CalibrateCaptureVolume
        error,merged = cgroup.calibrate_videos(vidnames, board)
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1740, in calibrate_videos
        **kwargs
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1672, in calibrate_rows
        objp, imgp = zip(*mixed)
    ValueError: not enough values to unpack (expected 2, got 0)
    

    ...which should instead -

    • [ ] Raise an informative Error/Exception/Whatever thing (see the sketch below)
    • [ ] (Optional) If only one camera is selected, this is a warning. If more than one is selected, this is an Error
    • [ ] (Optional) Maybe in general there should be a 'single-camera' mode, so that people can just use this wrapper on their single webcam to play with OpenPose, MediaPipe, DLC, etc.?
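
    A minimal sketch of the first checkbox (variable names hypothetical, not from `fmc_anipose.py`):

    import warnings

    mixed = []        # rows of detected Charuco points, per the traceback above
    num_cameras = 2   # hypothetical

    if len(mixed) == 0:
        msg = ('No Charuco board detected in the calibration videos - make sure each '
               'recording starts with an unobstructed view of the board.')
        if num_cameras == 1:
            warnings.warn(msg)
        else:
            raise ValueError(msg)
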
    opened by jonmatthis 6
  • Can't use multiple cameras on Ubuntu 20.04

    Hi, thanks for all your work on this project. I'm excited to see where it goes.

    I'm having an issue where I can't seem to get this working with multiple cameras. I have three webcams plugged directly into my laptop (I also tried plugging into a hub, but that didn't change anything).

    When I go through the camera setup process and click "Submit", only the first camera lights up and only a blank rectangular window appears.

    This is the output I see:

    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '640.0'
    CV_CAP_PROP_FRAME_HEIGHT : '480.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.008200820082008202'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '-1.0'
    CAP_PROP_BRIGHTNESS : '0.5019607843137255'
    CAP_PROP_CONTRAST : '0.12549019607843137'
    CAP_PROP_SATURATION : '0.12549019607843137'
    CAP_PROP_HUE : '-1.0'
    CAP_PROP_GAIN  : '0.20392156862745098'
    CAP_PROP_CONVERT_RGB : '1.0'
    __________________________________________
    

    It looks like it's getting stuck on the first camera and never manages to start up the other cameras?

    It works fine if I select just one camera.

    opened by frnsys 4
  • SPIKE: Look into HackMD / MkDocs Integration for Knowledge Base

    Workflow:

    1. Use mkdocs to create our knowledge base skin
    2. Use HackMD with MkDocs configuration protocols to write KB articles

    Notes: https://www.mkdocs.org/getting-started/

    PR #212

    opened by endurance 4
  • list index out of range on stage 3

    Hey, I've followed the setup but I'm having the following on Stage 3 - Calibrate Capture Volume: list index out of range.

    Screenshots of the error are attached in the original issue (Image1, Image2).

    Thanks a lot for any help.

    opened by tomazsaraiva 4
  • Pre-recorded MP4s are not recognized by alpha GUI on Mac/Linux

    Following the process in the documentation for processing previously recorded videos, videos with the file extension MP4 are not recognized on Mac/Linux. The non-capitalized variant mp4 is recognized on Mac/Linux, and both cases should be recognized by Windows (although I can't test this personally). The case sensitivity issue is due to the changing behavior of glob.glob() across operating systems, as explained here.

    To resolve this issue, the file search should either be made case insensitive on Mac/Linux to match the Windows behavior, or case sensitive on Windows to match the Mac/Linux behavior (the linked article above describes a function to make glob case sensitive on Windows).
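
    A minimal sketch of the case-insensitive option (function name hypothetical):

    from pathlib import Path

    def find_videos(folder):
        # comparing lowered suffixes matches both .mp4 and .MP4 on every OS
        return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() == '.mp4')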

    opened by philipqueen 0
  • Question Regarding Output Files

    Have successfully gotten FMC off the ground for our project, but we had some questions regarding the output files produced.

    1. In the Mediapipe_body+3d+xyz.csv file produced via the Alpha GUI, what is the unit for time? We see that our videos are roughly 19 seconds long and contain 2063 frames, yet we have 1753 measurements of each key point. Is this the number when all cameras are active and tracking?
    2. How are the x,y,z coordinates established? Are they consistent between videos using the same calibration or different calibrations within the same hallway?

    EDIT: For the Mediapipe_body+3d+xyz.csv, also wondering what the coordinates represent/what the unit for each measurement is.

    opened by DestroytheCity 4
  • Set timer to start and stop recording

    Life improvement:

    What? Make an option to start and stop a recording after a set duration.

    e.g. press the button; recording starts after 5 seconds and ends 30 seconds after that.

    Why? Because it makes recording alone easier (press the button, walk to the recording volume, record), and because it can help standardize patient recordings (e.g. a 30-second sit-to-stand test).

    BONUS POINTS if FreeMoCap plays a sound when the recordings start and stop.
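
    A minimal sketch of the idea (the `record_frame` callback stands in for FreeMoCap's actual recording loop):

    import time

    def timed_record(record_frame, start_delay_s=5, duration_s=30):
        print(f'Recording starts in {start_delay_s} s...')  # bonus: play a start sound
        time.sleep(start_delay_s)
        end_time = time.time() + duration_s
        while time.time() < end_time:
            record_frame()
        print('Recording stopped.')                         # bonus: play a stop sound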

    opened by steenharsted 0
  • Choose camera to use for orientation and start of global coordinate system

    Down the line, users should be able to control the start and orientation of the global coordinate system using the Charuco board (https://github.com/freemocap/freemocap/issues/282).

    Until that is implemented, a workable fix could be to allow users to select what camera is being used for the origin and orientation of the global coordinate system.

    This will allow users to have some control over the global coordinate system by having one camera set at a specific height and with a level orientation.

    It's not perfect, but it would be a great improvement.

    opened by steenharsted 0
  • Set Floor with Charuco board (global coordinate system)

    Add a final option during calibration: "Set Floor".

    Place the Charuco board on the floor and use the middle (or a set corner) as (0, 0, 0) in the global coordinate system. Use the orientation of the board to assign the x and z axes (the y-axis will point straight up from the board).

    This would greatly improve FreeMoCap's use in biomechanical settings.
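
    A hedged sketch of that transform using the legacy `cv2.aruco` API (the detection and camera-calibration inputs are assumed to exist already; names illustrative):

    import cv2
    import numpy as np

    def floor_transform(charuco_corners, charuco_ids, board, camera_matrix, dist_coeffs):
        ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
            charuco_corners, charuco_ids, board, camera_matrix, dist_coeffs, None, None)
        if not ok:
            raise ValueError('Charuco board pose not found')
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3] = R.T                   # board axes become the world axes
        T[:3, 3] = (-R.T @ tvec).ravel()  # board origin becomes (0, 0, 0)
        return T  # maps camera-frame points into the board (floor) frame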

    opened by steenharsted 1
  • Add sample `.blend` file output to repo

    Adding a sample of the Blender output somewhere in the repository (or in the documentation...) would be great for helping folks see what freemocap produces!

    Minor 
    opened by trentwirth 0
Releases (v0.0.54)
  • v0.0.54 (Jul 16, 2022)

    This is the FreeMoCap Pre-Alpha release, which creates raw 3D skeletons from USB-connected webcams.

    Here is the relevant README for this version of freemocap v0.0.54: https://github.com/freemocap/freemocap/blob/main/OLD_README.md

    Source code (tar.gz)
    Source code (zip)