Synthetic data for the people.

Overview

zpy: Synthetic data in Blender.

(Image: a synthetic raspberry pi rendered with zpy.)

Abstract

Collecting, labeling, and cleaning data for computer vision is a pain. Jump into the future and create your own data instead! Synthetic data is faster to develop with, effectively infinite, and gives you full control to prevent bias and privacy issues from creeping in. We created zpy to make synthetic data easy: it simplifies the scene creation process and provides a straightforward way to generate synthetic data at scale.

Install

Install: Using Blender GUI

First, download the latest zip (you want the one called zpy_addon-v*.zip). Then open Blender and navigate to Edit -> Preferences -> Add-ons. You should be able to install and enable the addon from there.

(Image: enabling the addon in Blender's preferences.)
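
If you prefer to script the install, Blender's Python API has operators for installing and enabling addons. A minimal sketch (the zip path and the addon module name "zpy_addon" are assumptions; adjust them to match your download):

import bpy

# Install the addon from the downloaded zip and enable it.
# The filepath and module name below are assumptions.
bpy.ops.preferences.addon_install(filepath="/path/to/zpy_addon-v1.0.0.zip")
bpy.ops.preferences.addon_enable(module="zpy_addon")
bpy.ops.wm.save_userpref()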

Install: Linux: Using Install Script

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/ZumoLabs/zpy/main/install.sh)"

Set these environment variables to pin specific versions:

export BLENDER_VERSION="2.91"
export BLENDER_VERSION_FULL="2.91.0"
export ZPY_VERSION="v1.0.0"

Documentation

More documentation can be found here.

Examples
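
A minimal simulation script, sketched from the zpy API calls that appear in the issues below (the object name "suzanne" and the step count are illustrative; the object needs a material for segmentation to work):

import zpy

# Random seed results in unique behavior
zpy.blender.set_seed()

# The saver collects images and annotations
saver = zpy.saver_image.ImageSaver(description="Minimal zpy example")

# Register a category and segment the object with its color
seg_color = zpy.color.random_color(output_style="frgb")
saver.add_category(name="suzanne", color=seg_color)
zpy.objects.segment("suzanne", color=seg_color)

for step_idx in range(10):
    rgb_image_name = zpy.files.make_rgb_image_name(step_idx)
    iseg_image_name = zpy.files.make_iseg_image_name(step_idx)

    # Render the RGB image and its instance segmentation
    zpy.render.render(
        rgb_path=saver.output_dir / rgb_image_name,
        iseg_path=saver.output_dir / iseg_image_name,
    )

    saver.add_image(
        name=rgb_image_name,
        style="default",
        output_path=saver.output_dir / rgb_image_name,
        frame=step_idx,
    )
    saver.add_image(
        name=iseg_image_name,
        style="segmentation",
        output_path=saver.output_dir / iseg_image_name,
        frame=step_idx,
    )
    saver.add_annotation(
        image=rgb_image_name,
        seg_image=iseg_image_name,
        seg_color=seg_color,
        category="suzanne",
    )

# Write out COCO-format annotations
zpy.output_coco.OutputCOCO(saver).output_annotations()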

Tutorials

Projects

CLI

We provide a simple CLI; you can find its documentation here.

Contributing

We welcome community contributions! Search through the current issues or open your own.

License

This release of zpy is under the GPLv3 license, a free copyleft license also used by Blender. TL;DR: It's free, use it!

BibTeX

If you use zpy in your research, we would appreciate a citation!

@article{zpy,
  title={zpy: Synthetic data for Blender.},
  author={Ponte, H. and Ponte, N. and Karatas, K.},
  journal={GitHub. Note: https://github.com/ZumoLabs/zpy},
  volume={1},
  year={2020}
}
Comments
  • Improving Rendering & Processing Speed with AWS

    Is your feature request related to a problem? Please describe. I'm frustrated by the time it is taking to process images on my local device (the rendering isn't the worst, but the annotations are taking a long time at the end). I would like to use AWS EC2 instances to reduce the time required to create images and annotations (especially the category segmentation, which seems to take a long time to encode into the annotation).

    Do you have any suggestions as to the kinds of instances or methods on AWS that can be used to reduce rendering and processing time? That would be immensely helpful. Thank you.

    question 
    opened by tgorm1965 14
  • Is it possible to segment parts of an object based on its material?

    Is your feature request related to a problem? Please describe. I would like to segment a complex mesh that is made up of different components, with a different material for each component. Is it possible to segment based on the material? Or do I need to separate the single mesh into multiple components?
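
    One possible workaround (a sketch using plain bpy, not a zpy feature; the object name is hypothetical): split the mesh into one object per material, then segment each piece separately.

    import bpy

    # Split the mesh so that each material becomes its own object,
    # which can then be segmented individually with zpy.objects.segment.
    obj = bpy.data.objects["ComplexMesh"]  # hypothetical object name
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode="EDIT")
    bpy.ops.mesh.select_all(action="SELECT")
    bpy.ops.mesh.separate(type="MATERIAL")
    bpy.ops.object.mode_set(mode="OBJECT")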

    question 
    opened by tgorm1965 13
  • Bounding Box generation Error

    I've created a Blender scene where I have a dice, and I'm using zpy to generate a dataset composed of images obtained by rotating around the object and jittering both the dice's position and the camera. Everything seems to be working properly, but the bounding boxes in the generated annotation file get progressively worse with each picture.

    (Images: the first image's bounding box, one from halfway through, and one of the last ones, showing the boxes drifting progressively off the object.)

    This is my code (I've cut some stuff, I can't paste it all for some reason):

    
    import math
    import random
    from pathlib import Path

    import bpy
    import zpy

    # NOTE: the loop below also calls a rotate() helper that is not shown
    # here; a stand-in is sketched after the script.

    def run(num_steps=20):
        
        # Random seed results in unique behavior
        zpy.blender.set_seed()
    
        # Create the saver object
        saver = zpy.saver_image.ImageSaver(description="Domain randomized dado")
    
        # Add the dado category
        dado_seg_color = zpy.color.random_color(output_style="frgb")
        saver.add_category(name="dado", color=dado_seg_color)
    
        # Segment the dado (make sure a material exists for the object!)
        zpy.objects.segment("dado", color=dado_seg_color)
        
        # Original dice pose
        zpy.objects.save_pose("dado", "dado_pose_og")
        
        #Original camera pose
        zpy.objects.save_pose("Camera", "Camera_pose_og")
    
        # Save the positions of objects so we can jitter them later
        zpy.objects.save_pose("Camera", "cam_pose")
        zpy.objects.save_pose("dado", "dado_pose")
        
        asset_dir = Path(bpy.data.filepath).parent
        texture_paths = [
            asset_dir / Path("textures/madera.png"),
            asset_dir / Path("textures/ladrillo.png"),
        ]
        
    
        # Run the sim.
        for step_idx in range(num_steps):
            # Example logging
            # stp = zpy.blender.step()
            # print("BLENDER STEPS: ", stp.num_steps)
            # log.debug("This is a debug log")
    
            # Return camera and dado to original positions
            zpy.objects.restore_pose("Camera", "cam_pose")
            zpy.objects.restore_pose("dado", "dado_pose")
            
            # Rotate camera
            location = bpy.context.scene.objects["Camera"].location
            angle = step_idx*360/num_steps
            location = rotate(location, angle, axis=(0, 0, 1))
            bpy.data.objects["Camera"].location = location
            
            # Jitter dado pose
            zpy.objects.jitter(
                "dado",
                translate_range=((-300, 300), (-300, 300), (0, 0)),
                rotate_range=(
                    (0, 0),
                    (0, 0),
                    (-math.pi, math.pi),
                ),
            )
    
            # Jitter the camera pose
            zpy.objects.jitter(
                "Camera",
                translate_range=(
                    (-5, 5),
                    (-5, 5),
                    (-5, 5),
                ),
            )
    
            # Camera should be looking at dado
            zpy.camera.look_at("Camera", bpy.data.objects["dado"].location)
            
            texture_path = random.choice(texture_paths)
            
            # HDRIs are like a pre-made background with lighting
            # zpy.hdris.random_hdri()
    
            # Pick a random texture from the 'textures' folder (relative to blendfile)
            # Textures are images that we will map onto a material
            new_mat = zpy.material.make_mat_from_texture(texture_path)
            # zpy.material.set_mat("dado", new_mat)
    
            # Have to segment the new material
            zpy.objects.segment("dado", color=dado_seg_color)
    
            # Jitter the dado material
            # zpy.material.jitter(bpy.data.objects["dado"].active_material)
    
            # Jitter the HSV for empty and full images
            '''
            hsv = (
                random.uniform(0.49, 0.51),  # (hue)
                random.uniform(0.95, 1.1),  # (saturation)
                random.uniform(0.75, 1.2),  # (value)
            )
            '''
    
            # Name for each of the output images
            rgb_image_name = zpy.files.make_rgb_image_name(step_idx)
            iseg_image_name = zpy.files.make_iseg_image_name(step_idx)
            depth_image_name = zpy.files.make_depth_image_name(step_idx)
    
            # Render image
            zpy.render.render(
                rgb_path=saver.output_dir / rgb_image_name,
                iseg_path=saver.output_dir / iseg_image_name,
                depth_path=saver.output_dir / depth_image_name,
                # hsv=hsv,
            )
    
            # Add images to saver
            saver.add_image(
                name=rgb_image_name,
                style="default",
                output_path=saver.output_dir / rgb_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=iseg_image_name,
                style="segmentation",
                output_path=saver.output_dir / iseg_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=depth_image_name,
                style="depth",
                output_path=saver.output_dir / depth_image_name,
                frame=step_idx,
            )
    
            # Add annotation to segmentation image
            saver.add_annotation(
                image=rgb_image_name,
                seg_image=iseg_image_name,
                seg_color=dado_seg_color,
                category="dado",
            )
            
    
        # Write out annotations
        saver.output_annotated_images()
        saver.output_meta_analysis()
    
        # ZUMO Annotations
        zpy.output_zumo.OutputZUMO(saver).output_annotations()
    
        # COCO Annotations
        zpy.output_coco.OutputCOCO(saver).output_annotations()
        
        # Return to the initial state
        zpy.objects.restore_pose("dado", "dado_pose_og")
        zpy.objects.restore_pose("Camera", "Camera_pose_og")
    

    Is this my fault or an actual bug?

    • OS: Ubuntu 20.04
    • Python 3.9
    • Blender 2.93
    • zpy: latest
    bug 
    opened by franferraz98 7
  • Better Run button

    To properly run the "run" script, a zpy user has to click the "Run" button in the ZPY panel. If they instead click the "Run" button in the Blender scripting panel, the scene will not be saved and reset, resulting in simulation drift. We should either provide a warning or make the "Run" button in the Blender scripting panel a valid option as well.

    bug enhancement 
    opened by hu-po 7
  • Question: Is there a method to obtain the height and width of every pixel of a depth map?

    Hi, I would like to obtain this information to accompany the depth maps produced by ZPY. Is this possible using ZPY's implementation? How would I go about this? Thank you!
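
    For what it's worth, if the camera intrinsics are known, the metric footprint of each pixel can be approximated from the depth map with pinhole-camera geometry. A rough sketch (NumPy; the field-of-view defaults are assumptions, and this is not a zpy API):

    import numpy as np

    def pixel_footprint(depth, fov_x_deg=39.6, fov_y_deg=27.0):
        # depth: (H, W) array of per-pixel depths in world units.
        h, w = depth.shape
        # Focal lengths in pixels, derived from the (assumed) FOV.
        fx = (w / 2) / np.tan(np.radians(fov_x_deg) / 2)
        fy = (h / 2) / np.tan(np.radians(fov_y_deg) / 2)
        # Approximate width and height of each pixel's footprint.
        return depth / fx, depth / fy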

    question 
    opened by tgorm1965 6
  • Code Formatting

    Ran automatic code formatting with the following git pre-commit hooks:

    • black
    • trailing-whitespace-fixer
    • end-of-file-fixer
    • add-trailing-comma

    Added a pre-commit config file in case any contributors would like to use these pre-commit git hooks going forward.

    opened by zkneupper 6
  • Cannot create COCO annotations when RGB and segmentation images are in separate folders

    Hi, I am trying to generate data with annotations, where my output_dir is the parent folder and I save my RGB and ISEG images in child folders, for example data/rgb_images and data/iseg_images. I also have another folder for annotations, data/coco_annotations. I am able to render images successfully; however, I cannot generate COCO annotations. As far as I understand from the source code, it builds each image path as saver.output_dir + filename instead of using the directories where the images are actually stored.

    Thanks!!!

    opened by aadi-mishra 4
  • Allow Custom scaling for HDRI Background

    Hi, this is a small feature I'd like added to the zpy.hdris.random_hdri() function. Currently, the function doesn't accept a custom scale from the user and renders the HDRI background at the default scale w.r.t. the 3D object.

    It would be helpful if a parameter could be added to random_hdri() and passed through to load_hdri(). This would allow us to play with the HDRI background scaling.

    hdri enhancement 
    opened by Resh-97 4
  • Give multiple objects the same tag

    Describe the solution you'd like: Not sure if this already exists, but I'd like to be able to give numerous object meshes the same label. If it does, please let me know!

    question 
    opened by yashdutt20 4
  • AOV Pass Error

    Describe the bug: A KeyError is occurring in zpy. Can you help me figure out how to fix this?

    To Reproduce: I followed the Suzanne script from the first video of the YouTube channel.

    Expected behavior: The images are saved and rendered.

    Screenshots:

        File "/Applications/Blender.app/Contents/Resources/2.91/python/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
          view_layer['aovs'][-1]['name'] = style
        KeyError: 'bpy_struct[key]: key "aovs" not found'
          In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x1576194d0>)
          In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x15761db00>)
          In call to configurable 'render_aov' (<function render_aov at 0x157623560>)
        Error: Python script failed, check the message in the system console
        Executing script failed with exception
        Error: Python script failed, check the message in the system console

    Desktop (please complete the following information):

    • OS: Mac OS Catalina 10.15.5
    • latest zpy
    opened by yashdutt20 3
  • KeyError: 'bpy_struct[key]: key "aovs" not found'

    Describe the bug: I downloaded the Suzanne 1 example from your repository and ran it using Blender 2.91 and your library.

    I got the following error:

    Traceback (most recent call last):
      File "/run.py", line 78, in <module>
      File "/run.py", line 35, in run
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 204, in render_aov
        output_node = make_aov_file_output_node(style=style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 82, in make_aov_file_output_node
        zpy.render.make_aov_pass(style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
    KeyError: 'bpy_struct[key]: key "aovs" not found'
      In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x7f73bfd25f80>)
      In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x7f73bfd25ef0>)
      In call to configurable 'render_aov' (<function render_aov at 0x7f73bfd28f80>)
    Error: Python script failed, check the message in the system console
    

    To Reproduce: Steps to reproduce the behavior:

    1. Install Blender 2.91.
    2. Install the zpy addon and the zpy-zumo Python library.
    3. Download the Suzanne 1 run.py script.
    4. Run the Suzanne 1 run.py script.

    Expected behavior: The Suzanne 1 run.py script runs without errors.

    bug 
    opened by VictorAtPL 3
  • Material jitter

    Hi

    How would I go about jittering just one object's material, whilst leaving the others constant?

    So, for example, say I've got a car object. Car tyres are always matt black and the taillights are always red, but the paintwork varies: it might be gloss red or matt blue.

    Is this to do with the way that materials are assigned in Blender? I.e. the tyres would be fixed as black, and the paintwork would be inherited or determined by zpy?
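
    One plausible approach (a sketch; the object and material slot names are hypothetical, and zpy.material.jitter is the same call used in the bounding-box issue above): jitter only the paintwork material and leave the others untouched.

    import bpy
    import zpy

    # Jitter only the paintwork; tyres and taillights keep their materials.
    car = bpy.data.objects["Car"]  # hypothetical object name
    paint_mat = car.material_slots["paint"].material  # hypothetical slot name
    zpy.material.jitter(paint_mat)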

    thank you

    Andrew

    opened by barney2074 0
  • HDRI & texture randomisation

    Hi,

    Thank you for zpy, I'm loving it and it's just what I need!

    I've worked through the tutorials. The Suzanne 3 YouTube content seems to differ from the latest Suzanne 3 run.py on GitHub: the YouTube version has a Python list of explicit texture and HDRI paths, whereas the GitHub version seems to have moved this into a function?

    To get it to work, I thought from the documentation that I just needed to create a textures directory and an hdris/1k directory in the same folder as the .blend file, like so:

    path/foo.blend
    path/textures/1.jpg
    path/textures/2.jpg
    path/hdris/1k/a.exr
    path/hdris/1k/b.exr

    However, I get a bunch of errors, so it looks like this is not correct. I would be very grateful if you could point me in the right direction. Thanks again.

    Andrew

    opened by barney2074 1
  • How to get the Normalised segmentation?

    I want to generate the segmentation_float with the zpy.output_coco.OutputCOCO(saver).output_annotations().

    When I changed the code, it didn't return any segmentation, just the bbox.

    The reason I want to normalise these is that I am using a 4K image size for rendering and getting large annotation files. For example, for 5 images the annotation file was 17 MB.

            {
                "category_id": 0,
                "image_id": 0,
                "id": 0,
                "iscrowd": false,
                "bbox": [
                    311.01,
                    239.01,
                    16.980000000000018,
                    240.98000000000002
                ],
                "area": 4091.8404000000046
            },
            {
                "category_id": 1,
                "image_id": 0,
                "id": 1,
                "iscrowd": false,
                "bbox": [
                    279.01,
                    223.01,
                    79.98000000000002,
                    120.98000000000002
                ],
                "area": 9675.980400000004
            },
    
    opened by c3210927 0
Releases

Latest release: v1.4.1rc9

Owner: Zumo Labs ("The world is a simulation.")