A Simulation Environment to train Robots in Large Realistic Interactive Scenes

Overview

iGibson: A Simulation Environment to train Robots in Large Realistic Interactive Scenes

iGibson is a simulation environment providing fast visual rendering and physics simulation based on Bullet. iGibson is equipped with fifteen fully interactive high-quality scenes, hundreds of large 3D scenes reconstructed from real homes and offices, and compatibility with datasets like CubiCasa5K and 3D-Front, providing 8000+ additional interactive scenes. Some of the features of iGibson include domain randomization, integration with motion planners, and easy-to-use tools to collect human demonstrations. With these scenes and features, iGibson allows researchers to train and evaluate robotic agents that use visual signals to solve navigation and manipulation tasks such as opening doors, picking up and placing objects, or searching in cabinets.
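
To make this concrete, here is a minimal usage sketch (not copied from the official docs; the config path is a placeholder and the GUI mode name may differ between versions) that creates an iGibson environment from a config file and steps it with random actions through the Gym interface:

    from igibson.envs.igibson_env import iGibsonEnv

    # Placeholder config; real configs describing the robot, scene, and task ship with the package.
    env = iGibsonEnv(config_file="path/to/turtlebot_nav.yaml", mode="headless")

    env.reset()
    for _ in range(100):
        action = env.action_space.sample()           # random action from the robot's action space
        state, reward, done, info = env.step(action)
        if done:
            env.reset()
    env.close()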

Latest Updates

[8/9/2021] Major update to iGibson, reaching iGibson 2.0. For details, please refer to our arXiv preprint.

  • iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks (see the sketch after this list).
  • iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked.
  • iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.
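
The object-state bullets above translate into a small Python API. The following is a hedged sketch: state names and availability depend on each object's annotated abilities, and `obj` is assumed to be an interactive object already imported into the simulator.

    from igibson import object_states

    # Read an extended (continuous) state, if the object supports it.
    if object_states.Temperature in obj.states:
        temperature = obj.states[object_states.Temperature].get_value()

    # Write an extended state and query a logic predicate derived from it.
    obj.states[object_states.Temperature].set_value(100.0)
    is_cooked = obj.states[object_states.Cooked].get_value()   # predicate mapped from temperature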

[12/1/2020] Major update to iGibson, reaching iGibson 1.0. For details, please refer to our arXiv preprint.

  • Release of the iGibson dataset, which includes 15 fully interactive scenes and 500+ object models annotated with materials and physical attributes on top of existing 3D articulated models.
  • Compatibility with CubiCasa5K and 3D-Front scene descriptions, leading to more than 8000 extra interactive scenes!
  • New features in iGibson: physically based rendering, 1-beam and 16-beam LiDAR, domain randomization, motion planning integration, tools to collect human demos, and more! (See the sketch after this list.)
  • Code refactoring, better class structure and cleanup.
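
As a rough illustration of the scene and randomization features listed above, the sketch below loads one of the fully interactive scenes with texture and object randomization enabled. Module paths and keyword names are best-effort assumptions; check the domain-randomization examples in the docs.

    from igibson.simulator import Simulator
    from igibson.scenes.igibson_indoor_scene import InteractiveIndoorScene

    s = Simulator(mode="headless")
    scene = InteractiveIndoorScene(
        "Rs_int",                    # one of the 15 fully interactive iGibson scenes
        texture_randomization=True,  # randomize materials across resets
        object_randomization=True,   # swap object models within each category
    )
    s.import_scene(scene)

    for _ in range(100):
        s.step()
    s.disconnect()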

[05/14/2020] Added dynamic light support 🔦

[04/28/2020] Added support for Mac OSX 💻

Citation

If you use iGibson or its assets and models, consider citing the following publications:

@misc{li2021igibson,
      title={iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks}, 
      author={Chengshu Li and Fei Xia and Roberto Mart\'in-Mart\'in and Michael Lingelbach and Sanjana Srivastava and Bokui Shen and Kent Vainio and Cem Gokmen and Gokul Dharan and Tanish Jain and Andrey Kurenkov and Karen Liu and Hyowon Gweon and Jiajun Wu and Li Fei-Fei and Silvio Savarese},
      year={2021},
      eprint={2108.03272},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
@inproceedings{shen2021igibson,
      title={iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes}, 
      author={Bokui Shen and Fei Xia and Chengshu Li and Roberto Mart\'in-Mart\'in and Linxi Fan and Guanzhi Wang and Claudia P\'erez-D'Arpino and Shyamal Buch and Sanjana Srivastava and Lyne P. Tchapmi and Micael E. Tchapmi and Kent Vainio and Josiah Wong and Li Fei-Fei and Silvio Savarese},
      booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year={2021},
      pages={accepted},
      organization={IEEE}
}

Documentation

The documentation for iGibson can be found here: iGibson Documentation. It includes an installation guide (including data download instructions), a quickstart guide, code examples, and API references.

If you want to know more about iGibson, you can also check out our webpage, the iGibson 2.0 arXiv preprint, and the iGibson 1.0 arXiv preprint.

Downloading the Dataset of 3D Scenes

For instructions on installing iGibson and downloading the datasets, you can visit the installation guide and the dataset download guide.

We also link other datasets to iGibson: we include support for CubiCasa5K and 3D-Front scenes, adding more than 10000 extra interactive scenes to use in iGibson! Check our documentation on how to use them.

We also maintain compatibility with datasets of 3D reconstructed large real-world scenes (homes and offices) that you can download and use with iGibson. For the Gibson Dataset and the Stanford 2D-3D-Semantics Dataset, please fill out this form. For the Matterport3D Dataset, please fill out this form and send it to [email protected]. Please put "use with iGibson simulator" in your email. Check our dataset download guide for more details.
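
Once a reconstructed dataset is downloaded, a static (non-interactive) scene can be loaded roughly as follows. This is a hedged sketch: the class and module names are best-effort assumptions to be checked against the examples.

    from igibson.simulator import Simulator
    from igibson.scenes.gibson_indoor_scene import StaticIndoorScene

    s = Simulator(mode="headless")
    scene = StaticIndoorScene("Rs")  # "Rs" is one of the reconstructed Gibson scenes
    s.import_scene(scene)

    for _ in range(100):
        s.step()
    s.disconnect()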

Using iGibson with VR

If you want to use the iGibson VR interface, please visit the [VR guide (TBA)].

Contributing

This is the GitHub repository for the iGibson 2.0 release (pip package igibson). (For iGibson 1.0, please use the 1.0 branch.) Bug reports, suggestions for improvement, as well as community developments are encouraged and appreciated. Please consider creating an issue or sending us an email.

Support for our previous version of the environment, Gibson, can be found in the following repository.

Acknowledgments

iGibson uses code from a few open source repositories. Without the efforts of these folks (and their willingness to release their implementations under permissible copyleft licenses), iGibson would not be possible. We thank these authors for their efforts!

Comments
  • Motion planning doesn't avoid obstacles

    Motion-planned arm movement will not avoid walls in an interactive scene. Do walls have a body ID like floors that should be appended to the MotionPlanningWrapper's obstacles list?
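
    [Editor's note, not part of the original issue] A possible workaround sketch: collect the wall body IDs from the interactive scene and append them to the wrapper's obstacle list. The "walls" category name and the objects_by_category / get_body_ids attributes are assumptions to verify against your iGibson version.

    def add_walls_as_obstacles(env, planner):
        """Append wall body IDs from the interactive scene to the motion planner's obstacles."""
        for wall in env.scene.objects_by_category.get("walls", []):
            planner.obstacles.extend(wall.get_body_ids())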

    opened by CharlesAverill 31
  • get_lidar_all

    Hello, https://github.com/StanfordVL/iGibson/blob/5f8d253694b23b41c53959774203ba5787578b74/igibson/render/mesh_renderer/mesh_renderer_cpu.py#L1390 The function get_lidar_all is not working. The camera does not turn during the 4 iterations, so the result of the readings is the same chair scene rotated 90 degrees four times and patched together. I am trying to reconstruct a 360-degree scene by transforming the 3D streams to the global coordinate system and patching them together, but nothing is working. Please help.
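
    [Editor's note, not part of the original issue] Independently of the renderer API, stitching scans taken at four 90-degree yaw rotations into one world frame is plain rigid-body math; below is a generic NumPy sketch, with camera poses and points as assumed inputs.

    import numpy as np

    def yaw_rotation(theta):
        """Rotation matrix about the world z-axis by angle theta (radians)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def to_world(points_cam, yaw, camera_position):
        """Transform an (N, 3) array of camera-frame points into the world frame."""
        return points_cam @ yaw_rotation(yaw).T + camera_position

    # Merge four scans taken while rotating the camera by 90 degrees each time.
    scans = [np.random.rand(100, 3) for _ in range(4)]   # placeholders for the four readings
    merged = np.vstack([to_world(scan, i * np.pi / 2, np.zeros(3)) for i, scan in enumerate(scans)])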

    opened by elhamAm 22
  • Exception: floors.txt cannot be found in model: area1

    Hi, something is going wrong for me. When I run roslaunch gibson2-ros turtlebot_rgbd.launch, it shows: Exception: floors.txt cannot be found in model: area1. I have downloaded the entire gibson_v2 dataset, and the area1 subset does not contain the file floors.txt. How can I get floors.txt?

    opened by Jingjinganhao 18
  • ERROR: Unable to initialize EGL

    Hi team, thank you for maintaining this project.

    My iGibson installation went fine, but I am facing an issue that seems common among many iGibson beginners.

    (igib) ➜  ~ python -m igibson.examples.environments.env_nonint_example
    
     _   _____  _  _
    (_) / ____|(_)| |
     _ | |  __  _ | |__   ___   ___   _ __
    | || | |_ || || '_ \ / __| / _ \ | '_ \
    | || |__| || || |_) |\__ \| (_) || | | |
    |_| \_____||_||_.__/ |___/ \___/ |_| |_|
    
    ********************************************************************************
    Description:
        Creates an iGibson environment from a config file with a turtlebot in Rs (not interactive).
        It steps the environment 100 times with random actions sampled from the action space,
        using the Gym interface, resetting it 10 times.
        ********************************************************************************
    INFO:igibson.render.mesh_renderer.get_available_devices:Command '['/home/mukul/iGibson/igibson/render/mesh_renderer/build/test_device', '0']' returned non-zero exit status 1.
    INFO:igibson.render.mesh_renderer.get_available_devices:Device 0 is not available for rendering
    WARNING:igibson.render.mesh_renderer.mesh_renderer_cpu:Device index is larger than number of devices, falling back to use 0
    WARNING:igibson.render.mesh_renderer.mesh_renderer_cpu:If you have trouble using EGL, please visit our trouble shooting guideat http://svl.stanford.edu/igibson/docs/issues.html
    libEGL warning: DRI2: failed to create dri screen
    libEGL warning: DRI2: failed to create dri screen
    ERROR: Unable to initialize EGL
    

    I went through all the closed issues related to this, but nothing helped. I also went through the troubleshooting guide and things seemed fine to me. Here are the outputs of some commands I ran to check the EGL installation:

    • (igib) ➜  ~ ldconfig -p | grep EGL
      	libEGL_nvidia.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.0
      	libEGL_mesa.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0
      	libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so.1
      	libEGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so
      
    • (igib) ➜  ~ nvidia-smi
      Thu Mar 31 15:28:55 2022       
      +-----------------------------------------------------------------------------+
      | NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
      |-------------------------------+----------------------+----------------------+
      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
      |                               |                      |               MIG M. |
      |===============================+======================+======================|
      |   0  NVIDIA GeForce ...  On   | 00000000:01:00.0  On |                  N/A |
      | 41%   39C    P8    20W / 215W |    331MiB /  8192MiB |      2%      Default |
      |                               |                      |                  N/A |
      +-------------------------------+----------------------+----------------------+
      
    • Reinstalling with USE_GLAD set to FALSE didn't work either.

    • (base) ➜  ~ ./iGibson/igibson/render/mesh_renderer/build/query_devices
      2
      
      (base) ➜  ~ ./iGibson/igibson/render/mesh_renderer/build/test_device 0
      libEGL warning: DRI2: failed to create dri screen
      libEGL warning: DRI2: failed to create dri screen
      INFO: Unable to initialize EGL
      
      
      (base) ➜  ~ ./iGibson/igibson/render/mesh_renderer/build/test_device 1
      INFO: Loaded EGL 1.5 after reload.
      INFO: GL_VENDOR=Mesa/X.org
      INFO: GL_RENDERER=llvmpipe (LLVM 12.0.0, 256 bits)
      INFO: GL_VERSION=3.1 Mesa 21.2.6
      INFO: GL_SHADING_LANGUAGE_VERSION=1.40
      

    Please let me know if I can share any more information that could be helpful in debugging this.

    Thanks!

    opened by mukulkhanna 16
  • GLSL 1.5.0 is not supported

    Hi,

    I followed the instructions for the Gibson2 installation and ran the demo code test.

    I get this error: GLSL 1.5.0 is not supported. Supported versions are ....

    I did retry the installation with USE_GLAD set to FALSE in CMakeLists, but this resulted in the installation crashing.

    Any ideas on the next steps I can take?

    opened by sanjeevkoppal 14
  • Could you please update your tutorial for ROS integration?

    The demo uses ROS 1, TurtleBot 1, and Python 2.7, which are all out of date. Using a miniconda env based on Python 2.7, you cannot even properly install igibson2!

    opened by MRWANG995 13
  • 4 Questions for iGibson 2.0 / Behavior Challenge

    Thanks, your recent help was great! I am amazed by your support, thank you!

    Here are a few more points:

    • I tried to use a different activity. Therefore I changed behavior_onboard_sensing.yaml by setting task: boxing_books_up_for_storage, but then I got an error message that the ...fixed_furniture file can't be found. So I activated online_sampling in the YAML file (see the config-editing sketch after this list). Does this randomize which objects are loaded and where they are placed?

    But then I got:

    Traceback (most recent call last):
      File "stable_baselines3_behavior_example.py", line 202, in <module>
        main()
      File "stable_baselines3_behavior_example.py", line 137, in main
        env = make_env(0)()
      File "stable_baselines3_behavior_example.py", line 129, in _init
        physics_timestep=1 / 300.0,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_mp_env.py", line 108, in __init__
        automatic_reset=automatic_reset,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 64, in __init__
        render_to_tensor=render_to_tensor,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/igibson_env.py", line 60, in __init__
        render_to_tensor=render_to_tensor,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/env_base.py", line 78, in __init__
        self.load()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 175, in load
        self.load_task_setup()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 164, in load_task_setup
        self.load_behavior_task_setup()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 132, in load_behavior_task_setup
        online_sampling=online_sampling,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/activity/activity_base.py", line 92, in initialize_simulator
        self.initial_state = self.save_scene()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/activity/activity_base.py", line 98, in save_scene
        self.state_history[snapshot_id] = save_internal_states(self.simulator)
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/utils/checkpoint_utils.py", line 38, in save_internal_states
        for name, obj in simulator.scene.objects_by_name.items():
    AttributeError: 'NoneType' object has no attribute 'objects_by_name'
    

    Can you help me get other activities to load? Do I have to take additional steps to load my own activities besides placing them in bddl/activity_definitions/, or would you recommend placing them somewhere else?

    • I would like to use the BEHAVIOR Challenge editor to create a custom activity, but it seems to be inaccessible (https://behavior-annotations.herokuapp.com/). Can you already say when we will be able to use it again? If support on the BehaviorChallenge GitHub is as quick as yours, I also don't mind posting this there ;-)

    • A theoretical question: Is it possible to transport an object that itself carries other objects? E.g., is it possible to put an object into a bin and then transport the bin, including the object, in one hand?

    • Is it possible to do all 100 activities in the discrete action space? If so, how would I remove dust, for example?
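
    [Editor's note, not part of the original issue] A minimal sketch of the config edit described in the first question, using PyYAML. The file paths are placeholders, and whether online_sampling randomizes placements is exactly the open question above.

    import yaml

    config_path = "behavior_onboard_sensing.yaml"        # path to your copy of the config

    with open(config_path) as f:
        config = yaml.safe_load(f)

    config["task"] = "boxing_books_up_for_storage"       # switch to a different BDDL activity
    config["online_sampling"] = True                     # sample object placements online

    with open("behavior_custom.yaml", "w") as f:
        yaml.safe_dump(config, f)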

    opened by meier-johannes94 13
  • Inverse Kinematics example is not up-to-date

    The Inverse Kinematics example script does not work out-of-the-box, and will error out with a message about control_freq being specified in Fetch's configuration file.

    When this error is bypassed by commenting out the assertion, errors still occur. Fetch does not have a "robot_body" attribute, so

    fetch.robot_body.reset_position([0, 0, 0])
    

    should become

    fetch.reset_position([0, 0, 0])
    

    which is the standard in the functioning examples.

    Similarly, it seems that

    fetch.get_end_effector_position()
    

    should become

    fetch.links["gripper_link"].get_position()
    

    RobotLink does not have a body_part_index, so

    robot_id, fetch.links["gripper_link"].body_part_index, [x, y, z], threshold, maxIter
    

    should become something like

    robot_id, fetch.links["gripper_link"].(body/link)_id, [x, y, z], threshold, maxIter
    

    After all of these changes, the example wildly flails Fetch's arm around, which I wouldn't imagine is the intended purpose of the example.

    This script is fairly important for outlining the usage of IK in iGibson. If I fix it, I will submit a PR. Just wanted to outline the issue here as well.

    opened by CharlesAverill 12
  • PointNav Task

    Hi, I was trying to train a PointNav agent using the given example 'stable_baselines3_example.py', but it gives me a memory error (attached). I solved this by reducing 'num_environments' from 8 to 1, but then it isn't converging. I also attached the TensorBoard logs. Do I need to change any other parameters (e.g., learning rate) to make it work with 1 environment?
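
    [Editor's note, not part of the original issue] With a single environment, each PPO update sees 8x less experience than with 8 parallel environments, so the rollout length and learning rate usually need retuning. The values below are illustrative assumptions, not validated hyperparameters; `env` is assumed to be the single iGibson environment created as in the example script.

    from stable_baselines3 import PPO

    model = PPO(
        "MultiInputPolicy",      # the example uses a dict observation space
        env,
        learning_rate=1e-4,      # often lowered when updates are noisier
        n_steps=2048,            # longer rollouts per env keep the update batch comparable
        batch_size=64,
        verbose=1,
    )
    model.learn(total_timesteps=1_000_000)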

    opened by asfandasfo 11
  • Cannot download the dataset from Gibson Database of 3D Spaces

    Hi @fxia22 and @ChengshuLi, I tried to download the Gibson2 Room Dataset from https://docs.google.com/forms/d/e/1FAIpQLScWlx5Z1DM1M-wTSXaa6zV8lTFkPmTHW1LqMsoCBDWsTDjBkQ/viewform, and I couldn't access the cloud storage because of the following issue.

    This XML file does not appear to have any style information associated with it. The document tree is shown below. UserProjectAccountProblem User project billing account not in good standing.

    The billing account for the owning project is disabled in state absent

    Could you please check if the payment was properly made?

    opened by jjanixe 11
  • docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]

    Hi there,

    I am unable to get either the Docker or the pip installation to run with a GUI on a remote server (Ubuntu 18.04.5 LTS). nvidia-smi shows NVIDIA-SMI 450.80.02, Driver Version 450.80.02, CUDA Version 11.0, with a GeForce RTX 2080 SUPER.

    After installing Docker according to these directions: https://docs.docker.com/engine/install/ubuntu/, sudo docker run hello-world runs successfully. I then cloned the repository and pulled the images:

    git clone [email protected]:StanfordVL/iGibson.git
    cd iGibson
    ./docker/pull-images.sh

    docker images shows that I have these repositories downloaded:

    igibson/igibson-gui latest f1609b44544a 6 days ago 8.11GB
    igibson/igibson     latest e2d4fafb189b 6 days ago 7.48GB

    But sudo ./docker/headless-gui/run.sh elicits this error: Starting VNC server on port 5900 with password 112358 please run "python simulator_example.py" once you see the docker command prompt: docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

    sudo ./docker/base/run.sh also elicits: docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

    One guess is that something is wrong with OpenGL, but I don't know how to fix it. If I run glxinfo -B, I get:

    name of display: localhost:12.0
    libGL error: No matching fbConfigs or visuals found
    libGL error: failed to load driver: swrast
    display: localhost:12  screen: 0
    direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
    OpenGL vendor string: Intel Inc.
    OpenGL renderer string: Intel(R) Iris(TM) Plus Graphics 655
    OpenGL version string: 1.4 (2.1 INTEL-14.7.8)

    Note: I can successfully run xeyes on the server and have it show up on my local machine. And glxgears shows the gears image but the gears are not rotating. (and returns this error: libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast )

    I also tried the steps from the troubleshooting page. ldconfig -p | grep EGL yields:

    libEGL_nvidia.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.0
    libEGL_nvidia.so.0 (libc6) => /usr/lib/i386-linux-gnu/libEGL_nvidia.so.0
    libEGL_mesa.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0
    libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so.1
    libEGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so

    And I checked that /usr/lib/x86_64-linux-gnu/libEGL.so -> libEGL.so.1.0.0

    I also do not appear to have any directories such as /usr/lib/nvidia-vvv (I only have /usr/lib/nvidia, /usr/lib/nvidia-cuda-toolkit, and /usr/lib/nvidia-visual-profiler)

    Any help would be very much appreciated! Thank you so much.

    opened by izkula 10
  • Angular velocity improperly calculated for TwoWheelRobot for proprioception dictionary

    The TwoWheelRobot seems to be incorrectly calculating the base angular velocity that is returned in the proprioception dictionary.

    $\omega$ = angular velocity
    $\upsilon$ = linear velocity
    $V_r$ = right wheel velocity
    $V_l$ = left wheel velocity
    $R$ = wheel radius
    $l$ = wheel axle length

    The incorrect formula can be found here and is

    \omega=\frac{V_r-V_l}{l}
    

    The equations that convert linear and angular velocities into the wheel velocities applied by the differential-drive (DD) controller are here. These equations seem to be the source of truth that the proprioception calculation should match:

    V_l = \frac{\upsilon - \omega \cdot l/2}{R}
    
    V_r = \frac{\upsilon + \omega \cdot l/2}{R}
    

    Solving for $\omega$ and $\upsilon$ results in the following equations:

    \omega = \frac{(V_l - V_r) \cdot R }{2 \cdot l}
    
    \upsilon = \frac{(V_l + V_r) \cdot R}{2}
    

    Ultimately, I think the angular velocity formula needs to be updated here to this $\omega = \frac{(V_l - V_r) \cdot R }{2 \cdot l}$

    opened by sujaygarlanka 0
  • Error in Mesh Renderer cpu file

    When running the ext_object scripts, I encountered an error in the mesh renderer. On line 1094 of mesh_renderer_cpu.py, the code refers to an attribute of an InstanceGroup object called pose_rot. The actual attribute, as defined in the InstanceGroup object, is poses_rot. The line below is similarly affected, with the pose_trans call needing to be poses_trans. My code works when I fix the typo on line 1094, but I wanted to let you know so you can fix it for others.

    opened by mullenj 0
  • Vision sensor issue in VR environment

    When I put both a Fetch robot and a BehaviorRobot in a VR environment (the BehaviorRobot is the VR avatar) and have a vision sensor in the environment YAML file, I get the issue below. I believe this may be a bug in mesh_renderer_cpu.py, where it tries to get RGB data for all robots in the scene and fails when it reaches the BehaviorRobot. I think it needs to skip BehaviorRobots. Is this in fact a bug, or an issue on my end? Thanks.

    Traceback (most recent call last):
      File "main.py", line 79, in <module>
        main()
      File "main.py", line 73, in main
        state, reward, done, _ = env.step(action)
      File "C:\Users\icaro\513-final-project\igibson\envs\igibson_env.py", line 360, in step
        state = self.get_state()
      File "C:\Users\icaro\513-final-project\igibson\envs\igibson_env.py", line 279, in get_state
        vision_obs = self.sensors["vision"].get_obs(self)
      File "C:\Users\icaro\513-final-project\igibson\sensors\vision_sensor.py", line 155, in get_obs
        raw_vision_obs = env.simulator.renderer.render_robot_cameras(modes=self.raw_modalities)
      File "C:\Users\icaro\513-final-project\igibson\render\mesh_renderer\mesh_renderer_cpu.py", line 1256, in render_robot_cameras
        frames.extend(self.render_single_robot_camera(robot, modes=modes, cache=cache))
      File "C:\Users\icaro\513-final-project\igibson\render\mesh_renderer\mesh_renderer_cpu.py", line 1270, in render_single_robot_camera
        for item in self.render(modes=modes, hidden=hide_instances):
    TypeError: 'NoneType' object is not iterable
    
    opened by sujaygarlanka 0
  • I want to use the quadrotor in iGibson 1.0, but I didn't find the corresponding YAML file

    As shown in the attached picture, I want to use the example code igibson/examples/demo/robot_example.py, in which four robots have a fun cocktail party. But I want to replace one of them with a quadrotor, and I didn't find the quadrotor's YAML file in /igibson/examples/configs. What should I do next?

    opened by YigaoWang 0
  • BehaviorRobot issue when use_tracked_body set to false

    The robot misaligns the body with the hands and head when the use_tracked_body parameter is false. Also, the body falls as if it were disconnected from the hands and head. Neither happens when use_tracked_body is true. The attached picture shows how the robot is rendered at the beginning. Do you know why this may be the case, or is it a bug?

    I am trying to have the robot move about the space using an Oculus joystick, so I assume that setting this parameter to false is required.


    bug 
    opened by sujaygarlanka 5
Releases(2.2.1)
  • 2.2.1(Oct 27, 2022)

    iGibson 2.2.1 is a new patch version with the below changes:

    Changelog:

    • Restores support for legacy BehaviorRobot proprioception dimensionality to match BEHAVIOR baselines, using the legacy_proprioception constructor flag.
    • Fixes setuptools build issues.
    • Removes references to non-dataset scenes.
    • Fixes BehaviorRobot saving/loading bugs.

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.2.0...2.2.1

    Source code(tar.gz)
    Source code(zip)
  • 2.2.0(May 9, 2022)

    iGibson 2.2.0 is a new minor version with the below features:

    Changelog:

    • Fixes iGibson ROS integration
    • Adds the Tiago robot
    • Adds primitive action interface and a sample set of (work-in-progress) object-centric action primitives
    • Fixes some bugs around point nav task robot pose sampling
    • Fixes some bugs around occupancy maps

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.1.0...2.2.0

    Source code(tar.gz)
    Source code(zip)
  • 2.1.0(Mar 10, 2022)

    iGibson 2.1.0 is a bugfix release (that is numbered as a minor version because 2.0.6, which was a breaking change, was incorrectly numbered as a patch).

    Changelog:

    • Fixed performance regression in scenes with large numbers of markers (see #169)
    • Fixed broken iGibson logo
    • Fixed Docker images
    • Removed vendored OpenVR to drastically shrink package size
    • Added better dataset version checking

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.0.6...2.1.0

    Source code(tar.gz)
    Source code(zip)
  • 2.0.6(Feb 17, 2022)

    Bug-fixes

    • Fix texture randomization
    • Renderer updates object poses when the objects' islands are awake
    • Set ignore_visual_shape to True by default
    • EmptyScene render_floor_plane set to True by default
    • Fix shadow rendering for openGL 4.1
    • Fix VR demo scripts

    Improvements

    • Major refactoring of Scene saving and loading
    • Major refactoring of unifying Robots into Objects
    • Make BehaviorRobot inherit BaseRobot
    • Clean up robot demos
    • Add optical flow example
    • Improve AG (assistive grasping)
    • Support for multi-arm robots
    • Handle hidden instances for optimized renderer
    • Unify semantic class ID
    • Clean up ray examples
    • Move VR activation out of BehaviorRobot
    • Base motion planning using onboard sensing, global 2d map, or full observability
    • Add gripper to JR2
    • Add dataset / assets version validation

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.0.5...2.0.6

    Source code(tar.gz)
    Source code(zip)
  • 2.0.5(Jan 21, 2022)

    Re-release of iGibson 2.0.4 due to an issue in the PyPI distribution pipeline.

    Bug-fixes

    • Robot camera rendering where there is non-zero rotation in the x-axis (forward direction)
    • Rendering floor plane in StaticIndoorScene
    • BehaviorRobot assisted grasping ray-casting incorrect
    • BehaviorRobot head rotation incorrect (moving faster than it's supposed to)
    • URDFObject bounding box computation incorrect
    • EGL context error if pybullet GUI created before EGL context
    • Rendering on retina screens
    • Viewer breaks in planning mode when no robot
    • LiDAR rendering

    Improvements

    • Major refactoring of Simulator (including rendering mode), Task, Environment, Robot, sampling code, scene/object/robot importing logic, etc.
    • Better CI and automation
    • Add predicates of BehaviorTask to info of Env
    • Major updates of examples
    • Minor updates of docs

    New Features

    • Add Controller interface to all robots

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.0.3...2.0.5

    Source code(tar.gz)
    Source code(zip)
  • 2.0.3(Nov 10, 2021)

    Bug-fixes

    • pybullet restore state
    • adjacency ray casting
    • link CoM frame computation
    • sem/ins segmentation rendering
    • simulator force_sync renderer
    • material id for objects without valid MTL
    • Open state checking for windows
    • BehaviorRobot trigger fraction out of bound
    • BehaviorRobot AG joint frame not at contact point

    Improvements

    • Refactor iG object inheritance
    • Improve documentation
    • Improve sampling
    • scene caches support FetchGripper robot
    • BehaviorRobot action space: delta action on top of actual pose, not "ghost" pose
    • Upgrade shader version to 460
    • Minify docker container size

    New Features

    • VR Linux support
    • GitHub action CI
    Source code(tar.gz)
    Source code(zip)
  • 2.0.2(Oct 19, 2021)

  • 2.0.1(Sep 8, 2021)

  • 2.0.0(Aug 11, 2021)

    Major update to iGibson, reaching iGibson 2.0. For details, please refer to our arXiv preprint.

    • iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks.
    • iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked.
    • iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.

    iGibson 2.0 is also the version to use with the BEHAVIOR Challenge. For more information, please visit: http://svl.stanford.edu/behavior/challenge.html

    Source code(tar.gz)
    Source code(zip)
  • 2.0.0rc4(Jul 19, 2021)

  • 1.0.3(Jul 19, 2021)

  • 1.0.1(Dec 24, 2020)

    Changes:

    • Fix python2 compatibility issue.
    • Ship examples and config files with the pip package.
    • Fix shape caching issue.

    Note: if you need to download the source code, please download gibson2-1.0.1.tar.gz below instead of the archive GitHub provides, since the latter doesn't include submodules.

    Source code(tar.gz)
    Source code(zip)
    gibson2-1.0.1-cp27-cp27mu-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp35-cp35m-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp36-cp36m-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp37-cp37m-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp38-cp38-manylinux1_x86_64.whl(23.19 MB)
    gibson2-1.0.1.tar.gz(21.23 MB)
  • 1.0.0(Dec 8, 2020)

    Major update to iGibson, reaching iGibson v1.0. For details, please refer to our technical report.

    • Release of the iGibson dataset, which consists of 15 fully interactive scenes and 500+ object models.
    • New features of the Simulator: physically-based rendering; 1-beam and 16-beam LiDAR simulation; domain randomization support.
    • Code refactoring and cleanup.
    Source code(tar.gz)
    Source code(zip)
    gibson2-1.0.0-cp35-cp35m-manylinux1_x86_64.whl(15.45 MB)
    gibson2-1.0.0-cp36-cp36m-manylinux1_x86_64.whl(15.45 MB)
    gibson2-1.0.0-cp38-cp38-manylinux1_x86_64.whl(15.45 MB)
    gibson2-1.0.0.tar.gz(13.09 MB)
  • 0.0.4(Apr 7, 2020)

    iGibson, the Interactive Gibson Environment, is a simulation environment providing fast visual rendering and physics simulation (based on Bullet). It is packed with a dataset of hundreds of large 3D environments reconstructed from real homes and offices, and interactive objects that can be pushed and actuated. iGibson allows researchers to train and evaluate robotic agents that use RGB images and/or other visual sensors to solve indoor (interactive) navigation and manipulation tasks such as opening doors, picking and placing objects, or searching in cabinets.

    Major changes since the original GibsonEnv:

    • Support for agent interaction with the environment
    • Faster rendering, including rendering to tensors
    • Removed the PyOpenGL dependency; better support for headless rendering
    • Support for our latest version of the assets.
    Source code(tar.gz)
    Source code(zip)
    gibson2-0.0.4-cp27-cp27mu-manylinux1_x86_64.whl(3.41 MB)
    gibson2-0.0.4-cp35-cp35m-manylinux1_x86_64.whl(3.41 MB)
Owner
Stanford Vision and Learning Lab