AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

Overview

AtlasNet [Project Page] [Paper] [Talk]

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In CVPR, 2018.

🚀 New branch: AtlasNet + Shape Reconstruction by Learning Differentiable Surface Representations

(Figures: chair.png, chair.gif)

Install

This implementation uses Python 3.6, PyTorch, PyMesh, and CUDA 10.1.

# Copy/Paste the snippet in a terminal
git clone --recurse-submodules https://github.com/ThibaultGROUEIX/AtlasNet.git
cd AtlasNet 

#Dependencies
conda create -n atlasnet python=3.6 --yes
conda activate atlasnet
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch --yes
pip install --user --requirement  requirements.txt # pip dependencies
Optional: compile the Chamfer (MIT) and Metro distance (GPL3 license) extensions.
# Copy/Paste the snippet in a terminal
python auxiliary/ChamferDistancePytorch/chamfer3D/setup.py install #MIT
cd auxiliary
git clone https://github.com/ThibaultGROUEIX/metro_sources.git
cd metro_sources; python setup.py --build # build metro distance #GPL3
cd ../..
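
Before training, it can help to check that PyTorch sees the GPU the compiled Chamfer kernel will run on. A minimal sanity check (illustrative, not part of the repository):

import torch

print("PyTorch version:", torch.__version__)          # expected: 1.7.1 with the setup above
print("CUDA available:", torch.cuda.is_available())   # must be True to use the CUDA Chamfer kernel
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))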

A note on data.

Data download should be automatic. However, due to Google Drive traffic caps, you may have to download the data manually. If you run into an error when running the demo, refer to issue #61.

You can manually download the data from three sources (they are identical):

Please make sure to unzip the archives into the right places:

cd AtlasNet
mkdir data
unzip ShapeNetV1PointCloud.zip -d ./data/
unzip ShapeNetV1Renderings.zip -d ./data/
unzip metro_files.zip -d ./data/
unzip trained_models.zip -d ./training/
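
A quick way to confirm that the archives landed where the scripts expect them (the folder names below are assumed to match the archive names; adjust them if your archives extract differently):

import os

for folder in ["data/ShapeNetV1PointCloud", "data/ShapeNetV1Renderings",
               "data/metro_files", "training/trained_models"]:
    print(folder, "->", "ok" if os.path.isdir(folder) else "MISSING")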

Usage

  • Demo: python train.py --demo
  • Training: python train.py --shapenet13. Monitor training at http://localhost:8890/
  • Latest refactoring (12-2019):
    - [x] Factorize single-view reconstruction and the autoencoder into the same class
    - [x] Factorize the square and sphere templates into the same class
    - [x] Add the latent vector as a bias after the first layer (30% speedup)
    - [x] Remove the last th (tanh) in the decoder
    - [x] Cache all point clouds in one large .pth tensor (drop the nasty Chunk_reader)
    - [x] Make it multi-GPU
    - [x] Add netvision visualization of the results
    - [x] Rewrite the main script in an object-oriented way
    - [x] Check that everything works in the latest PyTorch version
    - [x] Add more layers by default, plus flags for the number of layers and hidden neurons (see the decoder sketch after this list)
    - [x] Add a flag to generate a mesh directly
    - [x] Add a python setup install
    - [x] Make sure GPUs are used at 100%
    - [x] Add the F-score to the Chamfer evaluation and report it
    - [x] Get rid of the ShapeNet v2 data and use v1
    - [x] Fix path issues: no more sys.path.append
    - [x] Make dependencies minimal
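
The flags above (number of layers, hidden neurons, template type) configure the AtlasNet decoder, which maps points sampled on a 2D square or on a sphere, together with the shape's latent vector, to 3D surface points. A minimal sketch of that idea (illustrative only, not the repo's exact module; the layer sizes and names are assumptions):

import torch
import torch.nn as nn

class ToyAtlasDecoder(nn.Module):
    # maps (template point, latent code) pairs to 3D points, AtlasNet-style
    def __init__(self, latent_size=1024, template_dim=2, hidden=512, num_layers=2):
        super().__init__()
        layers = [nn.Linear(template_dim + latent_size, hidden), nn.ReLU()]
        for _ in range(num_layers - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers += [nn.Linear(hidden, 3)]          # one 3D point per sampled template point
        self.mlp = nn.Sequential(*layers)

    def forward(self, template_points, latent):
        # template_points: (B, N, template_dim), latent: (B, latent_size)
        latent = latent.unsqueeze(1).expand(-1, template_points.size(1), -1)
        return self.mlp(torch.cat([template_points, latent], dim=-1))   # (B, N, 3)

# usage: sample 2500 points on the unit square per shape and decode them
decoder = ToyAtlasDecoder()
uv = torch.rand(4, 2500, 2)           # template samples for a batch of 4 shapes
z = torch.randn(4, 1024)              # latent codes (e.g. from the point-cloud or image encoder)
points = decoder(uv, z)               # (4, 2500, 3) reconstructed surface samples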

Quantitative Results

Method                   Chamfer (*1)   F-score (*2)   Metro (*3)   Total train time (min)
Autoencoder 25 Squares   1.35           82.3%          6.82         731
Autoencoder 1 Sphere     1.35           83.3%          6.94         548
SingleView 25 Squares    3.78           63.1%          8.94         1422
SingleView 1 Sphere      3.76           64.4%          9.01         1297

  • (*1) x1000. Computed between 2500 ground-truth points and 2500 reconstructed points.
  • (*2) The threshold is 0.001 (a brute-force sketch of both metrics follows these notes).
  • (*3) x100. Metro is run on unnormalized point clouds, which explains the difference with the paper's numbers.
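
For reference: the Chamfer distance in (*1) averages squared nearest-neighbour distances in both directions, and the F-score in (*2) is the harmonic mean of precision and recall at the distance threshold. A brute-force sketch of both (illustrative only; the repo uses a compiled CUDA kernel, and whether the threshold applies to squared or unsquared distances is an assumption here):

import torch

def chamfer_and_fscore(gt, rec, threshold=0.001):
    # gt: (N, 3) ground-truth points, rec: (M, 3) reconstructed points, single shape
    d = torch.cdist(gt, rec) ** 2          # (N, M) squared pairwise distances
    d1 = d.min(dim=1).values               # for each GT point, its closest reconstructed point
    d2 = d.min(dim=0).values               # for each reconstructed point, its closest GT point
    chamfer = d1.mean() + d2.mean()
    precision = (d2 < threshold).float().mean()
    recall = (d1 < threshold).float().mean()
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer, fscore

cd_value, f_value = chamfer_and_fscore(torch.rand(2500, 3), torch.rand(2500, 3))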

Related projects

Citing this work

@inproceedings{groueix2018,
          title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
          author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
          booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
          year={2018}
        }

Comments
  • RuntimeError: CUDA error: out of memory

    Thank you for the great work! I get this error below when I run: ./training/train_AE_AtlasNet.py

    I checked two more similar issues but this looks different. Any idea how to solve it? Any help appreciated!

    File "./training/train_AE_AtlasNet.py", line 151, in dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function File "./training/train_AE_AtlasNet.py", line 64, in distChamfer P = (rx.transpose(2,1) + ry - 2*zz) RuntimeError: CUDA error: out of memory

    I am running PyTorch 0.4.1 on Ubuntu 18.04.

    FULL CODE:

    (pytorch-atlasnet) [email protected]:~/AtlasNet$ python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt
    Setting up a new session...
    Namespace(accelerated_chamfer=0, batchSize=32, env='AE_AtlasNet', model='', nb_primitives=25, nepoch=120, num_points=2500, super_points=2500, workers=12)
    Random Seed: 314
    {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
    category 02691156 files 4044 0.999752781211372 %
    category 02828884 files 1813 0.9983480176211453 %
    category 02933112 files 1571 0.9993638676844784 %
    category 02958343 files 3514 0.46878335112059766 %
    category 03001627 files 6778 1.0 %
    category 03211117 files 1093 0.9981735159817352 %
    category 03636649 files 2309 0.9961173425366695 %
    category 03691459 files 1597 0.9870210135970334 %
    category 04090263 files 2373 1.0 %
    category 04256520 files 3173 1.0 %
    category 04379243 files 8436 0.9914208485133388 %
    category 04401088 files 1050 0.9980988593155894 %
    category 04530566 files 1939 1.0 %
    (the same dictionary and category listing is printed a second time for the test split)
    training set 31747
    testing set 7943
    Traceback (most recent call last):
      File "./training/train_AE_AtlasNet.py", line 151, in <module>
        dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function
      File "./training/train_AE_AtlasNet.py", line 64, in distChamfer
        P = (rx.transpose(2,1) + ry - 2*zz)
    RuntimeError: CUDA error: out of memory
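
    A common cause of this OOM is that P = (rx.transpose(2,1) + ry - 2*zz) materializes the full pairwise-distance matrix, so memory grows with the square of the number of points times the batch size; lowering the batch size (batchSize=32 above) or the number of points is the usual fix. If that is not enough, the distances can be computed in chunks. A minimal sketch (not the repo's kernel; names and the chunk size are illustrative):

    import torch

    def chamfer_chunked(p1, p2, chunk=512):
        # p1: (B, N, 3), p2: (B, M, 3); returns per-point min squared distances (B, N) and (B, M)
        d1 = torch.empty(p1.shape[0], p1.shape[1], device=p1.device)
        d2 = torch.full((p2.shape[0], p2.shape[1]), float("inf"), device=p2.device)
        for start in range(0, p1.shape[1], chunk):
            block = torch.cdist(p1[:, start:start + chunk], p2) ** 2   # only (B, chunk, M) at a time
            d1[:, start:start + chunk] = block.min(dim=2).values
            d2 = torch.min(d2, block.min(dim=1).values)
        return d1, d2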

    help wanted 
    opened by spha-code 15
  • To be honest, the latest code is very hard to understand

    I have compared our method with AtlasNet several times, and I need to edit the source code each time. However, the latest code is very hard to understand because it is highly abstracted; it took me an hour to understand the relationship between the modules.

    help wanted 
    opened by hzxie 10
  • Stuck after launching visdom server

    I ran the demo successfully, but after I launch the visdom server

    python -m visdom.server -p 8888

    I am stuck and cannot type any command in my Anaconda window anymore. How do I continue? Thanks!
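
    The server blocks the terminal it is launched from, so the usual fix is to start it in a second terminal or in the background. A minimal sketch that launches it from Python without blocking (illustrative only):

    import subprocess, sys

    # start the visdom server without blocking the current shell
    visdom_proc = subprocess.Popen(
        [sys.executable, "-m", "visdom.server", "-p", "8888"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    # ... run the training/demo in this process; call visdom_proc.terminate() to stop the server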

    help wanted 
    opened by spha-code 10
  • [BUG] Chamfer Distance is not Correct

    I tried to debug chamfer.cu by printing the values of the tensors. I created two point clouds containing 3 and 5 points, respectively. The values are shown below.

    (1,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -20.4838  4.4935  6.1395
      -3.7283 -0.7629  1.7736
    
    (2,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -17.4992  4.4902  5.0518
      -1.6003 -1.2430  0.8040
    [ Variable[CUDAType]{2,3,3} ]
    (1,.,.) = 
      0.0051  0.1850  0.0004
      0.0051  0.1850  0.0093
      0.0096  0.1850  0.0081
      0.0096  0.1850  0.0016
      0.0075  0.1850  0.0004
    
    (2,.,.) = 
     -0.1486 -0.0932 -0.0014
     -0.0406 -0.0932 -0.0017
     -0.2057 -0.0932 -0.0001
     -0.0915 -0.0932 -0.0001
      0.0103 -0.0932 -0.0001
    [ Variable[CUDAType]{2,5,3} ]
    

    I also added print statements in the CUDA functions and got the following output.

    2i = 0, n = 3, j = 0, k = 0, d = 0.03425420, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 1, k = 0, d = 0.06742091, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 2, k = 0, d = 0.03920735, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00038808)
    2i = 1, n = 3, j = 0, k = 0, d = 0.03573948, x = (-0.08192606 0.01907521 0.02376382) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 1, k = 0, d = 0.03405631, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 2, k = 0, d = 0.03437031, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00928491)
    2i = 0, n = 3, j = 0, k = 1, d = 0.03434026, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 1, k = 1, d = 0.06641452, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 2, k = 1, d = 0.03897782, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00928491)
    2i = 1, n = 3, j = 0, k = 1, d = 0.03490656, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 1, k = 1, d = 0.03394968, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 2, k = 1, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00713918)
    2i = 0, n = 3, j = 0, k = 2, d = 0.03438481, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 1, k = 2, d = 0.06842789, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 2, k = 2, d = 0.03939636, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00809300)
    2i = 1, n = 3, j = 0, k = 2, d = 0.03508088, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 1, k = 2, d = 0.03389502, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 2, k = 2, d = 0.03423970, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00253408)
    2i = 0, n = 3, j = 0, k = 3, d = 0.03432181, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 1, k = 3, d = 0.06916460, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 2, k = 3, d = 0.03956439, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00158027)
    2i = 1, n = 3, j = 0, k = 3, d = 0.03652760, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 1, k = 3, d = 0.03404036, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 2, k = 3, d = 0.03435681, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00364473)
    3i = 0, n = 3, j = 0, k = 4, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 1, k = 4, d = 0.06842767, x = (-0.20483765 0.04493479 0.06139540) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 2, k = 4, d = 0.03941518, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00749534 0.18500790 0.00038808)
    3i = 1, n = 3, j = 0, k = 4, d = 0.03643737, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 1, k = 4, d = 0.03406866, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 2, k = 4, d = 0.03437987, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00602855)
    i = 0, n = 3, j = 0, best = 0.03425420, best_i = 0
    i = 0, n = 3, j = 1, best = 0.06641452, best_i = 1
    i = 0, n = 3, j = 2, best = 0.03897782, best_i = 1
    i = 1, n = 3, j = 0, best = 0.03490656, best_i = 1
    i = 1, n = 3, j = 1, best = 0.03389502, best_i = 2
    i = 1, n = 3, j = 2, best = 0.03423970, best_i = 2
    

    For batch 0 (i = 0), everything seems correct. However, for batch 1 (i = 1), the printed point values do not appear in the input tensors. Is there something wrong with the code?

    chamfer 
    opened by hzxie 10
  • Evaluate RGB image with pretrained model

    Hi, I am trying to evaluate the pretrained SVR AtlasNet model on an RGB image of a chair. My parameters are very similar to the demo, I used the demo grid generation, and I get weird results (viewed in the Chrome 3D viewer). When I run your demo plane.jpg through my network, I get good results in the 3D viewer. (Images: demo plane and three weird results, omitted.) Can you please tell me how to evaluate an RGB image?

    testing 
    opened by Itamare1982 9
  • Test set used as validation to choose best model

    In train_AE_Atlasnet.py, the test set is used as the validation set to choose the best model. The test set should never be used during training and especially not to choose the best model as this biases the results. It's probably more appropriate to report the results on the last training epoch if there was no validation set.
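
    One way to avoid this is to carve a validation split out of the training data and touch the test set only once, at the end. A minimal sketch with torch.utils.data (the 10% ratio and the dummy dataset are arbitrary stand-ins):

    import torch
    from torch.utils.data import TensorDataset, random_split

    dataset = TensorDataset(torch.randn(100, 2500, 3))   # stand-in for the ShapeNet training shapes
    n_val = int(0.1 * len(dataset))                      # hold out 10% for validation
    train_set, val_set = random_split(
        dataset, [len(dataset) - n_val, n_val],
        generator=torch.Generator().manual_seed(0))
    # select the best checkpoint on val_set; evaluate on the test set only for the final numbers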

    bug 
    opened by lynetcha 9
  • The corresponding normalized mesh

    I downloaded the corresponding normalized meshes (only 58 MB) from the link you provided. I found that the number of meshes is much smaller than the number of corresponding point clouds. Could you please provide the full dataset of corresponding normalized meshes? Thank you!

    data 
    opened by wang-ps 9
  • Cannot download the point cloud data

    Hi! I'm trying to download the point cloud data provided in this link: https://cloud.enpc.fr/s/j2ECcKleA1IKNzk but the network fails every time I try to download.

    Do you know what's going on or how to download them?

    Thank you in advance!

    data 
    opened by jjpark 8
  • validation loss explodes

    (Screenshot of the loss curves omitted.) I directly ran the script train_AE_Atlasnet.py without any modification. As the screenshot shows, the performance is good on the training set but quite poor on the validation set: the validation loss increases quickly and does not decrease.

    pytorch 
    opened by AkonLau 8
  • About the point cloud dataset

    I found that some of the provided point clouds are missing. Could you provide the full point cloud dataset, or tell me how to generate it? Thank you!

    data 
    opened by guoyan1991 7
  • Memory Leak

    I found that the unused self.dist1 and self.dist2 in the file "nndistance/functions/nnd.py" cause a memory leak in my environment (Python 3.5.2 with PyTorch 0.4.0).

    class NNDFunction(Function):
        def forward(self, xyz1, xyz2):
            dist1,dist2=cuda_compute_from(xyz1,xyz2)
            # following two lines cause memory leak
            self.dist1 = dist1
            self.dist2 = dist2
            return dist1, dist2
    
        def backward(self, graddist1, graddist2):
            gradxyz1,gradxyz2=grad_cuda_compute_from(graddist1,graddist2)
            return gradxyz1, gradxyz2
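
    With the modern torch.autograd.Function API, tensors needed for backward go through ctx.save_for_backward instead of being stored on self, which avoids this kind of leak. A sketch of the pattern (cuda_compute_from and grad_cuda_compute_from are the placeholders from the snippet above, standing in for the actual kernels):

    from torch.autograd import Function

    class NNDFunction(Function):
        @staticmethod
        def forward(ctx, xyz1, xyz2):
            dist1, dist2 = cuda_compute_from(xyz1, xyz2)   # placeholder CUDA kernel call
            ctx.save_for_backward(xyz1, xyz2)              # released automatically after backward
            return dist1, dist2

        @staticmethod
        def backward(ctx, graddist1, graddist2):
            xyz1, xyz2 = ctx.saved_tensors
            gradxyz1, gradxyz2 = grad_cuda_compute_from(graddist1, graddist2)  # placeholder
            return gradxyz1, gradxyz2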
    
    chamfer pytorch 
    opened by liuyuan-pal 7
  • Question about running

    Hi, I'm sorry to bother you again. When I ran the code as you explained, I got the following error: "sh: 1: tmux: not found ... Setting up a new session... Exception in user code:"

    Could you give me some advice? The full terminal output is as follows:

    /home/yukon/anaconda3/envs/pymesh/bin/python "/media/yukon/Extreme SSD/AtlasNet-master/train.py"
    anshu: Namespace(SVR=False, activation='relu', anisotropic_scaling=False, batch_size=32, batch_size_test=32, bottleneck_size=1024, class_choice=['airplane'], data_augmentation_axis_rotation=False, data_augmentation_random_flips=False, demo=True, demo_input_path='./doc/pictures/plane_input_demo.png', dir_name='', env='Atlasnet', hidden_neurons=512, http_port=8891, id='0', loop_per_epoch=1, lr_decay_1=120, lr_decay_2=140, lr_decay_3=145, lrate=0.001, multi_gpu=[0], nb_primitives=1, nepoch=150, no_learning=False, no_metro=False, normalization='UnitBall', num_layers=2, number_points=2500, number_points_eval=2500, random_rotation=False, random_seed=False, random_translation=False, reload_decoder_path='', reload_model_path='', remove_all_batchNorms=False, run_single_eval=False, sample=True, shapenet13=False, start_epoch=0, template_type='SPHERE', train_only_encoder=False, visdom_port=8890, workers=0)
    Loaded compiled 3D CUDA chamfer distance
    Launching new visdom instance in port 8890
    TMUX=0 tmux new-session -d -s visdom_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m visdom.server -p 8890 > /dev/null 2>&1" Enter
    sh: 1: tmux: not found
    Launching new HTTP instance in port 8891
    TMUX=0 tmux new-session -d -s http_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m http.server -p 8891 > /dev/null 2>&1" Enter
    sh: 1: tmux: not found
    Setting up a new session...
    Exception in user code:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection raise err File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 710, in urlopen chunked=chunked, File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 398, in _make_request conn.request(method, url, **httplib_request_kw) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1291, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1337, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1286, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1046, in _send_output self.send(msg) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 984, in send self.connect() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect conn = self._new_conn() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 450, in send timeout=timeout File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 788, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 695, in _send
        data=json.dumps(msg),
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 656, in _handle_post
        r = self.session.post(url, data=data)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
        return self.request('POST', url, data=data, json=json, **kwargs)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
        resp = self.send(prep, **send_kwargs)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
        r = adapter.send(request, **kwargs)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))
    [Errno 111] Connection refused
    on_close() takes 1 positional argument but 3 were given
    New MLP decoder : hidden size 512, num_layers 2, activation relu
    Network weights loaded from ./training/trained_models/atlasnet_singleview_1_sphere/network.pth!
    Atlasnet generated mesh at ./doc/pictures/plane_input_demoAtlasnetReconstruction.ply!

    Process finished with exit code 0

    opened by tang-y-q 2
  • Question About Visualization

    Hey! Sorry to disturb you again!

    I want to know if there are any effective Python tools to visualize an .obj file and save it to .png (other than MeshLab).

    Thanks for your reply!
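
    One lightweight option is to load the mesh with trimesh and rasterize it with matplotlib; a sketch under the assumption that both packages are installed and the .obj contains a single mesh (the file names are illustrative):

    import trimesh
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection

    mesh = trimesh.load("reconstruction.obj")            # illustrative path
    v, f = mesh.vertices, mesh.faces
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.plot_trisurf(v[:, 0], v[:, 1], v[:, 2], triangles=f, linewidth=0.1)
    ax.set_axis_off()
    fig.savefig("reconstruction.png", dpi=200, bbox_inches="tight")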

    opened by yufeng9819 1
  • Compile Metro Distance (GPL3 Licence)

    Hi, I'm sorry to bother you again.
    When I used the code you gave me to build the Metro distance, it could not be compiled successfully: it reports that a system path cannot be found. (Screenshot omitted.)
    Could you please give some advice on how to solve this problem? Thanks a lot!

    opened by tang-y-q 1
  • Question about train and test strategy

    Hi! Sorry to disturb you again.

    I want to ask about the train and test strategy. In your code, you set opt.shapenet13 to True. Does that mean you first train the network on all categories and then test on each class to get the per-class metrics?

    Looking forward to your reply!

    opened by yufeng9819 1
  • AtlasNet checkpoint not available

    Hi @ThibaultGROUEIX, thank you for sharing the code.

    When downloading the model checkpoint with trained_models/download_models.sh (https://cloud.enpc.fr/s/c27Df7fRNXW2uG3/download), corresponding to version 2.2 of the source code, the link seems to be broken or no longer available. Could you please help me with this?

    Thanks.

    opened by apicis 4
Releases: v3.0
Owner: also at https://bitbucket.org/ThibaultGROUEIX/