AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

Overview

AtlasNet [Project Page] [Paper] [Talk]

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In CVPR, 2018.

🚀 New branch: AtlasNet + Shape Reconstruction by Learning Differentiable Surface Representations

(Example reconstruction: chair.png, chair.gif)

Install

This implementation uses Python 3.6, PyTorch 1.7.1, PyMesh, and CUDA 10.1.

# Copy/Paste the snippet in a terminal
git clone --recurse-submodules https://github.com/ThibaultGROUEIX/AtlasNet.git
cd AtlasNet 

# Dependencies
conda create -n atlasnet python=3.6 --yes
conda activate atlasnet
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch --yes
pip install --user --requirement requirements.txt # pip dependencies
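Before compiling the optional extensions, a quick sanity check that PyTorch and the GPU are visible can save time. This is a generic PyTorch check, not part of the repository:

# Generic sanity check: confirm PyTorch, its CUDA build, and GPU visibility
import torch
print(torch.__version__)          # expected: 1.7.1
print(torch.version.cuda)         # expected: 10.1
print(torch.cuda.is_available())  # should print True when the CUDA driver is set up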
Optional: Compile Chamfer (MIT) + Metro Distance (GPL3 License)
# Copy/Paste the snippet in a terminal
python auxiliary/ChamferDistancePytorch/chamfer3D/setup.py install #MIT
cd auxiliary
git clone https://github.com/ThibaultGROUEIX/metro_sources.git
cd metro_sources; python setup.py --build # build metro distance #GPL3
cd ../..

A note on data.

Data download should be automatic. However, due to the new Google Drive traffic caps, you may have to download the data manually. If you run into an error running the demo, refer to issue #61.

You can manually download the data from three sources (they are identical):

Please make sure to unzip the archives in the right places:

cd AtlasNet
mkdir data
unzip ShapeNetV1PointCloud.zip -d ./data/
unzip ShapeNetV1Renderings.zip -d ./data/
unzip metro_files.zip -d ./data/
unzip trained_models.zip -d ./training/
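As a rough check, the snippet below verifies that the expected folders exist after unzipping; it assumes each archive extracts into a folder named after the zip, which may differ from the actual archive layout:

# Hedged check: folder names are assumptions based on the archive names
import os
for path in ["data/ShapeNetV1PointCloud", "data/ShapeNetV1Renderings",
             "data/metro_files", "training/trained_models"]:
    print(path, "OK" if os.path.isdir(path) else "MISSING")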

Usage

  • Demo: python train.py --demo
  • Training: python train.py --shapenet13. Monitor training on http://localhost:8890/
  • Latest refactor (12-2019):
    - [x] Factorize Single View Reconstruction and the autoencoder in the same class
    - [x] Factorize the Square and Sphere templates in the same class
    - [x] Add the latent vector as a bias after the first layer (30% speedup)
    - [x] Remove the last th in the decoder
    - [x] Make a large .pth tensor with all point clouds in cache (drop the nasty Chunk_reader)
    - [x] Make it multi-GPU
    - [x] Add netvision visualization of the results
    - [x] Rewrite the main script in an object-oriented way
    - [x] Check that everything works in the latest PyTorch version
    - [x] Add more layers by default and flags for the number of layers and hidden neurons
    - [x] Add a flag to generate a mesh directly
    - [x] Add a python setup install
    - [x] Make sure GPUs are used at 100%
    - [x] Add F-score to Chamfer + report F-score
    - [x] Get rid of ShapeNet v2 data and use v1!
    - [x] Fix path issues: no more sys.path.append
    - [x] Preprocess ShapeNet 55 and add it to the dataloader
    - [x] Make dependencies minimal

Quantitative Results

| Method                 | Chamfer (*1) | F-score (*2) | Metro (*3) | Total train time (min) |
|------------------------|--------------|--------------|------------|------------------------|
| Autoencoder 25 Squares | 1.35         | 82.3%        | 6.82       | 731                    |
| Autoencoder 1 Sphere   | 1.35         | 83.3%        | 6.94       | 548                    |
| SingleView 25 Squares  | 3.78         | 63.1%        | 8.94       | 1422                   |
| SingleView 1 Sphere    | 3.76         | 64.4%        | 9.01       | 1297                   |

  • (*1) x1000. Computed between 2500 ground-truth points and 2500 reconstructed points.
  • (*2) The threshold is 0.001.
  • (*3) x100. Metro is run on unnormalized point clouds (which explains the difference with the paper's numbers).
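
For reference, here is a minimal, unoptimized sketch of how a Chamfer distance and an F-score at a fixed threshold can be computed between two point clouds. It follows the standard definitions (and the naive pairwise-distance formulation that appears in the issues below), not the repository's compiled CUDA kernel:

# Sketch of Chamfer distance + F-score (standard definitions, not the repo's CUDA kernel)
import torch

def chamfer_and_fscore(gt, pred, threshold=0.001):
    # gt: (B, N, 3) ground-truth points, pred: (B, M, 3) reconstructed points
    d = torch.cdist(gt, pred) ** 2       # (B, N, M) squared pairwise distances
    dist1 = d.min(dim=2).values          # for each ground-truth point: nearest reconstructed point
    dist2 = d.min(dim=1).values          # for each reconstructed point: nearest ground-truth point
    chamfer = dist1.mean(dim=1) + dist2.mean(dim=1)
    precision = (dist2 < threshold).float().mean(dim=1)
    recall = (dist1 < threshold).float().mean(dim=1)
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer, fscore

gt, pred = torch.rand(1, 2500, 3), torch.rand(1, 2500, 3)
print(chamfer_and_fscore(gt, pred))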

Related projects

Citing this work

@inproceedings{groueix2018,
  title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Comments
  • RuntimeError: CUDA error: out of memory

    Thank you for the great work! I get the error below when I run ./training/train_AE_AtlasNet.py:

    I checked two more similar issues but this looks different. Any idea how to solve it? Any help appreciated!

    File "./training/train_AE_AtlasNet.py", line 151, in dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function File "./training/train_AE_AtlasNet.py", line 64, in distChamfer P = (rx.transpose(2,1) + ry - 2*zz) RuntimeError: CUDA error: out of memory

    I am running PyTorch 0.4.1 on Ubuntu 18.04.

    FULL CODE:

    (pytorch-atlasnet) [email protected]:~/AtlasNet$ python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt Setting up a new session... Namespace(accelerated_chamfer=0, batchSize=32, env='AE_AtlasNet', model='', nb_primitives=25, nepoch=120, num_points=2500, super_points=2500, workers=12) Random Seed: 314 {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'} category 02691156 files 4044 0.999752781211372 % category 02828884 files 1813 0.9983480176211453 % category 02933112 files 1571 0.9993638676844784 % category 02958343 files 3514 0.46878335112059766 % category 03001627 files 6778 1.0 % category 03211117 files 1093 0.9981735159817352 % category 03636649 files 2309 0.9961173425366695 % category 03691459 files 1597 0.9870210135970334 % category 04090263 files 2373 1.0 % category 04256520 files 3173 1.0 % category 04379243 files 8436 0.9914208485133388 % category 04401088 files 1050 0.9980988593155894 % category 04530566 files 1939 1.0 % {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'} category 02691156 files 4044 0.999752781211372 % category 02828884 files 1813 0.9983480176211453 % category 02933112 files 1571 0.9993638676844784 % category 02958343 files 3514 0.46878335112059766 % category 03001627 files 6778 1.0 % category 03211117 files 1093 0.9981735159817352 % category 03636649 files 2309 0.9961173425366695 % category 03691459 files 1597 0.9870210135970334 % category 04090263 files 2373 1.0 % category 04256520 files 3173 1.0 % category 04379243 files 8436 0.9914208485133388 % category 04401088 files 1050 0.9980988593155894 % category 04530566 files 1939 1.0 % training set 31747 testing set 7943 **Traceback (most recent call last): File "./training/train_AE_AtlasNet.py", line 151, in <module> dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function File "./training/train_AE_AtlasNet.py", line 64, in distChamfer P = (rx.transpose(2,1) + ry - 2*zz) RuntimeError: CUDA error: out of memory**

    help wanted 
    opened by spha-code 15
  • To be honest, the latest code is very hard to understand

    I have compared our method with AtlasNet several times, and I need to edit the source code each time. However, the latest code is very hard to understand because it is highly abstracted. It takes me an hour to understand the relationships between the modules.

    help wanted 
    opened by hzxie 10
  • Stuck after launching visdom server

    I ran the demo successfully, but after I launch the visdom server

    python -m visdom.server -p 8888

    I am stuck: I can't type any more commands in my Anaconda window. How do I continue? Thanks!

    help wanted 
    opened by spha-code 10
  • [BUG] Chamfer Distance is not Correct

    I tried to debug chamfer.cu by printing the values of the tensors. I created two point clouds containing 3 and 5 points, respectively. The values are shown below.

    (1,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -20.4838  4.4935  6.1395
      -3.7283 -0.7629  1.7736
    
    (2,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -17.4992  4.4902  5.0518
      -1.6003 -1.2430  0.8040
    [ Variable[CUDAType]{2,3,3} ]
    (1,.,.) = 
      0.0051  0.1850  0.0004
      0.0051  0.1850  0.0093
      0.0096  0.1850  0.0081
      0.0096  0.1850  0.0016
      0.0075  0.1850  0.0004
    
    (2,.,.) = 
     -0.1486 -0.0932 -0.0014
     -0.0406 -0.0932 -0.0017
     -0.2057 -0.0932 -0.0001
     -0.0915 -0.0932 -0.0001
      0.0103 -0.0932 -0.0001
    [ Variable[CUDAType]{2,5,3} ]
    

    I also added print statements in the CUDA functions and got the following output.

    2i = 0, n = 3, j = 0, k = 0, d = 0.03425420, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 1, k = 0, d = 0.06742091, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 2, k = 0, d = 0.03920735, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00038808)
    2i = 1, n = 3, j = 0, k = 0, d = 0.03573948, x = (-0.08192606 0.01907521 0.02376382) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 1, k = 0, d = 0.03405631, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 2, k = 0, d = 0.03437031, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00928491)
    2i = 0, n = 3, j = 0, k = 1, d = 0.03434026, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 1, k = 1, d = 0.06641452, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 2, k = 1, d = 0.03897782, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00928491)
    2i = 1, n = 3, j = 0, k = 1, d = 0.03490656, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 1, k = 1, d = 0.03394968, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 2, k = 1, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00713918)
    2i = 0, n = 3, j = 0, k = 2, d = 0.03438481, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 1, k = 2, d = 0.06842789, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 2, k = 2, d = 0.03939636, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00809300)
    2i = 1, n = 3, j = 0, k = 2, d = 0.03508088, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 1, k = 2, d = 0.03389502, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 2, k = 2, d = 0.03423970, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00253408)
    2i = 0, n = 3, j = 0, k = 3, d = 0.03432181, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 1, k = 3, d = 0.06916460, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 2, k = 3, d = 0.03956439, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00158027)
    2i = 1, n = 3, j = 0, k = 3, d = 0.03652760, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 1, k = 3, d = 0.03404036, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 2, k = 3, d = 0.03435681, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00364473)
    3i = 0, n = 3, j = 0, k = 4, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 1, k = 4, d = 0.06842767, x = (-0.20483765 0.04493479 0.06139540) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 2, k = 4, d = 0.03941518, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00749534 0.18500790 0.00038808)
    3i = 1, n = 3, j = 0, k = 4, d = 0.03643737, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 1, k = 4, d = 0.03406866, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 2, k = 4, d = 0.03437987, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00602855)
    i = 0, n = 3, j = 0, best = 0.03425420, best_i = 0
    i = 0, n = 3, j = 1, best = 0.06641452, best_i = 1
    i = 0, n = 3, j = 2, best = 0.03897782, best_i = 1
    i = 1, n = 3, j = 0, best = 0.03490656, best_i = 1
    i = 1, n = 3, j = 1, best = 0.03389502, best_i = 2
    i = 1, n = 3, j = 2, best = 0.03423970, best_i = 2
    

    For batch 0 (i = 0), everything seems correct. However, for batch 1 (i = 1), the values of the point clouds do not appear in the tensors. Is there something wrong with the code?
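
    A hedged way to cross-check the kernel (not an official test from this repository) is to compare its output with a brute-force nearest-neighbor computation on the same small batch:

    # Brute-force reference for the Chamfer kernel output (sketch, assumes the kernel
    # returns per-point squared distances to the nearest neighbor in the other cloud)
    import torch
    a = torch.rand(2, 3, 3).cuda()   # batch of 2 clouds, 3 points each
    b = torch.rand(2, 5, 3).cuda()   # batch of 2 clouds, 5 points each
    d = torch.cdist(a, b) ** 2       # (2, 3, 5) squared pairwise distances
    dist1_ref = d.min(dim=2).values  # nearest neighbor in b for every point of a
    dist2_ref = d.min(dim=1).values  # nearest neighbor in a for every point of b
    print(dist1_ref, dist2_ref)      # compare with dist1, dist2 from the compiled kernel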

    chamfer 
    opened by hzxie 10
  • Evaluate RGB image with pretrained model

    Hi, I am trying to evaluate the pretrained SVR AtlasNet model on an RGB image of a chair. My parameters are very similar to the demo, but I get weird results when viewing them in the Chrome 3D viewer (see the attached screenshots). I used the demo grid generation. When I run your demo plane.jpg through my network, I get good results in the 3D viewer. Can you please tell me how to evaluate an RGB image?

    testing 
    opened by Itamare1982 9
  • Test set used as validation to choose best model

    In train_AE_Atlasnet.py, the test set is used as the validation set to choose the best model. The test set should never be used during training and especially not to choose the best model as this biases the results. It's probably more appropriate to report the results on the last training epoch if there was no validation set.

    bug 
    opened by lynetcha 9
  • The corresponding normalized mesh

    I downloaded the corresponding normalized meshes (only 58 MB) from the link you provided. I found that the number of meshes is much smaller than the number of corresponding point clouds. Could you please provide the full dataset of corresponding normalized meshes? Thank you!

    data 
    opened by wang-ps 9
  • Cannot download the point cloud data

    Hi! I'm trying to download the point cloud data provided at this link: https://cloud.enpc.fr/s/j2ECcKleA1IKNzk but the download fails every time I try.

    Do you know what's going on or how to download them?

    Thank you in advance!

    data 
    opened by jjpark 8
  • validation loss explodes

    (screenshot attached) I ran the script 'train_AE_Atlasnet.py' directly, without any modification. As the screenshot shows, performance is good on the training set but quite poor on the validation set: the validation loss increases quickly and does not decrease.

    pytorch 
    opened by AkonLau 8
  • About the point cloud dataset

    I found that some of the provided point cloud data are missing. Could you provide the complete point cloud dataset, or tell me how to generate it? Thank you!

    data 
    opened by guoyan1991 7
  • Memory Leak

    I found that the unused self.dist1 and self.dist2 in the file "nndistance/functions/nnd.py" cause a memory leak in my environment (Python 3.5.2 with PyTorch 0.4.0).

    class NNDFunction(Function):
        def forward(self, xyz1, xyz2):
            dist1,dist2=cuda_compute_from(xyz1,xyz2)
            # following two lines cause memory leak
            self.dist1 = dist1
            self.dist2 = dist2
            return dist1, dist2
    
        def backward(self, graddist1, graddist2):
            gradxyz1,gradxyz2=grad_cuda_compute_from(graddist1,graddist2)
            return gradxyz1, gradxyz2
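
    A hedged sketch of how this kind of leak is usually avoided with newer PyTorch versions: use a static-method Function with a ctx object and only keep what backward() actually needs. The cuda_compute_from / grad_cuda_compute_from calls below are the same placeholders as in the snippet above, standing in for the compiled CUDA ops:

    # Sketch only: static-method autograd.Function that does not cache unused tensors
    import torch
    class NNDFunction(torch.autograd.Function):
        @staticmethod
        def forward(ctx, xyz1, xyz2):
            dist1, dist2 = cuda_compute_from(xyz1, xyz2)  # placeholder for the compiled CUDA op
            # nothing is kept on ctx, because this backward() does not reuse dist1/dist2;
            # if it did, ctx.save_for_backward(...) would be the mechanism to use
            return dist1, dist2

        @staticmethod
        def backward(ctx, graddist1, graddist2):
            gradxyz1, gradxyz2 = grad_cuda_compute_from(graddist1, graddist2)  # placeholder
            return gradxyz1, gradxyz2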
    
    chamfer pytorch 
    opened by liuyuan-pal 7
  • Question about running

    Hi, I'm sorry to bother you again. When I ran the code as you explained, I got the following error:

    sh: 1: tmux: not found
    Setting up a new session...
    Exception in user code:

    Could you give me some advice? The full terminal output is as follows:

    /home/yukon/anaconda3/envs/pymesh/bin/python "/media/yukon/Extreme SSD/AtlasNet-master/train.py" anshu: Namespace(SVR=False, activation='relu', anisotropic_scaling=False, batch_size=32, batch_size_test=32, bottleneck_size=1024, class_choice=['airplane'], data_augmentation_axis_rotation=False, data_augmentation_random_flips=False, demo=True, demo_input_path='./doc/pictures/plane_input_demo.png', dir_name='', env='Atlasnet', hidden_neurons=512, http_port=8891, id='0', loop_per_epoch=1, lr_decay_1=120, lr_decay_2=140, lr_decay_3=145, lrate=0.001, multi_gpu=[0], nb_primitives=1, nepoch=150, no_learning=False, no_metro=False, normalization='UnitBall', num_layers=2, number_points=2500, number_points_eval=2500, random_rotation=False, random_seed=False, random_translation=False, reload_decoder_path='', reload_model_path='', remove_all_batchNorms=False, run_single_eval=False, sample=True, shapenet13=False, start_epoch=0, template_type='SPHERE', train_only_encoder=False, visdom_port=8890, workers=0) Loaded compiled 3D CUDA chamfer distance Launching new visdom instance in port 8890 TMUX=0 tmux new-session -d -s visdom_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m visdom.server -p 8890 > /dev/null 2>&1" Enter sh: 1: tmux: not found Launching new HTTP instance in port 8891 TMUX=0 tmux new-session -d -s http_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m http.server -p 8891 > /dev/null 2>&1" Enter sh: 1: tmux: not found Setting up a new session... Exception in user code:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection raise err File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 710, in urlopen chunked=chunked, File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 398, in _make_request conn.request(method, url, **httplib_request_kw) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1291, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1337, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1286, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1046, in _send_output self.send(msg) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 984, in send self.connect() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect conn = self._new_conn() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 450, in send timeout=timeout File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 788, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 695, in _send data=json.dumps(msg), File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 656, in _handle_post r = self.session.post(url, data=data) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 577, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 645, in send r = adapter.send(request, **kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) [Errno 111] Connection refused on_close() takes 1 positional argument but 3 were given New MLP decoder : hidden size 512, num_layers 2, activation relu Network weights loaded from ./training/trained_models/atlasnet_singleview_1_sphere/network.pth! Atlasnet generated mesh at ./doc/pictures/plane_input_demoAtlasnetReconstruction.ply!

    Process finished with exit code 0

    opened by tang-y-q 2
  • Question About Visualization

    Hey! Sorry to disturb you again!

    I want to know whether there are any effective Python tools to visualize an .obj file and save it to .png (other than MeshLab).

    Thanks for your reply!
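
    A minimal sketch of one way to do this with only matplotlib (a suggestion, not something shipped with AtlasNet): parse the vertices and triangular faces of the .obj by hand and render them with plot_trisurf:

    # Sketch: render a triangulated .obj with matplotlib and save it as a .png
    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (needed on older matplotlib)

    verts, faces = [], []
    with open("mesh.obj") as f:              # example path
        for line in f:
            parts = line.split()
            if parts and parts[0] == "v":
                verts.append([float(x) for x in parts[1:4]])
            elif parts and parts[0] == "f":
                # keep only the vertex index (faces can be v/vt/vn); .obj indices start at 1
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])

    verts = np.array(verts)
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces)
    plt.savefig("mesh.png", dpi=200)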

    opened by yufeng9819 1
  • Compile Metro Distance (GPL3 Licence)

    Hi, I'm sorry to bother you again.
    When I used the code you gave me to build the Metro distance, I found that it could not be compiled successfully.
    It indicates that the system path cannot be found. The results are shown below:
    

    (screenshot attached) Could you please give some advice on how to solve this problem? Thanks a lot!

    opened by tang-y-q 1
  • Question about train and test strategy

    Hi! Sorry to disturb you again.

    I want to ask about the training and testing strategy. In your code, you set opt.shapenet13=True. Does this mean that you first train the network on all categories and then test on each class to get the per-class metrics?

    Looking forward to your reply!

    opened by yufeng9819 1
  • AtlasNet checkpoint not available

    Hi @ThibaultGROUEIX, thank you for sharing the code.

    When downloading the model checkpoint using trained_models/download_models.sh (https://cloud.enpc.fr/s/c27Df7fRNXW2uG3/download) for version 2.2 of the source code, the link seems to be broken or no longer available. Could you please help me with this?

    Thanks.

    opened by apicis 4