Texture mapping with variational auto-encoders

Overview


This is an experiment with using variational autoencoders (VAEs) to perform mesh parameterization. This was also my first project using JAX and Flax, and I found them both quite intuitive and easy to use.

To get straight to the results, check out the Results section. The Background section describes the goals of this project in a bit more detail.

Background

In geometry processing, mesh parameterization allows high-resolution details of a 3D object, such as color and material variations, to be stored in a highly-optimized 2D image format. The strategy is to map each vertex of the 3D model's mesh to a unique 2D location in the plane, with the constraint that nearby points in 3D are also nearby in 2D. In general, we want this mapping to distort the geometry of the surface as little as possible, so for example large features on the 3D surface get a lot of pixels in the 2D image.

This might ring a bell for those familiar with machine learning. In ML, mapping a higher-dimensional space to a lower-dimensional space is called "embedding" and is often performed to aid in visualization or to remove extraneous information. VAEs are one technique in ML for mapping a high-dimensional space to a well-behaved latent space, and have the desirable property that probability densities are (approximately) preserved between the two spaces.

Given the above observations, here is how we can use VAEs for mesh parameterization:

  1. For a given 3D model, create a "surface dataset" with random points on the surface and their respective normals.
  2. Train a VAE to generate points on the surface using a 2D Gaussian latent space.
  3. Use the Gaussian CDF to convert the above latents to the uniform distribution, so that "probability preservation" becomes "area preservation".
  4. Apply the 3D -> 2D mapping from the VAE encoder + Gaussian CDF to map the vertices of the original mesh to the unit square (see the sketch after this list).
  5. Render the resulting model with some test 2D texture image acting as the unit square.
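
For concreteness, here is a minimal sketch of steps 3-4 in JAX. The encode_mean function is hypothetical, and the real scripts may structure this differently:

# Push mesh vertices through the trained encoder's mean output, then squash the
# 2D Gaussian latents into the unit square with the normal CDF, so that
# probability preservation becomes area preservation.
import jax.numpy as jnp
from jax.scipy.stats import norm

def vertices_to_uv(encode_mean, vertices):
    # encode_mean: hypothetical function mapping (N, 3) points to (N, 2) latent means.
    latents = encode_mean(jnp.asarray(vertices))  # approximately N(0, I) if the VAE trained well
    return norm.cdf(latents)                      # texture coordinates in the unit square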

The above process sounds pretty solid, but there are some quirks to getting it to work. Coming into this project, I predicted two possible reasons it would fail, listed below. It turns out that number 2 isn't that big of an issue (an extra orthogonality loss helps a lot), and there was a third issue I didn't think of (described in the Results section).

  1. Some triangles will be messed up because of cuts/seams. In particular, the VAE will have to "cut up" the surface to place it into the latent space, and we won't know exactly where these cuts are when mapping texture coordinates to triangle vertices. As a result, a few triangles will have vertices whose latent codes are very far apart.
  2. It will be difficult to force the mapping to be conformal. The VAE objective will mostly attempt to preserve areas (i.e. density), while ideally we also care about conformality (angle preservation).

Results

This was my first time using JAX. Nevertheless, I was able to get interesting results right out of the gate. I ran most of my experiments on a torus 3D model, but I have since verified that it works for more complex models as well.

Initially, I trained VAEs with a Gaussian decoder loss. I also played around with an orthogonality bonus based on the eigenvalues of the Jacobian of the encoder. This resulted in texture mappings like this one:

Torus with orthogonality bonus and Gaussian loss

The above picture looks like a clean mapping, but it isn't actually bijective. To see why, let's sample from this VAE. If everything works as expected, we should get points on the surface of the torus. For this "sampling", I'll use the mean prediction from the decoder (even though its output is a Gaussian distribution) since we really just want a deterministic mapping:

A flat disk with a hole in the middle

It might be hard to tell from a single rendering, but this is just a flat disk with a low-density hole in the middle. In particular, the VAE isn't encoding the z axis at all, but rather just the x and y axes. The resulting texture map looks smooth, but every point in the texture is reused on each side of the torus, so the mapping is not bijective.

I discovered that this was caused by the Gaussian likelihood loss on the decoder. It is possible for the model to reduce this loss arbitrarily by shrinking the standard deviations of the x and y axes, so there is little incentive to actually capture every axis accurately.

To achieve better results, we can drop the Gaussian likelihood loss and instead use pure MSE for the decoder. This isn't very well-principled, and we now have to select a reasonable coefficient for the KL term of the VAE to balance the reconstruction accuracy with the quality of the latent distribution. I found good hyperparameters for the torus, but these will likely require tuning for other models.
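
As a rough sketch of what this objective looks like (with hypothetical variable names, not the exact code in scripts/train_vae.py), the loss is a plain MSE reconstruction term plus a down-weighted KL divergence between the diagonal-Gaussian encoder distribution and the standard normal:

import jax.numpy as jnp

def vae_loss(latent_mean, latent_log_std, recon, target, kl_coeff=0.001):
    # Plain MSE between the decoder output and the true surface point.
    mse = jnp.mean(jnp.sum((recon - target) ** 2, axis=-1))
    # KL(N(mean, std^2) || N(0, I)), summed over the two latent dimensions.
    kl = jnp.mean(jnp.sum(
        0.5 * (latent_mean ** 2 + jnp.exp(2 * latent_log_std) - 1.0) - latent_log_std,
        axis=-1))
    return mse + kl_coeff * kl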

With the better reconstruction loss function, sampling the VAE gives the expected point cloud:

The surface of a torus, point cloud

The mappings we get don't necessarily seem angle-preserving, though:

A tiled grid mapped onto a torus

To preserve angles, we can add an orthogonality bonus to the loss. Making the map more angle-preserving tends to make it less area-preserving, as can be seen here:

A tiled grid mapped onto a torus which attempts to preserve angles

Also note from the last two images that there are seams along which the texture looks totally messed up. This is because the surface cannot be flattened to a plane without some cuts, along which the VAE encoder has to "jump" from one point on the 2D plane to another. This was one of my predicted shortcomings of the method.
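
For reference, one way such a Jacobian-based orthogonality penalty can be written in JAX is sketched below. This illustrates the idea rather than the exact loss used by the training script, and encode_mean is a hypothetical function: the penalty is zero when the two rows of the encoder Jacobian are orthogonal and equal-length, i.e. when the 3D -> 2D map is locally angle-preserving.

import jax
import jax.numpy as jnp

def orthogonality_penalty(encode_mean, points):
    # encode_mean: hypothetical function mapping a single (3,) point to a (2,) latent mean.
    def per_point(p):
        jac = jax.jacfwd(encode_mean)(p)           # (2, 3) Jacobian of the 3D -> 2D map
        evals = jnp.linalg.eigvalsh(jac @ jac.T)   # squared singular values of the Jacobian
        return (evals[0] - evals[1]) ** 2          # zero when the rows are orthogonal and equal-length
    return jnp.mean(jax.vmap(per_point)(points))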

Running

First, install the package with

pip install -e .

Training

My initial VAE experiments were run like so, via scripts/train_vae.py:

python scripts/train_vae.py --ortho-coeff 0.002 --num-iters 20000 models/torus.stl

This will save a model checkpoint to vae.pkl after 20000 iterations, which only takes a minute or two on a laptop CPU.

The above will train a VAE with Gaussian reconstruction loss, which may not learn a good bijective map (as shown above). To instead use the MSE decoder loss, try:

python scripts/train_vae.py --recon-loss-fn mse --kl-coeff 0.001 --batch-size 1024 --num-iters 20000 models/torus.stl

I also found a better orthogonality loss function. To get reasonable mappings that attempt to preserve angles, add --ortho-coeff 0.01 --ortho-loss-fn rel.

Using the VAE

Once you have trained a VAE, you can export a 3D model with the resulting texture mapping like so:

python scripts/map_vae.py models/torus.stl outputs/mapped_output.obj

Note that the resulting .obj file references a material.mtl file which should be in the same directory. I already include such a file with a checkerboard texture in outputs/material.mtl.

You can also sample a point cloud from the VAE using point_cloud_gen.py:

python scripts/point_cloud_gen.py outputs/point_cloud.obj
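
Conceptually, this sampling just draws 2D latents from the standard normal and pushes them through the decoder's mean output. A rough sketch with a hypothetical decode_mean function:

import jax

def sample_point_cloud(decode_mean, num_points=20000, seed=0):
    # decode_mean: hypothetical function mapping (N, 2) latents to (N, 3) points.
    key = jax.random.PRNGKey(seed)
    latents = jax.random.normal(key, (num_points, 2))  # z ~ N(0, I_2)
    return decode_mean(latents)                         # (num_points, 3) points on the surface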

Finally, you can produce a texture image such that the pixel at point (x, y) is an RGB-encoded, normalized (x, y, z) coordinate from decoder(x, y):

python scripts/inv_map_vae.py models/torus.stl outputs/rgb_texture.png
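
Conceptually, this walks over the unit square, inverts the Gaussian CDF to recover latent codes, decodes them to 3D points, and stores the normalized coordinates as RGB. A rough sketch (not the actual inv_map_vae.py implementation, with a hypothetical decode_mean function):

import numpy as np
from jax.scipy.special import ndtri  # inverse CDF of the standard normal

def make_rgb_texture(decode_mean, size=256):
    # Grid over the open unit square (avoiding 0 and 1, where ndtri diverges).
    u, v = np.meshgrid(np.linspace(1e-4, 1 - 1e-4, size),
                       np.linspace(1e-4, 1 - 1e-4, size))
    uv = np.stack([u.ravel(), v.ravel()], axis=-1)
    latents = ndtri(uv)                                  # back to 2D Gaussian latents
    points = np.asarray(decode_mean(latents))            # (size * size, 3) surface points
    points = (points - points.min(axis=0)) / np.ptp(points, axis=0)  # normalize each axis to [0, 1]
    return (points.reshape(size, size, 3) * 255).astype(np.uint8)    # RGB-encoded (x, y, z)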