Delaunay Component Analysis (DCA)

Official Python implementation of the Delaunay Component Analysis (DCA) algorithm presented in the paper Delaunay Component Analysis for Evaluation of Data Representations. If you use this code in your work, please cite it as follows:

Citation

@inproceedings{
    poklukar2022delaunay,
    title={Delaunay Component Analysis for Evaluation of Data Representations},
    author={Petra Poklukar and Vladislav Polianskii and Anastasiia Varava and Florian T. Pokorny and Danica Kragic Jensfelt},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=HTVch9AMPa}
}

Getting started

Setup

Install the requirements with poetry:

poetry install
chmod +x dca/approximate_Delaunay_graph

Note: the Delaunay graph building algorithm requires access to a GPU.

First example

  1. Run a 2D example that saves the intermediate files:
poetry run python examples/first_example.py 
  2. Check out the results saved in output/first_example, which will have the following structure:
output/first_example/
  /precomputed
    - clusterer.pkl               # HDBSCAN clusterer object
    - input_array.npy             # array of R and E points
    - input_array_comp_labels.npy # array of component labels corresponding to R and E points
    - unfiltered_edges.npy        # array of unfiltered approximated Delaunay edges
    - unfiltered_edges_len.npy    # array of unfiltered approximated Delaunay edge lengths
  /template_id1
    - output.json                 # DCA scores
    /DCA
        - components_stats.pkl    # Local evaluation scores
        - network_stats.pkl       # Global evaluation scores
    /visualization
        - graph visualizations
    /logs
        - version0_elapsed_time.log      # empirical runtime 
        - version0_input.json            # specific input parameters
        - version0_output_formatted.log  # all evaluation scores in a pretty format
        - version0_experiment_info.log   # console logs
        - # output files from qDCA
        - # any additional logs that should not be shared across experiment_ids via the precomputed folder

Note: you can modify the experiment structure by defining what is shared across several experiments, e.g., what goes into the output/first_example/precomputed folder. For an example, see CL_ablation_study.py.
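The intermediate files appear to be standard NumPy arrays and pickles, so they can be inspected directly. Below is a minimal sketch under that assumption; the variable names are illustrative, and unpickling clusterer.pkl requires the hdbscan package to be installed:

import pickle
import numpy as np

precomputed = "output/first_example/precomputed"

# R and E points stacked into a single array, plus their component labels
points = np.load(f"{precomputed}/input_array.npy")
labels = np.load(f"{precomputed}/input_array_comp_labels.npy")

# Approximated Delaunay edges before filtering, and their lengths
edges = np.load(f"{precomputed}/unfiltered_edges.npy")
edge_lengths = np.load(f"{precomputed}/unfiltered_edges_len.npy")

# HDBSCAN clusterer object used to extract connected components
with open(f"{precomputed}/clusterer.pkl", "rb") as f:
    clusterer = pickle.load(f)

print(points.shape, labels.shape, edges.shape, edge_lengths.shape)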

  3. In the output/first_example/template_id1/visualization folder you should see an image of the approximated Delaunay graph and the distilled Delaunay graph, like the ones below:

[image: approximated and distilled Delaunay graphs for first_example]

  4. In output/first_example/template_id1/logs/version0_output_formatted.log you should see the following output:
[mm/dd/yyyy hh:mm:ss] :: num_R: 20                            # total number of R points
[mm/dd/yyyy hh:mm:ss] :: num_E: 20                            # total number of E points
[mm/dd/yyyy hh:mm:ss] :: precision: 0.95                      
[mm/dd/yyyy hh:mm:ss] :: recall: 0.4
[mm/dd/yyyy hh:mm:ss] :: network_consistency: 1.0
[mm/dd/yyyy hh:mm:ss] :: network_quality: 0.2
[mm/dd/yyyy hh:mm:ss] :: first_trivial_component_idx: 2       # index of the first trivial (outlier) component
[mm/dd/yyyy hh:mm:ss] :: num_R_points_in_fundcomp: 8          # number of vertices in F^R
[mm/dd/yyyy hh:mm:ss] :: num_E_points_in_fundcomp: 19         # number of vertices in F^E
[mm/dd/yyyy hh:mm:ss] :: num_RE_edges: 19                     # number of heterogeneous edges in G_DD
[mm/dd/yyyy hh:mm:ss] :: num_total_edges: 95                  # number of all edges in G_DD
[mm/dd/yyyy hh:mm:ss] :: num_R_outliers: 0                    
[mm/dd/yyyy hh:mm:ss] :: num_E_outliers: 1
[mm/dd/yyyy hh:mm:ss] :: num_fundcomp: 1                      # number of fundamental components |F|
[mm/dd/yyyy hh:mm:ss] :: num_comp: 3                          # number of all connected components
[mm/dd/yyyy hh:mm:ss] :: num_outliercomp: 1                   # number of trivial components
# Local scores for each component G_i: consistency and quality (Def. 3.2) as well as the number of R and E points contained in it
[mm/dd/yyyy hh:mm:ss] :: c(G0): 0.59, q(G0): 0.27, |G0^R|_v: 8   , |G0^E|_v: 19  , |G0|_v: 27  
[mm/dd/yyyy hh:mm:ss] :: c(G1): 0.00, q(G1): 0.00, |G1^R|_v: 12  , |G1^E|_v: 0   , |G1|_v: 12  
[mm/dd/yyyy hh:mm:ss] :: c(G2): 0.00, q(G2): 0.00, |G2^R|_v: 0   , |G2^E|_v: 1   , |G2|_v: 1   
  5. If you are only interested in the output DCA scores, the cleanup function will remove all of the intermediate files for you. Test it on this 2D example by running:
poetry run python examples/first_example.py --cleanup 1

Note: running q-DCA requires keeping the intermediate files, because the distilled Delaunay graph is needed to calculate the edges to the query points.
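Note: the headline scores in the log above can be sanity-checked against the reported counts. In this example, precision matches the fraction of E points in the fundamental component (num_E_points_in_fundcomp / num_E = 19/20 = 0.95), recall the fraction of R points (num_R_points_in_fundcomp / num_R = 8/20 = 0.4), and network_quality the fraction of heterogeneous edges in the distilled Delaunay graph (num_RE_edges / num_total_edges = 19/95 = 0.2).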

Run DCA on your own representations

A minimal example requires you to define the input parameters as in the code below. See dca/schemes.py for the optional arguments of the input configs.

# Config dataclasses are defined in dca/schemes.py; DCA and DCALoggers are
# assumed to be importable from the dca package as well.
from dca.schemes import (
    REData,
    ExperimentDirs,
    DelaunayGraphParams,
    HDBSCANParams,
    GeomCAParams,
)

# Generate input parameters
data_config = REData(R=R, E=E)
experiment_config = ExperimentDirs(
    experiment_dir=experiment_path,
    experiment_id=experiment_id,
)
graph_config = DelaunayGraphParams()
hdbscan_config = HDBSCANParams()
geomCA_config = GeomCAParams()

# Initialize loggers
exp_loggers = DCALoggers(experiment_config.logs_dir)

# Run DCA
dca = DCA(
    experiment_config,
    graph_config,
    hdbscan_config,
    geomCA_config,
    loggers=exp_loggers,
)
dca_scores = dca.fit(data_config)
dca.cleanup()  # Optional cleanup
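For instance, the inputs assumed above might be prepared as follows. This is an illustrative sketch with random arrays standing in for real representations; R and E should be (num_points, dim) arrays embedded in the same space:

import numpy as np

# Reference (R) and evaluated (E) representation sets
R = np.random.normal(size=(200, 64))
E = np.random.normal(size=(200, 64))

# Where this run stores its logs, precomputed files and scores
experiment_path = "output/my_experiment"
experiment_id = "template_id1"

After dca.fit(data_config) completes, the returned dca_scores and the files under experiment_path mirror the structure shown in the first example.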

Reproduce experiments in the paper

Datasets

We used and adjusted the datasets from our earlier work GeomCA. Therefore, we only provide the representations used in the contrastive learning experiment and the q-DCA StyleGAN experiment, which you can download from this link and save in the representations/contrastive_learning and representations/stylegan folders, respectively. For VGG16, we provide the code (see VGG16_utils.py) that we used on the splits constructed in GeomCA. For the StyleGAN mode truncation experiment, we refer the user either to the splits we provided in GeomCA or to the code provided by Kynkäänniemi et al.

Section 4.1: Contrastive Learning

Reproduce the Varying component density experiment:

poetry run python experiments/contrastive_learning/CL_varying_component_density.py --n-iterations 10 --perc-to-discard 0.5 --cleanup 1

Reproduce the Cluster assignment experiment, for example, using query set Q2 and the flexible assignment procedure:

poetry run python experiments/contrastive_learning/CL_qDCA.py Df query_Df_holdout_c7_to_c11 --run-dca 1 --run-qdca 1 --several-assignments 1 --cleanup 1

Reproduce the Mode truncation experiment in Appendix B.1:

poetry run python experiments/contrastive_learning/CL_mode_truncation.py --cleanup 1

Reproduce the Ablation study experiments in Appendix B.1:

poetry run python experiments/contrastive_learning/CL_ablation_study.py cl-ablation-delaunay-edge-approximation --cleanup 1
poetry run python experiments/contrastive_learning/CL_ablation_study.py cl-ablation-delaunay-edge-filtering --cleanup 1
poetry run python experiments/contrastive_learning/CL_ablation_study.py cl-ablation-hdbscan --cleanup 1

Section 4.2: StyleGAN

Reproduce the Mode truncation experiment, for example, with truncation 0.5 and the 5000 representations provided by Poklukar et al. in GeomCA:

poetry run python experiments/stylegan/StyleGAN_mode_truncation.py 0.5 --num-samples "5000" --cleanup 1

Reproduce the Quality of individual generated images experiment using qDCA, for example, with truncation 0.5:

poetry run python experiments/stylegan/StyleGAN_qDCA.py --run-dca 1 --run-qdca 1 --cleanup 1

Section 4.3: VGG16

Reproduce the Class separability experiment, for example, on version 1, containing classes of dogs and kitchen utensils:

poetry run python experiments/vgg16/VGG16_class_separability.py --version-id 1 --cleanup 1 

Reproduce the Amending labelling inconsistencies experiment using qDCA, for example, on version 1, containing classes of dogs and kitchen utensils:

poetry run python experiments/vgg16/VGG16_qDCA.py --version-id 1 --run-dca 1 --run-qdca 1 --cleanup 1