Delaunay Component Analysis (DCA)

Official Python implementation of the Delaunay Component Analysis (DCA) algorithm presented in the paper Delaunay Component Analysis for Evaluation of Data Representations. If you use this code in your work, please cite it as follows:

Citation

@inproceedings{
    poklukar2022delaunay,
    title={Delaunay Component Analysis for Evaluation of Data Representations},
    author={Petra Poklukar and Vladislav Polianskii and Anastasiia Varava and Florian T. Pokorny and Danica Kragic Jensfelt},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=HTVch9AMPa}
}

Getting started

Setup

Install the requirements with poetry:

poetry install
chmod +x dca/approximate_Delaunay_graph

Note: the Delaunay graph building algorithm requires access to a GPU.
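To verify that a GPU is visible before building the graph, a minimal sanity check (assuming an NVIDIA driver with nvidia-smi on PATH; this check is not part of the repository):

import subprocess

# Fails loudly if no NVIDIA GPU/driver is visible to the system.
subprocess.run(["nvidia-smi"], check=True)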

First example

  1. Run a 2D example that saves the intermediate files:
poetry run python examples/first_example.py 
  2. Check out the results saved in output/first_example, which will have the following structure:
output/first_example/
  /precomputed
    - clusterer.pkl               # HDBSCAN clusterer object
    - input_array.npy             # array of R and E points
    - input_array_comp_labels.npy # array of component labels corresponding to R and E points
    - unfiltered_edges.npy        # array of unfiltered approximated Delaunay edges
    - unfiltered_edges_len.npy    # array of unfiltered approximated Delaunay edge lengths
  /template_id1
    - output.json                 # dca scores 
    /DCA
        - components_stats.pkl    # Local evaluation scores
        - network_stats.pkl       # Global evaluation scores
    /visualization
        - graph visualizations
    /logs
        - version0_elapsed_time.log      # empirical runtime 
        - version0_input.json            # specific input parameters
        - version0_output_formatted.log  # all evaluation scores in a pretty format
        - version0_experiment_info.log   # console logs
        - # output files from qDCA
        - # any additional logs that should not be shared across experiment_ids in precomputed folder

Note: you can modify the experiment structure by defining what is shared across several experiments, e.g., what goes in the output/first_example/precomputed folder. For examples, see CL_ablation_study.py.

  3. In the output/first_example/template_id1/visualization folder you should see an image of the approximated Delaunay graph and the distilled Delaunay graph like the ones below:

[Image: approximated Delaunay graph and distilled Delaunay graph for the first example]

  4. In output/first_example/template_id1/logs/version0_output_formatted.log you should see the following output:
[mm/dd/yyyy hh:mm:ss] :: num_R: 20                            # total number of R points
[mm/dd/yyyy hh:mm:ss] :: num_E: 20                            # total number of E points
[mm/dd/yyyy hh:mm:ss] :: precision: 0.95                      # fraction of E points in fundamental components (19/20)
[mm/dd/yyyy hh:mm:ss] :: recall: 0.4                          # fraction of R points in fundamental components (8/20)
[mm/dd/yyyy hh:mm:ss] :: network_consistency: 1.0
[mm/dd/yyyy hh:mm:ss] :: network_quality: 0.2
[mm/dd/yyyy hh:mm:ss] :: first_trivial_component_idx: 2       # index of the first trivial (outlier) component
[mm/dd/yyyy hh:mm:ss] :: num_R_points_in_fundcomp: 8          # number of vertices in F^R
[mm/dd/yyyy hh:mm:ss] :: num_E_points_in_fundcomp: 19         # number of vertices in F^E
[mm/dd/yyyy hh:mm:ss] :: num_RE_edges: 19                     # number of heterogeneous edges in G_DD
[mm/dd/yyyy hh:mm:ss] :: num_total_edges: 95                  # number of all edges in G_DD
[mm/dd/yyyy hh:mm:ss] :: num_R_outliers: 0                    
[mm/dd/yyyy hh:mm:ss] :: num_E_outliers: 1
[mm/dd/yyyy hh:mm:ss] :: num_fundcomp: 1                      # number of fundamental components |F|
[mm/dd/yyyy hh:mm:ss] :: num_comp: 3                          # number of all connected components
[mm/dd/yyyy hh:mm:ss] :: num_outliercomp: 1                   # number of trivial components
# Local scores for each component G_i: consistency and quality (Def 3.2) as well as number of R and E points contained in it
[mm/dd/yyyy hh:mm:ss] :: c(G0): 0.59, q(G0): 0.27, |G0^R|_v: 8   , |G0^E|_v: 19  , |G0|_v: 27  
[mm/dd/yyyy hh:mm:ss] :: c(G1): 0.00, q(G1): 0.00, |G1^R|_v: 12  , |G1^E|_v: 0   , |G1|_v: 12  
[mm/dd/yyyy hh:mm:ss] :: c(G2): 0.00, q(G2): 0.00, |G2^R|_v: 0   , |G2^E|_v: 1   , |G2|_v: 1   
  5. If you are only interested in the output DCA scores, the cleanup function will remove all of the intermediate files for you. Test it on this 2D example by running:
poetry run python examples/first_example.py --cleanup 1

Note: running q-DCA requires keeping the intermediate files, because the distilled Delaunay graph is needed to calculate edges to the query points.
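When the intermediate files are kept, the pickled score files listed in the directory structure above can be inspected directly. A minimal sketch, assuming the paths from the first example:

import pickle

# Paths follow the output structure shown above.
base = "output/first_example/template_id1/DCA"
with open(f"{base}/network_stats.pkl", "rb") as f:
    network_stats = pickle.load(f)  # global evaluation scores
with open(f"{base}/components_stats.pkl", "rb") as f:
    components_stats = pickle.load(f)  # local (per-component) evaluation scores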

Run DCA on your own representations

A minimal example requires you to define the input parameters as in the code below. See dca/schemes.py for the optional arguments of the input configs.
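
Here, R and E are the two representation sets to compare, and experiment_path and experiment_id determine where results are written. For illustration, a hypothetical setup with random 2D point clouds (the names and shapes below are assumptions, not repository requirements):

import numpy as np

# Hypothetical inputs: random 2D point clouds standing in for real representations.
R = np.random.normal(size=(100, 2))       # reference set
E = np.random.normal(size=(100, 2))       # evaluated set
experiment_path = "output/my_experiment"  # hypothetical output folder
experiment_id = "template_id1"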

# Imports: the config classes are defined in dca/schemes.py; the module
# locations assumed for DCA and DCALoggers below follow the repository
# layout and may need adjusting.
from dca.schemes import (
    REData,
    ExperimentDirs,
    DelaunayGraphParams,
    HDBSCANParams,
    GeomCAParams,
)
from dca.loggers import DCALoggers  # assumed module path
from dca.DCA import DCA             # assumed module path

# Generate input parameters
data_config = REData(R=R, E=E)
experiment_config = ExperimentDirs(
    experiment_dir=experiment_path,
    experiment_id=experiment_id,
)
graph_config = DelaunayGraphParams()
hdbscan_config = HDBSCANParams()
geomCA_config = GeomCAParams()

# Initialize loggers
exp_loggers = DCALoggers(experiment_config.logs_dir)

# Run DCA
dca = DCA(
    experiment_config,
    graph_config,
    hdbscan_config,
    geomCA_config,
    loggers=exp_loggers,
)
dca_scores = dca.fit(data_config)
dca.cleanup()  # Optional cleanup of intermediate files
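
The returned scores are also written to output.json in the experiment folder (see the directory structure above). A sketch of reading them back, assuming the same layout as in the first example:

import json
from pathlib import Path

# Path layout assumed from the first example's output structure.
with open(Path(experiment_path) / experiment_id / "output.json") as f:
    print(json.dumps(json.load(f), indent=2))  # pretty-print the DCA scores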

Reproduce experiments in the paper

Datasets

We used and adjusted the datasets from our earlier work GeomCA. Therefore, we only provide the representations used in the contrastive learning experiment and the q-DCA StyleGAN experiment, which you can download on this link and save in the representations/contrastive_learning and representations/stylegan folders, respectively. For VGG16, we provide the code (see VGG16_utils.py) that we used on the splits constructed in GeomCA. For the StyleGAN mode truncation experiment, we refer the user either to the splits we provided in GeomCA or to the code provided by Kynkäänniemi et al.

Section 4.1: Contrastive Learning

Reproduce the Varying component density experiment:

poetry run python experiments/contrastive_learning/CL_varying_component_density.py --n-iterations 10 --perc-to-discard 0.5 --cleanup 1

Reproduce the Cluster assignment experiment, for example, using query set Q2 and the flexible assignment procedure:

poetry run python experiments/contrastive_learning/CL_qDCA.py Df query_Df_holdout_c7_to_c11 --run-dca 1 --run-qdca 1 --several-assignments 1 --cleanup 1

Reproduce the Mode truncation experiment in Appendix B.1:

poetry run python experiments/contrastive_learning/CL_mode_truncation.py --cleanup 1

Reproduce the Ablation study experiments in Appendix B.1:

poetry run python experiments/contrastive_learning/CL_ablation_study.py cl-ablation-delaunay-edge-approximation --cleanup 1
poetry run python experiments/contrastive_learning/CL_ablation_study.py cl-ablation-delaunay-edge-filtering --cleanup 1
poetry run python experiments/contrastive_learning/CL_ablation_study.py cl-ablation-hdbscan --cleanup 1

Section 4.2: StyleGAN

Reproduce the Mode truncation experiment, for example, on truncation 0.5 and 5000 representations provided by Poklukar et al. in GeomCA:

poetry run python experiments/stylegan/StyleGAN_mode_truncation.py 0.5 --num-samples "5000" --cleanup 1

Reproduce the Quality of individual generated images experiment using qDCA, for example, on truncation 0.5:

poetry run python experiments/stylegan/StyleGAN_qDCA.py --run-dca 1 --run-qdca 1 --cleanup 1

Section 4.3: VGG16

Reproduce the Class separability experiment, for example, on version 1 containing classes of dogs and kitchen utensils:

poetry run python experiments/vgg16/VGG16_class_separability.py --version-id 1 --cleanup 1 

Reproduce the Amending labelling inconsistencies experiment using qDCA, for example, on version 1 containing classes of dogs and kitchen utensils:

poetry run python experiments/vgg16/VGG16_qDCA.py --version-id 1 --run-dca 1 --run-qdca 1 --cleanup 1