
tf-fsvd

TensorFlow implementation of Functional Singular Value Decomposition, for the paper Fast Graph Learning with Unique Optimal Solutions

Cite

If you find our code useful, you may cite us as:

@inproceedings{haija2021fsvd,
  title={Fast Graph Learning with Unique Optimal Solutions},
  author={Sami Abu-El-Haija and Valentino Crespi and Greg Ver Steeg and Aram Galstyan},
  year={2021},
  booktitle={arxiv:2102.08530},
}

Introduction

This codebase contains a TensorFlow implementation of Functional SVD: an SVD routine that accepts objects with three attributes: dot, T, and shape. The object must be able to exactly multiply the (implicit) matrix M it represents by any other matrix. Specifically, it should implement:

  1. dot(M1): should return M @ M1.
  2. T: property that should return another object (implicitly) representing the transpose of M.
  3. shape: property that should return the shape of the (implicit) matrix M.

In most practical cases, M is implicit, i.e., it need not be explicitly computed. For consistency, such objects should inherit from the abstract class ProductFn.
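
For illustration, here is a minimal sketch of a custom ProductFn for the implicit matrix M = A + A @ A, where only A is stored. The dot, T, and shape members follow the interface above; the class name and constructor are hypothetical, not part of the library.

import tensorflow as tf
import tf_fsvd

class PolynomialPF(tf_fsvd.ProductFn):
  """Implicit matrix M = A + A @ A, storing only the dense tensor A."""

  def __init__(self, a):
    self._a = a  # tf.Tensor with shape (n, n).

  def dot(self, m1):
    # Returns M @ M1 = A @ M1 + A @ (A @ M1), never materializing A @ A.
    am1 = tf.matmul(self._a, m1)
    return am1 + tf.matmul(self._a, am1)

  @property
  def T(self):
    # (A + A @ A)^T = A^T + A^T @ A^T, i.e. the same form over A^T.
    return PolynomialPF(tf.transpose(self._a))

  @property
  def shape(self):
    return self._a.shape

Passing such an object to fsvd (see below) factorizes M without ever forming it explicitly.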

Simple Usage Example

Suppose you have an explicit sparse matrix m:

import scipy.sparse
import tf_fsvd

m = scipy.sparse.csr_matrix( ... )
fn = tf_fsvd.SparseMatrixPF(m)

u, s, v = tf_fsvd.fsvd(fn, k=20)  # Rank 20 decomposition
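
The returned factors can then be used like the output of tf.linalg.svd. As a sketch, assuming u has shape (n, k), s has shape (k,), and v has shape (m, k), matching the tf.linalg.svd convention:

import tensorflow as tf

m_approx = tf.matmul(u * s, v, transpose_b=True)  # Rank-20 approximation u @ diag(s) @ v^T.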

This utility is intended primarily for implicit matrices, for which you may implement your own ProductFn class. For examples, take a look at BlockWisePF or WYSDeepWalkPF.

File Structure / Documentation

  • File tf_fsvd.py contains the main logic for TensorFlow implementation of Functional SVD (function fsvd), as well as a few classes for constructing implicit matrices.
    • SparseMatrixPF: when the implicit matrix is a pre-computed sparse matrix. Using this class, you can now enjoy the equivalent of tf.linalg.svd on sparse tensors :-).
    • BlockWisePF: when the implicit matrix is the column-wise concatenation of other implicit matrices. The concatenation is computed by supplying a list of ProductFn's (see the sketch after this list).
  • Directory implementations: contains implementations of simple methods employing fsvd.
  • Directory baselines: source code adapting competing methods to produce the metrics we report in our paper (time and accuracy).
  • Directory experiments: Shell scripts for running baselines and our implementations.
  • Directory results: Output directory containing results.
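
As referenced in the BlockWisePF bullet above, here is a hedged usage sketch. It assumes the constructor accepts a list of ProductFn's (the exact signature is an assumption), and the random matrices are placeholders for real data.

import scipy.sparse
import tf_fsvd

# Two explicit sparse blocks standing in for arbitrary ProductFn's.
m1 = tf_fsvd.SparseMatrixPF(scipy.sparse.random(1000, 64, density=0.01, format='csr'))
m2 = tf_fsvd.SparseMatrixPF(scipy.sparse.random(1000, 32, density=0.01, format='csr'))

# Implicit column-wise concatenation [M1 | M2] with shape (1000, 96).
concat_fn = tf_fsvd.BlockWisePF([m1, m2])
u, s, v = tf_fsvd.fsvd(concat_fn, k=20)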

Running Experiments

ROC-AUC Link Prediction over AsymProj/WYS datasets

The AsymProj datasets are located in directory datasets/asymproj.

You can run the script for training on AsymProj datasets and measuring test ROC-AUC as:

python3 implementations/linkpred_asymproj.py

You can append the --help flag to the above command to see which flags you can set for changing the dataset or the SVD rank.

You can run a sweep over the SVD rank (k), for each of those datasets, by invoking:

# Sweep fSVD rank (k) on 4 link pred datasets. Make 3 runs per (dataset, k)
# Time is dominated by statement `import tensorflow as tf`
python3 experiments/fsvd_linkpred_k_sweep.py | bash  # You may remove "| bash" if you want to hand-pick commands.

# Summarize results onto CSV
python3 experiments/summarize_svdf_linkpred_sweep.py > results/linkpred_d_sweep/fsvd.csv

# Plot the sweep curve
python3 experiments/plot_sweep_k_linkpred.py

If you remove "| bash", run the printed commands yourself. Either way, this should populate directory results/linkpred_d_sweep/fsvd/.

Baselines

  • You can run the Watch Your Step baseline as:

     bash experiments/baselines/run_wys.sh
    

    which runs only once per link prediction dataset. Watch Your Step spends some time computing the transition matrix powers (T^2, ..., T^5).

  • You can run NetMF baselines (both approximate and exact) as:

    bash experiments/baselines/run_netmf.sh
    
  • You can run node2vec baseline as:

    bash experiments/baselines/run_n2v.sh
    

Classification Experiments over Planetoid Citation datasets

These datasets are from the Planetoid paper. To obtain them, clone the authors' repo:

mkdir -p ~/data
cd ~/data
git clone git@github.com:kimiyoung/planetoid.git

You can run the script for training and testing on planetoid datasets as:

python3 implementations/node_ssc_planetoid.py

You can append the --help flag to the above command to see which flags you can set for changing the dataset or the number of layers.

You can sweep the number of layers by running:

# Directly invokes python many times
LAYERS=`python3 -c "print(','.join(map(str, range(17))))"`
python3 experiments/planetoid_hp_search.py --wys_windows=1 --wys_neg_coefs=1 --layers=${LAYERS}

The script experiments/planetoid_hp_search.py directly invokes implementations/node_ssc_planetoid.py. You can visualize the accuracy vs. depth curve by running:

python3 experiments/plot_sweep_depth_planetoid.py

Link Prediction for measuring Hits@20 on the Drug-Drug Interactions Network

You can run our method like:

python3 implementations/linkpred_ddi.py

This averages 10 runs (by default) and prints the mean and standard deviation of the validation and test metric (Hits@20).
