GNNLens2 is an interactive visualization tool for graph neural networks (GNNs).

Overview

GNNLens2 is an interactive visualization tool for graph neural networks (GNNs). It integrates seamlessly with the Deep Graph Library (DGL) and can meet a variety of visualization needs for presentation, analysis, and model explanation. It is an open-source version of GNNLens, with simplifications and extensions.

Installation

Requirements

GNNLens2 requires Flask-CORS, which you can install with

pip install -U flask-cors

Installation for the latest stable version

pip install gnnlens

Installation from source

If you want to try experimental features, you can install from source as follows:

git clone https://github.com/dmlc/GNNLens2.git
cd GNNLens2/python
python setup.py install

Verifying successful installation

Once you have installed the package, you can verify that the installation succeeded with

import gnnlens

print(gnnlens.__version__)
# 0.1.0

Tutorials

We provide a set of tutorials to get you started with the library.
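As a quick orientation before the tutorials, here is a minimal sketch of the typical workflow. It assumes the Writer API referenced in the comments below (writer.add_graph() taking a homogeneous DGL graph); the dataset, log directory name, and graph name are illustrative choices rather than required values.

import dgl
from dgl.data import CoraGraphDataset
from gnnlens import Writer

# Load a small homogeneous DGL graph to visualize.
dataset = CoraGraphDataset()
g = dataset[0]

# Dump the graph into a log directory that the GNNLens2 server reads.
writer = Writer('tutorial_graph')        # log directory name is an arbitrary choice
writer.add_graph(name='Cora', graph=g)   # add_graph expects a homogeneous DGLGraph
writer.close()

After dumping the data, the tutorials launch the interface with a command along the lines of gnnlens --logdir tutorial_graph --port 7777 and open the printed URL in a browser; treat the exact flags as an assumption to verify against the tutorials.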

Team

HKUST VisLab: Zhihua Jin, Huamin Qu

AWS Shanghai AI Lab: Mufei Li, Wanru Zhao (work done during internship), Jian Zhang, Minjie Wang

SMU: Yong Wang

Comments
  • Support for heterogeneous graphs

    Thanks for this wonderful library. I saw that writer.add_graph() only accepts homogeneous graphs; do you have any plans to support heterogeneous graphs? [A possible conversion workaround is sketched after this comment.]

    opened by Mrugankakarte 2
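    Until heterogeneous graphs are supported natively, one possible workaround (not an official feature) is to flatten the heterogeneous graph into a homogeneous one with DGL before passing it to the writer. A minimal sketch, with purely illustrative graph data:

    import torch
    import dgl
    from gnnlens import Writer

    # Toy heterogeneous graph with two node types and two relation types.
    hg = dgl.heterograph({
        ('user', 'follows', 'user'): (torch.tensor([0, 1]), torch.tensor([1, 2])),
        ('user', 'clicks', 'item'): (torch.tensor([0, 2]), torch.tensor([0, 1])),
    })

    # Flatten to a homogeneous graph; the original node/edge type ids are kept
    # in ndata[dgl.NTYPE] and edata[dgl.ETYPE] for reference.
    g_homo = dgl.to_homogeneous(hg)

    writer = Writer('hetero_demo')  # log directory name is arbitrary
    writer.add_graph(name='flattened_hetero', graph=g_homo)
    writer.close()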
  • Is there any test about the limit of graph size?

    Hi there,

    I'm wondering whether the limit on graph size has been tested, e.g. the maximum number of nodes and edges in the visualized graph. I'm also interested in how the memory overhead and processing time scale with graph size. For example, how large a graph can GNNLens2 process within 10 seconds on a machine with 16 CPUs and 64 GB of memory?

    opened by LspongebobJH 1
  • Several updates

    1. Rename GNNVis to GNNLens
    2. Remove the examples folder, which will be fully replaced by the tutorials
    3. Rename gnnvis to python
    4. Rename gnnvis/mini_serve to python/gnnlens
    5. Update setup.py
    6. Add Writer
    opened by mufeili 0
  • Bump flask-cors from 3.0.0 to 3.0.9 in /gnnvis

    Bumps flask-cors from 3.0.0 to 3.0.9.

    Release notes

    Sourced from flask-cors's releases.

    Release 3.0.9

    Security

    • Escape path before evaluating resource rules (thanks @praetorian-colby-morgan). Prior to this, flask-cors incorrectly evaluated CORS resource matching before path expansion. E.g. "/api/../foo.txt" would incorrectly match resources for "/api/*" whereas the path actually expands simply to "/foo.txt".

    Release 3.0.8

    Fixes DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working

    Thank you @juanmaneo and @jdevera!

    Release 3.0.7

    Updated logging.warn to logging.warning (#234) Thanks Vaibhav

    Release 3.0.6

    Manual error in release process. Identical contents at 3.0.5.

    Release 3.0.5

    Fixes incorrect handling of regexes containing '[', and a few other special characters. https://github-redirect.dependabot.com/corydolphin/flask-cors/issues/212

    Release 3.0.4

    Handle response.headers being None. (Fixes issue #217) Thanks @dusktreader for the improvement!

    Release 3.0.3

    Ensure that an Origin of '*' is never sent if supports_credentials is True (fixes Issue #202)

    • If always_send=True, and '*' is in the allowed origins, and a request is made without an Origin header, no Access-Control-Allow-Origins header will now be returned. This is breaking if you depended on it, but was a bug as it goes against the spec.

    Release 3.0.2

    Fixes Issue #187: regression whereby header (and domain) matching was incorrectly case sensitive. Now it is not, making the behavior identical to 2.X and 1.X.

    Release 3.0.1

    Fixes Issue #183: regression whereby regular expressions for origins with a "?" are not properly matched.

    Thanks @di for the report!

    Changelog

    Sourced from flask-cors's changelog.

    3.0.9

    Security

    • Escape path before evaluating resource rules (thanks to Colby Morgan). Prior to this, flask-cors incorrectly evaluated CORS resource matching before path expansion. E.g. "/api/../foo.txt" would incorrectly match resources for "/api/*" whereas the path actually expands simply to "/foo.txt"

    3.0.8

    Fixes: DeprecationWarning: Using or importing the ABCs from 'collections' in Python 3.7. Thank you @juanmaneo and @jdevera for the contribution.

    3.0.7

    Updated logging.warn to logging.warning (#234) Thanks Vaibhav

    3.0.6

    Manual error in release process. Identical contents at 3.0.5.

    3.0.5

    Fixes incorrect handling of regexes containing [, and a few other special characters. Fixes Issue #212

    3.0.4

    Handle response.headers being None. (Fixes issue #217)

    3.0.3

    Ensure that an Origin of '*' is never sent if supports_credentials is True (fixes Issue #202)

    • If always_send=True, and '*' is in the allowed origins, and a request is made without an Origin header, no Access-Control-Allow-Origins header will now be returned. This is breaking if you depended on it, but was a bug as it goes against the spec.

    3.0.2

    Fixes Issue #187: regression whereby header (and domain) matching was incorrectly case sensitive. Now it is not, making the behavior identical to 2.X and 1.X.

    3.0.1

    Fixes Issue #183: regression whereby regular expressions for origins with a "?" are not properly matched.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies python 
    opened by dependabot[bot] 0
  • Unable to show the graph of ogbn-products

    Hi, I followed the instructions in tutorial 1 and tried to show the graph structure of ogbn-products. No error is raised when I run the code; however, when I open the gnnlens website and select the ogbn-products graph in the graph selector, it shows nothing. I wonder what may cause this. Is it because the graph is too large, or because the graph does not include labels? I hope I have described the issue clearly; if needed, I can share my code. [A subsampling workaround is sketched after this comment.]

    opened by EntongL 1
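    If the blank view is simply a matter of scale (ogbn-products has millions of nodes and tens of millions of edges), one workaround is to visualize a subgraph rather than the full graph. A minimal sketch using DGL's node_subgraph; the 5,000-node budget and names are illustrative assumptions:

    import torch
    import dgl
    from ogb.nodeproppred import DglNodePropPredDataset
    from gnnlens import Writer

    # Load ogbn-products (a very large graph).
    dataset = DglNodePropPredDataset(name='ogbn-products')
    g, labels = dataset[0]

    # Keep a small random subset of nodes for visualization.
    num_keep = 5000
    nids = torch.randperm(g.num_nodes())[:num_keep]
    subg = dgl.node_subgraph(g, nids)

    writer = Writer('ogbn_products_demo')  # log directory name is arbitrary
    writer.add_graph(name='ogbn-products-subset', graph=subg)
    writer.close()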
  • ImportError: cannot import name 'safe_join' from 'flask'

    Since 2.1.0, Flask no longer provides safe_join (it was deprecated and then removed), as elaborated in its release notes. For now, a workaround is to downgrade Flask to an older version, e.g. pip install Flask==2.0.3. This should be fixed in a future release of GNNLens2 by either restricting the Flask version or following the latest recommended practice. [A sketch of the import change follows this comment.]

    Credit to @SherylHYX for reporting the issue.

    bug 
    opened by mufeili 0
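    For the non-pinning fix mentioned above, one possible direction, sketched under the assumption that GNNLens2 imports safe_join from flask somewhere in its serving code (not verified against the source), is to import it from Werkzeug instead:

    # Sketch only: obtain safe_join from Werkzeug rather than Flask.
    # The module that exposes safe_join differs across Werkzeug versions,
    # hence the fallback.
    try:
        from werkzeug.security import safe_join
    except ImportError:
        from werkzeug.utils import safe_join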
Releases: v0.1.0
Owner: Distributed (Deep) Machine Learning Community (A Community of Awesome Machine Learning Projects)
Interactive convnet features visualization for Keras

Quiver Interactive convnet features visualization for Keras The quiver workflow Video Demo Build your model in keras model = Model(...) Launch the vis

Keplr 1.7k Dec 21, 2022
Model analysis tools for TensorFlow

TensorFlow Model Analysis TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on

1.2k Dec 26, 2022
ModelChimp is an experiment tracker for Deep Learning and Machine Learning experiments.

ModelChimp What is ModelChimp? ModelChimp is an experiment tracker for Deep Learning and Machine Learning experiments. ModelChimp provides the followi

ModelChimp 124 Dec 21, 2022
Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM

Class Activation Map methods implemented in Pytorch pip install grad-cam ⭐ Comprehensive collection of Pixel Attribution methods for Computer Vision.

Jacob Gildenblat 6.5k Jan 01, 2023
Implementation of linear CorEx and temporal CorEx.

Correlation Explanation Methods Official implementation of linear correlation explanation (linear CorEx) and temporal correlation explanation (T-CorEx

Hrayr Harutyunyan 34 Nov 15, 2022
Convolutional neural network visualization techniques implemented in PyTorch.

This repository contains a number of convolutional neural network visualization techniques implemented in PyTorch.

1 Nov 06, 2021
Auralisation of learned features in CNN (for audio)

AuralisationCNN This repo is for an example of auralisastion of CNNs that is demonstrated on ISMIR 2015. Files auralise.py: includes all required func

Keunwoo Choi 39 Nov 19, 2022
A library that implements fairness-aware machine learning algorithms

Themis ML themis-ml is a Python library built on top of pandas and sklearnthat implements fairness-aware machine learning algorithms. Fairness-aware M

Niels Bantilan 105 Dec 30, 2022
A data-driven approach to quantify the value of classifiers in a machine learning ensemble.

Documentation | External Resources | Research Paper Shapley is a Python library for evaluating binary classifiers in a machine learning ensemble. The

Benedek Rozemberczki 187 Dec 27, 2022
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)

Hierarchical neural-net interpretations (ACD) 🧠 Produces hierarchical interpretations for a single prediction made by a pytorch neural network. Offic

Chandan Singh 111 Jan 03, 2023
Lucid library adapted for PyTorch

Lucent PyTorch + Lucid = Lucent The wonderful Lucid library adapted for the wonderful PyTorch! Lucent is not affiliated with Lucid or OpenAI's Clarity

Lim Swee Kiat 520 Dec 26, 2022
Visual Computing Group (Ulm University) 99 Nov 30, 2022
Bias and Fairness Audit Toolkit

The Bias and Fairness Audit Toolkit Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers

Data Science for Social Good 513 Jan 06, 2023
Code for "High-Precision Model-Agnostic Explanations" paper

Anchor This repository has code for the paper High-Precision Model-Agnostic Explanations. An anchor explanation is a rule that sufficiently “anchors”

Marco Tulio Correia Ribeiro 735 Jan 05, 2023
Visualization Toolbox for Long Short Term Memory networks (LSTMs)

Visualization Toolbox for Long Short Term Memory networks (LSTMs)

Hendrik Strobelt 1.1k Jan 04, 2023
Visualization toolkit for neural networks in PyTorch! Demo -->

FlashTorch A Python visualization toolkit, built with PyTorch, for neural networks in PyTorch. Neural networks are often described as "black box". The

Misa Ogura 692 Dec 29, 2022
Algorithms for monitoring and explaining machine learning models

Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-qual

Seldon 1.9k Dec 30, 2022
Neural network visualization toolkit for tf.keras

Neural network visualization toolkit for tf.keras

Yasuhiro Kubota 262 Dec 19, 2022
Quickly and easily create / train a custom DeepDream model

Dream-Creator This project aims to simplify the process of creating a custom DeepDream model by using pretrained GoogleNet models and custom image dat

56 Jan 03, 2023
Logging MXNet data for visualization in TensorBoard.

Logging MXNet Data for Visualization in TensorBoard Overview MXBoard provides a set of APIs for logging MXNet data for visualization in TensorBoard. T

Amazon Web Services - Labs 327 Dec 05, 2022