Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

Overview

Karate Club is an unsupervised machine learning extension library for NetworkX.

Please look at the Documentation, relevant Paper, Promo Video, and External Resources.

Karate Club consists of state-of-the-art methods to do unsupervised learning on graph structured data. To put it simply, it is a Swiss Army knife for small-scale graph mining research. First, it provides network embedding techniques at the node and graph level. Second, it includes a variety of overlapping and non-overlapping community detection methods. Implemented methods cover a wide range of network science (NetSci, CompleNet), data mining (ICDM, CIKM, KDD), artificial intelligence (AAAI, IJCAI) and machine learning (NeurIPS, ICML, ICLR) conferences, workshops, and pieces from prominent journals.

The newly introduced graph classification datasets are available at SNAP, TUD Graph Kernel Datasets, and GraphLearning.io.


Citing

If you find Karate Club and the new datasets useful in your research, please consider citing the following paper:

@inproceedings{karateclub,
       title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
       author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
       year = {2020},
       pages = {3125--3132},
       booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
       organization = {ACM},
}

A simple example

Karate Club makes the use of modern community detection techniques quite easy (see here for the accompanying tutorial). For example, this is all it takes to run Ego-splitting on a Watts-Strogatz graph:

import networkx as nx
from karateclub import EgoNetSplitter

g = nx.newman_watts_strogatz_graph(1000, 20, 0.05)

splitter = EgoNetSplitter(1.0)

splitter.fit(g)

print(splitter.get_memberships())

Models included

In detail, the following community detection and embedding methods were implemented.

Overlapping Community Detection

Non-Overlapping Community Detection

Neighbourhood-Based Node Level Embedding

Structural Node Level Embedding

Attributed Node Level Embedding

Meta Node Embedding

Graph Level Embedding
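
Every model follows the same style of fit interface shown in the example above. As a quick illustration of the graph-level API, here is a minimal sketch using Graph2Vec on synthetic input graphs (the parameter value is arbitrary):

import networkx as nx
from karateclub import Graph2Vec

# A toy collection of connected graphs whose nodes are already indexed 0..n-1.
graphs = [nx.newman_watts_strogatz_graph(50, 4, 0.1, seed=i) for i in range(10)]

model = Graph2Vec(dimensions=32)

model.fit(graphs)

print(model.get_embedding().shape)  # one 32-dimensional vector per graph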

Head over to our documentation to find out more about installation and data handling, a full list of implemented methods, and datasets. For a quick start, check out our examples.

If you notice anything unexpected, please open an issue and let us know. If you are missing a specific method, feel free to open a feature request. We are motivated to constantly make Karate Club even better.


Installation

Karate Club can be installed with the following pip command.

$ pip install karateclub

As we create new releases frequently, it is worth upgrading the package regularly.

$ pip install karateclub --upgrade

Running examples

As part of the documentation we provide a number of use cases to show how the clusterings and embeddings can be utilized for downstream learning. These can be accessed here with detailed explanations.

Besides the case studies, we provide synthetic examples for each model. These can be tried out by running the example scripts. For instance, to run the Graph2Vec example:

$ cd examples/whole_graph_embedding/
$ python graph2vec_example.py

Running tests

$ python setup.py test

License

Comments
  • GL2vec : RuntimeError: you must first build vocabulary before training the model

    Hello, first of all, thanks for your work; it's just great.

    However, I get an error when running GL2vec on my own dataset, even though it works perfectly with the example. Where exactly does this type of error come from?

    Thanks in advance

    opened by hug0prevoteau 14
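
    For what it's worth, this gensim message generally means that Doc2Vec ended up with an empty vocabulary. One unconfirmed possibility with GL2vec (a guess, not a diagnosis of this particular dataset) is that very small graphs produce line graphs with no edges and therefore contribute no WL features; a quick pre-flight check might look like this:

    import networkx as nx

    # Stand-in list; replace with the graphs you pass to GL2vec().fit().
    graphs = [nx.path_graph(1), nx.path_graph(2), nx.path_graph(10)]

    # Line graphs of graphs with fewer than two edges have no edges, hence no features.
    suspect = [i for i, g in enumerate(graphs)
               if nx.line_graph(g).number_of_edges() == 0]
    print("graphs that contribute no line-graph features:", suspect)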
  • How to build my own dataset?

    I have to build graphs, and after that I have to generate graph embeddings.

    I checked the documentation i.e. https://karateclub.readthedocs.io/.

    But I didn't understand how to build my own graphs.

    1. Can you please point me to sample code where you create a dataset from scratch?
    2. I have already checked the code here, but it all loads pre-defined datasets.
    3. Can you show a code snippet where you create a graph, i.e. create nodes and add edges?
    4. How do I set attributes (features) for the nodes and edges?

    Thanks in advance for your help.

    I am following https://karateclub.readthedocs.io/en/latest/notes/installation.html.

    opened by smith-co 9
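
    A minimal sketch of building a dataset from scratch and embedding it (an illustration, not official documentation; the helper build_graph is made up). The one hard requirement repeated in these threads is that every graph's nodes are labelled with consecutive integers starting at 0; attaching node attributes is sketched further down the page:

    import networkx as nx
    from karateclub import Graph2Vec

    def build_graph(edge_list):
        """Build a NetworkX graph and relabel its nodes to consecutive integers from 0."""
        g = nx.Graph()
        g.add_edges_from(edge_list)
        mapping = {node: i for i, node in enumerate(g.nodes())}
        return nx.relabel_nodes(g, mapping)

    graphs = [
        build_graph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]),
        build_graph([("x", "y"), ("y", "z"), ("z", "x"), ("x", "w")]),
        build_graph([(0, 1), (1, 2), (2, 3), (3, 0)]),
    ]

    # min_count=1 keeps gensim from pruning features that are rare in this tiny toy set
    # (min_count is forwarded to Doc2Vec; see the fit() excerpt further down this page).
    model = Graph2Vec(dimensions=16, min_count=1)
    model.fit(graphs)
    print(model.get_embedding().shape)  # (3, 16)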
  • Using Feather-Graph with Node Attributes

    Hi @benedekrozemberczki,

    Thanks for creating and maintaining this awesome toolbox for graph and node level embedding techniques. I've been using Feather-Graph to embed non-attributed graphs and the results have been fantastic.

    Question: I'm working on a new problem where graphs contain nodes with attribute information and I wanted to see if it's possible (or makes sense) to extend Feather-Graph to incorporate node attribute information?

    Current thought process: I went through the source code and saw that Feather-Node can leverage an attribute matrix, while Feather-Graph uses the log-degree and clustering coefficient as node features. I felt like there could be an opportunity to plug the feature generation process of Feather-Node into Feather-Graph here, but couldn't determine if there would be any downsides to this approach?

    I went through your paper "Characteristic Functions on Graphs..." but wasn't able to come to a decision one way or the other. Hoping you can shed some light on it!

    Thanks, Scott

    opened by safreita1 8
  • About GL2vec

    Hello, thanks for the awesome work!!

    It seems that there are two mistakes in the implementation of the GL2vec module.

    The first one is:

    In the code below,

    def _create_line_graph(self, graph):
        r"""Getting the embedding of graphs.

        Arg types:
            * graph (NetworkX graph) - The graph transformed to be a line graph.

        Return types:
            * line_graph (NetworkX graph) - The line graph of the source graph.
        """
        graph = nx.line_graph(graph)
        node_mapper = {node: i for i, node in enumerate(graph.nodes())}
        edges = [[node_mapper[edge[0]], node_mapper[edge[1]]] for edge in graph.edges()]
        line_graph = nx.from_edgelist(edges)
        return line_graph

    when converting a graph G to its line graph LG, the method _create_line_graph() ignores the edge attributes of G (which means there will be no node attributes in LG). Consequently, the WeisfeilerLehmanHashing step will not use the attribute information and will always use the structural information (degree) instead.

    The second one is:

    The GL2vec module only returns the embedding of the line graph. But in the original GL2vec paper, the authors concatenate the embedding of the graph with the embedding of its line graph, and then named the framework "GL2vec", which means "Graph and Line graph to vector".

    Using only the line graph embedding for downstream tasks may lead to worse performance.

    We noticed that when applying the embeddings to a graph classification task (where the graphs have both node and edge attributes), the performance (accuracy) is as follows: concat(G, LG) > G > LG

    Hope it helps :)

    opened by cheezy88 8
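
    For readers hitting the first point, one possible workaround (a sketch, not the library's official fix) is to build the line graph yourself and copy each original edge's attribute onto the corresponding line-graph node before calling the embedding model. The attribute name "feature" is an assumption about what the attributed hashing reads:

    import networkx as nx

    def line_graph_with_edge_features(graph, edge_attr="feature"):
        """Build the line graph, carrying each edge's attribute over as a node attribute.

        Every node of nx.line_graph(graph) is a (u, v) edge of the original graph,
        so the original edge attribute naturally becomes a node attribute here.
        """
        lg = nx.line_graph(graph)
        node_mapper = {node: i for i, node in enumerate(lg.nodes())}
        edges = [[node_mapper[u], node_mapper[v]] for u, v in lg.edges()]
        line_graph = nx.from_edgelist(edges)
        features = {node_mapper[node]: str(graph.edges[node].get(edge_attr, ""))
                    for node in lg.nodes()}
        nx.set_node_attributes(line_graph, features, "feature")
        return line_graph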
  • Classifying with graph2vec model

    I'm able to obtain an embedding for a list of NetworkX graphs using graph2vec, and I was wondering if karateclub has a function to make classifications for graphs outside the training set? That is, given my embedding, I want to input a graph outside my original graph list (used in the model) and obtain a list of most similar graphs (something like a "most similar" function).

    opened by joseluisfalla 6
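
    Karate Club itself just hands back the learned vectors through get_embedding(), so a "most similar" lookup can be done outside the library, for instance with scikit-learn. A minimal sketch, under the assumption that the query graph was included in the list passed to fit (whether out-of-sample inference is supported is not settled in this thread):

    from sklearn.neighbors import NearestNeighbors

    def most_similar(embedding, query_index, k=5):
        """Indices of the k graphs whose embeddings are closest (cosine) to the query graph."""
        nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(embedding)
        _, indices = nn.kneighbors(embedding[query_index:query_index + 1])
        # Drop the query graph itself, which is trivially its own nearest neighbour.
        return [i for i in indices[0] if i != query_index][:k]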
  • How to improve the performance of Graph2Vec model fit function ?

    I tried to improve the performance of the Graph2Vec model by increasing the workers parameter when initializing the model. But it seems that the model still uses only 1 core for the fit function.

    Is the method I used to assign the workers correct? Is there another way to improve the performance?

    model =  Graph2Vec(workers=28)
    graphs_list=create_graph_list(graph_df)
    model.fit(graphs_list)
    graph_x = model.get_embedding()
    
    opened by 1209973 6
  • Is consecutive numeric indexing necessary for Graph2Vec?

    Thanks for the awesome work, networkx is truly helpful when we are dealing with graph data structures.

    I'm trying to get graph embeddings using Graph2Vec so that we can compare similarity among graphs. But I'm stuck on this assertion: assert numeric_indices == node_indices, "The node indexing is wrong."

    Say we have two graphs, and each node in a graph represents a word. We build a mapping so that we can replace text with numbers. For example, whenever the word "Library" occurs in any graph, we label it with the number "2". In this case, the indices inside one graph might not be consecutive, because the mapping is created from a number of graphs.

    So is it still necessary to enforce consecutive indexing in this case? Or am I misunderstanding the usage of Graph2Vec?

    opened by bdeng3 5
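
    A note on one way around this: the assertion only checks that the labels inside each graph are consecutive integers starting at 0; a shared vocabulary can instead be kept in a node attribute. A sketch (the attribute name "feature" and the attributed flag are assumptions based on the Graph2Vec internals quoted further down this page):

    import networkx as nx

    def reindex_keep_word(graph):
        """Relabel nodes to 0..n-1, storing the original word as a node attribute."""
        # label_attribute saves each node's original label under the given attribute name.
        return nx.convert_node_labels_to_integers(graph, first_label=0, label_attribute="feature")

    With something like Graph2Vec(attributed=True), the hashing would then start from the stored word rather than from the node index or degree, so "Library" contributes the same base feature in every graph regardless of which integer it was given.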
  • Graph Embeddings using node features and inductivity

    Hello,

    First of all, thank you for this amazing library! I have a series of small graphs where each node has features, and I am trying to learn graph-level embeddings in an unsupervised manner. However, I couldn't find how to load node features into the graphs before feeding them to a graph embedding algorithm. Could you describe the input needed by the algorithms?

    Also, is it possible to generate embeddings with some sort of forward function once the models are trained (without retraining the model)? I.e., does the library support inductive inference?

    Thank you!

    opened by TrovatelliT 5
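
    On the input format: the graph-level models take a plain list of NetworkX graphs whose nodes are indexed 0..n-1, and the attributed variants appear to read per-node features from a node attribute. A sketch of attaching features before fitting (the attribute name "feature", and the attributed and min_count parameters, are assumptions drawn from the fit() excerpt further down this page rather than documented behaviour); whether a trained model can embed unseen graphs without refitting is not settled here:

    import networkx as nx
    from karateclub import Graph2Vec

    def with_features(graph, features):
        """Attach per-node feature values (dict: node index -> value) as the "feature" attribute."""
        nx.set_node_attributes(graph, {n: str(v) for n, v in features.items()}, "feature")
        return graph

    graphs = [
        with_features(nx.newman_watts_strogatz_graph(30, 4, 0.1, seed=s),
                      {n: n % 3 for n in range(30)})
        for s in range(5)
    ]

    model = Graph2Vec(attributed=True, dimensions=16, min_count=1)
    model.fit(graphs)
    print(model.get_embedding().shape)  # (5, 16)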
  • graph2vec implementation and graphs with missing nodes

    Hi there,

    first of all, thanks a lot for developing this, it has potential to simplify in-silico experiments on biological networks and I am grateful for that!

    I have a question related to the graph2vec implementation. The requirement of the package for graph notation is that nodes have to be named with integers starting from 0 and have to be consecutive. I am working with a collection of 9,000 small networks and would like to embed all of them into an N-dimensional space. Now, all those networks are drawn from about 25,000 nodes (here they are really genes), but in some networks some of these nodes are missing (not all genes are supposed to be present in all networks).

    If I rename all my nodes from actual gene names to integers and know that some networks don't have all the genes, I will end up with some networks without consecutive node names, e.g. there will be (..), 20, 21, 24, 25, (...) in one network and perhaps (...), 20, 21, 22, 24, 25, (...) in another. That would violate the requirement of being consecutive.

    My question is: is the implementation aware that node 25 is the same object across the different networks? Or is that not important because the embedding only takes the structure into account, and I should 'rename' all my networks separately to keep the node naming consecutive?

    opened by kajocina 5
  • Multithreading for the WL hashing function

    Hi!

    Maybe just another suggestion. In the embedding algorithms, the WeisfeilerLehmanHashing step inside the fit function can be time-consuming, and the WL hashing for each graph is independent. Therefore, using multithreading in Python can speed it up; I modified the code for my Graph2Vec application:

    ==================================

    def fit(self, graphs):
        """
        Fitting a Graph2Vec model.
    
        Arg types:
            * **graphs** *(List of NetworkX graphs)* - The graphs to be embedded.
        """
        pool = ThreadPool(8)
        args_generator = [(graph, self.wl_iterations, self.attributed) for graph in graphs]
        documents = pool.starmap(WeisfeilerLehmanHashing, args_generator)
        pool.close()
        pool.join()
        #documents = [WeisfeilerLehmanHashing(graph, self.wl_iterations, self.attributed) for graph in graphs]
        documents = [TaggedDocument(words=doc.get_graph_features(), tags=[str(i)]) for i, doc in enumerate(documents)]
    
        model = Doc2Vec(documents,
                        vector_size=self.dimensions,
                        window=0,
                        min_count=self.min_count,
                        dm=0,
                        sample=self.down_sampling,
                        workers=self.workers,
                        epochs=self.epochs,
                        alpha=self.learning_rate,
                        seed=self.seed)
    
        self._embedding = [model.docvecs[str(i)] for i, _ in enumerate(documents)]
    
    opened by zslwyuan 5
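
    A related note: because the WL hashing is pure Python, a thread pool can be limited by the GIL; a process pool is one alternative. A sketch, assuming the graphs and the resulting hashing objects are picklable and that the class can be imported as shown (adjust the import to match your installed version):

    from multiprocessing import Pool

    # Assumed location of the hashing class inside the package; adjust if it differs.
    from karateclub.utils.treefeatures import WeisfeilerLehmanHashing

    def hash_graphs_in_parallel(graphs, wl_iterations, attributed, processes=8):
        """Run the WL hashing of each graph in its own process, sidestepping the GIL."""
        args = [(graph, wl_iterations, attributed) for graph in graphs]
        with Pool(processes) as pool:
            return pool.starmap(WeisfeilerLehmanHashing, args)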
  • Update requirements

    As it stands, setup.py has the following requirements which specify maximum versions:

    install_requires = [
        "numpy<1.23.0",
        "networkx<2.7",
        "decorator==4.4.2",
        "pandas<=1.3.5"
    ]
    

    Is there a reason for the maximum versions, such as deprecated features used by karateclub that have since been removed? In my personal research, and in using the included test suite via python3 ./setup.py test, I have not encountered issues after upgrading the packages.

    $ pip3 install --upgrade --user networkx numpy pandas decorator
    
    $ pip3 list | grep "networkx\|numpy\|decorator\|pandas"
    decorator              5.1.1
    networkx               2.8.8
    numpy                  1.23.5
    pandas                 1.5.2
    

    Running the tests with these updated packages yields the following:

    $ cd karateclub/
    $ pytest
    ...
    47 passed, 2540 warnings in 210.58s (0:03:30) 
    

    Yes, there are lots of warnings. Many are DeprecationWarnings. The current requirements generate 855 warnings.

    $ cd karateclub/
    $ pip3 install --user .
    $ pytest
    ...
    47 passed, 855 warnings in 225.49s (0:03:45)
    

    I suppose the question is: even with additional instances of DeprecationWarning, can we bump up the maximum requirements for this package? Or would the community feel better addressing the deprecation issues before continuing?

    For context, my motivation is to keep this package current; I'm currently held back (not actually, but per the setup requirements) by this package's maximum requirements. Does anyone have any thoughts?

    opened by WhatTheFuzz 4