A Python Library for Graph Outlier Detection (Anomaly Detection)

Overview

PyGOD Logo

PyGOD is a Python library for graph outlier detection (anomaly detection). This exciting yet challenging field has many key applications, e.g., detecting suspicious activities in social networks [1] and security systems [2].

PyGOD includes more than 10 of the latest graph-based detection algorithms, such as DOMINANT (SDM'19) and GUIDE (BigData'21). For consistency and accessibility, PyGOD is developed on top of PyTorch Geometric (PyG) and PyTorch, and follows the API design of PyOD. See the example below for detecting outliers with PyGOD in 5 lines!

PyGOD is featured for:

  • Unified APIs, detailed documentation, and interactive examples across various graph-based algorithms.
  • Comprehensive coverage of more than 10 of the latest graph outlier detectors.
  • Full support for detection at multiple levels, such as node-, edge- (WIP), and graph-level (WIP) tasks.
  • Scalable design for processing large graphs via mini-batch and sampling.
  • Streamlined data processing with PyG, fully compatible with PyG data objects.

Outlier Detection Using PyGOD with 5 Lines of Code:

# train a DOMINANT detector
from pygod.models import DOMINANT

model = DOMINANT(num_layers=4, epoch=20)  # hyperparameters can be set here
model.fit(data)  # data is a PyTorch Geometric data object

# get raw outlier scores on the input (training) data
outlier_scores = model.decision_scores_

# predict raw outlier scores on new data in the inductive setting
outlier_scores = model.decision_function(test_data)

Citing PyGOD:

The PyGOD paper is available on arXiv. If you use PyGOD in a scientific publication, we would appreciate citations to the following paper:

@article{pygod2022,
  author  = {Liu, Kay and Dou, Yingtong and Zhao, Yue and Ding, Xueying and Hu, Xiyang and Zhang, Ruitong and Ding, Kaize and Chen, Canyu and Peng, Hao and Shu, Kai and Chen, George H. and Jia, Zhihao and Yu, Philip S.},
  title   = {PyGOD: A Python Library for Graph Outlier Detection},
  journal = {arXiv preprint arXiv:2204.12095},
  year    = {2022},
}

or:

Liu, K., Dou, Y., Zhao, Y., Ding, X., Hu, X., Zhang, R., Ding, K., Chen, C., Peng, H., Shu, K., Chen, G.H., Jia, Z., and Yu, P.S. 2022. PyGOD: A Python Library for Graph Outlier Detection. arXiv preprint arXiv:2204.12095.

Installation

It is recommended to use pip (conda support is WIP) for installation. Please make sure the latest version is installed, as PyGOD is updated frequently:

pip install pygod            # normal install
pip install --upgrade pygod  # or update if needed

Alternatively, you can clone the repository and install from source:

git clone https://github.com/pygod-team/pygod.git
cd pygod
pip install .

Required Dependencies:

  • Python 3.6+
  • numpy>=1.19.4
  • scikit-learn>=0.22.1
  • scipy>=1.5.2
  • setuptools>=50.3.1.post20201107

Note on PyG and PyTorch Installation: PyGOD depends on PyTorch Geometric (PyG), PyTorch, and networkx. To streamline the installation, PyGOD does NOT install these libraries for you. Please install them yourself before running PyGOD:

  • torch>=1.10
  • pytorch_geometric>=2.0.3
  • networkx>=2.6.3

API Cheatsheet & Reference

Full API reference: https://docs.pygod.org. API cheatsheet for all detectors:

  • fit(G): Fit the detector with PyG data G.
  • decision_function(G): Predict raw anomaly score of PyG data G using the fitted detector.

Key Attributes of a fitted model:

  • decision_scores_: The outlier scores of the training data; the higher the score, the more abnormal the node.
  • labels_: The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies.

For the inductive setting:

  • predict(G): Predict whether each node in PyG data G is an outlier using the fitted detector.
  • predict_proba(G): Predict the probability of each node in PyG data G being an outlier using the fitted detector.
  • predict_confidence(G): Predict the model's node-wise confidence (available in predict and predict_proba) [3].

Input of PyGOD: Please pass in a PyTorch Geometric (PyG) data object. See PyG data processing examples.
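
To make the expected input concrete, the sketch below builds a tiny PyG Data object by hand and runs the cheatsheet methods on it. This is a minimal sketch only: the toy features and edges are made up for illustration, and the DOMINANT hyperparameters are arbitrary.

import torch
from torch_geometric.data import Data
from pygod.models import DOMINANT

# a toy attributed graph: 6 nodes with 3-dimensional features and undirected (bidirectional) edges
x = torch.randn(6, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 0],
                           [1, 0, 2, 1, 3, 2, 4, 3, 5, 4, 0, 5]])
data = Data(x=x, edge_index=edge_index)

model = DOMINANT(epoch=10)         # any detector with the unified API works here
model.fit(data)                    # fit on the PyG data object

scores = model.decision_scores_    # raw outlier scores on the training data
labels = model.predict(data)       # binary labels: 0 = inlier, 1 = outlier
probs = model.predict_proba(data)  # outlier probability per node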

Implemented Algorithms

The PyGOD toolkit consists of two major functional groups:

(i) Node-level detection:

| Type | Backbone | Abbr | Year | Sampling | Ref |
|------|----------|------|------|----------|-----|
| Unsupervised | MLP | MLPAE | 2014 | Yes | [4] |
| Unsupervised | GNN | GCNAE | 2016 | Yes | [5] |
| Unsupervised | MF | ONE | 2019 | No | [6] |
| Unsupervised | GNN | DOMINANT | 2019 | Yes | [7] |
| Unsupervised | GNN | DONE | 2020 | Yes | [8] |
| Unsupervised | GNN | AdONE | 2020 | Yes | [8] |
| Unsupervised | GNN | AnomalyDAE | 2020 | Yes | [9] |
| Unsupervised | GAN | GAAN | 2020 | Yes | [10] |
| Unsupervised | GNN | OCGNN | 2021 | Yes | [11] |
| Unsupervised/SSL | GNN | CoLA (beta) | 2021 | In progress | [12] |
| Unsupervised/SSL | GNN | ANEMONE (beta) | 2021 | In progress | [13] |
| Unsupervised | GNN | GUIDE | 2021 | Yes | [14] |
| Unsupervised/SSL | GNN | CONAD | 2022 | Yes | [15] |

(ii) Utility functions:

| Type | Name | Function | Documentation |
|------|------|----------|---------------|
| Metric | eval_precision_at_k | Calculating Precision@K | eval_precision_at_k |
| Metric | eval_recall_at_k | Calculating Recall@K | eval_recall_at_k |
| Metric | eval_roc_auc | Calculating ROC-AUC score | eval_roc_auc |
| Metric | eval_average_precision | Calculating average precision | eval_average_precision |
| Data | gen_structure_outliers | Generating structural outliers | gen_structure_outliers |
| Data | gen_attribute_outliers | Generating attribute outliers | gen_attribute_outliers |
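
For illustration, the sketch below injects synthetic outliers into a clean dataset and then scores a fitted detector against the injected labels. This is a minimal sketch only: the generator and metric names come from the table above, but the module paths (pygod.generator, pygod.metrics) and argument names are assumptions modeled on the example code further down this page and may differ across PyGOD versions.

import torch
from torch_geometric.datasets import Planetoid
from pygod.generator import gen_structure_outliers, gen_attribute_outliers  # assumed module path
from pygod.metrics import eval_roc_auc                                      # assumed module path
from pygod.models import DOMINANT

# start from a clean PyG dataset and inject synthetic outliers
data = Planetoid('./data/Cora', 'Cora')[0]
data, y_struct = gen_structure_outliers(data, m=10, n=10)  # structural outliers (dense blocks)
data, y_attr = gen_attribute_outliers(data, n=100, k=50)   # attribute outliers
data.y = torch.logical_or(y_struct, y_attr).int()

# fit any detector and evaluate its raw scores against the injected labels
model = DOMINANT(epoch=20)
model.fit(data)
print('ROC-AUC:', eval_roc_auc(data.y, model.decision_scores_))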

Quick Start for Outlier Detection with PyGOD

"A Blitz Introduction" demonstrates the basic API of PyGOD using the dominant detector. It is noted that the API across all other algorithms are consistent/similar.


How to Contribute

You are welcome to contribute to this exciting project:

See the contribution guide for more information.


PyGOD Team

PyGOD is a great team effort by researchers from UIC, IIT, BUAA, ASU, and CMU. Our core team members include:

Kay Liu (UIC), Yingtong Dou (UIC), Yue Zhao (CMU), Xueying Ding (CMU), Xiyang Hu (CMU), Ruitong Zhang (BUAA), Kaize Ding (ASU), Canyu Chen (IIT),

Reach out to us by submitting an issue report or sending an email to [email protected].


Reference

[1] Dou, Y., Liu, Z., Sun, L., Deng, Y., Peng, H. and Yu, P.S., 2020, October. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM).
[2] Cai, L., Chen, Z., Luo, C., Gui, J., Ni, J., Li, D. and Chen, H., 2021, October. Structural temporal graph neural networks for anomaly detection in dynamic graphs. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM).
[3] Perini, L., Vercruyssen, V., Davis, J. Quantifying the confidence of anomaly detectors in their example-wise predictions. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD), 2020.
[4] Sakurada, M. and Yairi, T., 2014, December. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In Proceedings of the MLSDA 2014 2nd workshop on machine learning for sensory data analysis.
[5] Kipf, T.N. and Welling, M., 2016. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.
[6] Bandyopadhyay, S., Lokesh, N. and Murty, M.N., 2019, July. Outlier aware network embedding for attributed networks. In Proceedings of the AAAI conference on artificial intelligence (AAAI).
[7] Ding, K., Li, J., Bhanushali, R. and Liu, H., 2019, May. Deep anomaly detection on attributed networks. In Proceedings of the SIAM International Conference on Data Mining (SDM).
[8] Bandyopadhyay, S., Vivek, S.V. and Murty, M.N., 2020, January. Outlier resistant unsupervised deep architectures for attributed network embedding. In Proceedings of the International Conference on Web Search and Data Mining (WSDM).
[9] Fan, H., Zhang, F. and Li, Z., 2020, May. AnomalyDAE: Dual autoencoder for anomaly detection on attributed networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[10] Chen, Z., Liu, B., Wang, M., Dai, P., Lv, J. and Bo, L., 2020, October. Generative adversarial attributed network anomaly detection. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM).
[11] Wang, X., Jin, B., Du, Y., Cui, P., Tan, Y. and Yang, Y., 2021. One-class graph neural networks for anomaly detection in attributed networks. Neural computing and applications.
[12] Liu, Y., Li, Z., Pan, S., Gong, C., Zhou, C. and Karypis, G., 2021. Anomaly detection on attributed networks via contrastive self-supervised learning. IEEE transactions on neural networks and learning systems (TNNLS).
[13] Jin, M., Liu, Y., Zheng, Y., Chi, L., Li, Y. and Pan, S., 2021. ANEMONE: Graph Anomaly Detection with Multi-Scale Contrastive Learning. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM).
[14] Yuan, X., Zhou, N., Yu, S., Huang, H., Chen, Z. and Xia, F., 2021, December. Higher-order Structure Based Anomaly Detection on Attributed Networks. In 2021 IEEE International Conference on Big Data (Big Data).
[15] Xu, Z., Huang, X., Zhao, Y., Dong, Y., and Li, J., 2022. Contrastive Attributed Network Anomaly Detection with Data Augmentation. In Proceedings of the 26th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD).
Comments
  • Query on Anomaly Prediction and Outlier labels

    Hi,

    Given a graph object in the prediction API, what do the outlier labels mentioned here as outlier_labels (numpy array of shape (n_samples,)) indicate from a graph perspective?

    Do the 1 or 0 values in the numpy array indicate which nodes in the graph are normal or anomalous? For example, Labels: [0 0 0 ... 0 0 0]. Does each 0 value pertain to a node in the graph?

    So, how should this prediction output be interpreted from a graph perspective? Thanks in advance.

    opened by nsankar 7
  • When I use 1080ti GPU, there is an out of memory problem in all datasets except Cora, which is inconsistent with the description in the benchmark paper.

    opened by 1017027994zjj 5
  • remove external (non-core-python) library `argparse` as a dependency

    Describe the bug

    The current dependencies include installing an external library: argparse.

    It should be noted that argparse is part of the Python standard library; there is no need to install it like other external libraries.

    • docs for core-library, argparse: https://docs.python.org/3/library/argparse.html

    EDIT: The argparse package that you are installing from PyPI is no longer maintained, as it is now part of the standard Python 3 library. See my comment here.

    See further details:

    • https://gitter.im/conda-forge/conda-forge.github.io?at=62598b5b0466b352a46afd25

    https://github.com/pygod-team/pygod/blob/d037b67bd3001f4d45be5093b3717700aa79d953/requirements.txt#L1-L5

    opened by sugatoray 4
  • adone get unexpected keyword argument

    Running examples\adone.py for replication

    C:\Users\yuezh\Anaconda3\envs\torch19\python.exe C:/Users/yuezh/PycharmProjects/pygod/examples/adone.py
    training...
    Traceback (most recent call last):
      File "C:/Users/yuezh/PycharmProjects/pygod/examples/adone.py", line 35, in <module>
        model.fit(data)
      File "C:\Users\yuezh\PycharmProjects\pygod\pygod\models\adone.py", line 158, in fit
        act=self.act).to(self.device)
      File "C:\Users\yuezh\PycharmProjects\pygod\pygod\models\adone.py", line 331, in __init__
        act=act)
    TypeError: __init__() got an unexpected keyword argument 'in_channels'

    todo 
    opened by yzhao062 3
  • A problem about structural reconstruction

    When reading the papers and the PyGOD code, I found a problem in how some algorithms reconstruct structural information:

    $$ \hat{A}=\sigma(\pmb z\pmb z^T) $$

    where $\pmb z$ is the embedding matrix we have learnt, $\sigma$ is the sigmoid function, and $\hat{A}$ is the reconstructed adjacency matrix. One term of the objective function is

    $$ \Vert A-\hat{A}\Vert_F^2 $$

    where $A$ is the adjacency matrix. But note that the diagonal elements of $\hat{A}$ are close to 1, because

    $$ \hat{A}_{ii}=\sigma(z_iz_i^T) $$

    So I think we should add self-loops to $A$ for the reconstruction target:

    $$ \Vert(A+I)-\hat{A}\Vert_F^2 $$

    In the PyGOD code, I haven't found this consideration. I modified the code of DOMINANT in this way and found a performance improvement on some datasets.
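
    A minimal sketch of the proposed change (illustrative PyTorch only, not the actual PyGOD code; the toy tensors are made up):

    import torch

    adj = torch.tensor([[0., 1., 0.],
                        [1., 0., 1.],
                        [0., 1., 0.]])    # toy 3-node path graph
    z = torch.randn(3, 16)                # toy node embeddings

    adj_rec = torch.sigmoid(z @ z.T)                              # reconstructed adjacency; diagonal is close to 1
    eye = torch.eye(adj.size(0))
    struct_loss = torch.norm(adj + eye - adj_rec, p='fro') ** 2   # reconstruct A + I instead of A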

    opened by Kaslanarian 2
  • Pygod does not work in a subprocess

    Describe the bug: Hi, I am trying to run a PyGOD example in a subprocess and it does not work for me.

    To Reproduce

    from torch.multiprocessing import Process
    
    import torch_geometric.transforms as T
    from torch_geometric.datasets import Planetoid
    
    import torch
    from pygod.generator import gen_contextual_outliers, gen_structural_outliers
    from pygod.utils import load_data
    from pygod.models import AnomalyDAE
    
    
    
    def f(data):
        model = AnomalyDAE()
        print('started model fitting')
        model.fit(data)
        print('model fit succesful')
    
    if __name__ == '__main__':
        data = Planetoid('./data/Cora', 'Cora', transform=T.NormalizeFeatures())[0]
        data, ya = gen_contextual_outliers(data, n=100, k=50)
        data, ys = gen_structural_outliers(data, m=10, n=10)
        data.y = torch.logical_or(ys, ya).int()
    
        data = load_data('inj_cora')
        data.y = data.y.bool()
        p = Process(target=f, args=(data,))
        p.start()
        p.join()
    
    

    Expected behavior

    The model does not fit for me

    Desktop (please complete the following information):

    • OS: all os and systems
    • python: 3.8
    opened by prabhant 2
  • ONE does not accept negative value

    It appears that ONE throws an error if the input x contains negative values. If this is expected, we should probably mention it somewhere.

    check pygod/test/test_one.py

    opened by yzhao062 2
  • Connection Error when calling pygod.utils.load_data()

    Describe the bug When calling pygod.utils.load_data(), sometimes it returns the following error message: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

    Additional context Please refer to 1 and 2 for potential fixing approaches.

    opened by YingtongDou 1
  • Update for v0.3.1

    • add edge drop probability to structural outlier injection.
    • update benchmark script with more datasets.
    • multiple minor fixes by @cshjin @YingtongDou @kayzliu
    opened by kayzliu 1
  • Enabling different hidden dimension for attribute autoencoder and structure autoencoder

    Is your feature request related to a problem? Please describe. For now, some detectors (e.g., GUIDE) have two separate autoencoders for attributes and structure, but the two autoencoders share the same hidden layer dimension. In many cases, there is a significant difference between the dimension of the node attributes and the dimension of the structure information (e.g., the adjacency matrix). Using the same hidden dimension may hamper the performance of the detectors.

    Describe the solution you'd like: Enable different hidden dimensions for the attribute autoencoder and the structure autoencoder.

    opened by kayzliu 1
  • Add tutorial for loading data from other formats

    We can add a tutorial to the documentation about loading data from numpy, scipy, MATLAB, networkx, and other common data formats. Some of the data loaders can be found in https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html

    opened by YingtongDou 1
  • Is it possible to use a heterogeneous PyG object to find anomalies?

    Hi, thank you for this awesome package. I am working with heterogeneous and knowledge graphs. For example, if I use the famous MovieLens dataset and construct a heterogeneous graph, can I feed it to model.fit(data)?

    opened by monk1337 2
  • ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])

    When running model.fit (on GPU), I received the following error: ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])

    Any suggestion for getting rid of this problem when running on a GPU is appreciated.

    opened by nbijlani 1
  • Flickr is not consistent with the "Flickr" dataset in pyg

    The Flickr dataset used in some graph outlier detection papers is the Flickr dataset in PyG, while the inj_flickr dataset implemented in your library is a very different dataset. I hope you can use the correct dataset.

    opened by goldenNormal 4
  • Problem for CoLA and ANEMONE models.

    The code for masking the target nodes is wrong. The target node is the first node in the subgraph after the RandomWalk sample, while you mask the last node. The performance of CoLA and ANEMONE will improve by 2% after fixing this bug.

    Wrong code in CoLA (lines 361-364):

    batch_feature = torch.cat(
        (batch_feature[:, :-1, :], added_feat_zero_row, batch_feature[:, -1:, :]), dim=1)

    Correct code:

    batch_feature = torch.cat(
        (added_feat_zero_row, batch_feature[:, 1:, :], batch_feature[:, 0:1, :]), dim=1)

    Wrong code in ANEMONE (lines 288-289 and 429-430):

    bf = torch.cat((bf[:, :-1, :], added_feat_zero_row, bf[:, -1:, :]), dim=1)

    Correct code:

    bf = torch.cat((added_feat_zero_row, bf[:, 1:, :], bf[:, 0:1, :]), dim=1)

    opened by 1017027994zjj 1
  • About node embedding function

    Hi, could you please provide a function that returns the trained node embeddings, so that I can feed the embeddings to a machine learning classifier such as SVM?

    Best wish!

    enhancement 
    opened by wubo2180 2
Releases (v0.3.1)
  • v0.3.1(Sep 6, 2022)

    What's Changed

    • add edge drop probability to structural outlier injection
    • update benchmark script with more datasets.
    • multiple minor fixes by @cshjin @YingtongDou @kayzliu

    New Contributors

    • @cshjin made their first contribution in https://github.com/pygod-team/pygod/pull/40
  • v0.3.0(Jun 25, 2022)

  • v0.2.0(Apr 30, 2022)

    What's New

    • Our paper is available on arXiv.
    • We enable most of the models to train with mini-batch; see the model list for supported models. @kayzliu @xyvivian @aha12345678
    • Add new models CoLA (beta) and ANEMONE (beta) by @harvardchen
    • Our first community contributor @zhiming-xu added a new model, CONAD.
    • Add new metric eval_average_precision by @YingtongDou.
    • Improved device setting by @yzhao062
  • v0.1.1(Apr 4, 2022)

    Many key applications depend on graph data. To meet this need, we have just open-sourced the first comprehensive graph outlier detection library, PyGOD.

    PyGOD contains more than 10 of the latest graph outlier detectors, built on PyTorch and PyG. It features:

    • a unified and simple API in the style of PyOD: using GNNs for outlier detection within 5 lines of code
    • full documentation and examples
    • for both academic and industrial use, all you need to prepare is the data in PyG format.

    PyGOD is a collaborative effort among UIC, CMU, ASU, IIT, and BUAA. We are committed to providing long-term maintenance and to adding new models to the library. It is also our goal to promote graph outlier detection methods to a broader audience. If you encounter a bug or have any suggestions, please file an issue or reach us via email at [email protected]. Also, feel free to try it out with your own code!

    We appreciate every star, fork, and follow.
