Deep Learning and Logical Reasoning from Data and Knowledge

Overview

Logic Tensor Networks (LTN)

Logic Tensor Networks (LTN) is a neurosymbolic framework that supports querying, learning and reasoning with both rich data and rich abstract knowledge about the world. LTN uses a differentiable first-order logic language, called Real Logic, to incorporate data and logic.

[Figure: grounding illustration]

LTN converts Real Logic formulas (e.g. ∀x(cat(x) → ∃y(partOf(x,y)∧tail(y)))) into TensorFlow computational graphs. Such formulas can express complex queries about the data, prior knowledge to satisfy during learning, statements to prove ...

[Figure: computational graph illustration]
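
For illustration, here is a minimal sketch of how such a formula can be written in LTN's Python syntax. It is a hedged example: the wrapper and operator names (ltn.Wrapper_Connective, ltn.Wrapper_Quantifier, the ltn.fuzzy_ops operators) and the call conventions are assumptions based on the tutorials and the issue threads below, and the predicates are toy lambda models rather than trained networks.

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    # Toy groundings for illustration only: predicates as lambda models over R^2.
    cat    = ltn.Predicate.Lambda(lambda x: tf.exp(-tf.reduce_sum(tf.square(x), axis=-1)))
    tail   = ltn.Predicate.Lambda(lambda y: tf.exp(-tf.reduce_sum(tf.square(y - 1.), axis=-1)))
    partOf = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.reduce_sum(tf.square(args[0] - args[1]), axis=-1)))

    x = ltn.Variable("x", np.random.rand(10, 2))
    y = ltn.Variable("y", np.random.rand(10, 2))

    # Connectives and quantifiers grounded with fuzzy operators (names assumed from the tutorials).
    And     = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
    Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
    Forall  = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(), semantics="forall")
    Exists  = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(), semantics="exists")

    # ∀x (cat(x) → ∃y (partOf(x,y) ∧ tail(y))), evaluated as a TensorFlow computation.
    phi = Forall(x, Implies(cat(x), Exists(y, And(partOf([x, y]), tail(y)))))
    print(phi)  # a scalar truth degree in [0,1]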

Real Logic can represent and effectively compute the most important tasks of deep learning, such as classification, regression, clustering, and link prediction. The "Getting Started" section below links to tutorials and examples of LTN code.

[Paper]

@misc{badreddine2021logic,
      title={Logic Tensor Networks}, 
      author={Samy Badreddine and Artur d'Avila Garcez and Luciano Serafini and Michael Spranger},
      year={2021},
      eprint={2012.13635},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

Installation

Clone the LTN repository and install it using pip install -e <local project path>.

Following are the dependencies we used for development (similar versions should run fine):

  • python 3.8
  • tensorflow >= 2.2 (for running the core system)
  • numpy >= 1.18 (for examples)
  • matplotlib >= 3.2 (for examples)

Repository structure

  • logictensornetworks/core.py -- core system for defining constants, variables, predicates, functions and formulas,
  • logictensornetworks/fuzzy_ops.py -- a collection of fuzzy logic operators defined using TensorFlow primitives,
  • logictensornetworks/utils.py -- a collection of useful functions,
  • tutorials/ -- tutorials to start with LTN,
  • examples/ -- various problems approached using LTN,
  • tests/ -- tests.

Getting Started

Tutorials

tutorials/ contains a walk-through of LTN. In order, the tutorials cover the following topics:

  1. Grounding in LTN part 1: Real Logic, constants, predicates, functions, variables,
  2. Grounding in LTN part 2: connectives and quantifiers (+ complement: choosing appropriate operators for learning),
  3. Learning in LTN: using satisfiability of LTN formulas as a training objective,
  4. Reasoning in LTN: measuring whether a formula is a logical consequence of a knowledge base.

The tutorials are implemented as Jupyter notebooks.

Examples

examples/ contains a series of experiments. Their objective is to show how the language of Real Logic can be used to specify a number of tasks that involve learning from data and reasoning about logical knowledge. Examples of such tasks are: classification, regression, clustering, link prediction.

  • The binary classification example illustrates in the simplest setting how to ground a binary classifier as a predicate in LTN, and how to feed batches of data during training,
  • The multiclass classification examples (single-label, multi-label) illustrate how to ground predicates that can classify samples in several classes,
  • The MNIST digit addition example showcases the power of a neurosymbolic approach in a classification task that only provides groundtruth for some final labels (result of the addition), where LTN is used to provide prior knowledge about intermediate labels (possible digits used in the addition),
  • The regression example illustrates how to ground a regressor as a function symbol in LTN,
  • The clustering example illustrates how LTN can solve a task using first-order constraints only, without any label being given through supervision,
  • The Smokes Friends Cancer example is a classical link prediction problem of Statistical Relational Learning where LTN learns embeddings for individuals based on fuzzy groundtruths and first-order constraints.

The examples are provided as both Jupyter notebooks and Python scripts.

Querying with LTN
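
The original README illustrates this section with figures. As a minimal stand-in, and reusing the toy symbols and the assumed operator names from the sketch in the Overview: a query is simply a Real Logic expression evaluated on the current groundings, so asking for the truth degree of a formula amounts to calling it.

    # Querying = evaluating the truth degrees of (possibly quantified) formulas on data.
    print(cat(x))                         # truth degrees of cat(x_i) for every individual in x
    print(Forall(x, cat(x)))              # aggregated truth degree of ∀x cat(x)
    print(Exists((x, y), partOf([x, y]))) # truth degree of ∃x,y partOf(x,y)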

Learning with LTN
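
The original README also illustrates learning with figures. As a hedged sketch (it reuses the assumed operator names above; the keras model, data and hyperparameters are illustrative, not those of the examples): learning maximizes the satisfiability of a knowledge base of formulas, i.e. it minimizes one minus the aggregated truth degree.

    # A binary classifier grounded as a trainable predicate (cf. the binary classification example).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="elu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs a truth degree in [0,1]
    ])
    A = ltn.Predicate(model)
    Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())

    data_pos = ltn.Variable("data_pos", np.random.rand(50, 2))       # positive examples
    data_neg = ltn.Variable("data_neg", np.random.rand(50, 2) + 2.)  # negative examples

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    for epoch in range(200):
        with tf.GradientTape() as tape:
            # Satisfiability of the knowledge base: ∀x_pos A(x_pos) ∧ ∀x_neg ¬A(x_neg)
            sat = And(Forall(data_pos, A(data_pos)), Forall(data_neg, Not(A(data_neg))))
            loss = 1. - sat
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))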

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

LTN has been developed thanks to active contributions and discussions with the following people (in alphabetical order):

  • Alessandro Daniele (FBK)
  • Artur d’Avila Garcez (City)
  • Benedikt Wagner (City)
  • Emile van Krieken (VU Amsterdam)
  • Francesco Giannini (UniSiena)
  • Giuseppe Marra (UniSiena)
  • Ivan Donadello (FBK)
  • Lucas Bechberger (UniOsnabruck)
  • Luciano Serafini (FBK)
  • Marco Gori (UniSiena)
  • Michael Spranger (Sony AI)
  • Michelangelo Diligenti (UniSiena)
  • Samy Badreddine (Sony AI)

Issues
  • ValueError: mask cannot be scalar.


    When I try to define an ltn.variable, the following error is returned:

        <ipython-input-11-51fc9a0fab79>:5 axioms *
            bb12_relation = ltn.variable("P",features[labels_position=="P"])
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:600 _slice_helper
            return boolean_mask(tensor=tensor, mask=slice_spec)
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:1365 boolean_mask
            raise ValueError("mask cannot be scalar.")
    
        ValueError: mask cannot be scalar.
    

    Based on the code of multiclass-multilabel.ipynb, I declare the first variable in the axioms function, which returns the error above: ltn.variable("P",features[labels_position=="P"])

    opened by MilenaTenorio 9
  • ltnw: running a knowledgebase without training should be possible

    import logging; logging.basicConfig(level=logging.INFO)
    
    import logictensornetworks_wrapper as ltnw
    import tensorflow as tf
    
    ltnw.constant("c",[2.1,3])
    ltnw.constant("d",[3.4,1.5])
    ltnw.function("f",4,2,fun_definition=lambda x,y:x-y)
    mu = tf.constant([2.,3.])
    ltnw.predicate("P",2,pred_definition=lambda x:tf.exp(-tf.reduce_sum(tf.square(x-mu))))
    
    ltnw.formula("P(c)")
    
    ltnw.initialize_knowledgebase()
    
    with tf.Session() as sess:
        print(sess.run(ltnw.ask("P(c)")))
        print(sess.run(ltnw.ask("P(d)")))
        print(sess.run(ltnw.ask("P(f(c,d))")))
    

    Throws ValueError: No variables to optimize.

    bug 
    opened by mspranger 3
  • Lambda for functions needs to be implemented using the Functional API of TF

    Here is what I did:

    import logictensornetworks as ltn
    f1 = ltn.Function.Lambda(lambda args: args[0]-args[1])
    c1 = ltn.constant([2.1,3])
    c2 = ltn.constant([4.5,0.8])
    print(f1([c1,c2])) # multiple arguments are passed as a list
    

    And I get this:

    WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'list'> input: [<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[2.1, 3. ]], dtype=float32)>, <tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[4.5, 0.8]], dtype=float32)>]
    Consider rewriting this model with the Functional API.
    tf.Tensor([-2.4  2.2], shape=(2,), dtype=float32)
    

    Here are the versions:

    tensorflow=2.4.0
    ltn = directly from this repo today (24 Jan 2021)
    
    opened by thoth291 2
  • Check of number_of_features_or_feed of ltn.variable


    opened by ivanDonadello 2
  • ltnw.term: evaluating a term after redeclaring its constants, variables or functions


    The implementation of ltnw.term is incompatible with the redeclaration of constants, variables or functions.

    ltnw.term looks at the result value previously stored in the global dictionary ltnw.TERMS rather than reconstructing the term.

    For instance, the code:

    ltnw.variable('?x',[[3.0,5.0],[2.0,6.0],[3.0,9.0]])
    print('1st call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    
    ltnw.variable('?x',[[3.0,10.0],[1.0,6.0]])
    print('2nd call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    

    outputs:

    1st call
    value of variable:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    2nd call
    value of variable:
    [[ 3. 10.]
     [ 1.  6.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    opened by sbadredd 2
  • Error in the axioms of the clustering example


    Following issues #17 and #20, commit 578d7bcaa35c797ac1c94cf322f0a6ec524beaa2 updated the axioms in the clustering example.

    It introduced a typo in the masks. In pseudo-code the rules with masks should be:

    for all x,y s.t. close_threshold > distance(x,y): x,y belong to the same cluster
    for all x,y s.t. distance(x,y) > distant_threshold: x,y belong to different cluster
    

    However, the rules have been written:

    for all x,y s.t.  distance(x,y) > close_threshold: x,y belong to the same cluster
    for all x,y s.t. distant_threshold > distance(x,y) : x,y belong to different cluster
    

    Basically, the operands have been swapped. This explains why the latest results were not as good as the previous ones. This is easy to fix; the operands just have to be swapped back.

    bug 
    opened by sbadredd 1
  • Add runtime Type Checking when constructing expressions


    Issue #19 defined classes for Term and Formula following the usual definitions of FOL (see also)

    This can be used to type-check the arguments of various functions:

    • The inputs of predicates and functions are instances of Term,
    • The expressions in connectives and quantifier operations are instances of Formula,
    • The masks in quantifiers are instances of Formula.

    This is already indicated in the type hints. Adding runtime validation would make for a stronger API and ensure that the user uses the different LTN classes correctly.
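
    A minimal sketch of what such a runtime check could look like (the helper name is hypothetical, and the Term class is assumed to be the one introduced in issue #19):

    def _assert_all_terms(args):
        # Hypothetical helper, not part of the API: validate predicate/function inputs at runtime.
        for arg in args:
            if not isinstance(arg, Term):
                raise TypeError(f"expected an ltn Term, got {type(arg).__name__}")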

    enhancement 
    opened by sbadredd 0
  • Parent classes for Terms and Formulas


    Going further than issue #16, we can define classes for Term and Formula.

    • Variable and Constant would be subclasses of Term
    • The output of a Function is a Term
    • Proposition is a subclass of Formula
    • The output of a Predicate is a Formula, and so is the result of connective and quantifiers operations

    This can in turn be used for type checking the arguments of various functions:

    • The inputs of predicates and functions must be instances of Term
    • The inputs of connective and quantifier operations must be instances of Formula

    This could be useful for helping the user with better error messages and debugging.

    enhancement 
    opened by sbadredd 0
  • Add a constructor for variables made from trainable constants


    A variable can be instantiated using two different types of objects:

    • A value (numpy, python list, ...) that will be fed in a tf.constant (the variable refers to a new object).
    • A tf.Tensor instance that will be used directly as the variable (the variable refers to the same object).

    The latter is useful when the variable denotes a sequence of trainable constants.

    c1 = ltn.constant([2.1,3], trainable=True)
    c2 = ltn.constant([4.5,0.8], trainable=True)
    
    with tf.GradientTape() as tape:
        # Notice that the assignment must be done within a tf.GradientTape.
        # TensorFlow will keep track of the gradients between c1/c2 and x.
        x = ltn.variable("x",tf.stack([c1,c2]))
        res = P2(x)
    tape.gradient(res,c1).numpy() # the tape keeps track of gradients between P2(x), x and c1
    

    The assignment must be done within a tf.GradientTape. This is explained in the tutorials, but a user could easily miss this information.

    I propose adding a constructor for variables made from constants that must explicitly take the tf.GradientTape instance as an argument. That way, the requirement is harder to miss.
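
    A hedged sketch of what such a constructor could look like (the function name and signature are hypothetical, not part of the current API):

    import tensorflow as tf
    import logictensornetworks as ltn

    def variable_from_constants(label, constants, tape):
        # Requiring the tape makes it explicit that stacking trainable constants
        # must happen inside the tf.GradientTape that will compute the gradients.
        if not isinstance(tape, tf.GradientTape):
            raise TypeError("a tf.GradientTape instance is required")
        return ltn.variable(label, tf.stack(constants))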

    enhancement 
    opened by sbadredd 0
  • Support masks using LTN syntax instead of TensorFlow operations


    To use a guarded quantifier in an LTN sentence, the user must use lambda functions in the middle of the usual LTN syntax. Moreover, the mask itself is written with TensorFlow operations, which adds to the confusion.

    For example, in the MNIST single-digit addition example, we have the following mask:

    exists(...,...,
        mask_vars=[d1,d2,labels_z],
        mask_fn=lambda vars: tf.equal(vars[0]+vars[1],vars[2])
    )
    

    If we wrote the mask in LTN syntax instead, it would read:

    exists(...,...,
        mask= Equal([Add([d1,d2]),labels_z])
    )
    

    I believe the latter is clearer and more coherent within an LTN expression.

    This implies that the user must define extra LTN symbols for Equal and Add. I believe this is worth it for the sake of clarity. If the user would rather not do that, they can still reuse the lambda function inside a Mask predicate:

    Mask = ltn.Predicate.Lambda(lambda vars: tf.equal(vars[0]+vars[1],vars[2]))
    ...
    exists(...,...,
        mask=Mask([d1,d2,labels_z])
    )
    

    The mask is still written using an LTN symbol and doesn't require changing the code much compared to the original approach.

    enhancement 
    opened by sbadredd 0
  • Create classes for Variable, Constant and Proposition


    At the moment, LTN implements most expressions using tf.Tensor objects with some added dynamic attributes.

    For example, for a non-trainable LTN constant, the logic is the following (simplified):

    def constant(value):
        result = tf.constant(value)
        result.active_doms = []
        return result
    

    This makes the system easy to break and difficult to debug. When copying or operating on the constant, the user might not realize that a new tensor is created and the active_doms attribute is lost.

    I propose to separate the logic of LTN from the logic of TensorFlow and use distinct types. Something like:

    class Constant:
        def __init__(self, value):
            self.tensor = tf.constant(value)
            self.active_doms = []
    

    This implies that LTN predicates and functions will have to be adapted to work with constant.tensor, variable.tensor, ...

    enhancement 
    opened by sbadredd 0
  • Add an ltn.Predicate constructor that takes in a logits model

    Constructors for ltn.Predicate

    The constructor for ltn.Predicate accepts a model that outputs one truth degree in [0,1].

    class ModelThatOutputsATruthDegree(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
            self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]
    
        def call(self, x):
            x = self.dense1(x)
            return self.dense2(x)
    
    model = ModelThatOutputsATruthDegree()
    P1 = ltn.Predicate(model)
    P1(x) # -> call with a ltn Variable
    

    Issue

    Many models output several values simultaneously. For example, a model for the predicate P2 classifying images x into n classes type_1, ..., type_n will likely output n logits using the same hidden layers.

    Ultimately, we would expect to call the corresponding predicate using the syntax P2(x,type). This requires two additional steps:

    1. Transforming the logits into values in [0,1],
    2. Indexing the class using the term type.

    Because this is a common use case, we implemented the function ltn.utils.LogitsToPredicateModel for convenience. It is used in some of the examples (cf. the MNIST digit addition example). The syntax is:

    logits_model(x) # how to call `logits_model`
    P2 = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model, single_label=True))
    P2([x,type]) # how to call the predicate
    

    It automatically adds a final argument for class indexing and performs a sigmoid or softmax activation depending on the parameter single_label.

    Proposition

    It would be more elegant to have the functionality of creating a predicate from a logits model as a class constructor for ltn.Predicate.

    A suggested syntax is:

    P2 = ltn.Predicate.FromLogits(logits_model, activation_function="softmax", with_class_indexing=True)
    
    • The functionality comes as a new class constructor,
    • The activation function is more explicit than the single_label parameter in ltn.utils.LogitsToPredicateModel,
    • with_class_indexing=False still allows creating predicates of the form P1(x), as shown above.

    Changes to the rest of the API

    The proposition adds a new constructor but shouldn't change any other method of ltn.Predicate or any framework method in general.
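
    For illustration, a hedged sketch of how the proposed constructor could wrap a logits model. This is not the actual ltn.utils.LogitsToPredicateModel implementation; it assumes the class term holds integer class indices of shape (batch,).

    import tensorflow as tf

    class LogitsToTruthDegree(tf.keras.Model):
        """Hypothetical wrapper: turn a logits model into a model outputting truth degrees."""
        def __init__(self, logits_model, activation_function="softmax", with_class_indexing=True):
            super().__init__()
            self.logits_model = logits_model
            self.activation = tf.nn.softmax if activation_function == "softmax" else tf.math.sigmoid
            self.with_class_indexing = with_class_indexing

        def call(self, inputs):
            if not self.with_class_indexing:
                return self.activation(self.logits_model(inputs))   # P1(x) style
            x, class_idx = inputs                                    # P2([x, type]) style
            probs = self.activation(self.logits_model(x))            # shape (batch, n_classes)
            onehot = tf.one_hot(tf.cast(class_idx, tf.int32), depth=tf.shape(probs)[-1])
            return tf.reduce_sum(probs * onehot, axis=-1)            # truth degree of the indexed class

    # ltn.Predicate.FromLogits could then be a thin classmethod:
    # return cls(LogitsToTruthDegree(logits_model, activation_function, with_class_indexing))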

    enhancement 
    opened by sbadredd 1
  • Weighted connective operators


    Hello,

    In my project, I needed to use weighted connective fuzzy logic operators. So, I implemented a class that makes it possible to add weights to classic fuzzy operators, based on this paper: https://www.researchgate.net/publication/2610015_The_Weighting_Issue_in_Fuzzy_Logic

    I think it may be useful for other people, or could even be added to the ltn operators, so here is my code:

    from typing import Callable

    import logictensornetworks as ltn  # or `import ltn`, depending on the installed package name

    class WeightedConnective:
        """Class to compute a weighted connective fuzzy operator."""
    
        def __init__(self, single_connective: Callable = ltn.fuzzy_ops.And_Prod()):
            """Initialize WeightedConnective.
    
            Parameters
            ----------
            single_connective : Callable
                Function to compute the binary operation
            """
            self.single_connective = single_connective
    
        def __call__(self, *args: float, weights: list[float] | None = None) -> float:
            """Call function of WeightedConnective.
    
            Parameters
            ----------
            *args : float
                Truth values whose operation should be computed
            weights : list[float] | None
                List of weights for the predicates, None if all predicates should be weighted
                equally, default: None
    
            Returns
            -------
            float:
                Truth value of weighted connective operation between predicates
    
            Raises
            ------
            ValueError
                If no predicate was provided
            ValueError
                If the number of predicates and the number of weights are different
            """
            n = len(args)
            if n == 0:
                raise ValueError("No predicate was found")
            if n == 1:
                return args[0]
            if weights is None:
                weights = [1. / n for _ in range(n)]
            if len(weights) != n:
                raise ValueError(
                    f"Numbers of predicates and weights should be equal : {n} predicates and "
                    f"{len(weights)} weights were found")
    
            s = sum(weights)
            if s != 0:
                weights = [elt / s for elt in weights]
    
            w = max(weights)
            res = (weights[0] / w) * args[0]
            for i, x in enumerate(args):
                if i != 0:
                    res = self.single_connective(res, (weights[i] / w) * args[i])
            return res
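
    For illustration, a possible usage of the class (hedged: it assumes the fuzzy operators accept plain Python floats, since they are thin wrappers over TensorFlow ops):

    weighted_and = WeightedConnective(ltn.fuzzy_ops.And_Prod())
    # weight the first truth value twice as much as the second
    print(weighted_and(0.9, 0.4, weights=[2.0, 1.0]))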
    
    enhancement 
    opened by maelle101 1
  • Saving LTN model


    Hello,

    I am working on a project using LTN. I train a model with several neural networks (the number varies between executions). Is there an easy way to save and then load an entire LTN model? Or should I call the TensorFlow saving function several times and store the other information (for example, which Predicate corresponds to each NN) in a custom way?

    Thanks in advance for any answer, and thanks for this great framework.

    opened by maelle101 3
  • Imbalanced classification


    First, thank you for this great framework. My question is: what is the best way to define variables for imbalanced classification (with a lot of categories), where some categories might be empty in a given batch? Thank you!

    opened by mpourvali 3
  • Allow variables to be permanently `diag`ed

    Diagonal quantification

    Given 2 (or more) variables, ltn.diag makes it possible to express statements about specific pairs (or tuples) of the variables, such that the i-th tuple contains the i-th instances of the variables.

    In simplified pseudo-code, the usual quantification would compute:

    for x_i in x:
        for y_j in y:
            results.append(P(x_i,y_j))
    aggregate(results)
    

    In contrast, diagonal quantification would compute:

    for x_i, y_i in zip(x,y):
        results.append(P(x_i,y_i))
    aggregate(results)
    

    In LTN code, given two variables x1 and x2, we use diagonal quantification as follows:

    x1 = ltn.Variable("x1",np.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.rand(10,2)) # 10 values in R^2
    P = ltn.Predicate(...)
    P([x1,x2]) # -> returns 10x10 values
    ltn.diag(x1,x2)
    P([x1,x2]) # -> returns only 10 "zipped" values
    ltn.undiag(x1,x2)
    P([x1,x2]) # -> returns 10x10 values
    

    See also the second tutorial.

    Issue

    At the moment, every quantifier automatically calls ltn.undiag after the aggregation is performed, so that the variables keep their normal behavior outside of the formula. Therefore, it is recommended to use ltn.diag only in quantified formulas as follows.

    Forall(ltn.diag(x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of 10x10 values
    

    However, there are cases where the second (normal) behavior for the two variables x1 and x2 is never useful. Some variables are designed from the start to be used as paired, zipped variables. In that case, forcing the user to re-use the keyword ltn.diag at every quantification is redundant.

    Proposition

    Define a new keyword ltn.diag_lock which can be used once at the instantiation of the variables, and will force the diag behavior in every subsequent quantification. ltn.undiag will not be called after an aggregation.

    x1 = ltn.Variable("x1",np.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.rand(10,2)) # 10 values in R^2
    ltn.diag_lock([x1,x2])
    P([x1,x2]) # -> returns only 10 "zipped" values
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> still returns an aggregate of only 10 "zipped values"
    

    Possibly, we can add an ltn.undiag_lock too.

    The implementation details remain to be defined but shouldn't change the rest of the API.

    enhancement 
    opened by sbadredd 0
  • Automated translation of TPTP problems to LTN axioms

    Hello,

    We're trying to automatically translate TPTP problems into axioms computable by LTN. Errors occur when applying the gradient tape in the training step, because variables are initialized outside of the tape scope, as described in the tutorial notebooks. Is there by any chance already an implementation (or one in the works) that translates a logic problem (written in some intermediate language) into LTN-readable axioms?

    Best, Philip

    opened by phjlip 1
Releases

  • v2.0