Overview

anatome

Ἀνατομή is a PyTorch library to analyze the internal representations of neural networks

This project is under active development and the codebase is subject to change.

Installation

anatome requires:

Python>=3.9.0
PyTorch>=1.9.0
torchvision>=0.10.0

After installing PyTorch, install anatome as follows:

pip install -U git+https://github.com/moskomule/anatome
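
To quickly verify the installation, importing the package should succeed (a minimal check; the printed path will differ per environment):

import anatome
print(anatome)  # e.g. <module 'anatome' from '.../site-packages/anatome/__init__.py'>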

Available Tools

Representation Similarity

To measure the similarity of learned representations, anatome.SimilarityHook is a useful tool. Currently, CCA-based measures (SVCCA and PWCCA) and linear CKA are implemented.

import torch
from torchvision.models import resnet18
from anatome import SimilarityHook

model = resnet18()
hook1 = SimilarityHook(model, "layer3.0.conv1")
hook2 = SimilarityHook(model, "layer3.0.conv2")
model.eval()
with torch.no_grad():
    model(data[0])  # data[0]: a batch of input images, e.g. from a DataLoader
# downsampling the feature maps to (size, size) may be helpful
hook1.distance(hook2, size=8)
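
The size argument downsamples convolutional feature maps to (size, size) before comparison. This keeps the number of features below the number of input examples, which matters because CCA-style measures degenerate when features outnumber examples (see the Size Check discussion in the comments below).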

Loss Landscape Visualization

import torch.nn.functional as F
from torchvision.models import resnet18
from matplotlib.pyplot import imshow
from anatome import landscape2d

# data: an (input, target) batch, e.g. from a DataLoader
x, y, z = landscape2d(resnet18(),
                      data,
                      F.cross_entropy,
                      x_range=(-1, 1),
                      y_range=(-1, 1),
                      step_size=0.1)
imshow(z)
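
The returned x, y, z are the grid coordinates of the two perturbation directions and the loss evaluated at each grid point, so a contour plot is a natural alternative to imshow. A minimal sketch, assuming all three are 2-D arrays of equal shape:

import matplotlib.pyplot as plt

plt.contourf(x, y, z, levels=50)  # loss surface around the trained parameters
plt.colorbar(label='loss')
plt.xlabel('direction 1')
plt.ylabel('direction 2')
plt.show()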

Fourier Analysis

  • Yin et al. NeurIPS 2019, etc.

import torch.nn.functional as F
from torchvision.models import resnet18
from matplotlib.pyplot import imshow
from anatome import fourier_map

# `fmap` is used instead of `map` to avoid shadowing the Python builtin
fmap = fourier_map(resnet18(),
                   data,
                   F.cross_entropy,
                   norm=4)
imshow(fmap)
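
Following Yin et al. (NeurIPS 2019), the resulting map should be read as a frequency-sensitivity heat map: each entry reflects how strongly the model's loss (here F.cross_entropy) responds to a Fourier-basis perturbation of the input at the corresponding frequency.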

Citation

If you use this implementation in your research, please cite as:

@software{hataya2020anatome,
    author={Ryuichiro Hataya},
    title={anatome, a PyTorch library to analyze internal representation of neural networks},
    url={https://github.com/moskomule/anatome},
    year={2020}
}
Comments
  • CCA is very small between a random net vs a pretrained one, bug?

    I am getting this issue:

    import anatome
    print(anatome)
    # from anatome import CCAHook
    from anatome import SimilarityHook
    model = resnet18(pretrained=True)
    random_model = resnet18()
    # random_model = resnet18().cuda()
    # hook1 = CCAHook(model, "layer1.0.conv1")
    # hook2 = CCAHook(random_model, "layer1.0.conv1")
    cxa_dist_type = 'pwcca'
    layer_name = "layer1.0.conv1"
    hook1 = SimilarityHook(model, layer_name, cxa_dist_type)
    hook2 = SimilarityHook(random_model, layer_name, cxa_dist_type)
    with torch.no_grad():
        model(data[0])
        random_model(data[0])
    distance_btw_nets = hook1.distance(hook2, size=8)
    print(f'{distance_btw_nets=}')
    distance_btw_nets = hook1.distance(hook2, size=None)
    print(f'{distance_btw_nets=}')
    <module 'anatome' from '/Users/brando/anaconda3/envs/metalearning/lib/python3.9/site-packages/anatome/__init__.py'>
    distance_btw_nets=0.3089657425880432
    distance_btw_nets=-2.468004822731018e-08
    

    The second call is supposed to use the full features, but the distance is much smaller, whereas I expected it to increase by a lot since we use more information when we don't downsample.

    Is this a bug?

    opened by brando90 13
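
    A likely cause, given the Size Check discussion below: with size=None the flattened convolutional features vastly outnumber the examples in data[0], and in that regime CCA correlations saturate near 1 regardless of the networks, so the reported distance (1 minus a CCA statistic) collapses toward 0. Downsampling to 8x8 keeps the feature count below the sample count, which is why the first call returns a more meaningful value.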
  • do you preprocess the matrices for us?

    I just noticed these two paragraphs in the papers I read, and was wondering whether you center the matrices or whether the user has to center them beforehand for anatome to work.

    [two screenshots: the relevant paragraphs from the papers]

    opened by brando90 7
  • Do hooks create unexpected side effects in anatome?

    I will try really hard to make this my last question, and I won't bother you again. Do we need to do some sort of clearing after we call hook.distance in anatome (or deep-copy the models) for the code to work properly?

    e.g. modification based on your tutorial:

    def cxa_dist(mdl1: nn.Module, mdl2: nn.Module, X: Tensor, layer_name: str,
                 downsample_size: Optional[str] = None, iters: int = 1, cxa_dist_type: str = 'pwcca') -> float:
        import copy
        mdl1 = copy.deepcopy(mdl1)
        mdl2 = copy.deepcopy(mdl2)
        # get sim/dis functions
        hook1 = SimilarityHook(mdl1, layer_name, cxa_dist_type)
        hook2 = SimilarityHook(mdl2, layer_name, cxa_dist_type)
        mdl1.eval()
        mdl2.eval()
    for _ in range(iters):  # multiple passes may make sense if the NN is stochastic, e.g. BN or dropout layers
            mdl1(X)
            mdl2(X)
        # - size: size of the feature map after downsampling
        dist = hook1.distance(hook2, size=downsample_size)
        # - remove hook, to make sure code stops being stateful (I hope)
        remove_hook(mdl1, hook1)
        remove_hook(mdl2, hook2)
        return float(dist)
    
    def remove_hook(mdl: nn.Module, hook):
        """
        ref: https://github.com/pytorch/pytorch/issues/5037
        """
        handle = mdl.register_forward_hook(hook)
        handle.remove()
    
    opened by brando90 5
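
    Note that the remove_hook helper above registers a brand-new hook and then removes that one, leaving the original hook in place. In plain PyTorch, it is the handle returned by register_forward_hook at registration time that must be removed. A minimal sketch of the usual pattern (the record_activation callback is hypothetical, not anatome API):

    import torch
    import torch.nn as nn

    activations = []

    def record_activation(module, inputs, output):
        # hypothetical callback: store the layer's output
        activations.append(output.detach())

    layer = nn.Linear(4, 4)
    handle = layer.register_forward_hook(record_activation)  # keep this handle
    layer(torch.randn(2, 4))
    handle.remove()  # detaches exactly this hook
    # last resort if the handle was lost (relies on a private attribute):
    # layer._forward_hooks.clear()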
  • How should anatome be used if we are comparing nets during training?

    Since the code attaches hooks and seems stateful (it uses objects rather than pure functions), how should one use anatome to compute CCA similarities etc. correctly?

    Is using deep copy the solution? Or deleting the hook after every use of the similarity function?:

    def cxa_dist(mdl1: nn.Module, mdl2: nn.Module, X: Tensor, layer_name: str,
                 downsample_size: Optional[str] = None, iters: int = 1, cxa_dist_type: str = 'pwcca') -> float:
        import copy
        mdl1 = copy.deepcopy(mdl1)
        mdl2 = copy.deepcopy(mdl2)
        # print(cca_size)
        # meta_batch [T, N*K, CHW], [T, K, D]
        from anatome import SimilarityHook
        # get sim/dis functions
        hook1 = SimilarityHook(mdl1, layer_name, cxa_dist_type)
        hook2 = SimilarityHook(mdl2, layer_name, cxa_dist_type)
        mdl1.eval()
        mdl2.eval()
    for _ in range(iters):  # multiple passes may make sense if the NN is stochastic, e.g. BN or dropout layers
            mdl1(X)
            mdl2(X)
        # - size: size of the feature map after downsampling
        dist = hook1.distance(hook2, size=downsample_size)
        return float(dist)
    
    opened by brando90 4
  • Size Check

    Hello ~ Thank you for the implementation! It is amazing and helps me a lot!

    I have a question: in CCA, you seem to check x.size(0) < x.size(1). I believe the implementation is correct, for I have seen some similar checks in other implementations of CCA. However, I don't understand the rationale behind this. Could you explain a bit? Thanks! Also, something related (it could be other reasons), my data has a larger feature size (x.size(1)) than the number of examples (x.size(0)), and sometimes I get this error: The algorithm failed to converge because the input matrix is ill-conditioned or has too many repeated singular values. I was wondering whether they were related? Could you provide any insight on this?

    opened by Yupei-Du 3
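
    On the rationale for the x.size(0) < x.size(1) check: CCA is only informative when there are more examples (rows) than features (columns). With x of shape (n, p) and p >= n, the centered columns of x span the whole sample space, so every canonical correlation comes out trivially close to 1; the same rank deficiency makes the underlying decompositions ill-conditioned, which is the likely source of the convergence error. A small demonstration of the degenerate regime (cca_by_svd is the helper imported in a later comment):

    import torch
    from anatome.similarity import cca_by_svd

    n, p = 32, 128  # fewer examples than features
    x = torch.randn(n, p, dtype=torch.float64)
    y = torch.randn(n, p, dtype=torch.float64)
    a, b, diag = cca_by_svd(x, y)  # diag holds the canonical correlations
    print(diag[:5])  # ~1.0 everywhere despite the inputs being random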
  • How do we run the jupyter notebook example?

    Got error:

    FileNotFoundError: [Errno 2] No such file or directory: '/Users/brando/.torch/data/imagenet/val'
    

    Is it possible to download some dataset for testing anatome so that it doesn't throw an error?

    Perhaps a colab example is better?

    a start: https://colab.research.google.com/drive/1GrhWrWFPmlc6kmxc0TJY0Nb6qOBBgjzX?usp=sharing

    opened by brando90 3
  • error on import

    Ran:

    import anatome

    Traceback (most recent call last):
      File "/home/cody/miniconda3/envs/RepDist/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "", line 1, in <module>
        import anatome
      File "/home/cody/miniconda3/envs/RepDist/lib/python3.7/site-packages/anatome/__init__.py", line 1, in <module>
        from .similarity import SimilarityHook
      File "", line 1
        (x.size(0)=)
                   ^
    SyntaxError: invalid syntax

    I have anatome==0.0.1. I can provide my environment info if needed, but the error looks pretty clear.

    opened by codestar12 3
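
    For context: the {x.size(0)=} f-string syntax that triggers this SyntaxError was added in Python 3.8, and the traceback shows a Python 3.7 environment, so upgrading Python (the requirements above say Python>=3.9.0) resolves the import error.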
  • Correcting CKA similarity to output distance by subtracting 1

    This PR corrects what seems to be anatome returning a similarity even though the function name says distance.

    Including the table from the original CKA paper to check that my suggestion is indeed correct:

    [screenshot: table from the CKA paper]

    paper link: http://proceedings.mlr.press/v97/kornblith19a.html

    opened by brando90 3
  • Is the way to calculate similarity just by doing 1 minus the values your library gives?

    Asking because:

    1. the original CKA paper gives the values as similarities, not distances, so I don't understand why the library here gives distances;
    2. I want to make sure that doing 1 - value is valid and that I don't make mistakes.

    I see that you have a bunch of 1 - value except for CKA; is that a bug?

    https://github.com/moskomule/anatome/blob/57f34fa796ffcff0ba37b5fa5142018e9f6fde61/anatome/similarity.py#L154
    https://github.com/moskomule/anatome/blob/57f34fa796ffcff0ba37b5fa5142018e9f6fde61/anatome/similarity.py#L133
    https://github.com/moskomule/anatome/blob/57f34fa796ffcff0ba37b5fa5142018e9f6fde61/anatome/similarity.py#L203

    opened by brando90 3
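
    For reference, linear CKA itself is a similarity in [0, 1] (1 means identical representations up to a linear transformation), so the distance form is 1 - CKA. A minimal sketch of linear CKA following Kornblith et al. (ICML 2019), written independently of anatome's internals:

    import torch

    def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: [n, p], y: [n, q]; center the features first
        x = x - x.mean(dim=0, keepdim=True)
        y = y - y.mean(dim=0, keepdim=True)
        # ||x^T y||_F^2 / (||x^T x||_F * ||y^T y||_F)
        return (x.t() @ y).norm() ** 2 / ((x.t() @ x).norm() * (y.t() @ y).norm())

    x, y = torch.randn(100, 20), torch.randn(100, 30)
    similarity = linear_cka(x, y)
    distance = 1 - similarity  # the "1 -" form discussed above
    print(similarity.item(), distance.item())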
  • cca code for GPU not working

    small example:

    import torch
    import torch.nn as nn
    from anatome import SimilarityHook
    
    from collections import OrderedDict
    
    #
    Din, Dout = 1, 1
    mdl1 = nn.Sequential(OrderedDict([
        ('fc1_l1', nn.Linear(Din, Dout)),
        ('out', nn.SELU()),
        ('fc2_l2', nn.Linear(Din, Dout)),
    ]))
    mdl2 = nn.Sequential(OrderedDict([
        ('fc1_l1', nn.Linear(Din, Dout)),
        ('out', nn.SELU()),
        ('fc2_l2', nn.Linear(Din, Dout)),
    ]))
    
    print(f'is cuda available: {torch.cuda.is_available()}')
    
    with torch.no_grad():
        mu = torch.zeros(Din)
        # std =  1.25e-2
        std = 10
        noise = torch.distributions.normal.Normal(loc=mu, scale=std).sample()
        # mdl2.fc1_l1.weight.fill_(50.0)
        # mdl2.fc1_l1.bias.fill_(50.0)
        mdl2.fc1_l1.weight += noise
        mdl2.fc1_l1.bias += noise
    
    if torch.cuda.is_available():
        mdl1 = mdl1.cuda()
        mdl2 = mdl2.cuda()
    
    hook1 = SimilarityHook(mdl1, "fc1_l1")
    hook2 = SimilarityHook(mdl2, "fc1_l1")
    mdl1.eval()
    mdl2.eval()
    
    # params for doing "good" CCA
    iters = 10
    num_samples_per_task = 500
    size = 8
    # start CCA comparison
    lb, ub = -1, 1
    
    for _ in range(iters):
        x = torch.torch.distributions.Uniform(low=-1, high=1).sample((num_samples_per_task, 1))
        if torch.cuda.is_available():
            x = x.cuda()
        y1 = mdl1(x)
        y2 = mdl2(x)
        print(f'y1 - y2 = {(y1-y2).norm(2)}')
    print('about to do cca')
    dist = hook1.distance(hook2, size=size)
    print('cca done')
    print(f'cca dist = {dist}')
    print('--> Done!\a')
    

    but it always has a segmentation error:

    (automl-meta-learning) miranda9~/automl-meta-learning $ python test_cca_gpu.py 
    is cuda available: True
    y1 - y2 = 4.561897277832031
    y1 - y2 = 3.7458858489990234
    y1 - y2 = 3.8464999198913574
    y1 - y2 = 4.947702407836914
    y1 - y2 = 5.404015064239502
    y1 - y2 = 4.85843563079834
    y1 - y2 = 4.000360488891602
    y1 - y2 = 4.194643020629883
    y1 - y2 = 4.894904613494873
    y1 - y2 = 4.7721710205078125
    about to do cca
    Segmentation fault
    

    Why does this happen, and how can it be fixed?

    opened by brando90 3
  • potential improvement for CNN size=None (or bug?)

    I noticed that for size=None you treat each activation as a neuron.

    It is possible to do this instead:

            # - convolution layer [M, C, H, W]
            if size is None:
                # option A - no downsampling: [M, C, H, W] -> [M, C, H*W]
                # each spatial position of a channel is an (effective) neuron
                # flatten(2) -> flatten from dim 2 to -1 (end)
                self_tensor = self_tensor.flatten(start_dim=2, end_dim=-1).contiguous()
                other_tensor = other_tensor.flatten(start_dim=2, end_dim=-1).contiguous()

                # option B (proposed improvement): [M, C, H, W] -> [M, C*H*W]
                self_tensor = self_tensor.flatten(start_dim=1, end_dim=-1).contiguous()
                other_tensor = other_tensor.flatten(start_dim=1, end_dim=-1).contiguous()
                return self.cca_function(self_tensor, other_tensor).item()
    

    Original paper: [screenshot of the relevant excerpt]

    I am aware that you later compare them by looping through each data point, which is not exactly equivalent to the above, though that is a small nuance. That approach assumes C is the effective size of the data set and that each activation in the spatial dimension is a filter. But usually a neuron (vector) is considered to have size with respect to the data set, so it's usually [M, CHW] or [MHW, C]. So I'm unsure why using the filter size as the effective size of the data set for CCA is justified.

    I will go with [MHW, C], since I think defining a neuron per filter makes more sense, and treating each patch seen by a filter as a data point makes more sense too. I think, due to the nature of CCA, this is fine to apply even across layers. If you want to know why, I'm happy to copy-paste that section of the background of my paper here.

    see:

    [screenshot: excerpt from the paper's background section]

    Thanks for your great library and feedback!

    opened by brando90 2
  • bug in lincka?

    Isn't a "1 -" missing?

    https://github.com/moskomule/anatome/blob/393b36df77631590be7f4d23bff5436fa392dc0e/anatome/distance.py#L212

    https://github.com/moskomule/anatome/pull/7

    [screenshot: linear CKA formula from the paper]
    opened by brando90 1
  • why orthonormalize the cca combination instead of the cca vectors/canonical neurons?

    Why does anatome compute pwcca by orthonormalizing the a vector instead of the CCA vector x_tilde = x @ a? e.g.

    https://github.com/moskomule/anatome/blob/393b36df77631590be7f4d23bff5436fa392dc0e/anatome/distance.py#L161

    The authors do the latter, i.e. orthonormalize x_tilde, not a:

    [screenshot: excerpt from the PWCCA paper]

    https://arxiv.org/abs/1806.05759

    current anatome code:

    def pwcca_distance(x: Tensor,
                       y: Tensor,
                       backend: str
                       ) -> Tensor:
        """ Projection Weighted CCA proposed in Morcos et al. 2018.
        Args:
            x: input tensor of Shape DxH, where D>H
            y: input tensor of Shape DxW, where D>W
            backend: svd or qr
        Returns:
        """
    
        a, b, diag = cca(x, y, backend)
        a, _ = torch.linalg.qr(a)  # reorthonormalize
        alpha = (x @ a).abs_().sum(dim=0)
        alpha /= alpha.sum()
        return 1 - alpha @ diag
    

    related: https://stackoverflow.com/questions/69993768/how-does-one-implement-pwcca-in-pytorch-match-the-original-pwcca-implemented-in

    opened by brando90 6
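
    For comparison, a sketch of the reading proposed above: orthonormalize the canonical vectors x_tilde = x @ a rather than a. This follows the asker's interpretation of Morcos et al., not anatome's implementation, and substitutes cca_by_svd (imported elsewhere in these comments) for the cca helper in the quoted snippet:

    import torch
    from torch import Tensor
    from anatome.similarity import cca_by_svd

    def pwcca_distance_tilde(x: Tensor, y: Tensor) -> Tensor:
        a, b, diag = cca_by_svd(x, y)
        x_tilde = x @ a  # canonical vectors ("canonical neurons")
        x_tilde, _ = torch.linalg.qr(x_tilde)  # reorthonormalize x_tilde, per the paper
        alpha = (x_tilde.t() @ x).abs().sum(dim=1)  # |<x_tilde_i, x_j>| summed over neurons j
        alpha = alpha / alpha.sum()
        return 1 - alpha @ diag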
  • Bug in cca_by_svd computation

    I am comparing two totally random, different matrices, so their CCA shouldn't be high. Google's svcca library gives me much lower CCA values, but anatome gives CCA values of 1.0, which is obviously wrong. Any idea where the bug might be? I have been looking for it for a while:

    Code:

    #%%
    import torch
    from matplotlib import pyplot as plt
    
    from uutils.torch_uu.metrics.cca import cca_core
    
    from torch import Tensor
    from anatome.similarity import svcca_distance, cca_by_svd, cca_by_qr, _compute_cca_traditional_equation
    import numpy as np
    import random
    np.random.seed(0)
    torch.manual_seed(0)
    random.seed(0)
    
    
    # tutorial shapes (500, 10000) (500, 10000) based on MNIST with 500 neurons from a FCNN
    D, N = 500, 10_000
    
    # - creating a random baseline
    # b1 = np.random.randn(*acts1.shape)
    # b2 = np.random.randn(*acts2.shape)
    b1 = np.random.randn(D, N)
    b2 = np.random.randn(D, N)
    print('-- reproducibility finger print')
    print(f'{b1.sum()=}')
    print(f'{b2.sum()=}')
    print(f'{b1.shape=}')
    
    # - get cca values for baseline
    print("\n-- Google's SVCCA -- ")
    baseline = cca_core.get_cca_similarity(b1, b2, epsilon=1e-10, verbose=False)
    # _plot_helper(baseline["cca_coef1"], "CCA coef idx", "CCA coef value")
    # print("Baseline Mean CCA similarity", np.mean(baseline["cca_coef1"]))
    # print("Baseline CCA similarity", baseline["cca_coef1"])
    print(f'{len(baseline["cca_coef1"])=}')
    print("Baseline CCA similarity", baseline["cca_coef1"][:6])
    print(f"{np.mean(baseline['cca_coef1'])=}")
    
    # - get sklern's cca's https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html
    # from sklearn.cross_decomposition import CCA
    # # cca = CCA(n_components=D)
    # cca = CCA(n_components=6)
    # cca.fit(b1, b2)
    
    # -
    print("\n-- Ultimate Anatome's SVCCA --")
    # 'svcca': partial(svcca_distance, accept_rate=0.99, backend='svd')
    # svcca_dist: Tensor = svcca_distance(x, y, accept_rate=0.99, backend='svd')
    b1_t, b2_t = torch.from_numpy(b1), torch.from_numpy(b2)
    # svcca: Tensor = 1.0 - svcca_distance(x=b1_t, y=b2_t, accept_rate=1.0, backend='svd')
    # diag: Tensor = svcca_distance(x=b1_t, y=b2_t, accept_rate=0.99, backend='svd')
    # a, b, diag = cca(x, y, backend='svd')
    a, b, diag = cca_by_svd(b1_t, b2_t)
    # a, b, diag = cca_by_qr(b1_t, b2_t)
    # diag = _compute_cca_traditional_equation(b1_t, b2_t)
    print(f'{diag.size()=}')
    print(f'{diag[:6]=}')
    print(f'{diag.mean()=}')
    # print(f'{svcca=}')
    
    print()
    

    Output:

    -- reproducibility finger print
    b1.sum()=686.0427476883059
    b2.sum()=2341.981561471438
    b1.shape=(500, 10000)
    -- Google's SVCCA -- 
    len(baseline["cca_coef1"])=500
    Baseline CCA similarity [0.43056735 0.42918498 0.42502398 0.42290128 0.42208184 0.41986944]
    np.mean(baseline['cca_coef1'])=0.19071397156414877
    -- Ultimate Anatome's SVCCA --
    diag.size()=torch.Size([500])
    diag[:6]=tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000], dtype=torch.float64)
    diag.mean()=tensor(1., dtype=torch.float64)
    

    related: https://stackoverflow.com/questions/69993768/how-does-one-implement-pwcca-in-pytorch-match-the-original-pwcca-implemented-in

    opened by brando90 33
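
    A plausible explanation rather than a bug in the SVD itself: Google's cca_core takes matrices as (neurons, datapoints), i.e. (500, 10000), while anatome expects (examples, features) with more examples than features (the "DxH, where D>H" convention in the docstring quoted earlier). Feeding the (500, 10000) arrays to cca_by_svd therefore lands in the degenerate regime from the Size Check issue above (500 examples, 10000 features), where every canonical correlation saturates at 1. Transposing first should give comparable numbers; a sketch reusing the imports from the snippet above:

    # b1, b2 as above: numpy arrays of shape (500, 10000) = (neurons, datapoints)
    b1_t = torch.from_numpy(b1).t()  # -> (10000, 500): examples x features
    b2_t = torch.from_numpy(b2).t()
    a, b, diag = cca_by_svd(b1_t, b2_t)
    print(diag.mean())  # expected to be well below 1 for random data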
Releases (0.04)
Owner
Ryuichiro Hataya
PhD student at UTokyo and RA at RIKEN AIP, focusing on DL and ML