A framework for Quantification written in Python

Overview

QuaPy

QuaPy is an open source framework for quantification (a.k.a. supervised prevalence estimation, or learning to quantify) written in Python.

QuaPy is based on the concept of "data sample", and provides implementations of the most important aspects of the quantification workflow, such as (baseline and advanced) quantification methods, quantification-oriented model selection mechanisms, evaluation measures, and evaluations protocols used for evaluating quantification methods. QuaPy also makes available commonly used datasets, and offers visualization tools for facilitating the analysis and interpretation of the experimental results.
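
The central "data sample" abstraction is a labelled collection of instances that knows its own class prevalence and can be subsampled at any desired prevalence. A toy sketch of the idea follows, assuming the LabelledCollection class and its sampling method as documented in QuaPy's API (exact signatures may vary slightly across versions):

from quapy.data import LabelledCollection
import numpy as np

# a toy binary collection: 8 instances with 2 features each, labels 0/1
X = np.random.rand(8, 2)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

data = LabelledCollection(X, y)
print(data.prevalence())        # class prevalence of the collection: [0.5, 0.5]

# draw a random sample of 4 instances exhibiting a prevalence of [0.75, 0.25]
sample = data.sampling(4, 0.75, 0.25)
print(sample.prevalence())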

Installation

pip install quapy

A quick example:

The following script fetches a dataset of tweets, trains, applies, and evaluates a quantifier based on the Adjusted Classify & Count quantification method, using, as the evaluation measure, the Mean Absolute Error (MAE) between the predicted and the true class prevalence values of the test set.

import quapy as qp
from sklearn.linear_model import LogisticRegression

dataset = qp.datasets.fetch_twitter('semeval16')

# create an "Adjusted Classify & Count" quantifier
model = qp.method.aggregative.ACC(LogisticRegression())
model.fit(dataset.training)

estim_prevalence = model.quantify(dataset.test.instances)
true_prevalence  = dataset.test.prevalence()

error = qp.error.mae(true_prevalence, estim_prevalence)

print(f'Mean Absolute Error (MAE)={error:.3f}')

Quantification is useful in scenarios characterized by prior probability shift. In other words, there would be little point in estimating the class prevalence values of the test set if the IID assumption held, since that prevalence would be roughly equivalent to the class prevalence of the training set. For this reason, any quantification model should be tested across many samples, including samples whose class prevalence values differ, even markedly, from those of the training set. QuaPy implements sampling procedures and evaluation protocols that automate this workflow. See the Wiki for detailed examples.
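
As a minimal sketch of this workflow, one can draw test samples at controlled prevalence values, quantify each of them, and average the error. The snippet below reuses the dataset and the trained model from the quick example above; the sampling method of QuaPy's data collections is assumed here, and the prevalence vectors are arbitrary choices for the three sentiment classes of the semeval16 dataset:

import numpy as np

sample_size = 100
prevalence_grid = [
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
    [0.3, 0.4, 0.3],
]

errors = []
for prevs in prevalence_grid:
    # draw a random test sample exhibiting the desired class prevalence
    sample = dataset.test.sampling(sample_size, *prevs)
    estim_prevs = model.quantify(sample.instances)
    errors.append(qp.error.mae(sample.prevalence(), estim_prevs))

print(f'MAE averaged across samples = {np.mean(errors):.3f}')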

Features

  • Implementation of many popular quantification methods (Classify-&-Count and its variants, Expectation Maximization, quantification methods based on structured output learning, HDy, QuaNet, and quantification ensembles).
  • Versatile functionality for performing evaluation based on artificial sampling protocols.
  • Implementation of the most commonly used evaluation measures (AE, RAE, SE, KLD, NKLD, etc.).
  • Datasets frequently used in quantification (textual and numeric), including:
    • 32 UCI Machine Learning datasets.
    • 11 Twitter quantification-by-sentiment datasets.
    • 3 product reviews quantification-by-sentiment datasets.
  • Native support for binary and single-label multiclass quantification scenarios.
  • Model selection functionality that minimizes quantification-oriented loss functions (see the sketch after this list).
  • Visualization tools for analysing the experimental results.
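
As an illustration of the quantification-oriented model selection mentioned in the list above, the sketch below uses GridSearchQ, the class named in the QuaPy documentation. The exact constructor arguments (and whether classifier hyperparameters need a prefix in param_grid) vary across QuaPy versions, so treat this as an assumption-laden sketch rather than a drop-in snippet:

import quapy as qp
from quapy.method.aggregative import ACC
from sklearn.linear_model import LogisticRegression

qp.environ['SAMPLE_SIZE'] = 100
dataset = qp.datasets.fetch_twitter('semeval16')

# grid search over the classifier's hyperparameters, scored with a
# quantification-oriented loss (MAE) computed over validation samples
model = qp.model_selection.GridSearchQ(
    model=ACC(LogisticRegression()),
    param_grid={'C': [0.1, 1.0, 10.0], 'class_weight': ['balanced', None]},
    sample_size=qp.environ['SAMPLE_SIZE'],  # size of the validation samples
    error='mae',                            # loss to minimize during the search
    refit=True,                             # refit on the whole training set afterwards
).fit(dataset.training)

print(f'best hyperparameters: {model.best_params_}')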

Requirements

  • scikit-learn, numpy, scipy
  • pytorch (for QuaNet)
  • svmperf patched for quantification (see below)
  • joblib
  • tqdm
  • pandas, xlrd
  • matplotlib

SVM-perf with quantification-oriented losses

In order to run experiments involving SVM(Q), SVM(KLD), SVM(NKLD), SVM(AE), or SVM(RAE), you first have to download the svmperf package, apply the patch svm-perf-quantification-ext.patch, and compile the sources. The script prepare_svmperf.sh takes care of all of this. Simply run:

./prepare_svmperf.sh

The resulting directory svm_perf_quantification contains the patched version of svmperf with quantification-oriented losses.

The svm-perf-quantification-ext.patch extends the patch made available by Esuli et al. 2015, which allows SVMperf to optimize for the Q measure as proposed by Barranquero et al. 2015 and for the KLD and NKLD measures as proposed by Esuli et al. 2015. The extended patch additionally allows SVMperf to optimize for AE and RAE.
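
Once compiled, the patched binaries can be used from within QuaPy. The sketch below assumes the SVMQ wrapper with its svmperf_base argument, and the fetch_reviews loader, as they appear in the QuaPy documentation; these names are assumptions and may differ across versions:

import quapy as qp
from quapy.method.aggregative import SVMQ

# point the wrapper at the directory produced by ./prepare_svmperf.sh
model = SVMQ(svmperf_base='./svm_perf_quantification')

# a binary sentiment (product reviews) dataset with tfidf weighting
dataset = qp.datasets.fetch_reviews('kindle', tfidf=True)
model.fit(dataset.training)
estim_prevalence = model.quantify(dataset.test.instances)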

Wiki

Check out our Wiki, in which many examples are provided.

Comments
  • Couldn't train QuaNet on multiclass data

Hi, I am having trouble training a QuaNet quantifier on multiclass (20 classes) data. Everything works fine when my dataset only has 2 classes. It looks like the ACC quantifier is not able to aggregate more than 2 classes?

The classifier is built and trained with the code below

from quapy.classification.neural import NeuralClassifierTrainer, LSTMnet

classifier = LSTMnet(dataset.vocabulary_size, dataset.n_classes)
learner = NeuralClassifierTrainer(classifier)
learner.fit(*dataset.training.Xy)
    

    where it has all the default configurations

    {'embedding_size': 100, 'hidden_size': 256, 'repr_size': 100, 'lstm_class_nlayers': 1, 'drop_p': 0.5}

Then I tried to train QuaNet with the following code

from quapy.method.neural import QuaNetTrainer

model = QuaNetTrainer(learner, qp.environ['SAMPLE_SIZE'])
model.fit(dataset.training, fit_learner=False)
    

    and it showed that QuaNet is built as

QuaNetModule(
  (lstm): LSTM(120, 64, batch_first=True, dropout=0.5, bidirectional=True)
  (dropout): Dropout(p=0.5, inplace=False)
  (ff_layers): ModuleList(
    (0): Linear(in_features=208, out_features=1024, bias=True)
    (1): Linear(in_features=1024, out_features=512, bias=True)
  )
  (output): Linear(in_features=512, out_features=20, bias=True)
)

And then the error occurred in model.fit().

    Attached is the error I get.

Traceback (most recent call last):
  File "quanet-test.py", line 181, in <module>
    model.fit(dataset.training, fit_learner=False)
  File "/home/vickys/.local/lib/python3.6/site-packages/quapy/method/neural.py", line 126, in fit
    self.epoch(train_data_embed, train_posteriors, self.tr_iter, epoch_i, early_stop, train=True)
  File "/home/vickys/.local/lib/python3.6/site-packages/quapy/method/neural.py", line 182, in epoch
    quant_estims = self.get_aggregative_estims(sample_posteriors)
  File "/home/vickys/.local/lib/python3.6/site-packages/quapy/method/neural.py", line 145, in get_aggregative_estims
    prevs_estim.extend(quantifier.aggregate(predictions))
  File "/home/vickys/.local/lib/python3.6/site-packages/quapy/method/aggregative.py", line 238, in aggregate
    return ACC.solve_adjustment(self.Pte_cond_estim_, prevs_estim)
  File "/home/vickys/.local/lib/python3.6/site-packages/quapy/method/aggregative.py", line 246, in solve_adjustment
    adjusted_prevs = np.linalg.solve(A, B)
  File "<__array_function__ internals>", line 6, in solve
  File "/usr/local/lib64/python3.6/site-packages/numpy/linalg/linalg.py", line 394, in solve
    r = gufunc(a, b, signature=signature, extobj=extobj)
ValueError: solve1: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (m,m),(m)->(m) (size 2 is different from 20)

    Thank you!

    opened by vickysvicky 4
  • Parameter fit_learner in QuaNetTrainer (fit method)

The parameter fit_learner is not used in the function:

    def fit(self, data: LabelledCollection, fit_learner=True):

    and the learner is fitted every time:

    self.learner.fit(*classifier_data.Xy)

    opened by pglez82 1
  • Wiki correction

    In the last part of the Methods wiki page, where it says:

    from classification.neural import NeuralClassifierTrainer, CNNnet

    I think it should say:

    from quapy.classification.neural import NeuralClassifierTrainer, LSTMnet

    opened by pglez82 1
  • Error in LSTMnet

I think there is an error in the function init_hidden:

    def init_hidden(self, set_size):
            opt = self.hyperparams
            var_hidden = torch.zeros(opt['lstm_nlayers'], set_size, opt['lstm_hidden_size'])
            var_cell = torch.zeros(opt['lstm_nlayers'], set_size, opt['lstm_hidden_size'])
            if next(self.lstm.parameters()).is_cuda:
                var_hidden, var_cell = var_hidden.cuda(), var_cell.cuda()
            return var_hidden, var_cell
    

Where it says opt['lstm_hidden_size'], it should say opt['hidden_size'].

    opened by pglez82 1
  • EMQ can be instantiated with a transformation function

    This transformation function is applied to each intermediate estimate.

    Why should someone want to transform the prior between two iterations? A transformation of the prior is a heuristic, yet effective way of promoting desired properties of the solution. For instance,

    • small values could be enhanced if the data is extremely imbalanced
    • small values could be reduced if the user is looking for a sparse solution
    • neighboring values could be averaged if the user is looking for a smooth solution
    • the function could also leave the prior unaltered and just be used as a callback for logging the progress of the method

    I hope this feature is useful. Let me know what you think!

    opened by mirkobunse 0
  • fixing two problems with parameters: hidden_size and lstm_nlayers

I found another problem with a parameter. When using LSTMnet with QuaNet, two parameters overlap (lstm_nlayers). I have renamed the one in LSTMnet to lstm_class_nlayers.

    opened by pglez82 0
  • Using a different gpu than cuda:0

The code seems to be tied to using only 'cuda', which by default uses the first GPU in the system ('cuda:0'). It would be handy to be able to tell the library which CUDA GPU to train on ('cuda:0', 'cuda:1', etc.).

    opened by pglez82 0
Releases

0.1.6

Owner

The Human Language Technologies group of ISTI-CNR