GT4SD, an open-source library to accelerate hypothesis generation in the scientific discovery process.

Overview

GT4SD (Generative Toolkit for Scientific Discovery)

The GT4SD (Generative Toolkit for Scientific Discovery) is an open-source platform to accelerate hypothesis generation in the scientific discovery process. It provides a library for making state-of-the-art generative AI models easier to use.

Installation

pip

You can install gt4sd directly from GitHub:

pip install git+https://github.com/GT4SD/gt4sd-core

Development setup & installation

If you would like to contribute to the package, we recommend the following development setup. Clone the gt4sd-core repository, create the conda environment, and install the package in editable mode:

git clone [email protected]:GT4SD/gt4sd-core.git
cd gt4sd-core
conda env create -f conda.yml
conda activate gt4sd
pip install -e .
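
To check that the development install works, a quick smoke test (a minimal sketch; it assumes the package exposes a __version__ attribute):

# minimal smoke test for the editable install
# assumes gt4sd exposes __version__ (an assumption, not stated above)
import gt4sd
print(gt4sd.__version__)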

Learn more in CONTRIBUTING.md

Supported packages

Beyond implementing various generative modeling inference and training pipelines, GT4SD is designed to provide a high-level API that implements a harmonized interface for several existing packages:

  • GuacaMol: inference pipelines for the baseline models.
  • MOSES: inference pipelines for the baseline models.
  • TAPE: encoder modules compatible with the protein language models.
  • PaccMann: inference pipelines for all algorithms of the PaccMann family as well as training pipelines for the generative VAEs.
  • transformers: training and inference pipelines for generative models from HuggingFace Models.

Using GT4SD

Running inference pipelines

Running an algorithm is as easy as typing:

from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRLProteinBasedGenerator, PaccMannRL
)
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
# algorithm configuration with default parameters
configuration = PaccMannRLProteinBasedGenerator()
# instantiate the algorithm for sampling
algorithm = PaccMannRL(configuration=configuration, target=target)
items = list(algorithm.sample(10))
print(items)

Or you can use the ApplicationsRegistry to run an algorithm instance using a serialized representation of the algorithm:

from gt4sd.algorithms.registry import ApplicationsRegistry
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
algorithm = ApplicationsRegistry.get_application_instance(
    target=target,
    algorithm_type='conditional_generation',
    domain='materials',
    algorithm_name='PaccMannRL',
    algorithm_application='PaccMannRLProteinBasedGenerator',
    generated_length=32,
    # include additional configuration parameters as **kwargs
)
items = list(algorithm.sample(10))
print(items)
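
The sampled items are plain Python objects and can be post-processed freely. For instance, a minimal sketch that keeps only chemically valid generations, assuming the sampled items are SMILES strings and that RDKit is installed (an extra dependency for this snippet, not required by the examples above):

from rdkit import Chem

# keep only generations that RDKit can parse into a molecule
valid_items = [s for s in items if Chem.MolFromSmiles(s) is not None]
print(f"{len(valid_items)}/{len(items)} valid molecules")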

Running training pipelines via the CLI command

GT4SD provides a trainer client based on the gt4sd-trainer CLI command. The trainer currently supports training pipelines for language modeling (language-modeling-trainer), PaccMann (paccmann-vae-trainer) and Granular (granular-trainer, multimodal compositional autoencoders).

$ gt4sd-trainer --help
usage: gt4sd-trainer [-h] --training_pipeline_name TRAINING_PIPELINE_NAME
                     [--configuration_file CONFIGURATION_FILE]

optional arguments:
  -h, --help            show this help message and exit
  --training_pipeline_name TRAINING_PIPELINE_NAME
                        Training type of the converted model, supported types:
                        granular-trainer, language-modeling-trainer, paccmann-
                        vae-trainer. (default: None)
  --configuration_file CONFIGURATION_FILE
                        Configuration file for the training. It can be used
                        to completely bypass pipeline specific arguments.
                        (default: None)

To launch a training you have two options.

You can either specify the training pipeline and the path of a configuration file that contains the needed training parameters:

gt4sd-trainer  --training_pipeline_name ${TRAINING_PIPELINE_NAME} --configuration_file ${CONFIGURATION_FILE}

Or you can directly provide the needed parameters as arguments:

gt4sd-trainer --training_pipeline_name language-modeling-trainer --type mlm --model_name_or_path mlm --training_file /path/to/train_file.jsonl --validation_file /path/to/valid_file.jsonl

To get more info on a specific training pipeline's arguments, simply type:

gt4sd-trainer --training_pipeline_name ${TRAINING_PIPELINE_NAME} --help

References

If you use gt4sd in your projects, please consider citing the following:

@software{GT4SD,
  author = {GT4SD Team},
  month = {2},
  title = {{GT4SD (Generative Toolkit for Scientific Discovery)}},
  url = {https://github.com/GT4SD/gt4sd-core},
  version = {main},
  year = {2022}
}

License

The gt4sd codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.

Comments
  • cli-upload

    cli-upload

    Add upload functionality to the command line. It gives the user the possibility to upload specific artifacts to a server.

    Given a specific version for an algorithm:

    • Check whether that version is already on the server, i.e., whether the folder bucket/algorithm_type/algorithm_name/algorithm_application/version/ exists.
    • If yes, tell the user and stop the upload.
    • If not, upload all the files for that version.

    cli-upload relies on minio and has been tested locally using docker-compose. cli-upload can be used to upload to a cloud or local server.
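
    A rough sketch of the version check with the minio Python client (bucket name, endpoint, credentials and file paths are placeholders, not the actual gt4sd-upload implementation):

    from minio import Minio

    # illustrative client setup; endpoint and credentials are placeholders
    client = Minio("localhost:9000", access_key="minio", secret_key="minio123", secure=False)

    prefix = "algorithm_type/algorithm_name/algorithm_application/version/"
    # check whether anything already exists under the version prefix
    if list(client.list_objects("bucket", prefix=prefix, recursive=True)):
        print("Version already on the server, stopping the upload.")
    else:
        # upload each local artifact under the version prefix (path is a placeholder)
        client.fput_object("bucket", prefix + "weights.pt", "/path/to/weights.pt")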


    How to use cli-upload

    Following the example in the README (in the Saving a trained algorithm for inference via the CLI command section) and assuming a trained model in /tmp/test_cli_upload, run:

    gt4sd-upload --training_pipeline_name paccmann-vae-trainer --model_path /tmp/test_cli_upload --training_name fast-example --target_version fast-example-v0 --algorithm_application PaccMannGPGenerator

    opened by georgosgeorgos 15
  • MOSES VAE from Guacamol training reconstruction is "incorrect"

    MOSES VAE from Guacamol training reconstruction is "incorrect"

    Describe the bug: The VAE in GT4SD uses the wrapper of the Moses VAE from Guacamol. Unfortunately, the decoding training step from the Moses VAE is bugged.

    More detail: The problem arises from the definition of the forward_decoder method:

    def forward_decoder(self, x, z):
        lengths = [len(i_x) for i_x in x]
    
        x = nn.utils.rnn.pad_sequence(x, batch_first=True, padding_value=self.pad)
        x_emb = self.x_emb(x)
    
        z_0 = z.unsqueeze(1).repeat(1, x_emb.size(1), 1)
        x_input = torch.cat([x_emb, z_0], dim=-1)  # <--- PROBLEM 1
        x_input = nn.utils.rnn.pack_padded_sequence(x_input, lengths, batch_first=True)
    
        h_0 = self.decoder_lat(z)
        h_0 = h_0.unsqueeze(0).repeat(self.decoder_rnn.num_layers, 1, 1)
    
        output, _ = self.decoder_rnn(x_input, h_0)
    
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        y = self.decoder_fc(output)
    
        recon_loss = F.cross_entropy(  # <--- PROBLEM 2
            y[:, :-1].contiguous().view(-1, y.size(-1)),
            x[:, 1:].contiguous().view(-1),
            ignore_index=self.pad
        )
    
        return recon_loss
    

    Namely, the reconstruction step is wrong in two spots:

    1. construction of the true input: x_input = torch.cat([x_emb, z_0], dim=-1) In the visual representation of a typical RNN, the true token feeds in from the "bottom" of the cell and the previous hidden state from the "left". In this implementation, the reparameterized latent vector z is fed in both from the "left" (normal) and the "bottom" (atypical). Fix: this line should be removed.
    2. calculation of the reconstruction loss: recon_loss = F.cross_entropy(...) This reconstruction loss is calculated as the per-token loss of the input batch (i.e., the mean over a batch of tokens) because the default reduction in F.cross_entropy is "mean". In turn, this results in reconstruction losses that are very low for the VAE, causing the optimizer to ignore the decoder and focus on the encoder. When a VAE focuses too hard on the encoder, you get mode collapse, and that's what happens with the Moses VAE. Fix: this line should be F.cross_entropy(..., reduction="sum") / len(x) (a sketch of the corrected call follows right after this list).
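
    A minimal sketch of the proposed change for Problem 2 (only the loss call differs from the snippet above; y, x and self.pad are as defined there):

    # proposed fix: sum the token-level losses, then normalize by batch size,
    # so the reconstruction term is on a per-sequence (not per-token) scale
    recon_loss = F.cross_entropy(
        y[:, :-1].contiguous().view(-1, y.size(-1)),
        x[:, 1:].contiguous().view(-1),
        ignore_index=self.pad,
        reduction="sum",
    ) / len(x)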

    To reproduce

    1. Problem 1 is not a "problem" so much as it is highly atypical to structure a VAE like this. I can't say if it results in any actual problems, but it simply shouldn't be there.
    2. Problem 2 can be observed with two experiments (a sketch of the first one follows after this list):
      1. Using PCA with two dimensions, plot the embeddings of a random batch z ~ q(z|x) and a sample from the standard normal distribution z ~ N(0, I). The embeddings from the encoder will look like a point at (0, 0) compared to the samples from the standard normal.
      2. Measure the reconstruction accuracy x_r ~ p(x | z ~ q(z | x_0)). In a well-trained VAE, sum(x_r == x_0 for x_0 in xs) / len(xs) should be above 50%. For this VAE it is generally fairly low (in my experience).
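
    A rough, self-contained sketch of the first experiment's comparison; a toy tensor stands in for the encoder outputs z ~ q(z|x) (in the real experiment one would use the trained encoder):

    import torch
    from sklearn.decomposition import PCA

    latent_dim, batch_size = 128, 256
    # placeholder for encoder outputs z ~ q(z|x); a collapsed encoder concentrates
    # near the origin, which is what this toy tensor mimics
    z_posterior = 0.01 * torch.randn(batch_size, latent_dim)
    z_prior = torch.randn(batch_size, latent_dim)  # z ~ N(0, I)

    pca = PCA(n_components=2).fit(z_prior.numpy())
    post_2d = pca.transform(z_posterior.numpy())
    prior_2d = pca.transform(z_prior.numpy())
    # the posterior cloud collapses to a point relative to the prior
    print(post_2d.std(axis=0), prior_2d.std(axis=0))
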
    bug 
    opened by davidegraff 12
  • Improve CLA workflow

    Improve CLA workflow

    Getting Actions to commit to other people's forks was not something super easy to do, so I'm settling for a bit more verbosity and automation.

    The issue will be closed with a comment linking to the commit that added the contributor. There is a notice to merge this into a PR.

    Therefore there is no assignment of the issue anymore.

    Looks like this: https://github.com/C-nit/gt4sd-core/issues/9 and can also be triggered in a different way: https://github.com/C-nit/gt4sd-core/issues/11

    opened by C-nit 11
  • feat: Support in RT Trainer for multiple entities.

    feat: Support in RT Trainer for multiple entities.

    Solving #143 by expanding the Regression Transformer trainer to support multi-entity discriminations, i.e., support the multientity_cg collator from the RT repo.

    Signed-off-by: Nicolai Ree [email protected]

    opened by NicolaiRee 9
  • feat: property_predictors in scorer

    feat: property_predictors in scorer

    • Implement PropertyPredictorScorer in domains.materials.property_scorer (using domains.materials.scorer for the implementation would cause a circular import).
    • We simply use the PropertyPredictorRegistry to select a property and its parameters by name and PropertyPredictorScorer to compute a score on a sample with respect to a target value.
    • Tests mimic the logic in properties.
    cla-signed 
    opened by georgosgeorgos 8
  • Training pipeline Regression Transformer

    Training pipeline Regression Transformer

    Adding a new training pipeline for the RT:

    • allows finetuning existing models available in the toolkit
    • allows training models from scratch
    • patches LRSchedulers in torchdrug --> they are needed for RT training and threw errors
    cla-signed 
    opened by jannisborn 6
  • Added toxicity and  affinity to visum notebook

    Added toxicity and affinity to visum notebook

    Signed-off-by: Eduardo [email protected]

    Added toxicity (Tox21 model from https://github.com/PaccMann/paccmann_sarscov2) and affinity (PaccMann predictor) to the notebook.

    @drugilsberg, I am not sure about one specific step in the notebook and I would really appreciate it if you could help: when calling sample on PaccMannGP for the first time, the first line of the output is:

    configuring optimization for target: {'qed': {'weight': 1.0}, 'sa': {'weight': 1.0}}

    However, on the second call to the same object (no reinitialization), in section "Sampling and Plotting Molecules with GT4SD", the first line reads:

    configuring optimization for target: {'qed': {}, 'sa': {}}

    Do you know if this has any influence on the molecules being generated? I attached a PDF file with the output for convenience.

    visum-2022-handson-generative-models.pdf

    @helenaMontenegro , the notebook now requires users to download a small model, but I don't think this is a problem.

    cla-signed 
    opened by edux300 5
  • Problem multiprocess in requirements

    Problem multiprocess in requirements

    The new multiprocess library version (0.70.13) causes problems when installing gt4sd-core in development mode. I had to pin multiprocess==0.70.12.2 to install the library.

    opened by georgosgeorgos 5
  • Torchdrug trainer pipeline

    Torchdrug trainer pipeline

    Implemented torchdrug trainer pipeline. Models can be used via:

    gt4sd-trainer --training_pipeline_name torchdrug-gcpn-trainer -h
    gt4sd-trainer --training_pipeline_name torchdrug-graphaf-trainer -h
    

    Features:

    • [x] Support for the same two models that are available via inference: TorchDrugGCPN and TorchDrugGraphAF.
    • [x] Both models can be trained on all MoleculeDatasets from torchdrug.Datasets. Those are around 20 predefined datasets.
    • [x] Implemented a custom dataset where users can pass their own data.
    • [x] In addition to the unit tests, I verified the functionality from the CLI via gt4sd-trainer.

    Problems:

    • [ ] Property optimization does not work, due to instabilities in TorchDrug. I opened an issue and a PR, but we have to wait until they merge, release a new version, and then bump our dependency. The code I wrote here already supports property optimization, but I disabled the unit test for the moment because it would fail due to the TorchDrug issue. See details: https://github.com/DeepGraphLearning/torchdrug/issues/83
    • [x] gt4sd-saving: I ran a test via the CLI but the saving failed. Not sure how problematic this is; here's the error:
    INFO:gt4sd.cli.saving:Selected configuration: ConfigurationTuple(algorithm_type='generation', domain='materials', algorithm_name='TorchDrugGenerator', algorithm_application='TorchDrugGCPN')
    INFO:gt4sd.cli.saving:Saving model version "fast" with the following configuration: <class 'gt4sd.algorithms.generation.torchdrug.core.TorchDrugGCPN'>
    INFO:gt4sd.algorithms.core:TorchDrugGCPN can not save a version based on TorchDrugSavingArguments(model_path='/Users/jab/.gt4sd/runs/', training_name='gcpn_test')
    
    enhancement cla-signed 
    opened by jannisborn 5
  • RT sampling_wrapper to specify a substructure or series of tokens to keep unmasked

    RT sampling_wrapper to specify a substructure or series of tokens to keep unmasked

    I would like to propose an upgrade on the feature demonstrated in this notebook: https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/regression-transformer-demo.ipynb (see cells 12-14)

    In addition to explicitly specifying tokens_to_mask, one could easily imagine that a chemist might want to specify a substructure to mask or to "freeze" (keep unchanged, i.e., unmasked). It might be easier to specify tokens to freeze, as that would just be selecting a part of the string to be kept unmasked. A prototype example is given below.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'tokens_to_freeze': ['N4CCOCC4']
        }
    

    If one could specify a substructure to freeze or to mask, that would be potentially even more advantageous, as it would remove ambiguities when a substructure can be expressed in more than one sequence.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'substructure_to_freeze': ['N1CCOCC1'],
            # explicitly mask benzene ring moiety
            'substructure_to_mask':  ['C1=CC=CC=C1'],
        }
    

    One could use RDKit functionality to identify substructure tokens, as given here: https://www.rdkit.org/docs/Cookbook.html#substructure-matching
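
    For instance, a minimal RDKit sketch along those lines (the SMILES/SMARTS strings below are just illustrative):

    from rdkit import Chem

    mol = Chem.MolFromSmiles("c1ccccc1CCN1CCOCC1")  # molecule with a morpholine tail
    pattern = Chem.MolFromSmarts("N1CCOCC1")        # substructure to locate
    atom_indices = mol.GetSubstructMatch(pattern)   # indices of the matched atoms
    print(atom_indices)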

    Regarding the interpretation of 'fraction_to_mask', I would then imagine that it would be best applied to the remaining set of tokens (after tokens_to_freeze and explicit tokens_to_mask are excluded). I hope this makes sense; happy to clarify and exemplify further.

    enhancement 
    opened by OleinikovasV 4
  • Artifact storage for property predictors

    Artifact storage for property predictors

    Closes #116

    Now we can store artifacts also for property predictors.

    • New property predictors are tested.
    • One thing that remains to be done is to have functions under gt4sd.properties.molecules.functions. At the moment this is not yet supported, since it would yield circular imports.
    cla-signed 
    opened by jannisborn 4
  • RT saving pipeline

    RT saving pipeline

    Closes #169

    • gt4sd-saving now also supports the RT training pipeline. I implemented the get_filepath_mappings_for_training_pipeline_arguments method. The inference.json is now created inside the RT trainer and also saved in the model folder such that it can later be copied by gt4sd-saving. The Property class was needed as a helper for this, to track some attributes of each property.
    • Expanded the RT example. It now describes the full process of training/finetuning a model, saving it with gt4sd-saving, running inference on it, and finally uploading it to the model hub.

    I tested everything with the example from the README

    Minors:

    • adding a method filter_stubbed to the molecular RT that removes stub-like molecules ("invalid SELFIES")
    • Bumping paccmann_gp dependency
    enhancement cla-signed 
    opened by jannisborn 0
  • RegressionTransformer saving pipeline

    RegressionTransformer saving pipeline

    Is your feature request related to a problem? Please describe: gt4sd-saving does not fully support RT.

    ToDo:

    • Implement get_filepath_mappings_for_training_pipeline_arguments
    • Save inference.json to model dir
    enhancement 
    opened by jannisborn 0
  • Disentangle properties from algorithms

    Disentangle properties from algorithms

    Is your feature request related to a problem? Please describe: Currently, the properties submodule imports from algorithms.core and thus also from that __init__. In the init, we register all the training pipelines, so one needs to have all those dependencies installed, including torchdrug, guacamol_baselines, and other VCS requirements.

    Describe the solution you'd like: Creating a submodule gt4sd.core that specifies base classes used by multiple submodules like gt4sd.algorithms or gt4sd.properties.

    Describe alternatives you've considered: Do the imports only when someone calls list_available_algorithms.

    NOTE: When creating gt4sd.core we have to make sure that all the rest remains functional, including relative imports, jupyter notebooks (should be fine since we barely import from algorithms.core directly) and in particular also documentation.

    enhancement 
    opened by jannisborn 0
  • Add methods for artifact-based property predictors

    Add methods for artifact-based property predictors

    Is your feature request related to a problem? Please describe: Currently the artifact-based property predictors (like gt4sd.properties.molecules.core.Tox21) are not usable as functions via gt4sd.properties.molecules.tox_21, unlike all the non-artifact-based properties. Moving the functions there would yield circular import issues.

    Describe the solution you'd like: A small refactor that works around the circular imports.

    enhancement 
    opened by jannisborn 0
  • Refactor AlgorithmConfiguration baseclass

    Refactor AlgorithmConfiguration baseclass

    Inconsistent types between the AlgorithmConfiguration base class and the child ConfigurablePropertyAlgorithmConfiguration, concerning attributes like domain but also methods like ensure_artifacts_for_version (class methods in the base class but instance methods in the child class).

    A simple refactor into 3 instead of 2 classes should fix this.

    Originally posted by @jannisborn in https://github.com/GT4SD/gt4sd-core/pull/121#discussion_r943649339

    • The errors in the constructor, for lines like self.domain = domain, say: error: Cannot assign to class variable "domain" via instance. That's because in the parent class (AlgorithmConfiguration) we declare it as domain: ClassVar[str] (a minimal illustration follows after this list).
    • The ones in the signatures, like get_application_prefix which returns a str, are because in the parent class those are class methods, not instance methods. The error is Signature of "get_application_prefix" incompatible with supertype "AlgorithmConfiguration".
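
    A minimal illustration of the ClassVar part of the problem (simplified names, not the actual gt4sd code):

    from typing import ClassVar

    class AlgorithmConfiguration:
        domain: ClassVar[str] = "materials"  # declared as a class variable

    class ConfigurableConfiguration(AlgorithmConfiguration):
        def __init__(self, domain: str) -> None:
            # mypy: Cannot assign to class variable "domain" via instance
            self.domain = domain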

    It might be fixable by a refactor but I'm not sure it's worth it

    refactoring 
    opened by jannisborn 0