PHOTONAI is a high-level Python API for designing and optimizing machine learning pipelines.

Overview

We've created a system in which you can easily select and combine both pre-processing and learning algorithms from state-of-the-art machine learning toolboxes, and arrange them in simple or parallel pipeline data streams.

In addition, you can parametrize your training and testing workflow by choosing cross-validation schemes, performance metrics, and hyperparameter optimization strategies from a list of pre-registered options.

Importantly, you can integrate custom solutions not only into your data processing pipeline but also into any part of the model training and evaluation process, including custom hyperparameter optimization strategies.

For a detailed description, visit our website and read the documentation, or read our paper in PLOS ONE.


Getting Started

To use PHOTONAI, you only need your favourite Python IDE ready. Then install the latest stable version via pip:

pip install photonai
# Or try out the latest features if you don't rely on a stable version, using:
pip install --upgrade git+https://github.com/wwu-mmll/photonai@develop

You can set up a full-stack machine learning pipeline in a few lines of code:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold

from photonai.base import Hyperpipe, PipelineElement
from photonai.optimization import FloatRange, Categorical, IntegerRange

# DESIGN YOUR PIPELINE
my_pipe = Hyperpipe('basic_svm_pipe',  # the name of your pipeline
                    # which optimizer PHOTONAI shall use
                    optimizer='sk_opt',
                    optimizer_params={'n_configurations': 25},
                    # the performance metrics of your interest
                    metrics=['accuracy', 'precision', 'recall', 'balanced_accuracy'],
                    # after hyperparameter optimization, this metric declares the winner config
                    best_config_metric='accuracy',
                    # repeat hyperparameter optimization on three outer folds
                    outer_cv=KFold(n_splits=3),
                    # evaluate each configuration on five inner folds
                    inner_cv=KFold(n_splits=5),
                    verbosity=1,
                    project_folder='./tmp/')


# first normalize all features
my_pipe.add(PipelineElement('StandardScaler'))

# then reduce dimensionality via PCA (test_disabled also evaluates skipping this step)
my_pipe += PipelineElement('PCA', 
                           hyperparameters={'n_components': IntegerRange(5, 20)}, 
                           test_disabled=True)

# engage and optimize the good old SVM for Classification
my_pipe += PipelineElement('SVC', 
                           hyperparameters={'kernel': Categorical(['rbf', 'linear']),
                                            'C': FloatRange(0.5, 2)}, gamma='scale')

# train pipeline
X, y = load_breast_cancer(return_X_y=True)
my_pipe.fit(X, y)
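
Once fitted, the Hyperpipe behaves like a scikit-learn estimator. A minimal sketch of what you can do next, assuming predict() and the best_config attribute follow PHOTONAI's scikit-learn-style interface:

# predict with the best configuration found during optimization
y_pred = my_pipe.predict(X)

# inspect the winning hyperparameter configuration (assumed attribute name)
print(my_pipe.best_config)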

Features

Easy access to established ML implementations

We pre-registered diverse preprocessing and learning algorithms from state-of-the-art toolboxes, e.g. scikit-learn, Keras, and imbalanced-learn, so you can rapidly build custom pipelines.
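
Each pre-registered algorithm is addressed by a name string, no matter which toolbox it comes from. A minimal sketch, assuming the element names below are registered (the ImbalancedDataTransformer element and its method_name parameter also appear in the issue reports further down; 'SMOTE' as a method name is an assumption):

from photonai.base import PipelineElement

# pre-registered elements are referenced by name, regardless of their source toolbox
scaler = PipelineElement('StandardScaler')  # scikit-learn
resampler = PipelineElement('ImbalancedDataTransformer', method_name='SMOTE')  # imbalanced-learn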

Hyperparameter Optimization

With PHOTONAI you can seamlessly switch between diverse hyperparameter optimization strategies, such as (random) grid search or Bayesian optimization (scikit-optimize, SMAC3).
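
Switching the strategy is a one-argument change in the Hyperpipe constructor. A minimal sketch, assuming 'random_grid_search' is among the pre-registered optimizer names ('sk_opt' is used in the example above):

from sklearn.model_selection import KFold
from photonai.base import Hyperpipe

# same pipeline design, different optimization strategy
my_pipe = Hyperpipe('random_search_pipe',
                    optimizer='random_grid_search',  # assumed registered name
                    optimizer_params={'n_configurations': 25},
                    metrics=['accuracy'],
                    best_config_metric='accuracy',
                    inner_cv=KFold(n_splits=5),
                    project_folder='./tmp/')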

Extended ML Pipeline

You can build custom sequences of processing and learning algorithms with a simple syntax. PHOTONAI offers extended pipeline functionality such as parallel sequences, custom callbacks in between pipeline elements, AND- and OR-operations, as well as the possibility to flexibly position data augmentation, class balancing, or learning algorithms anywhere in the pipeline.
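
For illustration, a minimal sketch of an OR-element and a parallel AND-element, assuming the Switch and Stack classes are importable from photonai.base and that the element names used are pre-registered:

from photonai.base import PipelineElement, Switch, Stack
from photonai.optimization import FloatRange, IntegerRange

# OR-operation: let the optimizer choose between competing estimators
estimator_switch = Switch('estimator_switch')
estimator_switch += PipelineElement('SVC', hyperparameters={'C': FloatRange(0.5, 2)})
estimator_switch += PipelineElement('RandomForestClassifier')

# AND-operation: run two transformers in parallel and concatenate their outputs
feature_stack = Stack('feature_stack')
feature_stack += PipelineElement('PCA', hyperparameters={'n_components': IntegerRange(5, 20)})
feature_stack += PipelineElement('SelectKBest')  # assumed registered name

my_pipe += feature_stack
my_pipe += estimator_switch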

Model Sharing

PHOTONAI provides a standardized format for sharing and loading optimized pipelines across platforms with only one line of code.
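
A sketch of that workflow, reusing the Hyperpipe.load_optimum_pipe call that also appears in the issue reports below; save_optimum_pipe as the saving counterpart is an assumption:

from photonai.base import Hyperpipe

# persist the best pipeline after optimization (assumed saving counterpart)
my_pipe.save_optimum_pipe('my_model.photon')

# one line to restore it on any machine with PHOTONAI installed
loaded_model = Hyperpipe.load_optimum_pipe('my_model.photon')
y_pred = loaded_model.predict(X)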

Automation

While you concentrate on selecting appropriate processing steps, learning algorithms, hyperparameters and training parameters, PHOTONAI automates the nested cross-validated optimization and evaluation loop for any custom pipeline.

Results Visualization

PHOTONAI comes with extensive logging of all information in the training, testing and hyperparameter optimization process. In addition, optimum performances and the hyperparameter optimization progress are visualized in the PHOTONAI Explorer.

For more use cases, examples, contribution guidelines, and API details, visit our website:

www.photon-ai.com

Comments
  • Question on the Over/Under sampling on validation and test splits

    Hi, I defined a classification hyperpipe that includes a PipelineElement that oversamples or undersamples the input dataset. I would like to know whether this step is applied only to the training split of the nested cross-validation, or also to the validation and test splits. In other words, are the metrics used to select the best models and to evaluate them computed only on the "real" samples and not on the "real + fake" ones (in the case of oversampling), and on all samples rather than only the selected ones (in the case of undersampling)? Do you know the answer, or a document where I can look it up? I have not found it in the documentation, but maybe I searched badly. Thanks a lot! Clément

    opened by brosscle 2
  • Bump dask from 2.30.0 to 2021.10.0

    Bumps dask from 2.30.0 to 2021.10.0.

    dependencies 
    opened by dependabot[bot] 2
  • FIX #44 ImbalancedDataTransformer

    Associated to #44.

    ImbalancedDataTransformer did not update the method name afterwards; this PR should fix that. Furthermore, the kwargs input was replaced by a config parameter, which can now set the configuration for each strategy specifically. It is important that this is not used within the hyperparameters, as it is not certain which parameter will be set first. I have added this to the comments in the relevant places. Is there a better way to do this in this context?

    opened by lucasplagwitz 1
  • Imbalanced Data Transform always set to 'RandomUnderSampler' method

    Hi, and thanks for your work on PhotonAI! I created a hyperpipe and added a PipelineElement for imbalanced data transformations, as explained at https://wwu-mmll.github.io/photonai/examples/imbalanced_data/. Unfortunately, when I look at my hyperpipe elements after creation, I get the element PipelineElement(method_name='RandomUnderSampler', name='ImbalancedDataTransformer') for every selected method, even when selecting an oversampling method, which is quite embarrassing... Do you have any idea how to solve this issue and effectively add the selected method as an element to my hyperpipe? I am using version 2.1.0; maybe this issue has been addressed in version 2.2.0? Thanks in advance for your help! Clément

    opened by brosscle 1
  • Test-Adaptations for Dask 2020.12.0-2021.2.0

    Moves the function self.create_hyperpipe() to a module-level create_hyperpipe() to enable pickle-based serialization for newer versions of dask/distributed.

    This works only for versions != 2021.3.x; it is probably related to dask/distributed#4645 and solved with 2021.04.0.

    Small skopt adaptations: use defaults (base_estimator: ET -> GP).

    opened by lucasplagwitz 1
  • Permutation test: (1) document too large to save to MongoDB (2) _calculate_results: mongodb path set to trap-umbriel

    While running a permutation test with Photon, the following issues occurred.

    Issue 1: While running the test (implemented as suggested in the documentation, https://www.photon-ai.com/documentation/permutation_test), the script is unable to write the results of the very first permutation (y=ytrue) into the MongoDB since the document size is too large. Since the PermutationTest relies on the MongoDB entries, the process finishes with code 1.

    As a workaround, reducing the number of cv folds or the number of hyperparameter configurations (hence reducing the data to be stored in the MongoDB) solves the problem and the test will continue to run without any issues. The most obvious solution to me would be to forgo saving certain data into the MongoDB (e.g. feature importances, predictions). To my understanding this had been implemented in the past but was discontinued (commit d5ecd1b, 24.09.2019).

    Issue 2: While calculating the permutation results, the server cannot be found. It seems that in the _calculate_results function (line 181), the server is set to mongodb_path="mongodb://trap-umbriel:27017/photon_results" and not to the server that has been set by the user.

    photonai version: 1.1.0 (develop tree); OS: macOS 10.15.4; n samples: 1650; features (predictors): 55

    Error log, Issue 1:

    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 90, in fit
      self.pipe.results.save()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/base/models.py", line 476, in save
      self.to_son(), upsert=True)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 930, in replace_one
      collation=collation, session=session),
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 856, in _update_retryable
      _update, session)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1491, in _retryable_write
      return self._retry_with_session(retryable, func, s, None)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1384, in _retry_with_session
      return func(session, sock_info, retryable)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 852, in _update
      retryable_write=retryable_write)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 822, in _update
      retryable_write=retryable_write).copy()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 618, in command
      self._raise_connection_failure(error)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 613, in command
      user_fields=user_fields)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/network.py", line 143, in command
      name, size, max_bson_size + message._COMMAND_OVERHEAD)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/message.py", line 1077, in _raise_document_too_large
      raise DocumentTooLarge("%r command document too large" % (operation,))
    pymongo.errors.DocumentTooLarge: 'update' command document too large

    Error log, Issue 2:

    Traceback (most recent call last):
      perm_tester.fit(X, y)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 143, in fit
      perm_result = self._calculate_results(self.permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 185, in _calculate_results
      mother_permutation = PermutationTest.find_reference(mongodb_path, permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 291, in find_reference
      mother_permutation = _find_mummy(permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 284, in _find_mummy
      'computation_completed': True}).order_by([('computation_start_time', DESCENDING)]).first()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 127, in first
      return next(iter(self.limit(-1)))
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 543
      return (to_instance(doc) for doc in self._get_raw_cursor())
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1156, in next
      if len(self.__data) or self._refresh():
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1050, in _refresh
      self.__session = self.__collection.database.client._ensure_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1810, in _ensure_session
      return self.__start_session(True, causal_consistency=False)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1763, in __start_session
      server_session = self._get_server_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1796, in _get_server_session
      return self._topology.get_server_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 485, in get_server_session
      None)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 209, in _select_servers_loop
      self._error_message(selector))
    pymongo.errors.ServerSelectionTimeoutError: trap-umbriel:27017: [Errno 8] nodename nor servname provided, or not known

    opened by MSchmitt-git 1
  • Could not load meta information for optimum pipe

    I am trying to use a model I was given by a colleague but keep coming up against an error finding base.PhotonBase.

    Here is the code I ran:

    from photonai.base import Hyperpipe

    best_model_file = 'mymodel.photon'
    my_model = Hyperpipe.load_optimum_pipe(best_model_file)

    where mymodel.photon sits in a folder that also includes a "photon_best_model" folder with __optimum_pipe_0_SimpleImputer.pkl, _optimum_pipe_1_StandardScaler.pkl, _optimum_pipe_2_Ridge.pkl, and optimum_pipe_blueprint.pkl. I requested these files from my colleague because the errors I was getting indicated they needed to be there to load the model.

    Running this I get the following error:

    YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
      defaults = yaml.load(f)
    /Users/lee_jollans/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
      warnings.warn(msg, category=FutureWarning)
    Could not load meta information for optimum pipe
    Traceback (most recent call last):
      File "tryphoton.py", line 10, in <module>
        my_model = Hyperpipe.load_optimum_pipe(best_model_file)
      File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1105, in load_optimum_pipe
        return PhotonModelPersistor.load_optimum_pipe(file, password)
      File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1444, in load_optimum_pipe
        element_list = PhotonModelPersistor.load_elements(folder=load_folder)
      File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1410, in load_elements
        loaded_pipeline_element = joblib.load(os.path.join(folder, element_info['filename'] + '.pkl'))
      File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 605, in load
        obj = _unpickle(fobj, filename, mmap_mode)
      File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 529, in _unpickle
        obj = unpickler.load()
      File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1085, in load
        dispatch[key[0]](self)
      File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1373, in load_global
        klass = self.find_class(module, name)
      File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1423, in find_class
        __import__(module, level=0)
    ModuleNotFoundError: No module named 'photonai.base.PhotonBase'

    My colleague had originally noted that Hyperpipe is imported using from photonai.base.PhotonBase import Hyperpipe, which also did not work because I got the PhotonBase error.

    Note: I am running macOS Mojave and Python 3.7.3.

    Any help is greatly appreciated!

    opened by ljollans 8
  • Suggestions in fabolas.py

    Hi, I'm a student and have recently been learning about Bayesian optimization. I'm trying to make fabolas compatible with george 0.3.1 and I think I did it. I hope I can give you some suggestions:

    1. I suggest using a stationary kernel (e.g. an SE kernel) instead of a non-stationary kernel (the LinearKernel in fabolas.py): when you run get_incumbent() (in fabolas.py), the environment variables are projected to 1 and then changed to 0 by _quadratic_bf(). predict() is then called in get_incumbent() on a matrix with env=0 (e.g. (a1,b1,0), (a2,b2,0), (a3,b3,0), ...). With the LinearKernel, the variance returned by predict() will be all zeros and the mean will be a vector of identical elements. As a result, epmgp.py cannot work.

    2. The EnvPrior() parameter n_lr=degree+1 could be changed to n_lr=len(env_kernel).

    opened by cjfcsjt 1
Releases (2.2.1)
  • 2.2.1(Aug 3, 2022)

  • 2.2.0(Nov 5, 2021)

  • 2.1.0(Mar 5, 2021)

    Documentation: https://wwu-mmll.github.io/photonai/

    Changelog

    Features:

    • enable integration of custom metrics
    • integrate automatic generation of learning curves
    • integrate nevergrad hyperparameter optimization strategy
    • add new hyperparameter optimizer designed to compare different (learning) algorithms in a Switch (OR-element)
    • add functionality to automatically find, analyze and compare the best config for each estimator (Switch) per outer fold
    • added a scorer method to Hyperpipe that scores with best_config_metric, so the Hyperpipe object can be used with scikit-learn functions (see the sketch after this list)
    • integrated sklearn permutation feature importances into the workflow
    • disable usage of test samples with the use_test_set parameter in Hyperpipe
    • removed the need to import the OutputSettings class to declare the project_folder; it moved to the Hyperpipe constructor
    • added inverse_transform methods to several PHOTONAI algorithm implementations
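
    A minimal sketch of the scorer method mentioned above, assuming it follows the scikit-learn score(X, y) convention and returns the best_config_metric value (the train/test split is illustrative):

    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(X, y)
    my_pipe.fit(X_train, y_train)
    # score() evaluates the fitted pipeline with best_config_metric
    print(my_pipe.score(X_test, y_test))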

    Development:

    • integrate documentation into github repo based on mkdocs and material theme: https://wwu-mmll.github.io/photonai/
    • switch continuous integration to GitHub Actions: https://github.com/wwu-mmll/photonai/actions
    • code clean ups
  • 2.0.0(Jul 14, 2020)

    • removed investigator, instead we offer the Explorer, a Javascript web application to visualize and analyze the results
    • moved the photon.neuro module to its own package called photonai_neuro
    • updated repository structure, moved tests and examples to root directory
    • consistently named the repository photonai everywhere
    • included a continuous integration pipeline with Travis CI
  • 0.4.0(Feb 21, 2019)

    Starting with this release, you should be able to install PHOTON via pip. Issues with installing the requirements have been resolved. This release includes the PHOTON Investigator to visualize the results.

Owner
Medical Machine Learning Lab - University of Münster