Bayesian Optimization using GPflow

Overview

Note: This package is for use with GPflow 1.

For Bayesian optimization using GPflow 2, please see Trieste, a joint effort with Secondmind.

GPflowOpt

GPflowOpt is a Python package for Bayesian optimization using GPflow, built on TensorFlow. It was initiated and is currently maintained by Joachim van der Herten and Ivo Couckuyt. The full list of contributors (in alphabetical order) is Ivo Couckuyt, Tom Dhaene, James Hensman, Nicolas Knudde, Alexander G. de G. Matthews and Joachim van der Herten. Special thanks also to all GPflow contributors, as this package would not exist without their effort.


Install

The easiest way to install GPflowOpt is to clone this repository and run

pip install . --process-dependency-links

in the source directory. This also installs all required dependencies (including TensorFlow, if needed). For more detailed installation instructions, see the documentation.
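
After installation, a quick sanity check from Python confirms which versions were resolved (a minimal sketch; the __version__ attributes are assumed to be present in the installed packages):

import tensorflow as tf
import gpflow
import gpflowopt

# Print the versions pip resolved; useful when reporting installation issues.
print("TensorFlow:", tf.__version__)
print("GPflow:", gpflow.__version__)
print("GPflowOpt:", gpflowopt.__version__)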

Contributing

If you are interested in contributing to this open source project, contact us through an issue on this repository. For more information, see the notes for contributors.

Citing GPflowOpt

To cite GPflowOpt, please reference the preliminary arXiv paper. Sample BibTeX is given below:

@ARTICLE{GPflowOpt2017,
   author = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},
    title = "{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}",
  journal = {arXiv preprint -- arXiv:1711.03845},
  year    = {2017},
  url     = {https://arxiv.org/abs/1711.03845}
}
Comments
  • GPflow 1.0

    GPflow 1.0

    Following up on #86, this is the development of a GPflow 1.0 compatible GPflowOpt version. It is far from done; the biggest difficulty (Acquisition) unfortunately still lies ahead.

    At the same time, lots of tests are affected: I'm reworking them to use some of the cool pytest features. When this work is done, I hope to open another PR to improve testing further (split out computationally demanding tests into system tests, and use mocks to test things which currently trigger BO runs and model optimizations).

    do not merge yet Discussion 
    opened by javdrher 17
  • Cholesky failures due to inappropriate initial hyperparameters

    Cholesky failures due to inappropriate initial hyperparameters

    As mentioned in #4, tests often failed (mostly on Python 2.7) due to Cholesky decomposition errors. At first I thought this was mostly caused by updating the data and calling optimize() again, but resetting the hyperparameters didn't work all the time. Increasing the likelihood variance sometimes helps slightly, but isn't very robust either.

    Right now the tests specify lengthscales for the initial model and apply a hyperprior on the kernel variance. In each BO iteration, the hyperparameters supplied with the initial model are applied as a starting point. In addition, restarts are applied by randomizing the Params. This approach made it a lot more stable, but it isn't perfect yet. Especially in longer runs of BO, reverting to the supplied lengthscales each time ultimately causes crashes.

    Some things we may consider:

    • Normalizing the input/output data. I tested this a bit; it didn't solve the issue. Additionally, the model hyperparameters lose some interpretability. Note that I think we will ultimately need this for PES anyway.
    • Add a callback for re-configuring hyperparameters. Instead of reverting to the initially supplied hypers each iteration, this function is called and configures the initial state. I think for more complex modeling approaches this is ultimately required, but for simple scenarios with GPR this has to work automatically.
    • Applying hyperpriors is going to be important (see the sketch below).

    I'd love to hear thoughts on how to improve this.
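
    On the hyperprior point (last bullet above): a minimal sketch of what this could look like with the GPflow priors module; the specific Gamma shapes and scales are placeholder assumptions, not a tested recommendation.

    import gpflow
    import numpy as np

    # Hypothetical initial design; in the tests X, Y come from the initial data.
    X = np.random.rand(10, 2)
    Y = np.random.rand(10, 1)

    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    # Hyperpriors keep the variances and lengthscales away from the degenerate
    # regions where the Cholesky decomposition tends to fail during optimize().
    model.kern.variance.prior = gpflow.priors.Gamma(2.0, 1.0)
    model.kern.lengthscales.prior = gpflow.priors.Gamma(3.0, 1.0)
    model.likelihood.variance.prior = gpflow.priors.Gamma(1.0, 0.1)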

    help wanted 
    opened by javdrher 14
  • Question regarding getting started example from documentation

    Question regarding getting started example from documentation

    Good morning guys,

    I am currently trying to get GPflowOpt up and running for an optimization problem. Naturally, I first tried the example you provided in the documentation. Now, while I do get the same result as in the documentation, I am a bit puzzled about the function evaluations the optimizer is choosing. To be more specific, the optimizer always chooses to evaluate the function at the point [0.0, 0.5] in all 15 iterations. I am probably overlooking something, as this does not seem to be the desired behavior, right? The optimizer does not seem to be really optimizing. Can anyone point out the mistake I made during the setup of the problem? I am pretty sure I followed the instructions of the example in the documentation to the letter.

    This is the code that I am running:

    import numpy as np
    from gpflowopt.domain import ContinuousParameter
    import gpflow
    from gpflowopt.bo import BayesianOptimizer
    from gpflowopt.design import LatinHyperCube
    from gpflowopt.acquisition import ExpectedImprovement
    #from gpflowopt.optim import SciPyOptimizer
    
    def fx(X):
        X = np.atleast_2d(X)
        result = np.sum(np.square(X), axis=1)[:, None]
        print("X: {}".format(X))
        print("fx: {}".format(result))
        return result
    
    
    domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
    
    # Use standard Gaussian process Regression
    lhd = LatinHyperCube(21, domain)
    X = lhd.generate()
    Y = fx(X)
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)
    
    # Now create the Bayesian Optimizer
    alpha = ExpectedImprovement(model)
    optimizer = BayesianOptimizer(domain, alpha)
    
    # Run the Bayesian optimization
    #with optimizer.silent():
    r = optimizer.optimize(fx, n_iter=15)
    print(r)
    

    And this is the output I am seeing:

    python try_out_gpflowopt.py 
    2017-12-11 09:28:09.790044: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    X: [[ 0.2   0.65]
     [ 0.   -0.1 ]
     [-0.8   0.5 ]
     [ 0.4   1.4 ]
     [-0.6   1.25]
     [ 1.    0.05]
     [ 1.2   0.8 ]
     [-1.   -0.25]
     [ 0.8  -0.7 ]
     [ 1.4   1.55]
     [-1.6   1.1 ]
     [-1.8   0.35]
     [-0.2  -0.85]
     [ 1.8   0.2 ]
     [-0.4   1.85]
     [ 0.6   2.  ]
     [ 2.    0.95]
     [ 1.6  -0.55]
     [-1.4   1.7 ]
     [-1.2  -1.  ]
     [-2.   -0.4 ]]
    fx: [[ 0.4625]
     [ 0.01  ]
     [ 0.89  ]
     [ 2.12  ]
     [ 1.9225]
     [ 1.0025]
     [ 2.08  ]
     [ 1.0625]
     [ 1.13  ]
     [ 4.3625]
     [ 3.77  ]
     [ 3.3625]
     [ 0.7625]
     [ 3.28  ]
     [ 3.5825]
     [ 4.36  ]
     [ 4.9025]
     [ 2.8625]
     [ 4.85  ]
     [ 2.44  ]
     [ 4.16  ]]
    Warning: optimization restart 4/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ -3.76012797e-10   4.99988509e-01]]
    fx: [[ 0.24998851]]
    Warning: optimization restart 1/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 4/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 3/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
         fun: array([ 0.01])
     message: 'OK'
        nfev: 15
     success: True
           x: array([[ 0. , -0.1]])
    

    For the sake of completeness, these are the steps I took to set up GPflowOpt:

    1. Created new conda environment based on Python 3.5
    2. Cloned the gpflowopt repo
    3. Ran pip install . --process-dependency-links

    By the way, this is the first issue I've ever created on GitHub, so please forgive me if I am violating any conventions, and please let me know if I left out crucial information.

    opened by jbi35 9
  • Overflow warnings

    Overflow warnings

    During calls to optimize(), sometimes UserWarnings pop up:

    /home/javdrher/.virtualenvs/gpflowopt/lib/python3.5/site-packages/GPflow/transforms.py:129: RuntimeWarning: overflow encountered in exp
      result = np.log(1. + np.exp(x)) + self._lower

    Typically this is quite harmless; if it's really causing trouble, it's usually followed by a Cholesky decomposition exception. However, those warnings mess up the output, specifically in the documentation notebooks. I was thinking of silencing the warnings; any reason not to?
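
    For reference, a minimal sketch of how only this specific warning could be silenced with the standard warnings module (scoping the filter to the message above is an assumption about what we would want, not a decision):

    import warnings

    # Ignore only the overflow RuntimeWarning raised inside the softplus transform,
    # instead of suppressing all warnings globally.
    warnings.filterwarnings(
        "ignore",
        message="overflow encountered in exp",
        category=RuntimeWarning,
    )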

    question 
    opened by javdrher 9
  • Bug fix in pareto.py, Pareto::divide_conquer_nd

    Bug fix in pareto.py, Pareto::divide_conquer_nd

    The algorithm in Pareto::divide_conquer_nd fails when two points in the Pareto set have the same value in a certain dimension. An example is included in the modified pareto.py: the Pareto set d21 contains three points, two of which have a value of 2.0 in the first dimension. Ordering the three points in different ways results in different values of the hypervolume (28 and 32), both of which are wrong (it should be 29).

    The issue is in the dominance test associated with the _is_test_required method and pseudo_pf. The array pseudo_pf assigns different ranks to identical values. Therefore, by reordering the Pareto set in the test case, different pseudo Pareto sets are generated for the same Pareto set.

    I figured out two ways to fix this. One is to fix pseudo_pf by ranking the Pareto set such that identical values are assigned the same rank (e.g. using scipy.stats.rankdata). The other is to fix the dominance test by checking the actual Pareto set. The first one leads to more iterations in the algorithm, so I implemented the second approach in pareto.py with a minimal modification, although some simplifications of the code are possible.
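
    To illustrate the first (not implemented) option: a minimal sketch of how ties can receive the same rank per dimension with scipy.stats.rankdata, using a made-up Pareto set with a duplicated value in the first dimension.

    import numpy as np
    from scipy.stats import rankdata

    # Hypothetical Pareto front; two points share the value 2.0 in dimension 0.
    pf = np.array([[2.0, 4.0],
                   [2.0, 1.0],
                   [3.0, 2.0]])

    # 'dense' ranking assigns identical values the same rank in each dimension,
    # so reordering the points no longer changes the pseudo Pareto set.
    pseudo_pf = np.vstack([rankdata(pf[:, i], method="dense") for i in range(pf.shape[1])]).T - 1
    print(pseudo_pf)
    # [[0 2]
    #  [0 0]
    #  [1 1]]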

    opened by smanist 5
  • When installing GPflowOpt it is downgrading GPflow from 1.2 to 0.4

    When installing GPflowOpt it is downgrading GPflow from 1.2 to 0.4

    I have installed the latest GPflow version (1.2) using pip. Later, when I tried to install GPflowOpt using the following command:

    pip install git+https://github.com/GPflow/GPflowOpt.git

    it downgrades GPflow from 1.2 to 0.4.

    opened by pullanagari 5
  • Is GPflowOpt still compatible with GPflow functions?

    Is GPflowOpt still compatible with GPflow functions?

    I just downloaded GPflowOpt, yet nothing can run due to slight changes made in GPflow. For example, there is now a 'core' subfolder and the AutoFlow function seems to have changed. I have tried to apply some changes on my own (it is not hard to change 'from gpflow.param import DataHolder, Autoflow' to two separate imports from their correct folders, params and core), but note that @Autoflow calls now need to be @gpflow.autoflow. I spent several hours changing this for pretty much every function/class.

    Yet now it seems certain classes have also changed significantly. 'Parameterized' no longer has the attribute 'highest_parent', something needed for SciPyOptimizer.

    At this point, not a single thing can be called from GPflowOpt without error.
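
    For concreteness, the kind of change described above looks roughly like this (a sketch based only on the imports mentioned in this issue; the exact new module paths are an assumption and may differ in your GPflow version):

    # Old import used inside GPflowOpt (GPflow 0.x):
    # from gpflow.param import DataHolder, AutoFlow

    # Split imports after the package reorganisation, following the folders named
    # above (params and core); these paths are assumed, not verified here:
    from gpflow.params import DataHolder
    from gpflow.core import AutoFlow

    # ...and decorator usages change from @AutoFlow(...) to @gpflow.autoflow(...).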

    opened by grahamski2323 5
  • Max-Value Entropy Search

    Max-Value Entropy Search

    I implemented the recent acquisition function Max-Value Entropy Search from:

    Wang, Z. & Jegelka, S. (2017). Max-value Entropy Search for Efficient Bayesian Optimization. Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3627-3635.

    We named it Min-Value Entropy Search because the GPflow framework seeks the minimum of the function. There is a notebook evaluating the method on the Shekel function and it seems to perform well.

    opened by nknudde 5
  • MGP

    MGP

    In this pull request I implemented the approximately marginalised GP, as described in issue #39. It currently supports multi-output GPs. A notebook and some tests are included.

    opened by nknudde 5
  • GPR works, VGP doesn't

    GPR works, VGP doesn't

    To improve the speed of optimisation, I replaced GPR with VGP as follows:

    domain = np.sum([GPflowOpt.domain.ContinuousParameter(f'mux{i}', mm[i], mx[i]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'muy{i}', mm[i+7], mx[i+7]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmax{i}', 1e-7, 1.) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmay{i}', 1e-7, 1.) for i in range(7)])
    domain += GPflowOpt.domain.ContinuousParameter('offset', endo * 0.7, endo * 1.3)
    design = GPflowOpt.design.RandomDesign(500, domain)
    X = design.generate()
    Y = np.vstack([obj(x.reshape(1, -1)) for x in X])
    model = GPflow.vgp.VGP(X, Y, GPflow.kernels.RBF(29, lengthscales=X.std(axis=0)), likelihood=GPflow.likelihoods.Gaussian())
    acquisition = GPflowOpt.acquisition.ExpectedImprovement(model)
    opt = GPflowOpt.optim.StagedOptimizer([GPflowOpt.optim.MCOptimizer(domain, 500), GPflowOpt.optim.SciPyOptimizer(domain)])
    optimizer = GPflowOpt.BayesianOptimizer(domain, acquisition, optimizer=opt)
    optimizer.optimize(obj, n_iter=500)

    GPR works, but with VGP I receive the following error:

    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    2017-07-18 23:03:28.798171: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [501,1] vs. [500,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    Warning: optimization restart 1/5 failed
    2017-07-18 23:03:28.898935: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.898992: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899066: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899289: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    Warning: optimization restart 2/5 failed

    I'm using master GPflow and GPflowOpt on TensorFlow 1.2 and Python 3.6.

    Thanks.

    bug 
    opened by mccajm 5
  • Avoid (some) duplicate optimizes

    Avoid (some) duplicate optimizes

    Following #52, here is some code which gets rid of one optimize call (in case no initial design is specified). The PR also includes a context which can be used to suspend all optimizes. This is mostly useful for the lower-level API.

    Note: in the following release the data mechanism will undergo some changes, and enabling the scaling should be moved to avoid another scaling.

    enhancement do not merge yet 
    opened by javdrher 4
  • Issue: Install package gpflowopt

    Issue: Install package gpflowopt

    Hello, can you help me solve this issue?

    ERROR: Could not find a version that satisfies the requirement GPflow==0.5.0 (from gpflowopt) (from versions: 1.4.1.linux-x86_64, 1.5.0.linux-x86_64, 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.1, 1.5.0, 1.5.1, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.2.0, 2.2.1, 2.3.0, 2.3.1, 2.4.0, 2.5.1, 2.5.2)
    ERROR: No matching distribution found for GPflow==0.5.0

    I tried to install it in Google Colab and Spyder, but it is not working.

    opened by SamirLamin 2
  • Coupled or decoupled constrained BO?

    Coupled or decoupled constrained BO?

    I am very new to the BO domain, am working on constrained BO, and found this wonderful tool that has a constrained BO application. As I understand from the code, the acquisition functions for the objective and the constraint are multiplied. I have a question about that: is this constrained BO then considered coupled?
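
    For context, the multiplication referred to here looks like the following sketch (the data and models are hypothetical placeholders; ExpectedImprovement, ProbabilityOfFeasibility and the * composition come from gpflowopt.acquisition):

    import numpy as np
    import gpflow
    from gpflowopt.acquisition import ExpectedImprovement, ProbabilityOfFeasibility

    # Hypothetical observations: objective values and constraint values (<= 0 is feasible).
    X = np.random.rand(20, 2)
    f_values = np.sum(np.square(X), axis=1, keepdims=True)
    c_values = X[:, [0]] - 0.5

    objective_model = gpflow.gpr.GPR(X, f_values, gpflow.kernels.Matern52(2, ARD=True))
    constraint_model = gpflow.gpr.GPR(X, c_values, gpflow.kernels.Matern52(2, ARD=True))

    # EI on the objective, weighted by the probability that the constraint is satisfied;
    # multiplying the two acquisitions couples them on the same candidate input.
    ei = ExpectedImprovement(objective_model)
    pof = ProbabilityOfFeasibility(constraint_model)
    joint = ei * pof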

    opened by pallavimitra 0
  • Ask/tell interface

    Ask/tell interface

    Is there an ask/tell interface? If not, is there a workaround?

    I'd like to initialize the optimizer with already known function evaluations. Likewise, I'd like to query the next data point that should be evaluated, according to the acquisition function.

    opened by moi90 2
  • Discrete variables optimization

    Discrete variables optimization

    Hey there, I found your project, which seems promising for a problem I am currently working on. However, as far as I can see, the option to include discrete variables in the optimization is not yet implemented? Is this correct? And if so, is there any development in this direction currently going on?

    opened by HolmKiilerich 2
  • Can I use PyTorch within the objective function?

    Can I use PyTorch within the objective function?

    Hi, before I start writing my code with GPflowOpt, I need to check with you whether I can use a PyTorch model within the objective function. My objective function needs the decoder of a VAE defined using PyTorch. In addition to other essential parameters, I want to pass this model as one parameter to the objective function. I understand that GPflowOpt is based on TF, but I am not sure whether the objective can be any function independent of TF. Thanks in advance...

    opened by yifeng-li 0
Releases(v0.1.0)
  • v0.1.0(Sep 11, 2017)

    Initial version of the GPflowOpt framework, including some basic acquisition functions and support for standard Bayesian Optimization strategies.
