library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained, optimization

Overview


NLopt is a library for nonlinear local and global optimization, for functions with and without gradient information. It is designed as a simple, unified interface and packaging of several free/open-source nonlinear optimization libraries.

The latest release can be downloaded from the NLopt releases page on Github, and the NLopt manual is hosted on readthedocs.

NLopt is compiled and installed with the CMake build system (see CMakeLists.txt file for available options):

git clone https://github.com/stevengj/nlopt
cd nlopt
mkdir build
cd build
cmake ..
make
sudo make install

(To build the latest development sources from git, you will need SWIG to generate the Python and Guile bindings.)

Once it is installed, #include <nlopt.h> in your C/C++ programs and link it with -lnlopt -lm. You may need to use a C++ compiler to link in order to include the C++ libraries (which are used internally by NLopt, even though it exports a C API). See the C reference manual.

There are also interfaces for C++, Fortran, Python, Matlab or GNU Octave, OCaml, GNU Guile, GNU R, Lua, Rust, and Julia. Interfaces for other languages may be added in the future.

Comments
  • nlopt compilation failed at "make" step on AIX7.2

    nlopt compilation failed at "make" step on AIX7.2

    I am trying to compile nlopt on AIX 7.2. The first "cmake" step finished successfully. However, the second "make" step failed with ERROR: Undefined symbol: __tls_get_addr. Can you help me figure out the issue? Thanks.

    [ 66%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/global.cc.o
    [ 68%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/linalg.cc.o
    [ 70%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/local.cc.o
    [ 71%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/stogo.cc.o
    [ 73%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/tools.cc.o
    [ 75%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/evolvent.cc.o
    [ 76%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/solver.cc.o
    [ 78%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/local_optimizer.cc.o
    [ 80%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/ags.cc.o
    [ 81%] Linking CXX shared library libnlopt
    ld: 0711-317 ERROR: Undefined symbol: __tls_get_addr
    ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
    collect2: error: ld returned 8 exit status
    make[2]: *** [CMakeFiles/nlopt.dir/build.make:767: libnlopt.a] Error 1
    make[2]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make[1]: *** [CMakeFiles/Makefile2:179: CMakeFiles/nlopt.dir/all] Error 2
    make[1]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make: *** [Makefile:163: all] Error 2
    
    opened by bergen288 29
  • Prefer target_include_directories in CMake build script

    Prefer target_include_directories in CMake build script

    This PR facilitates the inclusion of nlopt into other CMake projects.

    • The "private" include directories are defined on a per-target basis via target_include_directories instead of include_directories. This has the advantage that it doesn't "pollute" parent projects.
    • Public include directories are set via the INTERFACE argument of target_include_directories. To keep it simple, the original header nlopt.h is simply copied into the ${PROJECT_BINARY_DIR}/api folder, which is used as the interface include directory for nlopt. Unfortunately, absolute paths cannot be given as interface include directories for installed targets, hence the need for the trick with $<BUILD_INTERFACE:...>. However, this generator expression was introduced in CMake 3.0, so the minimum required version is bumped to 3.0, though maybe you don't want that?
    • Similarly, use target_compile_definitions instead of a global add_definitions.
    • I couldn't make per-target include directories work with SWIG, so there is still an include_directories call appearing in swig/CMakeLists.txt. This is not ideal, but if you have any idea how to improve this, I'm all ears.

    Now, building nlopt as part of other projects is as simple as

    add_subdirectory(ext/nlopt)
    target_link_libraries(my_program nlopt)
    
    opened by jdumas 15
  • Implement C++ style functors as targets for objectives

    Implement C++ style functors as targets for objectives

    This PR implements a wrapper nlopt::functor_wrapper for C++ style functors via std::function, and two new overloads of nlopt::set_min_objective, nlopt::set_max_objective.

    In order to allow that, a new member field in the myfunc_data struct is added: functor_type functor;, where functor_type is defined as

    typedef std::function<double(unsigned, const double*, double*)> functor_type;
    

    This is not introduced as a pointer (like the other function-pointers are) because std::function is already a container that stores a pointer, and abstracts it away.

    Important: note that the signature for the functor does not include void* data unlike all other function-pointers. That is because it is assumed that the functor already has all the data it needs.
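The motivation — state travels with the callable instead of through a void* side channel — is the same pattern the Python binding already enjoys, where any callable object can be the objective. A stdlib-only Python analogue (class and values are illustrative, not part of either API):

```python
# A callable object that owns its data, analogous to the C++ functor:
# no separate data pointer is needed because state lives on the instance.
class QuadraticObjective:
    def __init__(self, center):
        self.center = center  # the "ImportantData" of this objective

    def __call__(self, n, x, grad):
        # Fill the gradient in place when one is requested.
        if grad is not None:
            for i in range(n):
                grad[i] = 2.0 * (x[i] - self.center[i])
        return sum((x[i] - self.center[i]) ** 2 for i in range(n))

obj = QuadraticObjective([1.0, 2.0])
val = obj(2, [0.0, 0.0], None)  # evaluate without a gradient: 1 + 4 = 5.0
```

In C++, `std::function<double(unsigned, const double*, double*)>` plays the role that duck-typed callables play here.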

    This PR now allows writing the following:

    class UserDefinedObjective {
      private:
        ImportantData data;
      public:
        UserDefinedObjective() = delete;
        UserDefinedObjective(ImportantData data) :
          data(std::move(data)) {}
        double operator()(unsigned n, const double* x, double* grad) const
        {
          // compute objective(x) and ∇objective(x) using this->data
        }
    };
    
    int main()
    {
      ImportantData data;
      UserDefinedObjective objective(std::move(data));
    
      nlopt::opt optimizer;
      // other nlopt settings
      optimizer.set_max_objective(std::move(objective));
    
      optimizer.optimize(...);
    
      return 0;
    }
    

    Same with C++ lambdas, regular functions and even class member functions (check out std::function).

    This PR also introduces a CMake macro NLOPT_add_cpp_test to quickly add cpp tests, and creates a test cpp_functor.cxx to actually test the new functionality.

    Closes #219 .

    opened by dburov190 14
  • Added example of automatic tests

    Added example of automatic tests

    • Added new_test target to generate test executable
    • Updated CMakeLists.txt to add test as subfolder
    • Added two tests using the executable new_test
    • Now you can run the test suite by typing either "make test" or "ctest" in the build directory
    opened by boris-il-forte 14
  • C++11 idiom

    C++11 idiom

    I found nlopt a great library, but using it through the C++ interface is really frustrating. You need to provide a void* for passing data to the objective/constraint functions, with all the problems that may cause.

    Also, you need to pass a function pointer, so you can't use lambda functions with captures (which would avoid passing the void*).

    I've written a thin wrapper on top of NLopt which tries to offer a more modern C++ API, enabling the use of lambdas and hiding the use of void* from the API.

    Would you be interested in merging this?

    opened by jjcasmar 13
  • DIRECT takes impossibly long to reach xtol

    DIRECT takes impossibly long to reach xtol

    Unless I'm mistaken, the XTOL stopping criterion for DIRECT (the cdirect version) can't be used when searching large multi-dimensional spaces, because it requires all hyper-rectangles everywhere to be divided down to below the x-tolerances before stopping. This will take an impossibly long time.

    Wouldn't it make more sense to stop as soon as one (or a few) of the rectangles is small? This could be done by inverting some of the logic for the xtol_reached variable within the cdirect.c function divide_good_rects().
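The proposed inversion amounts to replacing an all-rectangles test with an any-rectangle test. In pseudo-Python (illustrative names, not the actual cdirect.c variables), the difference is one quantifier:

```python
def xtol_reached_all(rect_widths, xtol):
    # Current behavior as described above: every hyper-rectangle must
    # shrink below the x-tolerance in every dimension before stopping.
    return all(all(w < t for w, t in zip(rect, xtol)) for rect in rect_widths)

def xtol_reached_any(rect_widths, xtol):
    # Proposed behavior: stop as soon as one rectangle is small enough.
    return any(all(w < t for w, t in zip(rect, xtol)) for rect in rect_widths)

rects = [[0.5, 0.5], [1e-9, 1e-9]]  # one rectangle already tiny
xtol = [1e-6, 1e-6]
```

With one tiny rectangle among many large ones, the all-version keeps dividing while the any-version stops, which is the behavioral change being requested.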

    I can attach some test code here or work towards a pull request if that would be helpful.

    Cheers, Joel

    opened by jcottrell-ellex 11
  • Website & download URLs down

    Website & download URLs down

    I get the following while trying to install:

    configure: Need to download and build NLopt
    trying URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Warning in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  :
      unable to connect to 'ab-initio.mit.edu' on port 80.
    Error in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  : 
      cannot open URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Execution halted
    /bin/tar: This does not look like a tar archive
    
    gzip: stdin: unexpected end of file
    /bin/tar: Child returned status 1
    /bin/tar: Error is not recoverable: exiting now
    

    The website, http://ab-initio.mit.edu/nlopt/, also does not display in my browser (ERR_CONNECTION_TIMED_OUT)

    opened by IljaKroonen 11
  • Prevent a conditional jump based on uninitialized value in nlopt_create.

    Prevent a conditional jump based on uninitialized value in nlopt_create.

    nlopt_set_lower_bounds1 reads from opt->ub before it has ever been written.

    We caught this in our nightly memcheck CI build for RobotLocomotion/drake: https://drake-cdash.csail.mit.edu/viewDynamicAnalysisFile.php?id=14355

    Contributes to RobotLocomotion/drake#3873


    This change is Reviewable

    opened by david-german-tri 11
  • New release

    New release

    I'm running into issue https://github.com/stevengj/nlopt/issues/33. Could you make a new release that includes that fix, please? It would fix dozens of R packages on NixOS.

    Looks like the last one was in 2014!

    opened by langston-barrett 11
  • Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015

    Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015

    • Create missing nlopt.hpp and nlopt.f when building with CMake
    • Use CMake's GNUInstallDirs (e.g., ${CMAKE_INSTALL_LIBDIR} instead of lib) depending on platform
    • Fix for the MSVC 2015 compiler

    opened by rickertm 11
  • Running tests with CTest is broken

    Running tests with CTest is broken

    If I check out NLopt, build it, and try to run the tests with ctest, it fails horribly because testopt was not built.

    The way this works is a non-standard workflow and prevents running the tests e.g. when NLopt is built as a CMake external project.

    Please, either just build testopt by default (i.e. remove EXCLUDE_FROM_ALL), or else add an option (e.g. NLOPT_ENABLE_TESTS) that controls whether testopt is built by default and whether any add_test are invoked.

    opened by mwoehlke-kitware 10
  • undefined reference to `nlopt_get_errmsg'

    undefined reference to `nlopt_get_errmsg'

    Hi,

    I'm getting the following linker error message :

    in function `nlopt::opt::get_errmsg() const':
    Hamiltonian.cpp:(.text._ZNK5nlopt3opt10get_errmsgEv[_ZNK5nlopt3opt10get_errmsgEv]+0x5b): undefined reference to `nlopt_get_errmsg'
    collect2: error: ld returned 1 exit status

    when trying to compile code that calls nlopt.

    I compiled nlopt 2.7.1 using the make install command and got:

    Install the project...
    -- Install configuration: "Release"
    -- Installing: /usr/local/lib/pkgconfig/nlopt.pc
    -- Installing: /usr/local/include/nlopt.h
    -- Installing: /usr/local/include/nlopt.hpp
    -- Installing: /usr/local/include/nlopt.f
    -- Installing: /usr/local/lib/libnlopt.so.0.11.1
    -- Installing: /usr/local/lib/libnlopt.so.0
    -- Set runtime path of "/usr/local/lib/libnlopt.so.0.11.1" to "/usr/local/lib"
    -- Installing: /usr/local/lib/libnlopt.so
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends-release.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfig.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfigVersion.cmake
    -- Installing: /usr/local/share/man/man3/nlopt.3
    -- Installing: /usr/local/share/man/man3/nlopt_minimize.3
    
    

    and my CMakeLists.txt looks like

    cmake_minimum_required(VERSION 3.13.4)
    project(myProject)
    
    set(CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
    set(CMAKE_CXX_STANDARD 20)
    
    add_executable(myProject main.cpp)
    target_link_libraries(myProject PUBLIC nlopt ${CPLEX_LIBRARIES} ${CMAKE_DL_LIBS})
    

    I found that someone already had a similar issue, but the proposed fix does not seem to apply to my settings.

    opened by Griset 0
  • Reduce number of gradient calculations in LD-MMA

    Reduce number of gradient calculations in LD-MMA

    The Svanberg MMA paper notes that for the CCSA algorithms described, gradients are only required in the outer iterations. "Each new inner iteration requires function values, but no derivatives."

    However, it appears that the implementation of LD-MMA calculates a gradient in the inner as well as the outer iteration. I request that the implementation be updated to reduce gradient calculation.

    I believe this is a two-line change: This line could be changed to something like fcur = f(n, xcur, NULL, f_data);, and then after line 299 in the same file one could add the code if (inner_done) { fcur = f(n, xcur, dfdx_cur, f_data); }.

    This would duplicate objective calls once per outer iteration, but since gradient calculations tend to dominate run-time in objective function calls, there should be overall net savings whenever more than one inner iteration is used.
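The expected saving can be sketched with counters on function and gradient evaluations (a stdlib-only illustration of the proposed control flow, not the actual mma.c code; names are hypothetical):

```python
calls = {"f": 0, "grad": 0}

def f(x, grad):
    # Toy objective that tallies how often it is called, and how often
    # a (pretend-expensive) gradient is requested.
    calls["f"] += 1
    if grad is not None:
        calls["grad"] += 1
        grad[0] = 2.0 * x[0]
    return x[0] ** 2

def outer_iteration_proposed(x, n_inner):
    g = [0.0]
    f(x, g)          # start of outer iteration: value plus gradient
    for _ in range(n_inner):
        f(x, None)   # inner iterations: function value only
    f(x, g)          # inner loop done: re-evaluate once with gradient

outer_iteration_proposed([3.0], n_inner=4)
# Gradients: 2 per outer iteration here, versus 1 + n_inner = 5 if the
# gradient were computed in every inner iteration as well.
```

The extra value-only call at the end is the "duplicate objective call once per outer iteration" mentioned above, which is cheap whenever gradients dominate the cost.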

    I regret not being able to try this out myself. I don't have C set up on my machine and have never coded in C, so I would be extremely slow at running tests (I'm using the Python API). Thanks for considering!

    opened by cpixton 0
  • Simple Academic Use Case with unexpected MMA behavior.

    Simple Academic Use Case with unexpected MMA behavior.

    Hello, doing some tests with multiple starting guesses and multiple optimization algorithms, I found a case in which your solver behaves unexpectedly. Luckily this is a simple analytical example that is easily reproducible.

    import nlopt
    from numpy import array

    def f(x, grad):
        if grad.size > 0:
            grad[0] = 2. * (x[0] - 1.)
            grad[1] = 2. * (x[1] - 1.)
        return (x[0] - 1.) ** 2. + (x[1] - 1.) ** 2.

    def g(x, grad):
        if grad.size > 0:
            grad[0] = 1.
            grad[1] = 1.
        return x[0] + x[1] - 1.

    if __name__ == "__main__":
        algorithm = nlopt.LD_MMA
        n = 2
        opt = nlopt.opt(algorithm, n)
        lb = array([0., 0.])
        ub = array([1., 1.])
        x0 = array([0.25, 1])
        opt.set_min_objective(f)
        opt.set_lower_bounds(lb)
        opt.set_upper_bounds(ub)
        opt.add_inequality_constraint(g, 1e-3)
        tol = 1e-6
        maxeval = 50
        opt.set_ftol_rel(tol)
        opt.set_ftol_abs(tol)
        opt.set_xtol_rel(tol)
        opt.set_xtol_rel(tol)
        opt.set_maxeval(maxeval)
        opt.set_param("verbosity", 10000)
        opt.set_param("inner_maxeval", 10)
        xopt = opt.optimize(x0)
        print(xopt)
        opt_val = opt.last_optimum_value()
        print(opt_val)
        result = opt.last_optimize_result()
        print(result)

    The solution to this problem is simply [0.5, 0.5], correctly found by LD_MMA from most initial guesses, but not from the starting guess [0.25, 1].

    In this case the log looks like that:

    MMA dual converged in 6 iterations to g=0.914369:
        MMA y[0]=1e+40, gc[0]=0.116025
    MMA outer iteration: rho -> 0.1
        MMA rhoc[0] -> 0.1
    MMA dual converged in 3 iterations to g=1.34431:
        MMA y[0]=1e+40, gc[0]=-0.269712
    MMA outer iteration: rho -> 0.01
        MMA rhoc[0] -> 0.01
        MMA sigma[0] -> 0.6
        MMA sigma[1] -> 0.6
    MMA dual converged in 3 iterations to g=2.23837:
        MMA y[0]=1e+40, gc[0]=-0.378669
    MMA outer iteration: rho -> 0.001
        MMA rhoc[0] -> 0.001
        MMA sigma[0] -> 0.6
        MMA sigma[1] -> 0.72
    MMA dual converged in 3 iterations to g=2.79213:
        MMA y[0]=1e+40, gc[0]=-0.454075
    MMA outer iteration: rho -> 0.0001
        MMA rhoc[0] -> 0.0001
        MMA sigma[0] -> 0.6
        MMA sigma[1] -> 0.864
    MMA dual converged in 3 iterations to g=3.13745:
        MMA y[0]=1e+40, gc[0]=-0.524222
    MMA outer iteration: rho -> 1e-05
        MMA rhoc[0] -> 1e-05
        MMA sigma[0] -> 0.6
        MMA sigma[1] -> 1.0368
    MMA dual converged in 3 iterations to g=2.46249:
        MMA y[0]=1e+40, gc[0]=-0.587037
    MMA outer iteration: rho -> 1e-05
        MMA rhoc[0] -> 1e-05
        MMA sigma[0] -> 0.6
        MMA sigma[1] -> 1.24416
    MMA dual converged in 3 iterations to g=1.81718:
        MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.0001
        MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81722:
        MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.001
        MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81766:
        MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.01
        MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.82206:
        MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.1
        MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.86611:
        MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.410949
        MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=2.01828:
        MMA y[0]=1e+40, gc[0]=-0.625775
    [0.1160254 0.8660254]
    0.7993602791855875
    3

    Your solver stops at the design point [0.1160254 0.8660254], which is neither a local minimum, nor a saddle point of the objective, nor a KKT point. I would like to have your insight on this behavior. BRs Simone Coniglio
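A quick stdlib-only arithmetic check confirms the reported point is feasible for the constraint but well above the constrained optimum at [0.5, 0.5]:

```python
def f(x):
    # Objective from the repro above.
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):
    # Inequality constraint from the repro, enforced as g(x) <= 0.
    return x[0] + x[1] - 1.0

x_mma = [0.1160254, 0.8660254]  # point reported in the log above
x_opt = [0.5, 0.5]              # known constrained minimizer

print(f(x_mma), g(x_mma))  # ~0.79936, slightly negative (feasible)
print(f(x_opt), g(x_opt))  # 0.5, 0.0 (constraint active at the optimum)
```

So the returned objective value 0.79936 is consistent with the log, but it is strictly worse than the 0.5 achievable on the constraint boundary, supporting the report of premature convergence.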

    opened by SimoneConiglio 0
  • How to set discrete values for NLopt

    How to set discrete values for NLopt

    Hi, I have an optimization problem to solve. I have hundreds of input variables, and the values of these variables can only be 1 or 0.
    How can I tell the NLopt package this? I tried it with an equality constraint, but it does not work. Here is my little code in R:

    opt_test<-function(boundary){
      
      target<-rep(c(0,1),100)
      sum_of_square<-0
      for (i in 1:length(boundary)){
        sum_of_square<-sum_of_square+sum((boundary[i]-target[i])^2)
      }
      #print(boundary)
      #print(sum_of_square)
      return(sum_of_square)
    }
    
    opt_test(rep(c(1,0),100))
    
    eval_g_eq_test<-function(x) {
      
      ret<-1
      for (i in 1:length(x)){
        if (x[i]==1) {ret<-0}
        if (x[i]==0) {ret<-0}
      }
      return(ret)
    }
    
    opts <- list("algorithm"="NLOPT_GN_ISRES"   # NLOPT_GN_ORIG_DIRECT NLOPT_GNL_DIRECT_NOSCAL, NLOPT_GN_DIRECT_L_NOSCAL, and NLOPT_GN_DIRECT_L_RAND_NOSCAL NLOPT_GD_STOGO, or NLOPT_GD_STOGO_RAND
                 #geht gut: NLOPT_LN_PRAXIS   NLOPT_LN_COBYLA  
                 #NLOPT_LN_NEWUOA !!!!! +bound
                 #NLOPT_LN_BOBYQA   !si only
                 # nloptr.print.options()   all possible options
                 ,xtol_rel=1e-8
                 #stopval=as.numeric(stopval),
                 ,maxeval=2000
                 ,print_level=1
    )
    x0<-rep(0,200)
    lb<-rep(0,200)
    ub<-rep(1,200)
    
    jo<- nloptr(x0=x0
                ,eval_f=opt_test
                ,lb = lb
                ,ub = ub
                #,eval_g_eq=eval_g_eq_test()==0
                ,opts=opts
    )
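NLopt itself optimizes over continuous variables only, so there is no direct way to declare binary variables. One common workaround, sketched below in stdlib Python (the same idea ports to the R objective above; the weight is an illustrative choice), is to add a penalty term that vanishes exactly at 0/1 values and then round the continuous result:

```python
def binary_penalty(x, weight=10.0):
    # x_i * (1 - x_i) is zero iff x_i is exactly 0 or 1, and positive
    # for any x_i strictly inside the bounds [0, 1].
    return weight * sum(xi * (1.0 - xi) for xi in x)

def penalized_objective(x, target):
    # Original sum-of-squares objective plus the binary-pushing penalty.
    sum_of_squares = sum((xi - ti) ** 2 for xi, ti in zip(x, target))
    return sum_of_squares + binary_penalty(x)

target = [0.0, 1.0] * 3
print(penalized_objective([0.0, 1.0] * 3, target))  # 0.0: binary and on target
print(penalized_objective([0.5] * 6, target))       # interior point is penalized
```

This keeps the problem solvable by continuous algorithms (with lb=0, ub=1); for a proper treatment of discrete decisions, a mixed-integer solver is the more appropriate tool.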
    
    opened by Axyxo 0
  • Error installing `nloptr` from source on CentOS cluster

    Error installing `nloptr` from source on CentOS cluster

    I am trying to install nloptr from source in R version 4.1.3 on a cluster (CentOS). However I receive the following error:

    /cvmfs/argon.hpc.uiowa.edu/2022.1/prefix/usr/lib/gcc/x86_64-pc-linux-gnu/9.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lnlopt
    collect2: error: ld returned 1 exit status
    make: *** [/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/r-4.1.3-ljofaul/rlib/R/share/make/shlib.mk:10: nloptr.so] Error 1
    ERROR: compilation failed for package ‘nloptr’
    

    I am on a university cluster and cannot run sudo commands. After encouragement from @eddelbuettel to contact my sys admin, he figured out the issue. Here's what he wrote:


    The build environment for nloptr uses pkg-config to get information about nlopt. It turns out that the pkg-config file has an error. It has

    libdir=${exec_prefix}/lib

    but the library is actually located in

    libdir=${exec_prefix}/lib64

    That does not show up in the packaging environment because LIBRARY_PATH is set for the dependency chain. I will need to fix the pkg-config file in the package recipe, but you can work around it as follows:

    1. load environment modules:
    module load stack/2022.1
    module load nlopt
    
    2. set LIBRARY_PATH so linker can find library while launching R session (single line below):
    LIBRARY_PATH=$ROOT_NLOPT/lib64:$LIBRARY_PATH R
    
    3. install nloptr in the R console (single line below):
    install.packages(verbose=1,'nloptr')
    

    I originally posted this issue in the nloptr repo: https://github.com/astamm/nloptr/issues/123. However, @eddelbuettel encouraged me to post an issue here because we suspect that the issue may be the pkg-config file created by nlopt.

    Here's the output of some of my commands in CentOS:

    [[email protected] ~]$ module load stack/2022.1
    
    The following have been reloaded with a version change:
      1) stack/2020.1 => stack/2022.1
    
    [[email protected] ~]$ module load r/4.1.3_gcc-9.4.0
    [[email protected] ~]$ module load nlopt
    [[email protected] ~]$ R CMD config --all | grep lib64
    LIBnn = lib64
    [[email protected] ~]$ pkg-config --libs nlopt
    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib -lnlopt
    

    We think we would want that to be (https://github.com/astamm/nloptr/issues/123#issuecomment-1317199965):

    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib64 -lnlopt
    

    That is, /lib64 in my case instead of /lib. @eddelbuettel, please clarify if I missed anything or got anything wrong!

    opened by isaactpetersen 4
  • Nonlinear constraints get violated in the result

    Nonlinear constraints get violated in the result

    Hi there: During my optimization I applied LN_COBYLA, since it supports arbitrary nonlinear constraints. My "constraint function" is actually a collision-avoidance function. The function returns 10.0 when a collision occurs, so that the constraint should be viewed as unsatisfied. However, the results in my experiment show the constraint is not satisfied, and the algorithm still finishes and converges. Can anyone give me some advice? Thanks sincerely!

    opened by Lbaron980810 2
Releases(v2.7.1)
Owner
Steven G. Johnson