LassoBench

LassoBench is a benchmark library for high-dimensional hyperparameter optimization (HPO) of black-box models based on Weighted Lasso regression.

Note: LassoBench is under active development; more benchmarks will follow soon.

Install and work with the development version

From a console or terminal clone the repository and install LassoBench:

::

    git clone https://github.com/ksehic/LassoBench.git
    cd LassoBench/
    pip install -e .
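A quick sanity check, assuming the editable install above completed without errors, is simply importing the package from Python:

# Minimal check that the editable install is importable
import LassoBench
print(LassoBench.__file__)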

Overview

The objective is to optimize a multi-dimensional hyperparameter vector that balances the least-squares fit against the penalty term that promotes sparsity.

The ambient search space is bounded to [-1, 1] in each dimension.
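For context, the weighted Lasso objective behind these benchmarks can be written as follows (our notation, not taken verbatim from the library; the mapping from the [-1, 1] ambient coordinates to the actual regularization scale is assumed to be handled internally):

\hat{\beta}(\lambda) \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{d}}
  \frac{1}{2n} \lVert y - X\beta \rVert_2^2
  + \sum_{j=1}^{d} e^{\lambda_j} \lvert \beta_j \rvert

where the d penalty weights \lambda = (\lambda_1, \dots, \lambda_d) are the hyperparameters to be optimized.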

LassoBench comes with two classes, SyntheticBenchmark and RealBenchmark. While RealBenchmark is based on real-world applications from medicine and finance, SyntheticBenchmark covers well-defined synthetic conditions. The user can select one of the predefined synthetic benchmarks or create a custom benchmark.

Each benchmark provides .evaluate to evaluate the objective function, .test to compute post-processing metrics (such as the MSE on test data and, for the synthetic benchmarks, the F-score), and the argument mf_opt to enable the multi-fidelity framework, which is evaluated via .fidelity_evaluate.
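A minimal sketch of the post-processing step, following the same pattern as the examples below; we assume .test accepts the same configuration vector as .evaluate and returns the test metrics described above:

import numpy as np
import LassoBench
# Hedged sketch (not verbatim from the library docs): .test is assumed to take
# the same configuration vector as .evaluate and return post-processing metrics
# such as the MSE on test data and, for synthetic benchmarks, the F-score.
synt_bench = LassoBench.SyntheticBenchmark(pick_bench='synt_simple')
random_config = np.random.uniform(low=-1.0, high=1.0, size=(synt_bench.n_features,))
loss = synt_bench.evaluate(random_config)
test_metrics = synt_bench.test(random_config)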

The results are compared with the baselines LassoCV (.run_LASSOCV), AdaptiveLassoCV (to be implemented soon) and Sparse-HO (.run_sparseho).
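As a hedged illustration of how the baselines might be invoked (the method names come from the description above; their arguments and return values are assumptions and may differ in practice):

import LassoBench
# Hedged sketch: .run_LASSOCV and .run_sparseho are named above; the return
# values shown here are assumptions, not the documented API.
synt_bench = LassoBench.SyntheticBenchmark(pick_bench='synt_simple')
lassocv_result = synt_bench.run_LASSOCV()    # cross-validated Lasso baseline
sparseho_result = synt_bench.run_sparseho()  # gradient-based Sparse-HO baseline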

Simple experiments are provided in example.py. In hesbo_example.py and alebo_example.py, we demonstrate how to use LassoBench with well-known HPO algorithms for high-dimensional problems.

Please refer to the reference below for more details.

.
├── ...
├── example                     # Examples of how to use LassoBench with HDBO algorithms
│   ├── alebo_example.py        # ALEBO applied to a synthetic bench
│   ├── example.py              # Simple examples of running synthetic, real, and multi-fidelity benchs
│   ├── hesbo_example.py        # HesBO applied to synthetic and real benchs
│   └── hesbo_lib.py            # HesBO library
│
└── ...

License

LassoBench is distributed under the MIT license. More information can be found in the LICENSE file.

Simple synthetic bench code

import numpy as np
import LassoBench
# pick one of the predefined synthetic benchmarks
synt_bench = LassoBench.SyntheticBenchmark(pick_bench='synt_simple')
d = synt_bench.n_features
# draw a random configuration from the ambient space [-1, 1]^d
random_config = np.random.uniform(low=-1.0, high=1.0, size=(d,))
loss = synt_bench.evaluate(random_config)

Real-world bench code

import numpy as np
import LassoBench
# load a real-world benchmark (here the rcv1 dataset)
real_bench = LassoBench.RealBenchmark(pick_data='rcv1')
d = real_bench.n_features
random_config = np.random.uniform(low=-1.0, high=1.0, size=(d,))
loss = real_bench.evaluate(random_config)

Multi-information source bench code

import numpy as np
import LassoBench
# enable the multi-fidelity framework via mf_opt
real_bench_mf = LassoBench.RealBenchmark(pick_data='rcv1', mf_opt='discrete_fidelity')
d = real_bench_mf.n_features
random_config = np.random.uniform(low=-1.0, high=1.0, size=(d,))
fidelity_pick = 0  # index of the information source to evaluate
loss = real_bench_mf.fidelity_evaluate(random_config, index_fidelity=fidelity_pick)

List of synthetic benchmarks

Name           Dimensionality   Axis-aligned Subspace
synt_simple    60               3
synt_medium    100              5
synt_high      300              15
synt_hard      1000             50
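
Any of these names can be passed as pick_bench when constructing a SyntheticBenchmark; for example, for the 1000-dimensional case:

import LassoBench
# 'synt_hard' is one of the predefined names from the table above
synt_bench_hard = LassoBench.SyntheticBenchmark(pick_bench='synt_hard')
print(synt_bench_hard.n_features)  # expected to report 1000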

List of real world benchmarks

Name           Dimensionality   Approx. Axis-aligned Subspace
breast_cancer  10               3
diabetes       8                5
leukemia       7,129            22
dna            180              43
rcv1           19,959           75
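
Similarly, the dataset names above are the values accepted by pick_data; for example:

import LassoBench
# 'leukemia' is one of the dataset names from the table above
real_bench_leu = LassoBench.RealBenchmark(pick_data='leukemia')
print(real_bench_leu.n_features)  # expected to be around 7,129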

Cite

If you use this code, please cite:


Šehić Kenan, Gramfort Alexandre, Salmon Joseph and Nardi Luigi. "LassoBench: A High-Dimensional Hyperparameter Optimization Benchmark Suite for Lasso", TBD, 2021.

Owner

Kenan Šehić
Postdoctoral research fellow at Lund University, Department of Computer Science, with interests in machine learning and uncertainty quantification.