Fast SHAP value computation for interpreting tree-based models

Overview

FastTreeSHAP

The FastTreeSHAP package is built on the paper Fast TreeSHAP: Accelerating SHAP Value Computation for Trees, published in the NeurIPS 2021 XAI4Debugging Workshop. It is a fast implementation of the TreeSHAP algorithm in the SHAP package.

For a more detailed introduction to the FastTreeSHAP package, please check out this blogpost.

Introduction

SHAP (SHapley Additive exPlanation) values are one of the leading tools for interpreting machine learning models. Even though computing SHAP values takes exponential time in general, TreeSHAP takes polynomial time on tree-based models (e.g., decision trees, random forests, gradient boosted trees). While the speedup is significant, TreeSHAP can still dominate the computation time of industry-level machine learning solutions on datasets with millions or more entries.

In the FastTreeSHAP package we implement two new algorithms, FastTreeSHAP v1 and FastTreeSHAP v2, designed to improve the computational efficiency of TreeSHAP for large datasets. We empirically find that FastTreeSHAP v1 is 1.5x faster than TreeSHAP while keeping the memory cost unchanged, and FastTreeSHAP v2 is 2.5x faster than TreeSHAP at the cost of slightly higher memory usage (performance is measured on a single core).

The table below summarizes the time and space complexities of each variant of the TreeSHAP algorithm (M is the number of samples to be explained, N is the number of features, T is the number of trees, L is the maximum number of leaves in any tree, and D is the maximum depth of any tree). Note that the (theoretical) average running time of FastTreeSHAP v1 is reduced to 25% of TreeSHAP's.

| TreeSHAP Version | Time Complexity | Space Complexity |
| --- | --- | --- |
| TreeSHAP | O(MTLD^2) | O(D^2 + N) |
| FastTreeSHAP v1 | O(MTLD^2) | O(D^2 + N) |
| FastTreeSHAP v2 (general case) | O(TL2^D D + MTLD) | O(L2^D) |
| FastTreeSHAP v2 (balanced trees) | O(TL^2 log L + MTLD) | O(L^2) |
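
As a back-of-the-envelope illustration of these leading terms (constants and lower-order effects are ignored, so this is only a rough guide), consider a forest like the benchmarks below, with T = 500 trees of depth D = 8, L = 2^8 = 256 leaves, and M = 10,000 samples:

M, T, L, D = 10_000, 500, 2**8, 8
treeshap = M * T * L * D**2                  # O(M T L D^2)
fast_v2 = T * L * 2**D * D + M * T * L * D   # O(T L 2^D D + M T L D)
print(treeshap / fast_v2)                    # ~8, i.e. roughly a factor-of-D reduction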

Performance with Parallel Computing

Parallel computing is fully enabled in the FastTreeSHAP package. In comparison, parallel computing is not enabled in the SHAP package except for the "shortcut", which calls the TreeSHAP algorithms embedded in the XGBoost, LightGBM, and CatBoost packages specifically for these three models.

The table below compares the execution times of FastTreeSHAP v1 and FastTreeSHAP v2 in the FastTreeSHAP package against the TreeSHAP algorithm (or "shortcut") in the SHAP package on two datasets: Adult (binary classification) and Superconductor (regression). All evaluations were run in parallel on all available cores of an Azure Virtual Machine of size Standard_D8_v3 (8 cores and 32GB memory), except for scikit-learn models in the SHAP package. We ran each evaluation on 10,000 samples, and the results were averaged over 3 runs.

| Model | # Trees | Tree Depth | Dataset | SHAP (s) | FastTreeSHAP v1 (s) | Speedup | FastTreeSHAP v2 (s) | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sklearn random forest | 500 | 8 | Adult | 318.44* | 43.89 | 7.26 | 27.06 | 11.77 |
| sklearn random forest | 500 | 8 | Super | 466.04 | 58.28 | 8.00 | 36.56 | 12.75 |
| sklearn random forest | 500 | 12 | Adult | 2446.12 | 293.75 | 8.33 | 158.93 | 15.39 |
| sklearn random forest | 500 | 12 | Super | 5282.52 | 585.85 | 9.02 | 370.09 | 14.27 |
| XGBoost | 500 | 8 | Adult | 17.35** | 12.31 | 1.41 | 6.53 | 2.66 |
| XGBoost | 500 | 8 | Super | 35.31 | 21.09 | 1.67 | 13.00 | 2.72 |
| XGBoost | 500 | 12 | Adult | 62.19 | 40.31 | 1.54 | 21.34 | 2.91 |
| XGBoost | 500 | 12 | Super | 152.23 | 82.46 | 1.85 | 51.47 | 2.96 |
| LightGBM | 500 | 8 | Adult | 7.64*** | 7.20 | 1.06 | 3.24 | 2.36 |
| LightGBM | 500 | 8 | Super | 8.73 | 7.11 | 1.23 | 3.58 | 2.44 |
| LightGBM | 500 | 12 | Adult | 9.95 | 7.96 | 1.25 | 4.02 | 2.48 |
| LightGBM | 500 | 12 | Super | 14.02 | 11.14 | 1.26 | 4.81 | 2.91 |

* Parallel computing is not enabled in the SHAP package for scikit-learn models, so the TreeSHAP algorithm runs on a single core.
** The SHAP package calls the TreeSHAP algorithm in the XGBoost package, which by default enables parallel computing on all cores.
*** The SHAP package calls the TreeSHAP algorithm in the LightGBM package, which by default enables parallel computing on all cores.

Installation

The FastTreeSHAP package is available on PyPI and can be installed with pip:

pip install fasttreeshap

Installation troubleshooting:

  • On a MacBook, if the error message ld: library not found for -lomp pops up, run the following command before installation (Reference):
brew install libomp

Usage

The following screenshot shows a typical use case of FastTreeSHAP on the Census Income data. Note that the usage of FastTreeSHAP is exactly the same as the usage of SHAP, except for three additional arguments in the class TreeExplainer: algorithm, n_jobs, and shortcut.

algorithm: This argument specifies the TreeSHAP algorithm used to run FastTreeSHAP. It can take values "v0", "v1", "v2" or "auto", and its default value is "auto":

  • "v0": Original TreeSHAP algorithm in SHAP package.
  • "v1": FastTreeSHAP v1 algorithm proposed in FastTreeSHAP paper.
  • "v2": FastTreeSHAP v2 algorithm proposed in FastTreeSHAP paper.
  • "auto" (default): Automatic selection between "v0", "v1" and "v2" according to the number of samples to be explained and the constraint on the allocated memory. Specifically, "v1" is always preferred to "v0" in any use cases, and "v2" is preferred to "v1" when the number of samples to be explained is sufficiently large (), and the memory constraint is also satisfied (, is the number of threads). More detailed discussion of the above criteria can be found in FastTreeSHAP paper and in Section Notes.

n_jobs: This argument specifies the number of parallel threads used to run FastTreeSHAP. It can take the value -1 or a positive integer. Its default value is -1, which means all available cores are used in parallel computing.

shortcut: This argument determines whether to use the TreeSHAP algorithm embedded in the XGBoost, LightGBM, and CatBoost packages directly when computing SHAP values for XGBoost, LightGBM, and CatBoost models, and when computing SHAP interaction values for XGBoost models. Its default value is False, which means bypassing the "shortcut" and using the code in the FastTreeSHAP package directly to compute SHAP values for XGBoost, LightGBM, and CatBoost models. Note that shortcut is currently automatically set to True for CatBoost models, as we are still working on the CatBoost component of the FastTreeSHAP package. More details on the usage of the "shortcut" can be found in the notebooks Census Income, Superconductor, and Crop Mapping.
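
Putting the three arguments together, here is a minimal sketch of the call pattern described above (the toy data and model are illustrative; only the TreeExplainer arguments come from this section):

import fasttreeshap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative toy data and model (any tree-based model supported by SHAP works).
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
model = RandomForestClassifier(n_estimators=100, max_depth=8, random_state=0).fit(X, y)

# The three FastTreeSHAP-specific arguments; everything else matches shap.TreeExplainer.
explainer = fasttreeshap.TreeExplainer(model, algorithm="auto", n_jobs=-1, shortcut=False)
shap_values = explainer.shap_values(X)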

[Screenshot 1: typical FastTreeSHAP use case on the Adult dataset]

The code in the following screenshot was run on all available cores of a MacBook Pro (2.4 GHz 8-core Intel Core i9, 32GB memory). Both "v1" and "v2" produce exactly the same SHAP values as "v0". Meanwhile, "v2" has the shortest execution time, followed by "v1" and then "v0". "auto" selects "v2" as the most appropriate algorithm in this use case, as desired. For more detailed comparisons between FastTreeSHAP v1, FastTreeSHAP v2 and the original TreeSHAP, check the notebooks Census Income, Superconductor, and Crop Mapping.

[Screenshot 2: timing comparison of "v0", "v1", "v2" and "auto" on the Adult dataset]

Notes

  • In the FastTreeSHAP paper, two model interpretation scenarios are discussed: one-time usage (explaining all samples once), and multi-time usage (a stable model sits in the backend, and newly arriving scoring data needs to be explained on a regular basis). The current version of the FastTreeSHAP package only supports the one-time usage scenario; we are working on extending it to the multi-time usage scenario with parallel computing. Evaluation results in the FastTreeSHAP paper show that FastTreeSHAP v2 can achieve as much as 3x faster explanation in the multi-time usage scenario.
  • The implementation of parallel computing is straightforward for FastTreeSHAP v1 and the original TreeSHAP: a parallel for-loop is built over all samples. The implementation for FastTreeSHAP v2 is slightly more complicated, and two versions have been implemented. Version 1 builds a parallel for-loop over all trees, which requires O(n_jobs(MN + L2^D)) memory allocation (each thread has its own matrices to store both SHAP values and pre-computed values). Version 2 builds two consecutive parallel for-loops, over all trees and then over all samples, which requires O(TL2^D) memory allocation (the first parallel for-loop stores the pre-computed values across all trees). In the FastTreeSHAP package, version 1 is selected for FastTreeSHAP v2 as long as its memory constraint is satisfied; if not, version 2 is selected as long as its memory constraint is satisfied. If neither memory constraint is satisfied, FastTreeSHAP v1, which has a less strict memory constraint, replaces FastTreeSHAP v2 (see the sketch after this list).
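
A schematic sketch of that selection logic (a hypothetical helper for illustration, not the package's actual _memory_check; the byte counts follow the allocations described above):

def choose_v2_parallel_version(M, N, T, L, D, n_jobs, memory_budget_bytes):
    """Pick a parallelization strategy for FastTreeSHAP v2, falling back to v1."""
    bytes_per_value = 8  # double precision
    # Version 1: each thread holds its own SHAP-value matrix (M x (N+1))
    # and pre-computed-value matrix (L x 2^D).
    version1_bytes = n_jobs * (M * (N + 1) + L * 2**D) * bytes_per_value
    # Version 2: pre-computed values are stored across all trees at once.
    version2_bytes = T * L * 2**D * bytes_per_value
    if version1_bytes <= memory_budget_bytes:
        return "v2, parallel over trees"
    if version2_bytes <= memory_budget_bytes:
        return "v2, parallel over trees then over samples"
    return "v1"  # less strict memory constraint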

Notebooks

The notebooks below contain more detailed comparisons between FastTreeSHAP v1, FastTreeSHAP v2 and the original TreeSHAP in classification and regression problems using scikit-learn, XGBoost and LightGBM:

  • Census Income
  • Superconductor
  • Crop Mapping

Citation

Please cite FastTreeSHAP in your publications if it helps your research:

@article{yang2021fast,
  title={Fast TreeSHAP: Accelerating SHAP Value Computation for Trees},
  author={Yang, Jilei},
  journal={arXiv preprint arXiv:2109.09847},
  year={2021}
}

License

Copyright (c) LinkedIn Corporation. All rights reserved. Licensed under the BSD 2-Clause License.

Comments
  • Segmentation fault (core dumped) for shap_values

    Hi,

    I'm trying to apply the TreeExplainer to get shap_values from an XGBoost model on a regression problem with a large dataset. During hyperparameter tuning, it failed due to a segmentation fault at the explainer.shap_values() step for certain hyperparameter sets. I used fasttreeshap==0.1.1 and xgboost==1.4.1 (also tested 1.6.0), and the machine has CPU "Intel Xeon E5-2640 v4 (20) @ 3.400GHz" and 128GB of memory. The sample code below is a toy script that reproduces the issue using the Superconductor dataset from the example notebook:

    # for debugging
    import faulthandler
    faulthandler.enable()
    
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    import xgboost as xgb
    import fasttreeshap
    
    print(f"XGBoost version: {xgb.__version__}")
    print(f"fasttreeshap version: {fasttreeshap.__version__}")
    
    # source of data: https://archive.ics.uci.edu/ml/datasets/superconductivty+data
    data = pd.read_csv("FastTreeSHAP/data/superconductor_train.csv", engine = "python")
    train, test = train_test_split(data, test_size = 0.5, random_state = 0)
    label_train = train["critical_temp"]
    label_test = test["critical_temp"]
    train = train.iloc[:, :-1]
    test = test.iloc[:, :-1]
    
    print("train XGBoost model")
    xgb_model = xgb.XGBRegressor(
        max_depth = 100, n_estimators = 200, learning_rate = 0.1, n_jobs = -1, alpha = 0.12, random_state = 0)
    xgb_model.fit(train, label_train)
    
    print("run TreeExplainer()")
    shap_explainer = fasttreeshap.TreeExplainer(xgb_model)
    
    print("run shap_values()")
    shap_values = shap_explainer.shap_values(train)
    

    The time report of program execution also showed that the "Maximum resident set size" was only about 32GB.

    ~$ /usr/bin/time -v python segfault.py 
    XGBoost version: 1.4.1
    fasttreeshap version: 0.1.1
    train XGBoost model
    run TreeExplainer()
    run shap_values()
    Fatal Python error: Segmentation fault
    
    Thread 0x00007ff2c2793740 (most recent call first):
      File "~/.local/lib/python3.8/site-packages/fasttreeshap/explainers/_tree.py", line 459 in shap_values
      File "segfault.py", line 27 in <module>
    Segmentation fault (core dumped)
    
    Command terminated by signal 11
            Command being timed: "python segfault.py"
            User time (seconds): 333.65
            System time (seconds): 27.79
            Percent of CPU this job got: 797%
            Elapsed (wall clock) time (h:mm:ss or m:ss): 0:45.30
            Average shared text size (kbytes): 0
            Average unshared data size (kbytes): 0
            Average stack size (kbytes): 0
            Average total size (kbytes): 0
            Maximum resident set size (kbytes): 33753096
            Average resident set size (kbytes): 0
            Major (requiring I/O) page faults: 0
            Minor (reclaiming a frame) page faults: 8188488
            Voluntary context switches: 3048
            Involuntary context switches: 3089
            Swaps: 0
            File system inputs: 0
            File system outputs: 0
            Socket messages sent: 0
            Socket messages received: 0
            Signals delivered: 0
            Page size (bytes): 4096
            Exit status: 0
    

    In some cases (including the example above), forcing TreeExplainer(algorithm="v1") did help, which suggests the issue only happens with "v2" (or "auto" passing the _memory_check()). However, "v1" would occasionally raise a check_additivity issue that remains unsolved in the original algorithm.

    Alternatively, passing approximate=True to explainer.shap_values() works, but raises consistency concerns for the reproducibility of our studies...

    In this case, could you help me to debug with this issue?

    Thank you so much!

    opened by ShaunFChen 8
  • TreeSHAP does not compute exact SHAP values on correlated features

    As highlighted in https://github.com/slundberg/shap/issues/2345, it seems that TreeSHAP does not compute the expected SHAP values.

    The following notebook reproduces the problem with the FastTreeSHAP implementation: https://nbviewer.org/gist/glemaitre/9a30dd3a704675164b84d9bf7128882e

    Note that this is not only a problem in the implementation but rather a problem in Algorithm 1 of the original TreeSHAP paper (i.e., the tree traversal).

    opened by glemaitre 6
  • Numpy<1.22 requirement, could we upgrade it?

    First of all, thank you for this amazing project, it has really sped up my team's shap execution.

    My problem arises because the numpy<1.22 restriction causes conflicts with other packages we use.

    In your setup.py it says that this restriction is due to using numba, but in numba's own setup.py the minimum numpy version is 1.18 and there is no maximum version requirement. In fact, I've been able to upgrade numpy after installing fasttreeshap in a clean environment and have had no problems at all during my test executions (using numpy==1.23.5). This is how I set up the environment on Windows 10:

    $ python -m venv venv_fasttreeshap
    $ venv_fasttreeshap\Scripts\activate
    $ pip install fasttreeshap
    $ pip install --upgrade numpy==1.23.5
    

    It throws the following error, but pip is finally able to install it anyway:

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    fasttreeshap 0.1.2 requires numpy<1.22, but you have numpy 1.23.5 which is incompatible.
    

    Do you think you could upgrade the numpy version in the install_requires section of setup.py, so that it's easier to combine fasttreeshap with other packages?

    Thank you!

    opened by CarlaFernandez 3
  • Catboost bug

    CatBoost produces a TreeEnsemble has no "num_nodes" error with the code below. By the way, do you support a background dataset parameter, like shap's "interventional" vs "tree_path_dependent"? Because if your underlying code uses the "interventional" method, this might be related to this bug: https://github.com/slundberg/shap/issues/2557

    from catboost import CatBoostRegressor
    import shap  # needed below for shap.datasets and shap.plots
    import fasttreeshap
    
    X, y = shap.datasets.boston()
    
    model = CatBoostRegressor(task_type="CPU", logging_level="Silent").fit(X, y)
    explainer = fasttreeshap.TreeExplainer(model, algorithm="v2", n_jobs=-1)
    shap_values = explainer(X)
    
    # visualize the first prediction's explanation
    shap.plots.waterfall(shap_values[25])
    
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    Input In [131], in <cell line: 13>()
         10 # explain the model's predictions using SHAP
         11 # (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
         12 explainer = fasttreeshap.TreeExplainer(model, algorithm="v2", n_jobs=-1)
    ---> 13 shap_values = explainer(X)
         15 # visualize the first prediction's explanation
         16 shap.plots.waterfall(shap_values[25])
    
    File ~\miniconda3\envs\Master\lib\site-packages\fasttreeshap\explainers\_tree.py:256, in Tree.__call__(self, X, y, interactions, check_additivity)
        253     feature_names = getattr(self, "data_feature_names", None)
        255 if not interactions:
    --> 256     v = self.shap_values(X, y=y, from_call=True, check_additivity=check_additivity, approximate=self.approximate)
        257 else:
        258     assert not self.approximate, "Approximate computation not yet supported for interaction effects!"
    
    File ~\miniconda3\envs\Master\lib\site-packages\fasttreeshap\explainers\_tree.py:379, in Tree.shap_values(self, X, y, tree_limit, approximate, check_additivity, from_call)
        376 algorithm = self.algorithm
        377 if algorithm == "v2":
        378     # check if memory constraint is satisfied (check Section Notes in README.md for justifications of memory check conditions in function _memory_check)
    --> 379     memory_check_1, memory_check_2 = self._memory_check(X)
        380     if memory_check_1:
        381         algorithm = "v2_1"
    
    File ~\miniconda3\envs\Master\lib\site-packages\fasttreeshap\explainers\_tree.py:483, in Tree._memory_check(self, X)
        482 def _memory_check(self, X):
    --> 483     max_leaves = (max(self.model.num_nodes) + 1) / 2
        484     max_combinations = 2**self.model.max_depth
        485     phi_dim = X.shape[0] * (X.shape[1] + 1) * self.model.num_outputs
    
    AttributeError: 'TreeEnsemble' object has no attribute 'num_nodes'
    
    opened by nilslacroix 3
  • Conda package

    It would be nice to add a FastTreeSHAP package to conda-forge (https://conda-forge.org/), in addition to PyPI.

    You can use grayskull (https://github.com/conda-incubator/grayskull) to generate a boilerplate for the conda recipe.

    opened by candalfigomoro 1
  • Catboost improvement negligible?

    Using a CatBoostRegressor() with your provided notebooks and the Superconductor dataset, I did not see any improvement in computing the SHAP values. Do you have any data on the speedup here, or did I make a mistake? I did not find any information on this in the paper or blogpost.

    Also how does fasttreeshap handle GPU support?

    Code used:

    cat = CatBoostRegressor(iterations=2000, task_type="CPU")  # , devices="0-1"
    cat.fit(train, label_train)
    
    run_fasttreeshap(
        model = cat, sample = test, interactions = False, algorithm_version = "v0", n_jobs = n_jobs,
        num_round = num_round, num_sample = num_sample, shortcut = False)
    
    run_fasttreeshap(
        model = cat, sample = test, interactions = False, algorithm_version = "v1", n_jobs = n_jobs,
        num_round = num_round, num_sample = num_sample, shortcut = False)
    
    opened by nilslacroix 1
  • [BUG] LightGBM Booster 'objective' Parameter Membership with `shortcut=True`

    Note: This corresponds to an issue on slundberg/shap.

    Description: A LightGBM Booster does not necessarily have 'objective' in its params attribute dictionary. The documented default behavior in LightGBM is to assume regression without auto-setting the 'objective' key in the Booster.params dictionary when 'objective' is missing from params (the dictionary passed into the Training API function train).

    Reproduction: The following is a link to a notebook containing details for error reproduction and a view of the stack trace: Notebook for Reproduction
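
    A minimal sketch of the failure mode described above (the synthetic data is illustrative, and whether TreeExplainer actually raises depends on the fasttreeshap and lightgbm versions):

    import lightgbm as lgb
    import numpy as np
    import fasttreeshap
    
    X = np.random.rand(200, 5)
    y = np.random.rand(200)
    
    # No 'objective' passed: LightGBM assumes regression but does not
    # write an 'objective' key back into booster.params.
    booster = lgb.train({"verbose": -1}, lgb.Dataset(X, label=y), num_boost_round=10)
    print("objective" in booster.params)  # False, per the behavior described above
    
    # Code that reads booster.params["objective"] on the shortcut path can then fail here.
    explainer = fasttreeshap.TreeExplainer(booster, shortcut=True)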

    Thanks for your responsiveness!

    Nandish Gupta, Data Science Engineer, SolasAI

    opened by ngupta20 0
  • SHAP Values Change and Additivity Breaks on NumPy Upgrade

    Description

    Following the 0.1.3 release, the additivity of my logit-linked XGBoost local tree SHAP values broke. I found that pinning my numpy version back to either 1.21.6 or 1.22.4 temporarily solved the issue. I also tested every numpy version from 1.23.1 to 1.23.5 and found that all 1.23.* versions were associated with the value changes.
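
    For reference, a sketch of the additivity property being checked here (model and X are placeholders; the margin comparison assumes a single-output XGBoost model with a logit link, i.e. raw scores on the log-odds scale):

    import numpy as np
    import fasttreeshap
    
    explainer = fasttreeshap.TreeExplainer(model)  # placeholder XGBoost model
    shap_values = explainer.shap_values(X, check_additivity=False)
    
    # Additivity: per-sample SHAP values plus the base value should
    # reconstruct the raw (logit-scale) model output.
    reconstructed = shap_values.sum(axis=1) + explainer.expected_value
    margins = model.predict(X, output_margin=True)
    np.testing.assert_allclose(reconstructed, margins, rtol=1e-4)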

    Brainstorming

    It makes me wonder if there is any numpy or numpy-adjacent usage in FastTreeSHAP that could be reimplemented to be compatible with the new versions. I would be glad to contribute if that's the case and someone could help narrow the scope of the problem (if I can't narrow it myself quickly enough). As we all know from #14, that would be highly preferable to pinning numpy versions. My instinct tells me this issue may link to an existing NumPy update note or issue... I haven't looked too deeply yet.

    Good Release

    In any case, I still think the decision to relax the numpy constraint is the right one and anyone who runs into the same behavior can just pin the version themselves in the short term.

    Environment Info

    Partial result of pip list

    Package                       Version
    ----------------------------- -----------
    fasttreeshap                  0.1.3
    pandas                        1.5.2
    numba                         0.56.4
    numpy                         1.23.5
    setuptools                    65.6.3
    shap                          0.41.0
    wheel                         0.38.4
    xgboost                       1.7.1
    
    

    uname --all

    Linux ip-**-**-*-*** 5.4.0-1088-aws #96~18.04.1-Ubuntu SMP Mon Oct 17 02:57:48 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    lscpu

    Architecture:        x86_64
    CPU op-mode(s):      32-bit, 64-bit
    Byte Order:          Little Endian
    CPU(s):              8
    On-line CPU(s) list: 0-7
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    NUMA node(s):        1
    Vendor ID:           GenuineIntel
    CPU family:          6
    Model:               85
    Model name:          Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
    Stepping:            7
    CPU MHz:             3102.640
    BogoMIPS:            5000.00
    Hypervisor vendor:   KVM
    Virtualization type: full
    L1d cache:           32K
    L1i cache:           32K
    L2 cache:            1024K
    L3 cache:            36608K
    NUMA node0 CPU(s):   0-7
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
    

    Thanks for your responsiveness to the community!

    Nandish Gupta, Senior AI Engineer, SolasAI

    opened by ngupta20 1
  • Cannot build fasttreeshap in linux environment

    We would like to use fasttreeshap to calculate explainability values for our machine learning model. To run our model we create a Linux container with a venv for the model. All the requirements are specified in a file and installed in the venv with pip install -r requirements.txt. Our model depends on numpy==1.21.4, and when we try to install fasttreeshap we run into this issue:

    ERROR: Cannot install oldest-supported-numpy==0.12, oldest-supported-numpy==0.14, oldest-supported-numpy==0.15, oldest-supported-numpy==2022.1.30, oldest-supported-numpy==2022.3.27, oldest-supported-numpy==2022.4.10, oldest-supported-numpy==2022.4.18, oldest-supported-numpy==2022.4.8, oldest-supported-numpy==2022.5.27, oldest-supported-numpy==2022.5.28 and oldest-supported-numpy==2022.8.16 because these package versions have conflicting dependencies.
    The conflict is caused by:
    oldest-supported-numpy 2022.8.16 depends on numpy==1.17.3; python_version == "3.8" and platform_machine not in "arm64|aarch64|s390x|loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.5.28 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.5.27 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.4.18 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.4.10 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.4.8 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.3.27 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.1.30 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 0.15 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 0.14 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 0.12 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_python_implementation != "PyPy"
    

    I can volunteer time and resources to add a wheel file for Linux. Would you be interested in distributing a wheel file created by us?

    opened by n-karahan-afacan 1
  • Plotting example

    Thanks for making FastTreeSHAP! I'm excited to use it. I want to generate some classic SHAP plots. Can you provide an example, maybe in a Jupyter notebook, of how to use your plotting library to visualize the SHAP values?
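
    A minimal sketch of one way to do this (model and X are placeholders): fasttreeshap returns the same Explanation objects as shap, so shap's plotting functions can consume them directly, as the CatBoost snippet in an earlier issue also does:

    import shap
    import fasttreeshap
    
    explainer = fasttreeshap.TreeExplainer(model)  # placeholder tree-based model
    explanation = explainer(X)                     # a shap.Explanation object
    
    # A regressor or binary classifier keeps the Explanation 2-D for beeswarm.
    shap.plots.beeswarm(explanation)      # classic summary ("beeswarm") plot
    shap.plots.waterfall(explanation[0])  # breakdown of a single prediction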

    opened by jpfeil 3
  • Cannot install FastTreeSHAP on Linux

    Hi team, I have both a Windows and a Linux laptop. I was able to install the FastTreeSHAP package on Windows but not on Linux. This is the error message that I got on Linux: ERROR: Could not build wheels for fasttreeshap, which is required to install pyproject.toml-based projects

    opened by LongUEH 2
Releases (v0.1.3)
  • v0.1.3 (Nov 29, 2022)

  • v0.1.2 (May 20, 2022)

    PyPI: https://pypi.org/project/fasttreeshap/0.1.2/
    Authors: @jlyang1990

    Added the argument memory_tolerance to enable more flexible memory control, and fixed some bugs.

  • v0.1.1 (Mar 22, 2022)

    PyPI: https://pypi.org/project/fasttreeshap/0.1.1/
    Authors: @jlyang1990

    Blog post for this release: https://engineering.linkedin.com/blog/2022/fasttreeshap--accelerating-shap-value-computation-for-trees
    Paper for this release: https://arxiv.org/abs/2109.09847
