
zoofs ( Zoo Feature Selection )

zoofs Logo Header

zoofs is a Python library for performing feature selection using a variety of nature-inspired wrapper algorithms. The algorithms range from swarm intelligence to physics-based to evolutionary. It's an easy-to-use, flexible and powerful tool to reduce your feature size.

Installation

PyPI version

Using pip

Use the package manager to install zoofs.

pip install zoofs

Available Algorithms

Algorithm Name | Class Name | Description | Reference DOI
:---|:---|:---|:---
Particle Swarm Algorithm | ParticleSwarmOptimization | Utilizes swarm behaviour | 10.1007/978-3-319-13563-2_51
Grey Wolf Algorithm | GreyWolfOptimization | Utilizes wolf hunting behaviour | 10.1016/j.neucom.2015.06.083
Dragon Fly Algorithm | DragonFlyOptimization | Utilizes dragonfly swarm behaviour | 10.1016/j.knosys.2020.106131
Genetic Algorithm | GeneticOptimization | Utilizes genetic mutation behaviour | 10.1109/ICDAR.2001.953980
Gravitational Algorithm | GravitationalOptimization | Utilizes Newton's gravitational behaviour | 10.1109/ICASSP.2011.5946916

More algorithms coming soon, stay tuned!

  • [Try It Now?] Open In Colab

Usage

Define your own objective function for optimization!

from sklearn.metrics import log_loss
# define your own objective function; make sure it receives five parameters
# (model, X_train, y_train, X_valid, y_valid), fits your model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import ParticleSwarmOptimization
# create an object of the algorithm
algo_object = ParticleSwarmOptimization(objective_function_topass, n_iteration=20,
                                        population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm (X_train, y_train, X_valid, y_valid are your own data splits)
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()
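
After fitting, the selected feature subset can be used to retrain a final model. The sketch below is only an illustration, assuming best_feature_list holds the column names of the selected features (as documented in the class references further down) and that X_train / X_valid are pandas DataFrames:

# fit() returns the best feature subset; it is also stored on the fitted object
selected_features = algo_object.best_feature_list

# retrain on the reduced feature set and check the validation score
final_model = lgb.LGBMClassifier()
final_model.fit(X_train[selected_features], y_train)
print(log_loss(y_valid, final_model.predict_proba(X_valid[selected_features])))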
   

Suggestions for Usage

  • As the available algorithms are wrapper algorithms, it is better to use ML models that train quickly, e.g. LightGBM or CatBoost.
  • Choose a sufficiently large 'population_size', as it determines the extent of exploration and exploitation of the algorithm.
  • Ensure that your ML model has its hyperparameters optimized before passing it to zoofs algorithms (see the sketch right after this list).
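
As an illustration of the last suggestion, the following sketch pre-tunes the LightGBM model with scikit-learn's RandomizedSearchCV before handing it to a zoofs algorithm. The parameter grid, n_iter and the X_train / y_train / X_valid / y_valid splits are assumptions standing in for your own search space and data.

from sklearn.model_selection import RandomizedSearchCV
import lightgbm as lgb

# hypothetical search space; adjust it for your own problem
param_distributions = {
    "n_estimators": [100, 200, 400],
    "learning_rate": [0.01, 0.05, 0.1],
    "num_leaves": [15, 31, 63],
}

search = RandomizedSearchCV(lgb.LGBMClassifier(), param_distributions,
                            n_iter=10, cv=3, scoring="neg_log_loss", random_state=0)
search.fit(X_train, y_train)

# pass the tuned model (not the search object) to the zoofs algorithm
tuned_model = lgb.LGBMClassifier(**search.best_params_)
algo_object.fit(tuned_model, X_train, y_train, X_valid, y_valid, verbose=True)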

objective score plot

objective score Header



Algorithms

Particle Swarm Algorithm

Particle Swarm


class zoofs.ParticleSwarmOptimization(objective_function,n_iteration=50,population_size=50,minimize=True,c1=2,c2=2,w=0.9)


Parameters objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized
c1 : float, default=2.0
First acceleration coefficient of particle swarm
c2 : float, default=2.0
Second acceleration coefficient of particle swarm
w : float, default=0.9
Inertia weight parameter
Attributes best_feature_list : array-like
Final best set of features

Methods

Method | Description
fit | Run the algorithm
plot_history | Plot results achieved across iterations

fit(model, X_train, y_train, X_valid, y_valid, verbose=True)

Parameters model :
machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The validation target values.
verbose : bool, default=True
Print results for each iteration
Returns best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it receives five parameters,
#  fit your model and return the objective value ! 
def objective_function_topass(model,X_train, y_train, X_valid, y_valid):      
    model.fit(X_train,y_train)  
    P=log_loss(y_valid,model.predict_proba(X_valid))
    return P
    
# import an algorithm !  
from zoofs import ParticleSwarmOptimization
# create object of algorithm
algo_object=ParticleSwarmOptimization(objective_function_topass,n_iteration=20,
                                       population_size=20,minimize=True,c1=2,c2=2,w=0.9)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()                      
# fit the algorithm
algo_object.fit(lgb_model,X_train, y_train, X_valid, y_valid,verbose=True)
#plot your results
algo_object.plot_history()
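
If the objective should be maximized instead, return the metric directly and set minimize=False. A minimal variant sketch using ROC AUC (roc_auc_score comes from scikit-learn; a binary classification target is assumed):

from sklearn.metrics import roc_auc_score

def objective_function_auc(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    # probability of the positive class for a binary problem
    return roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])

algo_object = ParticleSwarmOptimization(objective_function_auc, n_iteration=20,
                                        population_size=20, minimize=False)
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)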


Grey Wolf Algorithm

Grey Wolf


class zoofs.GreyWolfOptimization(objective_function,n_iteration=50,population_size=50,minimize=True)


Parameters objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized
Attributes best_feature_list : array-like
Final best set of features

Methods

Method | Description
fit | Run the algorithm
plot_history | Plot results achieved across iterations

fit(model,X_train,y_train,X_valid,y_valid,method=1,verbose=True)

Parameters model :
machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The validation target values.
method : {1, 2}, default=1
Choose between the two methods of grey wolf optimization
verbose : bool, default=True
Print results for each iteration
Returns best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it receives five parameters,
#  fit your model and return the objective value ! 
def objective_function_topass(model,X_train, y_train, X_valid, y_valid):      
    model.fit(X_train,y_train)  
    P=log_loss(y_valid,model.predict_proba(X_valid))
    return P
    
# import an algorithm !  
from zoofs import GreyWolfOptimization
# create object of algorithm
algo_object=GreyWolfOptimization(objective_function_topass,n_iteration=20,
                                    population_size=20,minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()                                       
# fit the algorithm
algo_object.fit(lgb_model,X_train, y_train, X_valid, y_valid,method=1,verbose=True)
#plot your results
algo_object.plot_history()


Dragon Fly Algorithm

Dragon Fly


class zoofs.DragonFlyOptimization(objective_function,n_iteration=50,population_size=50,minimize=True)


Parameters objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized
Attributes best_feature_list : array-like
Final best set of features

Methods

Method | Description
fit | Run the algorithm
plot_history | Plot results achieved across iterations

fit(model,X_train,y_train,X_valid,y_valid,method='sinusoidal',verbose=True)

Parameters model :
machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The validation target values.
method : {'linear', 'random', 'quadraic', 'sinusoidal'}, default='sinusoidal'
Choose between the four methods of Dragon Fly optimization
verbose : bool, default=True
Print results for each iteration
Returns best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it receives five parameters,
#  fit your model and return the objective value ! 
def objective_function_topass(model,X_train, y_train, X_valid, y_valid):      
    model.fit(X_train,y_train)  
    P=log_loss(y_valid,model.predict_proba(X_valid))
    return P
    
# import an algorithm !  
from zoofs import DragonFlyOptimization
# create object of algorithm
algo_object=DragonFlyOptimization(objective_function_topass,n_iteration=20,
                                    population_size=20,minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()                                     
# fit the algorithm
algo_object.fit(lgb_model,X_train, y_train, X_valid, y_valid, method='sinusoidal', verbose=True)
#plot your results
algo_object.plot_history()
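
Because the best update scheme is problem-dependent, one quick way to compare the documented method options is to run each one and score the returned feature subset. This is only a sketch, reusing objective_function_topass, lgb_model and the data splits from the example above, and assuming the returned features are column names:

results = {}
for method in ['linear', 'random', 'quadraic', 'sinusoidal']:
    algo_object = DragonFlyOptimization(objective_function_topass, n_iteration=20,
                                        population_size=20, minimize=True)
    features = algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid,
                               method=method, verbose=False)
    results[method] = objective_function_topass(lgb_model, X_train[features], y_train,
                                                X_valid[features], y_valid)
print(results)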


Genetic Algorithm

Genetic Algorithm


class zoofs.GeneticOptimization(objective_function,n_iteration=20,population_size=20,selective_pressure=2,elitism=2,mutation_rate=0.05,minimize=True)


Parameters objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized/maximized.
n_iteration : int, default=20
Number of iterations the algorithm will run
population_size : int, default=20
Total size of the population
selective_pressure : int, default=2
Measure of reproductive opportunities for each organism in the population
elitism : int, default=2
Number of top individuals to be considered as elites
mutation_rate : float, default=0.05
Rate of mutation in the population's genes
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized
Attributes best_feature_list : array-like
Final best set of features

Methods

Method | Description
fit | Run the algorithm
plot_history | Plot results achieved across iterations

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters model :
machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The validation target values.
verbose : bool, default=True
Print results for each iteration
Returns best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it receives five parameters,
#  fit your model and return the objective value ! 
def objective_function_topass(model,X_train, y_train, X_valid, y_valid):      
    model.fit(X_train,y_train)  
    P=log_loss(y_valid,model.predict_proba(X_valid))
    return P
    
# import an algorithm !  
from zoofs import GeneticOptimization
# create object of algorithm
algo_object=GeneticOptimization(objective_function_topass,n_iteration=20,
                            population_size=20,selective_pressure=2,elitism=2,
                            mutation_rate=0.05,minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()                            
# fit the algorithm
algo_object.fit(lgb_model,X_train, y_train,X_valid, y_valid, verbose=True)
#plot your results
algo_object.plot_history()
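
mutation_rate and elitism trade exploration against keeping good candidates; a simple way to probe that trade-off is to sweep a few mutation rates and compare the size (or objective score) of the selected subsets. A sketch under the same assumptions as the example above:

for rate in [0.01, 0.05, 0.1]:
    ga = GeneticOptimization(objective_function_topass, n_iteration=20,
                             population_size=20, selective_pressure=2, elitism=2,
                             mutation_rate=rate, minimize=True)
    features = ga.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=False)
    print(rate, len(features))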

Gravitational Algorithm

Gravitational Algorithm


class zoofs.GravitationalOptimization(objective_function,n_iteration=50,population_size=50,g0=100,eps=0.5,minimize=True)


Parameters objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
g0 : float, default=100
Gravitational strength constant
eps : float, default=0.5
Distance constant
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized
Attributes best_feature_list : array-like
Final best set of features

Methods

Method | Description
fit | Run the algorithm
plot_history | Plot results achieved across iterations

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters model :
machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
The validation target values.
verbose : bool, default=True
Print results for each iteration
Returns best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it receives five parameters,
#  fit your model and return the objective value ! 
def objective_function_topass(model,X_train, y_train, X_valid, y_valid):      
    model.fit(X_train,y_train)  
    P=log_loss(y_valid,model.predict_proba(X_valid))
    return P
    
# import an algorithm !  
from zoofs import GravitationalOptimization
# create object of algorithm
algo_object=GravitationalOptimization(objective_function_topass,n_iteration=50,
                                population_size=50,g0=100,eps=0.5,minimize=True) 
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()                                
# fit the algorithm
algo_object.fit(lgb_model,X_train, y_train, X_valid, y_valid, verbose=True)
#plot your results
algo_object.plot_history()

Support zoofs

The development of zoofs relies completely on contributions.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

First roll out

18-08-2021

License

apache-2.0

Comments
  • Looking for integrated Harris Hawks Optimization in zoofs

    Additional context: Harris Hawks Optimization (HHO) is a meta-heuristic optimization algorithm released in 2019 with an increasing number of applied research papers. It would be great if the team could add HHO to zoofs, which would open it up for further testing and make zoofs more popular.

    enhancement 
    opened by hanamthang 9
  • [Snyk] Security upgrade mistune from 0.8.4 to 2.0.1

    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.13 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.2.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---|:---|:---|:---|:---
    medium severity | Cross-site Scripting (XSS) SNYK-PYTHON-MISTUNE-2328096 | mistune: 0.8.4 -> 2.0.1 | No | No Known Exploit

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic

    opened by snyk-bot 2
  • [Snyk] Fix for 3 vulnerabilities

    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.13 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---|:---|:---|:---|:---|:---
    high severity | 624/1000 (Why? Has a fix available, CVSS 8.2) | Arbitrary Code Execution SNYK-PYTHON-IPYTHON-2348630 | ipython: 5.10.0 -> 7.16.3 | No | No Known Exploit
    high severity | 696/1000 (Why? Proof of Concept exploit, Has a fix available, CVSS 7.5) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-PYGMENTS-1086606 | pygments: 2.5.2 -> 2.7.4 | No | Proof of Concept
    high severity | 589/1000 (Why? Has a fix available, CVSS 7.5) | Denial of Service (DoS) SNYK-PYTHON-PYGMENTS-1088505 | pygments: 2.5.2 -> 2.7.4 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic

    opened by snyk-bot 2
  • Feature importance

    Hi, thanks for the great repo. I would like to know whether we can get the ranking of the selected features after using one of your algorithms (e.g., particle swarm optimization).

    opened by veeresh-dammur 2
  • Add a Gitter chat badge to README.md

    jaswinder9051998/zoofs now has a Chat Room on Gitter

    @jaswinder9051998 has just created a chat room. You can visit it here: https://gitter.im/zooFeatureSelection/general.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 1
  • Hyperparameter optimization for algorithms in zoofs

    Hi Jaswinder,

    Would you consider adding a function like GridSearch for hyper-parameter optimization of the algorithms, such as GWO, in zoofs? For instance, the PySwarm library (https://github.com/tisimst/pyswarm) provides a GridSearch to find the best combination of the parameters c, w1, w2.

    For now, I have to do the trial and error to test which ranges of parameters in the GWO (population, iteration, method) deliver the best result for my dataset.

    Many thanks, Thang

    opened by hanamthang 1
  • Disabling verbose still prints logs

    Setting verbose=False still produces output at every iteration. This is problematic since the JSON file can get very large when the fit function runs for a prolonged period of time.

    opened by aigarspetresevics 0
  • [Snyk] Fix for 2 vulnerabilities

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    mkdocs-material 8.0.1 requires pygments, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    jupyter-nbextensions-configurator 0.6.1 requires notebook, which is not installed.
    jupyter-contrib-nbextensions 0.7.0 requires nbconvert, which is not installed.
    jupyter-contrib-nbextensions 0.7.0 requires notebook, which is not installed.
    jupyter-contrib-core 0.4.2 requires notebook, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---|:---|:---|:---|:---|:---
    medium severity | 551/1000 (Why? Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3180412 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit
    medium severity | 551/1000 (Why? Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3180413 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS) 🦉 Regular Expression Denial of Service (ReDoS)

    opened by jaswinder9051998 2
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.4.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    jupyter-nbextensions-configurator 0.5.0 has requirement notebook>=6.0, but you have notebook 5.7.16.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---|:---|:---|:---|:---|:---
    low severity | 441/1000 (Why? Recently disclosed, Has a fix available, CVSS 3.1) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3113904 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by jaswinder9051998 1
  • [Snyk] Security upgrade wheel from 0.30.0 to 0.38.0

    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.16 requires pyzmq, which is not installed.
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.4.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    jupyter-nbextensions-configurator 0.5.0 has requirement notebook>=6.0, but you have notebook 5.7.16.
    jupyter-client 5.3.5 requires pyzmq, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---|:---|:---|:---|:---|:---
    medium severity | 551/1000 (Why? Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3092128 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by snyk-bot 1
  • Speed-up suggestions

    It doesn't accept numpy arrays, so numba is out of the question. Any suggestions to improve speed? When you have 100+ feature columns it takes at least 2 weeks running 24/7.

    opened by aigarspetresevics 0
  • Number of features

    First of all, I want to thank you for this amazing library. I just want to ask: can the size of best_feature_list be declared before starting the algorithm?

    opened by klil21 2
Releases (v0.1.24)

Owner
Jaswinder Singh
Associate Software Engineer - Data Science