Scikit-learn style model finetuning for NLP

Overview

Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.

Finetune currently supports TensorFlow implementations of the following models:

  1. BERT, from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
  2. RoBERTa, from "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
  3. GPT, from "Improving Language Understanding by Generative Pre-Training"
  4. GPT2, from "Language Models are Unsupervised Multitask Learners"
  5. TextCNN, from "Convolutional Neural Networks for Sentence Classification"
  6. Temporal Convolution Network, from "An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling"
  7. DistilBERT, from "Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT"
Sections:

  • API Tour: Base models, configurables, and more
  • Installation: How to install using pip or directly from source
  • Finetune with Docker: Finetune and inference within a Docker container
  • Documentation: Full API documentation

Finetune API Tour

Finetuning the base language model is as easy as calling Classifier.fit:

model = Classifier()               # Load base model
model.fit(trainX, trainY)          # Finetune base model on custom data
model.save(path)                   # Serialize the model to disk
...
model = Classifier.load(path)      # Reload models from disk at any time
predictions = model.predict(testX) # [{'class_1': 0.23, 'class_2': 0.54, ..}, ..]

Choose your desired base model from finetune.base_models:

from finetune.base_models import BERT, RoBERTa, GPT, GPT2, TextCNN, TCN
model = Classifier(base_model=BERT)

Optimize your model with a variety of configurables. A detailed list of all config items can be found in the finetune docs.

model = Classifier(low_memory_mode=True, lr_schedule="warmup_linear", max_length=512, l2_reg=0.01, oversample=True, ...)

The library supports finetuning for a number of tasks. A detailed description of all target models can be found in the finetune API reference.

from finetune import *
models = (Classifier, MultiLabelClassifier, MultiFieldClassifier, MultipleChoice, # Classify one or more inputs into one or more classes
          Regressor, OrdinalRegressor, MultifieldRegressor,                       # Regress on one or more inputs
          SequenceLabeler, Association,                                           # Extract tokens from a given class, or infer relationships between them
          Comparison, ComparisonRegressor, ComparisonOrdinalRegressor,            # Compare two documents for a given task
          LanguageModel, MultiTask,                                               # Further pretrain your base models
          DeploymentModel                                                         # Wrapper to optimize your serialized models for a production environment
          )

For example usage of each of these target types, see the finetune/datasets directory. For simplicity and runtime, these examples use smaller versions of the published datasets.
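As a quick taste of a target type beyond Classifier, here is a minimal SequenceLabeler sketch. The toy inputs and the character-offset label format are illustrative assumptions; consult the finetune/datasets examples for the exact format the library expects.

# Minimal SequenceLabeler sketch. The toy data and the character-offset
# label format are assumptions for illustration; see finetune/datasets
# for canonical examples.
from finetune import SequenceLabeler

texts = ["Alice flew to Paris."]
labels = [[{"start": 0, "end": 5, "label": "PERSON", "text": "Alice"},
           {"start": 14, "end": 19, "label": "LOCATION", "text": "Paris"}]]

model = SequenceLabeler()
model.fit(texts, labels)
predictions = model.predict(["Bob drove to Berlin."])  # predicted spans per input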

If you have large amounts of unlabeled training data and only a small amount of labeled training data, you can finetune in two steps for best performance.

model = Classifier()               # Load base model
model.fit(unlabeledX)              # Finetune base model on unlabeled training data
model.fit(trainX, trainY)          # Continue finetuning with a smaller amount of labeled data
predictions = model.predict(testX) # [{'class_1': 0.23, 'class_2': 0.54, ..}, ..]
model.save(path)                   # Serialize the model to disk

Installation

Finetune can be installed directly from PyPI using pip:

pip3 install finetune

or installed directly from source:

git clone -b master https://github.com/IndicoDataSolutions/finetune && cd finetune
python3 setup.py develop              # symlinks the git directory to your python path
pip3 install tensorflow-gpu --upgrade # or tensorflow-cpu
python3 -m spacy download en          # download spacy tokenizer

In order to run finetune on your host, you'll need a working copy of tensorflow-gpu >= 1.14.0 and an up-to-date NVIDIA driver.
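Before training, it can be worth confirming that TensorFlow actually sees your GPU. A quick one-liner for the TF 1.x versions targeted here:

python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"  # should print True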

You can optionally run the provided test suite to ensure installation completed successfully.

pip3 install pytest
pytest

Docker

If you'd prefer, you can also run finetune in a Docker container. The bash scripts provided assume you have functional installs of docker and nvidia-docker.

git clone https://github.com/IndicoDataSolutions/finetune && cd finetune

# For usage with NVIDIA GPUs
./docker/build_gpu_docker.sh      # builds a docker image
./docker/start_gpu_docker.sh      # starts a docker container in the background, forwards $PWD to /finetune

docker exec -it finetune bash # starts a bash session in the docker container

For CPU-only usage:

./docker/build_cpu_docker.sh
./docker/start_cpu_docker.sh

Documentation

Full documentation and an API Reference for finetune is available at finetune.indico.io.

Comments
  • Very slow inference in 0.5.11

    After training a default classifier, then saving and loading it, model.predict("lorem ipsum") and model.predict_prob take 14 seconds on average, even on a hefty server such as an AWS p3.16xlarge.

    opened by dimidd 17
  • Out of Memory on Small Dataset

    Describe the bug When attempting to train a classifier on a small dataset of 8,000 documents, I get an out of memory error and the script stops running.

    Minimum Reproducible Example
    Version of finetune = 0.4.1
    Version of tensorflow-gpu = 1.8.0
    Version of cuda = release 9.0, V9.0.176
    Windows 10 Pro

    Load a dataset of documents (X_train) and labels (Y_train), where each document and label is simply a string.

    model = finetune.Classifier(max_length=256, batch_size=1)  # tried reducing the memory footprint
    model.fit(X_train, Y_train)

    Expected behavior I expected the model to train, but it doesn't manage to start training.

    Additional context I get the following warnings in the jupyter notebook:

    C:\Users...\Python35\site-packages\finetune\encoding.py:294: UserWarning: Some examples are longer than the max_length. Please trim documents or increase max_length. Fallback behaviour is to use the first 254 byte-pair encoded tokens
      "Fallback behaviour is to use the first {} byte-pair encoded tokens".format(max_length - 2)
    C:\Users...\Python35\site-packages\finetune\encoding.py:233: UserWarning: Document is longer than max length allowed, trimming document to 256 tokens.
      max_length
    C:\Users...\tensorflow\python\ops\gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
      "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
    WARNING:tensorflow:From C:\Users...\tensorflow\python\util\tf_should_use.py:118: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
    Instructions for updating:
    Use tf.variables_initializer instead.

    And then I get the following diagnostic info showing up in the command prompt:

    2018-10-04 17:26:36.920118: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2018-10-04 17:26:37.716883: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties: name: Quadro M1200 major: 5 minor: 0 memoryClockRate(GHz): 1.148 pciBusID: 0000:01:00.0 totalMemory: 4.00GiB freeMemory: 3.35GiB
    2018-10-04 17:26:37.725637: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
    2018-10-04 17:26:38.412484: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
    2018-10-04 17:26:38.417413: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
    2018-10-04 17:26:38.419392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
    2018-10-04 17:26:38.421353: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/device:GPU:0 with 3083 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
    [I 17:28:26.081 NotebookApp] Saving file at /projects/language-models/Finetune Package.ipynb
    2018-10-04 17:29:14.118663: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
    2018-10-04 17:29:14.123595: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
    2018-10-04 17:29:14.127649: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
    2018-10-04 17:29:14.135411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
    2018-10-04 17:29:14.138698: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3083 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
    2018-10-04 17:30:06.881174: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:275] Allocator (GPU_0_bfc) ran out of memory trying to allocate 9.00MiB. Current allocation summary follows.
    2018-10-04 17:30:06.900550: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (256): Total Chunks: 60, Chunks in use: 60. 15.0KiB allocated for chunks. 15.0KiB in use in bin. 312B client-requested in use in bin.
    2018-10-04 17:30:06.929551: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (512): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:06.964647: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1024): Total Chunks: 2, Chunks in use: 2. 2.5KiB allocated for chunks. 2.5KiB in use in bin. 2.0KiB client-requested in use in bin.
    2018-10-04 17:30:06.995394: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (2048): Total Chunks: 532, Chunks in use: 532. 1.56MiB allocated for chunks. 1.56MiB in use in bin. 1.56MiB client-requested in use in bin.
    2018-10-04 17:30:07.031613: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (4096): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.061013: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (8192): Total Chunks: 137, Chunks in use: 137. 1.39MiB allocated for chunks. 1.39MiB in use in bin. 1.39MiB client-requested in use in bin.
    2018-10-04 17:30:07.093603: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (16384): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.130530: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (32768): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.170321: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (65536): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.212730: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (131072): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.246329: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (262144): Total Chunks: 2, Chunks in use: 2. 512.0KiB allocated for chunks. 512.0KiB in use in bin. 512.0KiB client-requested in use in bin.
    2018-10-04 17:30:07.288640: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (524288): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.303248: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1048576): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.332990: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (2097152): Total Chunks: 71, Chunks in use: 71. 159.75MiB allocated for chunks. 159.75MiB in use in bin. 159.75MiB client-requested in use in bin.
    2018-10-04 17:30:07.364897: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (4194304): Total Chunks: 69, Chunks in use: 68. 466.99MiB allocated for chunks. 459.00MiB in use in bin. 459.00MiB client-requested in use in bin.
    2018-10-04 17:30:07.396862: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (8388608): Total Chunks: 140, Chunks in use: 140. 1.23GiB allocated for chunks. 1.23GiB in use in bin. 1.23GiB client-requested in use in bin.
    2018-10-04 17:30:07.428029: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (16777216): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.464813: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (33554432): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.494067: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (67108864): Total Chunks: 10, Chunks in use: 10. 1.17GiB allocated for chunks. 1.17GiB in use in bin. 1.17GiB client-requested in use in bin.
    2018-10-04 17:30:07.524156: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.550345: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (268435456): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.578392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:646] Bin for 9.00MiB was 8.00MiB, Chunk State:
    2018-10-04 17:30:07.600123: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980000 of size 1280
    2018-10-04 17:30:07.629493: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980500 of size 1280
    2018-10-04 17:30:07.649189: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980A00 of size 125144064
    2018-10-04 17:30:07.676965: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 00000008090D9600 of size 7077888
    2018-10-04 17:30:07.699245: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000809799600 of size 3072
    2018-10-04 17:30:07.718738: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 000000080979A200 of size 3072

    ...and so on. This is, in my opinion, a pretty small dataset, and I've made the max characters pretty small, so I don't think this is a hardware limitation but a bug.
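    On a 4 GB card like the one in this report, a mitigation sketch using only config options mentioned earlier in this README (the exact values are assumptions to tune, not a guaranteed fix):

    model = finetune.Classifier(max_length=128, batch_size=1, low_memory_mode=True)  # low_memory_mode trades speed for memory
    model.fit(X_train, Y_train)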

    opened by stevesmit 12
  • interpolate_pos_embed on MultiTask

    Hi, it looks like this parameter from the multitask is not being properly inherited by the input_pipeline.

    from finetune import Classifier, MultiTask
    
    MAX_LENGTH = 300
    finetune_config = {'batch_size': 4,
                       'interpolate_pos_embed': True,
                       'n_epochs': 1,  # default 3
                       'train_embeddings': False,
                       'num_layers_trained': 3,
                       'max_length': MAX_LENGTH
                       }
    multi_model = MultiTask({"sentiment": Classifier, 
                             "tags": Classifier}, 
                            **finetune_config)
    

    The previous code builds the multitask object.

    The following code finetunes it without problems:

    multi_model.finetune(X={"sentiment": X_train.regex_text.values,
                             "tags": X_train.regex_text.values}, 
                         Y={"sentiment": y_train.sentiment,
                             "tags": y_train.full_topic},
                         batch_size=4
                       )
    

    Also, multi_model.input_pipeline.config['interpolate_pos_embed'] = True is verified.

    But when prediction time comes:

    y_pred = multi_model.predict({"sentiment": X_train.regex_text.values,
                                       "tags": X_train.regex_text.values})
    

    It fails with:

    ValueError: Max Length cannot be greater than 300 if interpolate_pos_embed is turned off
    

    I do not know if I am missing something in the setup or whether it is a conflict between the parameters of the distinct objects.

    Thanks very much Madison for the great job! The MultiTask model is a fantastic tool for uneven multiobjective labeled data.

    opened by Guillermogsjc 11
  • Loading a model from 0.4.1 in 0.5.11

    Describe the bug After saving a model on 0.5.10 using Classifier.save("my_model.bin") and then upgrading to 0.5.11, loading it with Classifier.load("my_model.bin") results in KeyError: 'base_model_path'.

    opened by dimidd 11
  • A different way of doing the similarity/comparison task?

    Hey! Thanks for the awesome work. I was wondering if I could use and adapt finetune to do the following:

    Instead of using (Start, Text1, Delim, Text2, Extract) and (Start, Text2, Delim, Text1, Extract) as in the paper, can we use (Start, Text1, Extract) and (Start, Text2, Extract) separately through the transformer?

    This could be thought of as obtaining sentence/document embeddings for Text1 and Text2 separately. Upon doing that, I would like to compare their similarity using a distance metric such as cosine distance. (i.e. train the transformer as a siamese network.)
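    A minimal sketch of that siamese-style comparison, assuming finetune's featurize() returns one fixed-length embedding per input (verify against the finetune docs; this is not the library's built-in Comparison model):

    import numpy as np
    from finetune import Classifier

    model = Classifier()
    emb1, emb2 = model.featurize(["first document", "second document"])  # assumed: one vector per input
    similarity = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))  # cosine similarity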

    Would you suggest I build such a model on top of a fork of finetune?

    opened by chaitjo 11
  • Support for pre-training the language model

    Is your feature request related to a problem? Please describe. In order to use the classifier on different languages / specific domains, it would be useful to be able to pretrain the language model.

    Describe the solution you'd like Calling .fit on a corpus (i.e., no labels) should train the language model.

    model.fit(corpus)
    

    Describe alternatives you've considered Use the original repo, which doesn't have a simple-to-use interface.

    enhancement 
    opened by elyase 11
  • ValueError: Couldn't find trained model at /tmp/Finetune14yvac9b.

    Describe the bug
    INFO:finetune:Saving tensorboard output to /tmp/Finetune14yvac9b


    ValueError Traceback (most recent call last)
    in
          6     inputs = {"x": tf.placeholder(shape=xshapes, dtype=xtypes)}
          7     return tf.estimator.export.ServingInputReceiver(inputs, inputs)
    ----> 8 estimator.export_saved_model(export_dir_base='saved_model', serving_input_receiver_fn=serving_input_receiver_fn)

    ~/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py in export_saved_model(self, export_dir_base, serving_input_receiver_fn, assets_extra, as_text, checkpoint_path, experimental_mode)
        730         as_text=as_text,
        731         checkpoint_path=checkpoint_path,
    --> 732         strip_default_attrs=True)
        733
        734   def experimental_export_all_saved_models(

    ~/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py in _export_all_saved_models(self, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path, strip_default_attrs)
        825     else:
        826       raise ValueError("Couldn't find trained model at {}.".format(
    --> 827           self._model_dir))
        828
        829     export_dir = export_lib.get_timestamped_export_dir(export_dir_base)

    ValueError: Couldn't find trained model at /tmp/Finetune14yvac9b.

    Minimum Reproducible Example

    import tensorflow as tf  # import omitted in the original report; needed for tf.placeholder
    from finetune import MultiLabelClassifier

    model = MultiLabelClassifier.load('comp_gpt2.model')
    estimator, hooks = model.get_estimator()
    (xtypes, ytypes), (xshapes, yshapes) = model.input_pipeline.feed_shape_type_def()

    def serving_input_receiver_fn():
        inputs = {"x": tf.placeholder(shape=xshapes, dtype=xtypes)}
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)

    estimator.export_saved_model(export_dir_base='saved_model', serving_input_receiver_fn=serving_input_receiver_fn)

    opened by emtropyml 10
  • getting much lower accuracy with new release of finetune library

    Describe the bug I updated my finetune library to the latest version two days ago. As a sanity check, I loaded my fine-tuned and saved models from the previous version. I get totally different training and test accuracies. In the previous version, my train and test accuracy was 90% and 82%; now, with this new release, the same fine-tuned model, and the same datasets, I am getting 34% on the training set and 16% on the test set. This is a huge difference. I assume there is a bug, or something else going on?

    My code lines for fine tuning:

    import time
    start = time.time()
    model = Classifier(n_epochs=2, base_model=GPT2Model, tensorboard_folder='/workspace/checkpoints', max_length=1024, val_size=1000, chunk_long_sequences=False, keep_best_model=True)
    model.fit(trainX, trainY)
    print("total training time:", time.time() - start)
    

    for testing:

    import numpy as np  # import omitted in the original report

    # Load the saved model
    model = Classifier.load('./checkpoints/2epochs_GPT2')
    # test accuracy for the test set
    pred_test = model.predict(testX)
    accuracy = np.mean(np.array(pred_test) == np.array(testY))  # cast to arrays for elementwise comparison
    print('Test Accuracy: {:0.3f}'.format(accuracy))
    
    opened by rnyak 9
  • Can I use to generate text?

    Hi, this seems like great work by the team. According to the documentation, I understand that every model uses a pre-trained language model. Can I use it for the following scenarios, and if so, how?

    1. Fine-tune the pre-trained language model on my own text corpus and then generate (sample) text.
    2. Fine-tune the pre-trained language model on my own text corpus and then score any given text/sentence. Thanks.
    opened by abubakar-ucr 9
  • Slow unsupervised training

    Thank you for your library; the supervised finetuning works very well. However, when I try to train on unlabelled data (model.fit(unlabeledX)), training is much slower (9 s/it) than supervised training (1.7 s/it). This is on one K80 GPU. I am not sure why unsupervised training is slower; doesn't supervised training tune the language model as well?

    opened by chiayewken 9
  • eval_acc parameter

    Describe the bug I set eval_acc=True and val_size=1000. I am fine-tuning the Classifier model for 3 epochs. I get 90% training and 82% test set accuracy, but when I check the TensorBoard accuracy plot, I see that validation accuracy is 49%. That does not seem correct to me.

    I am not sure if the eval_acc is calculated correctly.

    Expected behavior How can we print out validation accuracy during fine-tuning, at least once per epoch?
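    A manual workaround sketch: hold out a validation split yourself and report accuracy once per epoch. This assumes repeated fit() calls continue finetuning the same model, as the two-step finetuning example earlier in this README suggests.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from finetune import Classifier

    # trainX / trainY as in the snippets above
    trX, valX, trY, valY = train_test_split(trainX, trainY, test_size=0.1)
    model = Classifier(n_epochs=1)
    for epoch in range(3):
        model.fit(trX, trY)  # one epoch at a time
        val_acc = np.mean(np.array(model.predict(valX)) == np.array(valY))
        print("epoch {}: val accuracy {:0.3f}".format(epoch + 1, val_acc))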

    opened by rnyak 8
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1 to latest

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.cpu

    We recommend upgrading to tensorflow/tensorflow:latest, as this image has only 29 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.
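    The change such a PR makes is a one-line base-image bump in docker/Dockerfile.cpu; a hypothetical sketch (the repository's actual Dockerfile contents may differ):

    # Hypothetical docker/Dockerfile.cpu excerpt; only the tag changes.
    FROM tensorflow/tensorflow:latest    # previously: tensorflow/tensorflow:2.7.1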

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1-gpu to latest-gpu

    This PR was automatically created by Snyk using the credentials of a real user.


    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.gpu

    We recommend upgrading to tensorflow/tensorflow:latest-gpu, as this image has only 49 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | medium severity | 514 | CVE-2022-32221 SNYK-UBUNTU2004-CURL-3070971 | No Known Exploit |
    | medium severity | 514 | Arbitrary Code Injection SNYK-UBUNTU2004-GNUPG2-2940666 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-GZIP-2442549 | No Known Exploit |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by ashmuck 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1 to 2.11.0

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.cpu

    We recommend upgrading to tensorflow/tensorflow:2.11.0, as this image has only 29 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1-gpu to 2.11.0-gpu

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.gpu

    We recommend upgrading to tensorflow/tensorflow:2.11.0-gpu, as this image has only 49 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0
Releases: 0.8.6