Management Dashboard for Torchserve

Overview

Torchserve Dashboard using Streamlit

Related blog post

Demo

Usage

Additional requirement: torchserve (recommended: v0.5.2)

Simply run:

pip3 install torchserve-dashboard --user
# torchserve-dashboard [streamlit_options(optional)] -- [config_path(optional)] [model_store(optional)] [log_location(optional)] [metrics_location(optional)]
torchserve-dashboard
#OR change port 
torchserve-dashboard --server.port 8105 -- --config_path ./torchserve.properties
#OR provide a custom configuration 
torchserve-dashboard -- --config_path ./torchserve.properties --model_store ./model_store

❗ Keep in mind that if you change any of the --config_path, --model_store, --metrics_location, or --log_location options while torchserve is already running before you start torchserve-dashboard, they won't take effect until you stop & start torchserve. These options are used instead of their respective environment variables TS_CONFIG_FILE, METRICS_LOCATION, LOG_LOCATION.
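
For example, a minimal sketch of applying changed options to an already-running torchserve (torchserve --stop is part of torchserve itself, not the dashboard):

# stop the torchserve instance that is already running
torchserve --stop
# relaunch the dashboard with the new options; it starts torchserve with them
torchserve-dashboard -- --config_path ./torchserve.properties --model_store ./model_store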

OR

git clone https://github.com/cceyda/torchserve-dashboard.git
streamlit run torchserve_dashboard/dash.py 
#OR
streamlit run torchserve_dashboard/dash.py --server.port 8105 -- --config_path ./torchserve.properties 

Example torchserve config:

inference_address=http://127.0.0.1:8443
management_address=http://127.0.0.1:8444
metrics_address=http://127.0.0.1:8445
grpc_inference_port=7070
grpc_management_port=7071
number_of_gpu=0
batch_size=1
model_store=./model_store

If the server doesn't start for some reason, check whether your ports are already in use!
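
A quick way to check is with standard Linux tools (a sketch, not part of the dashboard; 8443-8445 are from the example config above, 8501 is streamlit's default port):

# list listeners on the relevant ports
ss -tlnp | grep -E '8443|8444|8445|8501'
# or, on systems with lsof
lsof -i :8443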

Updates

[15-oct-2020] Add scale workers tab.

[16-feb-2021] (functionality) Make log path configurable; (functionality) remove the model_name requirement; (UI) add cosmetic error messages.

[10-may-2021] Update config & make it optional. Update streamlit. Auto-create folders.

[31-may-2021] Update to v0.4 (add workflow API). Refactor streamlit out of api.py.

[30-nov-2021] Update to v0.5, adding support for encrypted model serving (not tested). Update streamlit to v1+.

FAQs

  • Does torchserve keep running in the background?

    torchserve is spawned using Popen and keeps running in the background even if you stop the dashboard.
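
    A quick way to check and clean up manually (a sketch; /ping is torchserve's standard health-check endpoint on the inference address, 8443 in the example config):

    # is torchserve still alive?
    curl http://127.0.0.1:8443/ping
    # stop the background torchserve process
    torchserve --stop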

  • What about environment variables?

    These environment variables are passed to the torchserve command:

    ENVIRON_WHITELIST=["LD_LIBRARY_PATH","LC_CTYPE","LC_ALL","PATH","JAVA_HOME","PYTHONPATH","TS_CONFIG_FILE","LOG_LOCATION","METRICS_LOCATION","AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"]
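
    For example, a sketch of passing AWS credentials through to the spawned torchserve (relevant for encrypted model serving; the values below are placeholders):

    # these variables are on the whitelist, so they are forwarded to torchserve
    export AWS_ACCESS_KEY_ID=<your-key-id>
    export AWS_SECRET_ACCESS_KEY=<your-secret-key>
    export AWS_DEFAULT_REGION=us-east-1
    torchserve-dashboard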

  • How to change the logging format of torchserve?

    You can set the location of your custom log4j2 config in your configuration file like this:

    vmargs=-Dlog4j.configurationFile=file:///path/to/custom/log4j2.xml

  • What is the meaning behind the weird versioning?

    The minor version follows the compatible torchserve version; the patch version reflects dashboard-only changes.

Help & Questions & Feedback

Open an issue

TODOs

  • Async?
  • Better logging
  • Remote only mode
Comments
  • Update to serve 0.4

    I love your project and was hoping we could feature it more prominently in the main torchserve repo - I was wondering if you'd be OK with and interested in this. If so, I was wondering if you could give me some feedback on the points below.

    Installation instructions

    I tried torchserve-dashboard --server.port 8105 -- --config_path ./torchserve.properties --model_store ./model_store, but the page never seems to load, regardless of whether I use the network or external URL that I have.

    I set up a config:

    (torchservedashboard) [email protected]:~/torchserve-dashboard$ cat torchserve.properties 
    inference_address=http://127.0.0.1:8443
    management_address=http://127.0.0.1:8444
    metrics_address=http://127.0.0.1:8445
    grpc_inference_port=7070
    grpc_management_port=7071
    number_of_gpu=0
    batch_size=1
    model_store=model_store
    

    But it perhaps makes the most sense to just add a default one to the repo so things just work. I'm happy to make the PR; just let me know what you suggest. Ideally things just work with zero config, and people can come back and change stuff once they feel more comfortable.

    Also on Ubuntu I had to type export PATH="$HOME/.local/bin:$PATH" so I could call torchserve-dashboard

    Features

    Also, there are some new features we're excited about which would be very interesting to see, like:

    1. Model interpretability with Captum https://github.com/pytorch/serve/blob/master/captum/Captum_visualization_for_bert.ipynb
    2. Workflow support coming in 0.4 which will allow much more configurable pipelines https://github.com/pytorch/serve/pull/1024/files

    In all cases please let me know if you think we're on the right track and how we can make torchserve more useful to you. I liked your suggestion on automatic doc generation and it's something I'm looking into, so please keep the suggestions coming!

    opened by msaroufim 5
  • Improvements of package setup logic

    This PR is related to #1. It improves the structure of the package setup: all package-related info is moved to torchserve_dashboard/__init__.py.

    Requirement files are added which are split up depending on the usage of the repo/package.

    All functions linked to setup are moved to torchserve_dashboard/setup_tools.py. The function parsing the requirements can handle commented requirements as well as references to GitHub etc. (#egg included in the requirement).

    opened by FlorianMF 3
  • click >=8 possibly not compatible

    Couldn't run the dashboard initially

    Traceback (most recent call last):
      File "/Users/me/Desktop/pytorch-mnist/venv/bin/torchserve-dashboard", line 8, in <module>
        sys.exit(main())
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
        return self.main(*args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1062, in main
        rv = self.invoke(ctx)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
        return f(get_current_context(), *args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/torchserve_dashboard/cli.py", line 16, in main
        ctx.forward(streamlit.cli.main_run, target=filename, args=args, *kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 784, in forward
        return __self.invoke(__cmd, *args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
    TypeError: main_run() got multiple values for argument 'target'
    

    After a bit of googling, I found this: https://github.com/rytilahti/python-eq3bt/issues/30

    The default install brought in click==8.0.1. I had to downgrade to 7.1.2 to get past the error.
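
    A possible workaround until this is fixed is to pin click below 8, as described above (a sketch):

    # downgrade click in the same environment as torchserve-dashboard
    pip3 install "click>=7.1.2,<8" --user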

    opened by jsphweid 1
  • better caching, init option, v0.6 update

    • Better caching using @st.experimental_singleton

      • Argument parsing and API class initialization should only happen once (across sessions) on initial load.
      • Should be way better compared to before, which re-ran those functions after each page refresh 😱 Might be optimized further later... need to refactor the cli-param parsing/init logic.
    • Added an --init option to initialize torchserve on start, as per this issue: https://github.com/cceyda/torchserve-dashboard/issues/16. Use it like: torchserve-dashboard --init. Although you still have to load the dashboard screen once for it to actually start!

    • Update to match changes in torchserve v0.6

      • There seems to be only one update to the Management API in v0.6 (https://github.com/pytorch/serve/pull/1421), which adds a ?customized=true option to return custom_metadata in model details. Although the feature seems to be buggy for old .mar files not implementing it (tested on: https://github.com/pytorch/serve/blob/master/frontend/archive/src/test/resources/models/noop-customized.mar).

      Anyway, I added a checkbox (defaulting to False) to return custom_metadata if needed.
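
      For reference, a sketch of hitting that option directly against the Management API (my_model is a placeholder model name; 8444 is the management address from the example config):

      # returns model details, including custom_metadata when the handler provides it
      curl "http://127.0.0.1:8444/models/my_model?customized=true"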

    opened by cceyda 0
  • update streamlit version to v1.11.1

    Update the streamlit version to include the v1.11.1 security update, although torchserve-dashboard isn't using any custom components and is therefore not affected by it.

    opened by cceyda 0
  • Update to 0.5.0

    update torchserve:

    • v0.5
    • add aws encrypted model feature
    • add log-config to options

    update streamlit:

    • v1.2.0 -> drop beta_ prefix
    • drop python 3.6

    https://github.com/cceyda/torchserve-dashboard/issues/13

    opened by cceyda 0
  • Update to v0.4

    • Add workflow management endpoints (untested)
    • Add version check
    • Refactor api.py (remove streamlit)

    Closes: https://github.com/cceyda/torchserve-dashboard/issues/2

    opened by cceyda 0
  • Add workflow Management API

    Self-todo. ETA: June 3rd. References: https://github.com/pytorch/serve/tree/release_0.4.0/examples/Workflows and https://github.com/pytorch/serve/blob/release_0.4.0/docs/workflow_management_api.md
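
    For context, a rough sketch of the workflow management endpoints described in that doc (my_workflow is a placeholder; 8444 is the management address from the example config):

    # register a workflow archive from the workflow store
    curl -X POST "http://127.0.0.1:8444/workflows?url=my_workflow.war"
    # list, describe, and unregister workflows
    curl "http://127.0.0.1:8444/workflows"
    curl "http://127.0.0.1:8444/workflows/my_workflow"
    curl -X DELETE "http://127.0.0.1:8444/workflows/my_workflow"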

    opened by cceyda 0
  • Can't start torchserve-dashboard

    I'm getting this error while on the start

    Traceback (most recent call last):
      File "/home/kavan/.local/bin/torchserve-dashboard", line 5, in <module>
        from torchserve_dashboard.cli import main
      File "/home/kavan/.local/lib/python3.8/site-packages/torchserve_dashboard/cli.py", line 2, in <module>
        import streamlit.cli
      File "/home/kavan/.local/lib/python3.8/site-packages/streamlit/__init__.py", line 49, in <module>
        from streamlit.proto.RootContainer_pb2 import RootContainer
      File "/home/kavan/.local/lib/python3.8/site-packages/streamlit/proto/RootContainer_pb2.py", line 22, in <module>
        create_key=_descriptor._internal_create_key,
    AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
    

    Btw thanks for this awesome lib.

    opened by Kavan72 1
  • Explanations API

    Mentioned in https://github.com/cceyda/torchserve-dashboard/issues/1: Model interpretability with Captum (https://github.com/pytorch/serve/blob/master/captum/Captum_visualization_for_bert.ipynb).

    This would be good to add if we end up adding an InferenceAPI

    opened by cceyda 0
  • Inference API

    Moving the discussion from https://github.com/cceyda/torchserve-dashboard/issues/1#issuecomment-863911194 to here

    Current challenges blocking this:

    • There is no way to know the format of the expected request/response, especially for custom handlers.

    (I prefer not to do model_name->type matching manually.)

    If a request/response schema is added to the returned OpenAPI definitions, I can probably auto-generate something like Swagger UI.
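
    For reference, a sketch of fetching the current OpenAPI definitions from a running torchserve (an OPTIONS request returns the API description; addresses are from the example config above):

    # OpenAPI description of the inference API
    curl -X OPTIONS http://127.0.0.1:8443
    # OpenAPI description of the management API
    curl -X OPTIONS http://127.0.0.1:8444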

    opened by cceyda 0
  • Docker container

    I think it would be great for users and for developers to be able to easily share their dashboard or run it in production without deploying via Streamlit. I could add a simple Dockerfile wrapping everything up into a container.

    Torchserve-Dashboard would be to Torchserve what MongoExpress is to MongoDB. Thoughts?

    opened by FlorianMF 3
Releases(v0.6.0)
  • v0.6.0(Aug 1, 2022)

    What's Changed

    • update streamlit version to v1.11.1 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/18
    • Better caching using @st.experimental_singleton https://github.com/cceyda/torchserve-dashboard/pull/19
    • Added --init option to initialize torchserve on start https://github.com/cceyda/torchserve-dashboard/pull/19
    • Update to match changes in torchserve v0.6 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/19

    Full Changelog: https://github.com/cceyda/torchserve-dashboard/compare/v0.5.0...v0.6.0

  • v0.5.0(Nov 30, 2021)

    Update to v0.5, adding support for encrypted model serving (not tested). Update streamlit to v1+

    What's Changed

    • Improvements of package setup logic by @FlorianMF in https://github.com/cceyda/torchserve-dashboard/pull/5
    • WIP: Add type annotations by @FlorianMF in https://github.com/cceyda/torchserve-dashboard/pull/7
    • Update to 0.5.0 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/15

    New Contributors

    • @FlorianMF made their first contribution in https://github.com/cceyda/torchserve-dashboard/pull/5

    Full Changelog: https://github.com/cceyda/torchserve-dashboard/compare/v0.4.0...v0.5.0

  • v0.3.3(Jun 12, 2021)

  • v0.3.2(May 9, 2021)

  • v0.3.1(May 9, 2021)

  • v0.2.5(Feb 16, 2021)

  • v0.2.4(Feb 16, 2021)

  • v0.2.3(Oct 15, 2020)

  • v0.2.2(Oct 13, 2020)

  • v0.2.0(Oct 13, 2020)

Owner
Ceyda Cinarel
AI researcher & engineer~ all things NLP πŸ€– generative models β˜… like trying out new libraries & tools β™₯ Python