sog4onnx

Overview

Simple ONNX operation generator (Simple Operation Generator for ONNX).
Part of the simple-onnx-processing-tools suite:
https://github.com/PINTO0309/simple-onnx-processing-tools


Key concept

  • Variables, constants, operations, and attributes can all be specified externally.
  • Allows the opset to be specified externally.
  • The tool does not check the consistency of operations, because new OPs are added frequently and the definitions of existing OPs change with each new version of the ONNX opset.
  • Only one OP can be defined at a time; the goal is to build arbitrary ONNX graphs by combining this tool with snc4onnx, sne4onnx, snd4onnx, and scs4onnx.
  • List of parameters that can be specified: https://github.com/onnx/onnx/blob/main/docs/Operators.md

1. Setup

1-1. HostPC

### option
$ echo export PATH="~/.local/bin:$PATH" >> ~/.bashrc \
&& source ~/.bashrc

### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U sog4onnx

1-2. Docker

### docker pull
$ docker pull pinto0309/sog4onnx:latest

### docker build
$ docker build -t pinto0309/sog4onnx:latest .

### docker run
$ docker run --rm -it -v `pwd`:/workdir pinto0309/sog4onnx:latest
$ cd /workdir

2. CLI Usage

$ sog4onnx -h

usage: sog4onnx [-h]
  --op_type OP_TYPE
  --opset OPSET
  --op_name OP_NAME
  [--input_variables NAME TYPE VALUE]
  [--output_variables NAME TYPE VALUE]
  [--attributes NAME DTYPE VALUE]
  [--output_onnx_file_path OUTPUT_ONNX_FILE_PATH]
  [--non_verbose]

optional arguments:
  -h, --help
        show this help message and exit

  --op_type OP_TYPE
        ONNX OP type.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

  --opset OPSET
        ONNX opset number.

  --op_name OP_NAME
        OP name.

  --input_variables NAME DTYPE VALUE
        input_variables can be specified multiple times.
        --input_variables variable_name numpy.dtype shape
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --input_variables i1 float32 [1,3,5,5] \
        --input_variables i2 int32 [1] \
        --input_variables i3 float64 [1,3,224,224]

  --output_variables NAME DTYPE VALUE
        output_variables can be specified multiple times.
        --output_variables variable_name numpy.dtype shape
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --output_variables o1 float32 [1,3,5,5] \
        --output_variables o2 int32 [1] \
        --output_variables o3 float64 [1,3,224,224]

  --attributes NAME DTYPE VALUE
        attributes can be specified multiple times.
        dtype is one of "float32" or "float64" or "int32" or "int64" or "str".
        --attributes name dtype value
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --attributes alpha float32 1.0 \
        --attributes beta float32 1.0 \
        --attributes transA int32 0 \
        --attributes transB int32 0

  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
        Output onnx file path.
        If not specified, a file with the OP type name is generated.

        e.g. op_type="Gemm" -> Gemm.onnx

  --non_verbose
        Do not show all information logs. Only error logs are displayed.

3. In-script Usage

$ python
>>> from sog4onnx import generate
>>> help(generate)
Help on function generate in module sog4onnx.onnx_operation_generator:

generate(
  op_type: str,
  opset: int,
  op_name: str,
  input_variables: dict,
  output_variables: dict,
  attributes: Union[dict, NoneType] = None,
  output_onnx_file_path: Union[str, NoneType] = '',
  non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto

    Parameters
    ----------
    op_type: str
        ONNX op type.
        See below for the types of OPs that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g. "Add", "Div", "Gemm", ...

    opset: int
        ONNX opset number.

        e.g. 11

    op_name: str
        OP name.

    input_variables: Optional[dict]
        Specify input variables for the OP to be generated.
        See below for the variables that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"input_var_name1": [numpy.dtype, shape], "input_var_name2": [dtype, shape], ...}

        e.g.
        input_variables = {
          "name1": [np.float32, [1,224,224,3]],
          "name2": [np.bool_, [0]],
          ...
        }

    output_variables: Optional[dict]
        Specify output variables for the OP to be generated.
        See below for the variables that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"output_var_name1": [numpy.dtype, shape], "output_var_name2": [dtype, shape], ...}

        e.g.
        output_variables = {
          "name1": [np.float32, [1,224,224,3]],
          "name2": [np.bool_, [0]],
          ...
        }

    attributes: Optional[dict]
        Specify output attributes for the OP to be generated.
        See below for the attributes that can be specified.
        When specifying Tensor format values, specify an array converted to np.ndarray.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"attr_name1": value1, "attr_name2": value2, "attr_name3": value3, ...}

        e.g.
        attributes = {
          "alpha": 1.0,
          "beta": 1.0,
          "transA": 0,
          "transB": 0
        }
        Default: None

    output_onnx_file_path: Optional[str]
        Output onnx file path.
        If not specified, no .onnx file is output.
        Default: ''

    non_verbose: Optional[bool]
        Do not show all information logs. Only error logs are displayed.
        Default: False

    Returns
    -------
    single_op_graph: onnx.ModelProto
        Single op onnx ModelProto
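
The docstring above maps directly to a call. Below is a minimal sketch (mirroring the Add sample in 6-2) that also validates the result with the standard ONNX checker; the checker call is an assumption for illustration and is not part of sog4onnx itself.

import numpy as np
import onnx
from sog4onnx import generate

# Generate a single Add OP (opset 11) and receive it as an onnx.ModelProto.
add_graph = generate(
    op_type = 'Add',
    opset = 11,
    op_name = 'add_custom1',
    input_variables = {
        'i1': [np.float32, [1,2,3]],
        'i2': [np.float32, [1,2,3]],
    },
    output_variables = {
        'o1': [np.float32, [1,2,3]],
    },
    non_verbose = True,
)

# The return value is a plain onnx.ModelProto, so standard onnx utilities apply.
onnx.checker.check_model(add_graph)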

4. CLI Execution

$ sog4onnx \
--op_type Gemm \
--opset 1 \
--op_name gemm_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,1] \
--input_variables i3 int32 [0] \
--output_variables o1 float32 [1,2,3] \
--attributes alpha float32 1.0 \
--attributes beta float32 1.0 \
--attributes transA int32 0 \
--attributes transB int32 0
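
Because --output_onnx_file_path is omitted, the command above writes Gemm.onnx (named after the OP type) by default. A small sketch for inspecting that file, assuming only the onnx package installed in the setup step:

import onnx

# Load the generated single-OP model and print a human-readable view of its graph.
model = onnx.load('Gemm.onnx')
print(onnx.helper.printable_graph(model.graph))
print(model.opset_import)  # should report the opset requested on the command line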

5. In-script Execution

import numpy as np
from sog4onnx import generate

single_op_graph = generate(
    op_type = 'Gemm',
    opset = 1,
    op_name = "gemm_custom1",
    input_variables = {
      "i1": [np.float32, [1,2,3]],
      "i2": [np.float32, [1,1]],
      "i3": [np.int32, [0]],
    },
    output_variables = {
      "o1": [np.float32, [1,2,3]],
    },
    attributes = {
      "alpha": 1.0,
      "beta": 1.0,
      "broadcast": 0,
      "transA": 0,
      "transB": 0,
    },
    non_verbose = True,
)
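
When output_onnx_file_path is left at its default of '', generate() does not write a file, so the returned ModelProto has to be saved explicitly if a .onnx file is needed. A short follow-up using the standard onnx API (the file name here is chosen only for illustration):

import onnx

# Persist the in-memory ModelProto produced by generate().
onnx.save(single_op_graph, 'gemm_custom1.onnx')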

6. Sample

6-1. opset=1, Gemm

$ sog4onnx \
--op_type Gemm \
--opset 1 \
--op_name gemm_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,1] \
--input_variables i3 int32 [0] \
--output_variables o1 float32 [1,2,3] \
--attributes alpha float32 1.0 \
--attributes beta float32 1.0 \
--attributes transA int32 0 \
--attributes transB int32 0 \
--non_verbose


6-2. opset=11, Add

$ sog4onnx \
--op_type Add \
--opset 11 \
--op_name add_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,2,3] \
--output_variables o1 float32 [1,2,3] \
--non_verbose
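
The generated Add.onnx is a complete single-node model, so it can be executed directly. A hedged sketch using onnxruntime, which is not installed by this tool and is assumed to be available separately (pip install onnxruntime):

import numpy as np
import onnxruntime as ort

# Feed the two inputs declared above (i1, i2) and fetch the single output (o1).
sess = ort.InferenceSession('Add.onnx', providers=['CPUExecutionProvider'])
i1 = np.ones((1, 2, 3), dtype=np.float32)
i2 = np.full((1, 2, 3), 2.0, dtype=np.float32)
o1, = sess.run(None, {'i1': i1, 'i2': i2})
print(o1)  # expected: all elements equal to 3.0, shape (1, 2, 3)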


6-3. opset=11, NonMaxSuppression

$ sog4onnx \
--op_type NonMaxSuppression \
--opset 11 \
--op_name nms_custom1 \
--input_variables boxes float32 [1,6,4] \
--input_variables scores float32 [1,1,6] \
--input_variables max_output_boxes_per_class int64 [1] \
--input_variables iou_threshold float32 [1] \
--input_variables score_threshold float32 [1] \
--output_variables selected_indices int64 [3,3] \
--attributes center_point_box int64 1
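
The same NonMaxSuppression OP can be produced in-script; a sketch that follows the dtypes and shapes of the CLI flags above (the attribute value is passed as a plain Python int, per section 3):

import numpy as np
from sog4onnx import generate

nms_graph = generate(
    op_type = 'NonMaxSuppression',
    opset = 11,
    op_name = 'nms_custom1',
    input_variables = {
        'boxes': [np.float32, [1,6,4]],
        'scores': [np.float32, [1,1,6]],
        'max_output_boxes_per_class': [np.int64, [1]],
        'iou_threshold': [np.float32, [1]],
        'score_threshold': [np.float32, [1]],
    },
    output_variables = {
        'selected_indices': [np.int64, [3,3]],
    },
    attributes = {
        'center_point_box': 1,
    },
)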


6-4. opset=11, Constant

$ sog4onnx \
--op_type Constant \
--opset 11 \
--op_name const_custom1 \
--output_variables boxes float32 [1,6,4] \
--attributes value float32 \
[[\
[0.5,0.5,1.0,1.0],\
[0.5,0.6,1.0,1.0],\
[0.5,0.4,1.0,1.0],\
[0.5,10.5,1.0,1.0],\
[0.5,10.6,1.0,1.0],\
[0.5,100.5,1.0,1.0]\
]]
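
For Tensor-valued attributes such as Constant's value, the in-script API expects an np.ndarray (see the attributes description in section 3). A sketch of the equivalent call; passing None for input_variables is an assumption here, since Constant takes no inputs:

import numpy as np
from sog4onnx import generate

const_graph = generate(
    op_type = 'Constant',
    opset = 11,
    op_name = 'const_custom1',
    input_variables = None,  # assumption: Constant has no inputs
    output_variables = {
        'boxes': [np.float32, [1,6,4]],
    },
    attributes = {
        'value': np.array(
            [[
                [0.5,   0.5, 1.0, 1.0],
                [0.5,   0.6, 1.0, 1.0],
                [0.5,   0.4, 1.0, 1.0],
                [0.5,  10.5, 1.0, 1.0],
                [0.5,  10.6, 1.0, 1.0],
                [0.5, 100.5, 1.0, 1.0],
            ]],
            dtype=np.float32,
        ),
    },
)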


7. Reference

  1. https://github.com/onnx/onnx/blob/main/docs/Operators.md
  2. https://docs.nvidia.com/deeplearning/tensorrt/onnx-graphsurgeon/docs/index.html
  3. https://github.com/NVIDIA/TensorRT/tree/main/tools/onnx-graphsurgeon
  4. https://github.com/PINTO0309/sne4onnx
  5. https://github.com/PINTO0309/snd4onnx
  6. https://github.com/PINTO0309/snc4onnx
  7. https://github.com/PINTO0309/scs4onnx
  8. https://github.com/PINTO0309/PINTO_model_zoo

8. Issues

https://github.com/PINTO0309/simple-onnx-processing-tools/issues


Comments
  • Small fixes to README

    Thank you for the tool. There are small fixes needed in the README: the attributes in one example are missing the type, and the numpy import is missing in another one.

    Otherwise, it works perfectly.

    opened by ibaiGorordo 1
Releases(1.0.15)
  • 1.0.15(Nov 20, 2022)

    • Fixed a bug where Constant and ConstantOfShape opsets were not set

    Full Changelog: https://github.com/PINTO0309/sog4onnx/compare/1.0.14...1.0.15

  • 1.0.14(Sep 8, 2022)

    • Add short form parameter
      $ sog4onnx -h
      
      usage: sog4onnx [-h]
        --ot OP_TYPE
        --os OPSET
        --on OP_NAME
        [-iv NAME TYPE VALUE]
        [-ov NAME TYPE VALUE]
        [-a NAME DTYPE VALUE]
        [-of OUTPUT_ONNX_FILE_PATH]
        [-n]
      
      optional arguments:
        -h, --help
          show this help message and exit
      
        -ot OP_TYPE, --op_type OP_TYPE
          ONNX OP type.
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
        -os OPSET, --opset OPSET
          ONNX opset number.
      
        -on OP_NAME, --op_name OP_NAME
          OP name.
      
        -iv INPUT_VARIABLES INPUT_VARIABLES INPUT_VARIABLES, --input_variables INPUT_VARIABLES INPUT_VARIABLES INPUT_VARIABLES
          input_variables can be specified multiple times.
          --input_variables variable_name numpy.dtype shape
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --input_variables i1 float32 [1,3,5,5] \
          --input_variables i2 int32 [1] \
          --input_variables i3 float64 [1,3,224,224]
      
        -ov OUTPUT_VARIABLES OUTPUT_VARIABLES OUTPUT_VARIABLES, --output_variables OUTPUT_VARIABLES OUTPUT_VARIABLES OUTPUT_VARIABLES
          output_variables can be specified multiple times.
          --output_variables variable_name numpy.dtype shape
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --output_variables o1 float32 [1,3,5,5] \
          --output_variables o2 int32 [1] \
          --output_variables o3 float64 [1,3,224,224]
      
        -a ATTRIBUTES ATTRIBUTES ATTRIBUTES, --attributes ATTRIBUTES ATTRIBUTES ATTRIBUTES
          attributes can be specified multiple times.
          dtype is one of "float32" or "float64" or "int32" or "int64" or "str".
          --attributes name dtype value
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --attributes alpha float32 1.0 \
          --attributes beta float32 1.0 \
          --attributes transA int32 0 \
          --attributes transB int32 0
      
        -of OUTPUT_ONNX_FILE_PATH, --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
          Output onnx file path.
          If not specified, a file with the OP type name is generated.
      
          e.g. op_type="Gemm" -> Gemm.onnx
      
        -n, --non_verbose
          Do not show all information logs. Only error logs are displayed.
      
  • 1.0.13(Jun 10, 2022)

  • 1.0.12(Jun 7, 2022)

  • 1.0.11(May 25, 2022)

  • 1.0.10(May 15, 2022)

  • 1.0.9(Apr 26, 2022)

    • Added op_name as an input parameter, allowing OPs to be named.
      • CLI
        sog4onnx [-h]
          --op_type OP_TYPE
          --opset OPSET
          --op_name OP_NAME
          [--input_variables NAME TYPE VALUE]
          [--output_variables NAME TYPE VALUE]
          [--attributes NAME DTYPE VALUE]
          [--output_onnx_file_path OUTPUT_ONNX_FILE_PATH]
          [--non_verbose]
        
      • In-script
        generate(
          op_type: str,
          opset: int,
          op_name: str,
          input_variables: dict,
          output_variables: dict,
          attributes: Union[dict, NoneType] = None,
          output_onnx_file_path: Union[str, NoneType] = '',
          non_verbose: Union[bool, NoneType] = False
        ) -> onnx.onnx_ml_pb2.ModelProto
        
  • 1.0.8(Apr 15, 2022)

  • 1.0.7(Apr 14, 2022)

  • 1.0.6(Apr 14, 2022)

  • 1.0.5(Apr 13, 2022)

  • 1.0.4(Apr 13, 2022)

  • 1.0.3(Apr 12, 2022)

  • 1.0.2(Apr 12, 2022)

  • 1.0.1(Apr 12, 2022)

  • 1.0.0(Apr 12, 2022)

  • 0.0.2(Apr 12, 2022)

  • 0.0.1(Apr 12, 2022)

Owner
Katsuya Hyodo
Hobby programmer. Intel Software Innovator Program member.