Simple ONNX operation generator.

Overview

sog4onnx

Simple ONNX operation generator.

https://github.com/PINTO0309/simple-onnx-processing-tools


Key concept

  • Variables, Constants, Operations, and Attributes can be generated externally.
  • Allows the Opset to be specified externally.
  • No consistency check is performed on Operations within the tool, because new OPs are added frequently and the definitions of existing OPs change with each new version of ONNX's Opset.
  • Only one OP can be defined at a time; the goal is to generate arbitrary ONNX graphs by combining this tool with snc4onnx, sne4onnx, snd4onnx, and scs4onnx.
  • List of parameters that can be specified: https://github.com/onnx/onnx/blob/main/docs/Operators.md

1. Setup

1-1. HostPC

### option
$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc \
&& source ~/.bashrc

### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U sog4onnx

1-2. Docker

### docker pull
$ docker pull pinto0309/sog4onnx:latest

### docker build
$ docker build -t pinto0309/sog4onnx:latest .

### docker run
$ docker run --rm -it -v `pwd`:/workdir pinto0309/sog4onnx:latest
$ cd /workdir

2. CLI Usage

$ sog4onnx -h

usage: sog4onnx [-h]
  --op_type OP_TYPE
  --opset OPSET
  --op_name OP_NAME
  [--input_variables NAME TYPE VALUE]
  [--output_variables NAME TYPE VALUE]
  [--attributes NAME DTYPE VALUE]
  [--output_onnx_file_path OUTPUT_ONNX_FILE_PATH]
  [--non_verbose]

optional arguments:
  -h, --help
        show this help message and exit

  --op_type OP_TYPE
        ONNX OP type.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

  --opset OPSET
        ONNX opset number.

  --op_name OP_NAME
        OP name.

  --input_variables NAME DTYPE VALUE
        input_variables can be specified multiple times.
        --input_variables variable_name numpy.dtype shape
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --input_variables i1 float32 [1,3,5,5] \
        --input_variables i2 int32 [1] \
        --input_variables i3 float64 [1,3,224,224]

  --output_variables NAME DTYPE VALUE
        output_variables can be specified multiple times.
        --output_variables variable_name numpy.dtype shape
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --output_variables o1 float32 [1,3,5,5] \
        --output_variables o2 int32 [1] \
        --output_variables o3 float64 [1,3,224,224]

  --attributes NAME DTYPE VALUE
        attributes can be specified multiple times.
        dtype is one of "float32" or "float64" or "int32" or "int64" or "str".
        --attributes name dtype value
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --attributes alpha float32 1.0 \
        --attributes beta float32 1.0 \
        --attributes transA int32 0 \
        --attributes transB int32 0

  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
        Output onnx file path.
        If not specified, a file with the OP type name is generated.

        e.g. op_type="Gemm" -> Gemm.onnx

  --non_verbose
        Do not show all information logs. Only error logs are displayed.

3. In-script Usage

$ python
>>> from sog4onnx import generate
>>> help(generate)
Help on function generate in module sog4onnx.onnx_operation_generator:

generate(
  op_type: str,
  opset: int,
  op_name: str,
  input_variables: dict,
  output_variables: dict,
  attributes: Union[dict, NoneType] = None,
  output_onnx_file_path: Union[str, NoneType] = '',
  non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto

    Parameters
    ----------
    op_type: str
        ONNX op type.
        See below for the types of OPs that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g. "Add", "Div", "Gemm", ...

    opset: int
        ONNX opset number.

        e.g. 11

    op_name: str
        OP name.

    input_variables: Optional[dict]
        Specify input variables for the OP to be generated.
        See below for the variables that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"input_var_name1": [numpy.dtype, shape], "input_var_name2": [dtype, shape], ...}

        e.g.
        input_variables = {
          "name1": [np.float32, [1,224,224,3]],
          "name2": [np.bool_, [0]],
          ...
        }

    output_variables: Optional[dict]
        Specify output variables for the OP to be generated.
        See below for the variables that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"output_var_name1": [numpy.dtype, shape], "output_var_name2": [dtype, shape], ...}

        e.g.
        output_variables = {
          "name1": [np.float32, [1,224,224,3]],
          "name2": [np.bool_, [0]],
          ...
        }

    attributes: Optional[dict]
        Specify output attributes for the OP to be generated.
        See below for the attributes that can be specified.
        When specifying Tensor format values, specify an array converted to np.ndarray.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"attr_name1": value1, "attr_name2": value2, "attr_name3": value3, ...}

        e.g.
        attributes = {
          "alpha": 1.0,
          "beta": 1.0,
          "transA": 0,
          "transB": 0
        }
        Default: None

    output_onnx_file_path: Optional[str]
        Output onnx file path.
        If not specified, no .onnx file is output.
        Default: ''

    non_verbose: Optional[bool]
        Do not show all information logs. Only error logs are displayed.
        Default: False

    Returns
    -------
    single_op_graph: onnx.ModelProto
        Single op onnx ModelProto

4. CLI Execution

$ sog4onnx \
--op_type Gemm \
--opset 1 \
--op_name gemm_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,1] \
--input_variables i3 int32 [0] \
--output_variables o1 float32 [1,2,3] \
--attributes alpha float32 1.0 \
--attributes beta float32 1.0 \
--attributes transA int32 0 \
--attributes transB int32 0
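
Since --output_onnx_file_path is omitted, the command above should write Gemm.onnx to the current directory (the default file name is derived from --op_type). A minimal sketch for inspecting the result with the onnx package; the file name is an assumption based on that default:

import onnx

# "Gemm.onnx" assumes the default output file name derived from --op_type.
model = onnx.load("Gemm.onnx")

# Human-readable dump of the generated single-OP graph.
print(onnx.helper.printable_graph(model.graph))

# Optional structural check. It may fail for intentionally inconsistent OPs,
# because sog4onnx itself performs no consistency checks.
# onnx.checker.check_model(model)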

5. In-script Execution

import numpy as np
from sog4onnx import generate

single_op_graph = generate(
    op_type = 'Gemm',
    opset = 1,
    op_name = "gemm_custom1",
    input_variables = {
      "i1": [np.float32, [1,2,3]],
      "i2": [np.float32, [1,1]],
      "i3": [np.int32, [0]],
    },
    output_variables = {
      "o1": [np.float32, [1,2,3]],
    },
    attributes = {
      "alpha": 1.0,
      "beta": 1.0,
      "broadcast": 0,
      "transA": 0,
      "transB": 0,
    },
    non_verbose = True,
)
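
Because output_onnx_file_path defaults to '', the snippet above only returns an onnx.ModelProto and writes no file. A minimal follow-up sketch, assuming the snippet above has already been run, for saving and inspecting the returned graph:

import onnx

# Persist the returned ModelProto explicitly; the file name here is arbitrary.
onnx.save(single_op_graph, "gemm_custom1.onnx")

# Quick look at the generated single-OP graph.
print(onnx.helper.printable_graph(single_op_graph.graph))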

6. Sample

6-1. opset=1, Gemm

$ sog4onnx \
--op_type Gemm \
--opset 1 \
--op_name gemm_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,1] \
--input_variables i3 int32 [0] \
--output_variables o1 float32 [1,2,3] \
--attributes alpha float32 1.0 \
--attributes beta float32 1.0 \
--attributes transA int32 0 \
--attributes transB int32 0 \
--non_verbose


6-2. opset=11, Add

$ sog4onnx \
--op_type Add \
--opset 11 \
--op_name add_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,2,3] \
--output_variables o1 float32 [1,2,3] \
--non_verbose
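
A quick way to sanity-check the generated model is to run it. This sketch assumes onnxruntime is installed separately and that the file was written as Add.onnx, the default name derived from --op_type:

import numpy as np
import onnxruntime as ort

# "Add.onnx" assumes the default output file name derived from --op_type.
sess = ort.InferenceSession("Add.onnx", providers=["CPUExecutionProvider"])

i1 = np.ones((1, 2, 3), dtype=np.float32)
i2 = np.full((1, 2, 3), 2.0, dtype=np.float32)

# Input/output names match --input_variables / --output_variables above.
o1 = sess.run(["o1"], {"i1": i1, "i2": i2})[0]
print(o1)  # expected: 3.0 everywhere, shape (1, 2, 3)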


6-3. opset=11, NonMaxSuppression

$ sog4onnx \
--op_type NonMaxSuppression \
--opset 11 \
--op_name nms_custom1 \
--input_variables boxes float32 [1,6,4] \
--input_variables scores float32 [1,1,6] \
--input_variables max_output_boxes_per_class int64 [1] \
--input_variables iou_threshold float32 [1] \
--input_variables score_threshold float32 [1] \
--output_variables selected_indices int64 [3,3] \
--attributes center_point_box int64 1


6-4. opset=11, Constant

$ sog4onnx \
--op_type Constant \
--opset 11 \
--op_name const_custom1 \
--output_variables boxes float32 [1,6,4] \
--attributes value float32 \
[[\
[0.5,0.5,1.0,1.0],\
[0.5,0.6,1.0,1.0],\
[0.5,0.4,1.0,1.0],\
[0.5,10.5,1.0,1.0],\
[0.5,10.6,1.0,1.0],\
[0.5,100.5,1.0,1.0]\
]]
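
An in-script equivalent of the sample above is sketched below. Per the generate() docstring, Tensor-format attribute values are specified as np.ndarray; passing an empty dict as input_variables for the input-less Constant OP is an assumption of this sketch.

import numpy as np
from sog4onnx import generate

# Tensor-format attribute values are specified as np.ndarray.
boxes = np.asarray(
    [[
        [0.5, 0.5, 1.0, 1.0],
        [0.5, 0.6, 1.0, 1.0],
        [0.5, 0.4, 1.0, 1.0],
        [0.5, 10.5, 1.0, 1.0],
        [0.5, 10.6, 1.0, 1.0],
        [0.5, 100.5, 1.0, 1.0],
    ]],
    dtype=np.float32,
)

const_graph = generate(
    op_type = 'Constant',
    opset = 11,
    op_name = 'const_custom1',
    input_variables = {},  # Constant takes no inputs (assumed to be accepted as an empty dict)
    output_variables = {'boxes': [np.float32, [1, 6, 4]]},
    attributes = {'value': boxes},
    output_onnx_file_path = 'Constant.onnx',
)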


7. Reference

  1. https://github.com/onnx/onnx/blob/main/docs/Operators.md
  2. https://docs.nvidia.com/deeplearning/tensorrt/onnx-graphsurgeon/docs/index.html
  3. https://github.com/NVIDIA/TensorRT/tree/main/tools/onnx-graphsurgeon
  4. https://github.com/PINTO0309/sne4onnx
  5. https://github.com/PINTO0309/snd4onnx
  6. https://github.com/PINTO0309/snc4onnx
  7. https://github.com/PINTO0309/scs4onnx
  8. https://github.com/PINTO0309/PINTO_model_zoo

8. Issues

https://github.com/PINTO0309/simple-onnx-processing-tools/issues


Releases(1.0.15)
  • 1.0.15(Nov 20, 2022)

    • Fixed a bug where Constant and ConstantOfShape opsets were not set

    Full Changelog: https://github.com/PINTO0309/sog4onnx/compare/1.0.14...1.0.15

  • 1.0.14(Sep 8, 2022)

    • Add short form parameter
      $ sog4onnx -h
      
      usage: sog4onnx [-h]
        --ot OP_TYPE
        --os OPSET
        --on OP_NAME
        [-iv NAME TYPE VALUE]
        [-ov NAME TYPE VALUE]
        [-a NAME DTYPE VALUE]
        [-of OUTPUT_ONNX_FILE_PATH]
        [-n]
      
      optional arguments:
        -h, --help
          show this help message and exit
      
        -ot OP_TYPE, --op_type OP_TYPE
          ONNX OP type.
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
        -os OPSET, --opset OPSET
          ONNX opset number.
      
        -on OP_NAME, --op_name OP_NAME
          OP name.
      
        -iv INPUT_VARIABLES INPUT_VARIABLES INPUT_VARIABLES, --input_variables INPUT_VARIABLES INPUT_VARIABLES INPUT_VARIABLES
          input_variables can be specified multiple times.
          --input_variables variable_name numpy.dtype shape
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --input_variables i1 float32 [1,3,5,5] \
          --input_variables i2 int32 [1] \
          --input_variables i3 float64 [1,3,224,224]
      
        -ov OUTPUT_VARIABLES OUTPUT_VARIABLES OUTPUT_VARIABLES, --output_variables OUTPUT_VARIABLES OUTPUT_VARIABLES OUTPUT_VARIABLES
          output_variables can be specified multiple times.
          --output_variables variable_name numpy.dtype shape
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --output_variables o1 float32 [1,3,5,5] \
          --output_variables o2 int32 [1] \
          --output_variables o3 float64 [1,3,224,224]
      
        -a ATTRIBUTES ATTRIBUTES ATTRIBUTES, --attributes ATTRIBUTES ATTRIBUTES ATTRIBUTES
          attributes can be specified multiple times.
          dtype is one of "float32" or "float64" or "int32" or "int64" or "str".
          --attributes name dtype value
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --attributes alpha float32 1.0 \
          --attributes beta float32 1.0 \
          --attributes transA int32 0 \
          --attributes transB int32 0
      
        -of OUTPUT_ONNX_FILE_PATH, --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
          Output onnx file path.
          If not specified, a file with the OP type name is generated.
      
          e.g. op_type="Gemm" -> Gemm.onnx
      
        -n, --non_verbose
          Do not show all information logs. Only error logs are displayed.
      
  • 1.0.13(Jun 10, 2022)

  • 1.0.12(Jun 7, 2022)

  • 1.0.11(May 25, 2022)

  • 1.0.10(May 15, 2022)

  • 1.0.9(Apr 26, 2022)

    • Added op_name as an input parameter, allowing OPs to be named.
      • CLI
        sog4onnx [-h]
          --op_type OP_TYPE
          --opset OPSET
          --op_name OP_NAME
          [--input_variables NAME TYPE VALUE]
          [--output_variables NAME TYPE VALUE]
          [--attributes NAME DTYPE VALUE]
          [--output_onnx_file_path OUTPUT_ONNX_FILE_PATH]
          [--non_verbose]
        
      • In-script
        generate(
          op_type: str,
          opset: int,
          op_name: str,
          input_variables: dict,
          output_variables: dict,
          attributes: Union[dict, NoneType] = None,
          output_onnx_file_path: Union[str, NoneType] = '',
          non_verbose: Union[bool, NoneType] = False
        ) -> onnx.onnx_ml_pb2.ModelProto
        
  • 1.0.8(Apr 15, 2022)

  • 1.0.7(Apr 14, 2022)

  • 1.0.6(Apr 14, 2022)

  • 1.0.5(Apr 13, 2022)

  • 1.0.4(Apr 13, 2022)

  • 1.0.3(Apr 12, 2022)

  • 1.0.2(Apr 12, 2022)

  • 1.0.1(Apr 12, 2022)

  • 1.0.0(Apr 12, 2022)

  • 0.0.2(Apr 12, 2022)

  • 0.0.1(Apr 12, 2022)

Owner
Katsuya Hyodo
Hobby programmer. Intel Software Innovator Program member.