Overview

Visual Automata

Copyright 2021 Lewi Lie Uberg
Released under the MIT license

Visual Automata is a Python 3 library built as a wrapper for Caleb Evans' Automata library to add more visualization features.

Prerequisites

pip install automata-lib
pip install pandas
pip install graphviz
pip install colormath
pip install jupyterlab

Installing

pip install visual-automata

VisualDFA

Importing

Import needed classes.

from automata.fa.dfa import DFA

from visual_automata.fa.dfa import VisualDFA

Instantiating DFAs

Define a VisualDFA that accepts any string ending with 00 or 11.

dfa = VisualDFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

Converting

An automata-lib DFA can be converted to a VisualDFA.

Define an automata-lib DFA that can accept any string ending with 00 or 11.

dfa = DFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

Convert automata-lib DFA to VisualDFA.

dfa = VisualDFA(dfa)

Minimal-DFA

Creates a minimal DFA which accepts the same inputs as the old one. Unreachable states are removed and equivalent states are merged. States are renamed by default.

new_dfa = VisualDFA(
    states={'q0', 'q1', 'q2'},
    input_symbols={'0', '1'},
    transitions={
        'q0': {'0': 'q0', '1': 'q1'},
        'q1': {'0': 'q0', '1': 'q2'},
        'q2': {'0': 'q2', '1': 'q1'}
    },
    initial_state='q0',
    final_states={'q1'}
)
new_dfa.table
      0    1
→q0  q0  *q1
*q1  q0   q2
q2   q2  *q1
new_dfa.show_diagram()

[State diagram of new_dfa]

minimal_dfa = VisualDFA.minify(new_dfa)
minimal_dfa.show_diagram()

[State diagram of minimal_dfa]

minimal_dfa.table
                0        1
→{q0,q2}  {q0,q2}      *q1
*q1       {q0,q2}  {q0,q2}

Transition Table

Outputs the transition table for the given DFA. In the table, the arrow → marks the initial state and the asterisk * marks final states.

dfa.table
       0    1
→q0   q3   q1
q1    q3  *q2
*q2   q3  *q2
q3   *q4   q1
*q4  *q4   q1
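
The table is built with pandas (listed under Prerequisites). A short sketch, under the assumption that dfa.table is returned as an ordinary pandas DataFrame; if that assumption holds, the usual DataFrame methods apply:

import pandas as pd

table = dfa.table
# Assumption: the transition table is a plain pandas DataFrame.
if isinstance(table, pd.DataFrame):
    print(table.to_string())       # plain-text rendering, as shown above
    table.to_csv("dfa_table.csv")  # export the transition table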

Check input strings

1001 does not end with 00 or 11, and is therefore Rejected

dfa.input_check("1001")
          [Rejected]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1

10011 does end with 11, and is therefore Accepted

dfa.input_check("10011")
          [Accepted]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1
5                 q1             1        *q2
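
input_check prints the stepwise trace shown above. For a plain accepted/rejected answer over many strings, the same machine can also be built directly with automata-lib, whose DFA.accepts_input method returns a boolean; a short sketch reusing the transitions from above:

from automata.fa.dfa import DFA

plain_dfa = DFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

for word in ["1001", "10011"]:
    # accepts_input returns True if reading word ends in a final state.
    print(word, "Accepted" if plain_dfa.accepts_input(word) else "Rejected")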

Show Diagram

In IPython, dfa.show_diagram() may be used.
In a Python script, dfa.show_diagram(view=True) may be used to automatically open the rendered graph as a PDF file.

dfa.show_diagram()

[State diagram of dfa]

The show_diagram method also accepts an input string and returns a graph with gradient red arrows for Rejected results and gradient green arrows for Accepted results. It also displays a table of the transition steps; the step numbers in this table correspond to the [number] shown over each traversed arrow.

Please note that for visual purposes additional arrows are added if a transition is traversed more than once.

dfa.show_diagram("1001")
          [Rejected]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1

[State diagram for input "1001" (Rejected)]

dfa.show_diagram("10011")
          [Accepted]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1
5                 q1             1        *q2

[State diagram for input "10011" (Accepted)]

Authors

Lewi Lie Uberg

License

This project is licensed under the MIT License - see the LICENSE.md file for details

Acknowledgments

Comments
  • FrozenNFA constructor attempts to call deepcopy on frozendicts

    The VisualNFA constructor attempts to create a deep copy of the passed nfa, especially the transitions dictionary: https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/nfa.py#L469

    The deepcopy method is monkeypatched onto dict via curse: https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/nfa.py#L32

    However, automata-lib 7.0.1 returns a frozendict from the frozendict package instead, so the method call fails. It is not clear if copying the frozendict is at all necessary; deepcopy returns the object as-is.
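
    A minimal sketch of a defensive copy that would avoid the error (hypothetical, not the maintainers' fix): use the curse-patched method when present and fall back to the standard library otherwise.

    import copy

    def copy_transitions(transitions):
        # Plain dicts gain a deepcopy method via the curse patch; the
        # frozendict returned by automata-lib 7.x does not have one.
        if hasattr(transitions, "deepcopy"):
            return transitions.deepcopy()
        # copy.deepcopy also handles frozendict (per the note above, it
        # returns the immutable mapping as-is).
        return copy.deepcopy(transitions)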

    MRE

    Using most recent versions:

    • automata-lib 7.0.1
    • visual_automata 1.1.1
    from automata.fa.nfa import NFA
    from visual_automata.fa.nfa import VisualNFA
    
    nfa = NFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": {"q0"}}}, initial_state="q0",
              final_states={"q0"})
    VisualNFA(nfa).show_diagram(view=True)
    

    Expected Behavior

    The automaton is shown.

    Actual Behavior

    Traceback (most recent call last):
      File "/path/to/scratch_1.py", line 6, in <module>
        VisualNFA(nfa).show_diagram(view=True)
      File "/path/to/site-packages/visual_automata/fa/nfa.py", line 619, in show_diagram
        all_transitions_pairs = self._transitions_pairs(self.nfa.transitions)
      File "/path/to/site-packages/visual_automata/fa/nfa.py", line 469, in _transitions_pairs
        all_transitions = all_transitions.deepcopy()
    AttributeError: 'frozendict.frozendict' object has no attribute 'deepcopy'
    
    opened by no-preserve-root 3
  • VisualDFA constructor implicitly checks wrapped automaton cardinality

    The VisualDFA constructor checks the dfa parameter using https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/dfa.py#L34

    This checks whether dfa is truthy. Since the DFA class defines a __len__ method (and no __bool__), it is truthy iff len(dfa) != 0. Unfortunately, the length is the DFA's cardinality, i.e., the size of its input language. For infinite-language DFAs, an exception is then raised. As a result, infinite DFAs cannot be visualized.

    This could be fixed by testing whether dfa is None. VisualNFA is not affected, since NFA does not define a __len__ method at the moment, but it would fail if a similar method were added to NFA.
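
    A small runnable illustration of the difference, using the same one-state DFA as in the MRE below: the identity test never touches cardinality, while the truthiness test does.

    from automata.base.exceptions import InfiniteLanguageException
    from automata.fa.dfa import DFA

    dfa = DFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": "q0"}},
              initial_state="q0", final_states={"q0"})

    print(dfa is not None)  # True; no call into DFA.__len__
    try:
        bool(dfa)           # falls through to __len__ -> cardinality()
    except InfiniteLanguageException as exc:
        print("truthiness check raises:", exc)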

    MRE

    Using most recent versions:

    • automata-lib 7.0.1
    • visual_automata 1.1.1
    from automata.fa.dfa import DFA
    from visual_automata.fa.dfa import VisualDFA
    
    dfa = DFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": "q0"}}, initial_state="q0",
              final_states={"q0"})
    VisualDFA(dfa).show_diagram(view=True)
    

    Expected Behavior

    The automaton is shown.

    Actual Behavior

    Traceback (most recent call last):
      File "/path/to/scratch_1.py", line 6, in <module>
        VisualDFA(dfa).show_diagram(view=True)
      File "/path/to/site-packages/visual_automata/fa/dfa.py", line 34, in __init__
        if dfa:
      File "/path/to/site-packages/automata/fa/dfa.py", line 160, in __len__
        return self.cardinality()
      File "/path/to/site-packages/automata/fa/dfa.py", line 792, in cardinality
        raise exceptions.InfiniteLanguageException("The language represented by the DFA is infinite.")
    automata.base.exceptions.InfiniteLanguageException: The language represented by the DFA is infinite.
    

    Workaround

    Manually copying the automaton works:

    VisualDFA(states=dfa.states, input_symbols=dfa.input_symbols, transitions=dfa.transitions,
              initial_state=dfa.initial_state, final_states=dfa.final_states).show_diagram(view=True)
    
    opened by no-preserve-root 1