A Domain Specific Language (DSL) for building language patterns, which can later be compiled into spaCy patterns, pure regex, or any other format.

Overview


RITA DSL


This is a language, loosely based on Apache UIMA RUTA, focused on writing manual language rules, which compile into either spaCy-compatible patterns or pure regex. These patterns can be used for manual NER, as well as in other processes like retokenizing and pure matching.

An Introduction Video


Links

Support

reddit Gitter

Install

pip install rita-dsl

Simple Rules example

rules = """
cuts = {"fitted", "wide-cut"}
lengths = {"short", "long", "calf-length", "knee-length"}
fabric_types = {"soft", "airy", "crinkled"}
fabrics = {"velour", "chiffon", "knit", "woven", "stretch"}

{IN_LIST(cuts)?, IN_LIST(lengths), WORD("dress")}->MARK("DRESS_TYPE")
{IN_LIST(lengths), IN_LIST(cuts), WORD("dress")}->MARK("DRESS_TYPE")
{IN_LIST(fabric_types)?, IN_LIST(fabrics)}->MARK("DRESS_FABRIC")
"""

Loading in spaCy

import spacy
from rita.shortcuts import setup_spacy


nlp = spacy.load("en")
setup_spacy(nlp, rules_string=rules)

And using it:

>>> r = nlp("She was wearing a short wide-cut dress")
>>> [{"label": e.label_, "text": e.text} for e in r.ents]
[{'label': 'DRESS_TYPE', 'text': 'short wide-cut dress'}]

Loading using Regex (standalone)

import rita

patterns = rita.compile_string(rules, use_engine="standalone")

And using it:

>>> list(patterns.execute("She was wearing a short wide-cut dress"))
[{'end': 38, 'label': 'DRESS_TYPE', 'start': 18, 'text': 'short wide-cut dress'}]
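To make the standalone output concrete, here is a rough, hand-written regex equivalent of the compiled DRESS_TYPE rules (a sketch only; the pattern rita actually compiles may differ):

```python
import re

# Hand-written regex roughly equivalent to the compiled DRESS_TYPE rules
# (hypothetical pattern; rita's actual compiled output may differ).
cuts = ["fitted", "wide-cut"]
lengths = ["short", "long", "calf-length", "knee-length"]
c = "|".join(cuts)
l = "|".join(lengths)
pattern = re.compile(
    r"(?:(?:%s)\s+)?(?:%s)\s+dress|(?:%s)\s+(?:%s)\s+dress" % (c, l, l, c),
    re.IGNORECASE,
)

def execute(text):
    # Yield match dicts shaped like the standalone engine's output
    for m in pattern.finditer(text):
        yield {"start": m.start(), "end": m.end(),
               "label": "DRESS_TYPE", "text": m.group(0)}
```

Running `execute("She was wearing a short wide-cut dress")` yields the same start/end/label/text dict as the engine output above.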
Comments
  • Jetbrains RITA Plugin not compatible with PyCharm 2020.2.1

    Jetbrains RITA Plugin not compatible with PyCharm 2020.2.1

    Plugin Version: 1.2 https://plugins.jetbrains.com/plugin/15011-rita-language/versions/

    Tested Version: PyCharm 2020.2.1 (Professional Edition)

    An error occurs when trying to install the plugin from disk.

    On the plugin site https://plugins.jetbrains.com/plugin/15011-rita-language/versions/ it says that the plugin should be incompatible with all IntelliJ-based IDEs in the 2020.2 version:

    The list of supported products was determined by dependencies defined in the plugin.xml:

      • Android Studio — build 201.7223 — 201.*
      • DataGrip — 2020.1.3 — 2020.1.5
      • IntelliJ IDEA Ultimate — 2020.1.1 — 2020.1.4
      • Rider — 2020.1.3
      • PyCharm Professional — 2020.1.1 — 2020.1.4
      • PyCharm Community — 2020.1.1 — 2020.1.4
      • PhpStorm — 2020.1.1 — 2020.1.4
      • IntelliJ IDEA Educational — 2020.1.1 — 2020.1.2
      • CLion — 2020.1.1 — 2020.1.3
      • PyCharm Educational — 2020.1.1 — 2020.1.2
      • GoLand — 2020.1.1 — 2020.1.4
      • AppCode — 2020.1.2 — 2020.1.6
      • RubyMine — 2020.1.1 — 2020.1.4
      • MPS — 2020.1.1 — 2020.1.4
      • IntelliJ IDEA Community — 2020.1.1 — 2020.1.4
      • WebStorm — 2020.1.1 — 2020.1.4

    opened by rolandmueller 3
  • IN_LIST ignores OP quantifier

    IN_LIST ignores OP quantifier

    Somehow I get this unexpected behaviour when using OP quantifiers (?, *, +, etc) with the IN_LIST element:

    rules = """
    list_elements = {"one", "two"}
    {IN_LIST(list_elements)?}->MARK("LABEL")
    """
    rules = rita.compile_string(rules)
    expected_result = "[{'label': 'LABEL', 'pattern': [{'LOWER': {'REGEX': '^(one|two)$'}, 'OP': '?'}]}]"
    print("expected_result:", expected_result)
    print("result:", rules)
    assert str(rules) == expected_result
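For comparison, the expected compiled output can be assembled by hand. A small sketch (the `in_list_pattern` helper is hypothetical, written to mirror the `expected_result` above):

```python
def in_list_pattern(words, op=None):
    # Build a spaCy token pattern matching any of `words` (lowercased),
    # optionally carrying an OP quantifier (hypothetical helper).
    token = {"LOWER": {"REGEX": "^(%s)$" % "|".join(words)}}
    if op:
        token["OP"] = op
    return token

expected = [{"label": "LABEL",
             "pattern": [in_list_pattern(["one", "two"], op="?")]}]
```

The point of the report is that rita 0.5.0 drops the `'OP': '?'` entry from the emitted token.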
    

    Version: 0.5.0

    bug 
    opened by rolandmueller 3
  • Add module regex

    Add module regex

    This feature would introduce the REGEX element as a module.

    Matches words based on a regex pattern, e.g. all words that start with an 'a' would be REGEX("^a").

    !IMPORT("rita.modules.regex")
    
    {REGEX("^a")}->MARK("TAGGED_MATCH")
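In plain Python terms, the element applies its pattern to each word rather than to the whole text. A minimal sketch of the idea (not rita's implementation):

```python
import re

def match_words(word_pattern, text):
    # Apply the pattern to each whitespace-separated word
    rx = re.compile(word_pattern)
    return [w for w in text.split() if rx.search(w)]
```

For example, `match_words("^a", "an apple on a table")` returns `['an', 'apple', 'a']`.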
    
    opened by rolandmueller 2
  • Feature/pluralize

    Feature/pluralize

    Add a new module for a PLURALIZE tag. For a noun or a list of nouns, it will match any singular or plural form of the word. Usage for a single word, e.g.:

    PLURALIZE("car")
    

    Usage for lists, e.g.:

    vehicles = {"car", "bicycle", "ship"}
    PLURALIZE(vehicles)
    

    Will work even for regex or when spaCy's lemmatizer makes an error. Depends on the Python inflect package: https://pypi.org/project/inflect/
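The matching idea can be sketched without spaCy: generate both forms for each word and build an alternation. Here a deliberately naive pluralizer stands in for the inflect package (assumption: the real module delegates to inflect for correct forms):

```python
import re

def naive_plural(word):
    # Very rough English pluralizer, standing in for `inflect`
    if word.endswith(("s", "x", "ch", "sh")):
        return word + "es"
    return word + "s"

def pluralize_regex(words):
    # Match either the singular or the plural form of each word
    forms = []
    for w in words:
        forms.extend([w, naive_plural(w)])
    return re.compile(r"\b(%s)\b" % "|".join(forms), re.IGNORECASE)
```

`pluralize_regex(["car", "bicycle", "ship"])` then matches "ships" and "car" alike.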

    opened by rolandmueller 2
  • Feature/regex tag

    Feature/regex tag

    This feature would introduce the TAG element as a module. Needs a new parser for the spaCy translation step. Would allow more flexible matching of detailed part-of-speech tags, like all adjectives or nouns: TAG("^NN|^JJ").

    opened by rolandmueller 2
  • Feature/improve robustness

    Feature/improve robustness

    In general: measure how long compilation takes, and avoid situations where a pattern creates an infinite loop (it is possible to get into this situation using regex).

    Closes: https://github.com/zaibacu/rita-dsl/issues/78

    opened by zaibacu 1
  • Add TAG_WORD macro to Tag module

    Add TAG_WORD macro to Tag module

    This feature would introduce the TAG_WORD element to the Tag module

    TAG_WORD generates TAG patterns from a word or a list.

    e.g. match "proposed" only when it is a verb in the sentence (and not an adjective):

    !IMPORT("rita.modules.tag")
    
    TAG_WORD("^VB", "proposed")
    

    or, e.g., match a list of words only when they are verbs:

    !IMPORT("rita.modules.tag")
    
    words = {"perceived", "proposed"}
    {TAG_WORD("^VB", words)}->MARK("LABEL")
    
    opened by rolandmueller 1
  • Add Orth module

    Add Orth module

    This feature would introduce the ORTH element as a module.

    Ignores the case-insensitive configuration and matches words exactly as written, i.e. case-sensitively, even when the configuration is case-insensitive. Especially useful for acronyms and proper names.

    Works only with the spaCy engine.

    Usage:

    !IMPORT("rita.modules.orth")
    
    {ORTH("IEEE")}->MARK("TAGGED_MATCH")
    
    opened by rolandmueller 1
  • Add configuration for implicit hyphen characters between words

    Add configuration for implicit hyphen characters between words

    Add a new configuration option implicit_hyphon (default false) for automatically adding hyphen characters (-) to the rules. Enabling implicit_hyphon disables implicit_punct. Rationale: implicit_punct is often too inclusive. implicit_punct does include the hyphen token, but it also adds (at least in my use case) unwanted tokens, like parentheses, to the matches, especially for more complex rules. So implicit_hyphon is somewhat stricter than implicit_punct.

    opened by rolandmueller 1
  • Fix sequential optional

    Fix sequential optional

    Closes https://github.com/zaibacu/rita-dsl/issues/69

    It turns out to be a bug related to the - character, which in most cases is used as a splitter, but in this case appears as a standalone word

    opened by zaibacu 1
  • Method to validate syntax

    Method to validate syntax

    Currently this can be partially done:

    from rita.parser import RitaParser
    from rita.config import SessionConfig

    config = SessionConfig()
    p = RitaParser(config)
    p.build()
    result = p.parse(rules)  # returns None when the syntax is invalid
    if result is None:
        raise RuntimeError("... Something is wrong with syntax")
    

    But it would be nice to have a single method for that, one which also returns actual error info.

    enhancement 
    opened by zaibacu 0
  • Dynamic case sensitivity for Standalone Engine

    Dynamic case sensitivity for Standalone Engine

    We want to be able to make a specified word inside a pattern case-sensitive, while the rest of the pattern is case-insensitive.

    It looks like this can be achieved using the inline modifier groups regex feature; it requires Python 3.6+.
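A minimal sketch of the inline modifier group feature this would rely on (plain re, Python 3.6+, not rita's engine): groups wrapped in (?i:...) ignore case, while the rest of the pattern stays case-sensitive.

```python
import re

# Scoped inline modifier groups (Python 3.6+): the (?i:...) groups
# are case-insensitive, while the bare IEEE token stays case-sensitive.
pattern = re.compile(r"(?i:the)\s+IEEE\s+(?i:standard)")
```

"The IEEE Standard" matches, but "the ieee standard" does not, because only the surrounding words ignore case.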

    enhancement 
    opened by zaibacu 0
  • JS rule engine

    JS rule engine

    Should work similarly to the standalone engine, maybe even inherit most of it, but it should produce valid JavaScript code, preferably a single function to which you give raw text and get back multiple parsed entities

    enhancement help wanted 
    opened by zaibacu 0
  • Allow LOAD macro to load from external locations

    Allow LOAD macro to load from external locations

    Currently the LOAD(file_name) macro searches for a text file in the current path.

    Usually reading from a local file is best, but it would be cool to be able to just give a GitHub Gist URL and load everything we need. This would be very useful for the Demo page.

    good first issue 
    opened by zaibacu 0
Releases(0.7.0)
  • 0.7.0(Feb 2, 2021)

    0.7.0 (2021-02-02)


    Features

    • standalone engine now returns a submatches list containing start and end for each part of a match #93

    • Partially covered https://github.com/zaibacu/rita-dsl/issues/70

      Allow nested patterns, like:

          num_with_fractions = {NUM, WORD("-")?, IN_LIST(fractions)}
          complex_number = {NUM|PATTERN(num_with_fractions)}
    
          {PATTERN(complex_number)}->MARK("NUMBER")
    

    #95

    • Submatches for rita-rust engine #96

    • Regex module which allows specifying a word-level pattern, e.g. REGEX("^a") means the word must start with the letter "a"

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #101

    • ORTH module which allows you to specify case sensitive entry while rest of the rules ignores case. Used for acronyms and proper names

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #102

    • Additional macro for the tag module, allowing to tag a specific word or list of words

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #103

    • Added names module which allows generating person name variations #105

    • spaCy v3 Support #109

    Fix

    • Optimizations for Rust Engine

      • No need for passing text forward and backward, we can calculate from text[start:end]

      • Grouping and sorting logic can be done in binary code #88

    • Fix NUM parsing bug #90

    • Switch from (^\s) to \b when doing IN_LIST. Should solve several corner cases #91

    • Fix floating point number matching #92

    • revert #91 changes. Keep old way for word boundary #94
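The trade-off behind #91 and #94 is easy to reproduce with plain re: a (^|\s)-style boundary misses words followed by punctuation, while \b also matches inside hyphenated tokens — exactly the corner cases traded off between the two changes.

```python
import re

words = ["cat", "dog"]

# Whitespace-anchored boundary vs. \b word boundary for IN_LIST
space_rx = re.compile(r"(?:^|\s)(%s)(?:\s|$)" % "|".join(words))
word_rx = re.compile(r"\b(%s)\b" % "|".join(words))

text = "a cat, a dog-house"
# space_rx finds nothing here ("cat" is followed by a comma),
# while word_rx also matches the "dog" inside "dog-house".
```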

    Source code(tar.gz)
    Source code(zip)
  • 0.6.0(Aug 29, 2020)

    0.6.0 (2020-08-29)


    Features

    • Implemented the ability to alias macros, e.g.:
          numbers = {"one", "two", "three"}
          @alias IN_LIST IL
    
          IL(numbers) -> MARK("NUMBER")
    

    Now using "IL" will actually call the "IN_LIST" macro. #66

    • Introduce the TAG element as a module. Needs a new parser for the spaCy translation step. Allows more flexible matching of detailed part-of-speech tags, like all adjectives or nouns: TAG("^NN|^JJ").

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #81

    • Add a new module for a PLURALIZE tag. For a noun or a list of nouns, it will match any singular or plural form.

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #82

    • Add a new configuration option implicit_hyphon (default false) for automatically adding hyphen characters (-) to the rules.

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #84

    • Allow giving a custom regex implementation. By default re is used #86

    • An interface for using the rust engine.

      In general it's identical to the standalone engine, but differs in one crucial part: all of the rules are compiled into actual binary code, which provides a large performance boost. It is proprietary because there are various caveats; the engine itself is a bit more fragile and needs to be tinkered with to be optimized for a very specific use case (e.g. a few long texts with many matches vs a lot of short texts with few matches). #87

    Fix

    • Fix bug when - is used as a standalone word #71
    • Fix regex matching when the shortest word is selected from IN_LIST #72
    • Fix IN_LIST regex so that it doesn't match only part of a word #75
    • Fix IN_LIST operations bug: they were being ignored #77
    • Use list branching only when using the spaCy engine #80
    Source code(tar.gz)
    Source code(zip)
  • 0.5.0(Jun 18, 2020)

    Features

    • Added PREFIX macro which allows attaching a word in front of list items or words #47

    • Allow passing variables directly when doing compile and compile_string #51

    • Allow compiling (and later loading) rules using the rita CLI when using the standalone engine (spaCy is already supported) #53

    • Added the ability to import rule files into a rule file. Recursive import is supported as well. #55

    • Added the possibility to define a pattern as a variable and reuse it in other patterns:

      Example:

    ComplexNumber = {NUM+, WORD("/")?, NUM?}
    
    {PATTERN(ComplexNumber), WORD("inches"), WORD("Height")}->MARK("HEIGHT")
    {PATTERN(ComplexNumber), WORD("inches"), WORD("Width")}->MARK("WIDTH")
    

    #64

    Fix

    • Fix issue with multiple wildcard words using standalone engine #46
    • Don't crash when no rules are provided #50
    • Fix Number and ANY-OF parsing #59
    • Allow escape characters inside LITERAL #62
    Source code(tar.gz)
    Source code(zip)
  • 0.4.0(Jan 25, 2020)

    0.4.0 (2020-01-25)


    Features

    • Support for deaccent. In general, if an accented version of a word is given, both the deaccented and accented forms will be used for matching. To turn it off: !CONFIG("deaccent", "N") #38
    • Added shortcuts module to simplify injecting into spaCy #42
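The deaccent behaviour can be sketched with the standard library: NFD-decompose each word, drop the combining marks, and match either spelling (a sketch only, not rita's implementation):

```python
import re
import unicodedata

def deaccent(word):
    # NFD-decompose, then drop combining marks (category Mn)
    return "".join(c for c in unicodedata.normalize("NFD", word)
                   if unicodedata.category(c) != "Mn")

def accent_tolerant_regex(word):
    # Match both the accented and the deaccented spelling,
    # mirroring the default deaccent behaviour described above
    variants = {word, deaccent(word)}
    return re.compile(r"\b(%s)\b" % "|".join(sorted(variants)))
```

`accent_tolerant_regex("café")` then matches both "café" and "cafe" in running text.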

    Fix

    • Fix issue regarding spaCy rules with IN_LIST in case-sensitive mode. It was creating a regex pattern which is not a valid spaCy pattern #40
    Source code(tar.gz)
    Source code(zip)
  • 0.3.2(Dec 19, 2019)

    Features

      • Introduced towncrier to track changes
      • Added linter flake8
      • Refactored code to match pep8 #32

    Fix

      • Fix WORD split by -

      • Split by " " (empty space) as well

      • Coverage score increase #35

    Source code(tar.gz)
    Source code(zip)
  • 0.3.0(Dec 14, 2019)

    Now there's one global config and child config created per-session (one session = one rule file compilation). Imports and variables are stored in this config as well.

    Removed the context argument from MACROS, making the code cleaner and easier to read

    Source code(tar.gz)
    Source code(zip)
  • 0.2.2(Dec 8, 2019)

    Features up to this point:

    • Standalone parser: can use internal regex rather than spaCy if you need to
    • Ability to do logical OR in a rule, e.g. {WORD(w1)|WORD(w2),WORD(w3)} would result in two rules: {WORD(w1),WORD(w3)} and {WORD(w2),WORD(w3)}
    • Exclude operator: {WORD(w1), WORD(w2)!} would match w1 followed by anything but w2
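The OR branching described above amounts to a cartesian product over the per-slot alternatives. A sketch of that expansion (not rita's actual internals):

```python
from itertools import product

def expand(rule):
    # rule is a list of slots, each slot a list of alternatives,
    # e.g. [["w1", "w2"], ["w3"]] for {WORD(w1)|WORD(w2), WORD(w3)}
    return [list(branch) for branch in product(*rule)]
```

`expand([["w1", "w2"], ["w3"]])` produces the two flat rules `["w1", "w3"]` and `["w2", "w3"]`.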
    Source code(tar.gz)
    Source code(zip)
Owner
Šarūnas Navickas
Data Engineer @ TokenMill. Doing BJJ @ Voras-Bjj. Dad @ Home.