
LogiRE

Learning Logic Rules for Document-Level Relation Extraction

We introduce logic rules to tackle the challenges of document-level relation extraction (doc-level RE).

Equipped with logic rules, our LogiRE framework not only explicitly captures long-range semantic dependencies but also offers better interpretability.

We combine logic rules with the outputs of neural relation extraction models.

[Figure: an example document where the relation between Kate and Britain follows from other extracted relations via a logic rule]

As shown in the example, the relation between Kate and Britain can be identified from the other relations together with the listed logic rule.

An overview of the LogiRE framework is shown below.

[Figure: overview of the LogiRE framework]

Data

  • Download the preprocessing script and metadata

    DWIE
    ├── data
    │   ├── annos
    │   └── annos_with_content
    ├── en_core_web_sm-2.3.1
    │   ├── build
    │   ├── dist
    │   ├── en_core_web_sm
    │   ├── en_core_web_sm.egg-info
    │   ├── MANIFEST.in
    │   ├── meta.json
    │   ├── PKG-INFO
    │   ├── setup.cfg
    │   └── setup.py
    ├── glove.6B.100d.txt
    ├── md5sum.txt
    └── read_docred_style.py
    
  • Install spaCy (en_core_web_sm-2.3.1)

    cd en_core_web_sm-2.3.1
    pip install .
  • Download the original data from DWIE

  • Generate docred-style data

    python3 read_docred_style.py

    The docred-style doc-RE data will be generated at DWIE/data/docred-style. Please compare the md5 checksums of the generated files with the records in md5sum.txt to make sure the data was generated correctly.
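
    If you prefer to script this check, the following minimal Python sketch is one possible way to do it. It assumes md5sum.txt uses the standard md5sum output format (a hex digest and a relative path per line) and that the script is run from the DWIE directory; adjust as needed.

    # check_md5.py: a minimal sketch (not part of the repo) for comparing the
    # generated files against the records in md5sum.txt.
    import hashlib

    def md5_of(path):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    with open("md5sum.txt") as records:
        for line in records:
            expected, name = line.split()           # "<hex digest>  <path>"
            status = "OK" if md5_of(name) == expected else "MISMATCH"
            print(f"{name}: {status}")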

Train & Eval

Requirements

  • pytorch >= 1.7.1
  • tqdm >= 4.62.3
  • transformers >= 4.4.2

Backbone Preparation

The LogiRE framework requires a backbone NN model that provides an initial probabilistic assessment of each triple.

The probabilistic assessments of the backbone model and the related metadata should be organized in the following format. In other words, first train any doc-RE model on the docred-style RE data, then dump its outputs as shown below.

{
    'train': [
        {
            'N': <int>,
            'logits': <torch.FloatTensor of size (N, N, R)>,
            'labels': <torch.BoolTensor of size (N, N, R)>,
            'in_train': <torch.BoolTensor of size (N, N, R)>,
        },
        ...
    ],
    'dev': [
        ...
    ],
    'test': [
        ...
    ]
}

Each example contains four items:

  • N: the number of entities in this example.
  • logits: the logits of all triples as a tensor of size (N, N, R), where R is the number of relation types (Na excluded).
  • labels: the labels of all triples as a tensor of size (N, N, R).
  • in_train: the in_train masks of all triples as a tensor of size (N, N, R), used for Ign F1 evaluation. True indicates that the triple exists in the training split.

For convenience, we provide the dump of ATLOP as an example. Feel free to download it and try it directly.
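
If you want to dump your own backbone instead, the sketch below shows one way the structure above could be assembled with PyTorch. Only the in-memory format is specified here, so the data access (doc["entities"], doc["triples"]) and the scoring function score_document are hypothetical stand-ins for your own pipeline, and torch.save is an assumed serialization choice rather than a documented requirement.

import torch

def build_split(documents, score_document, num_relations):
    # `documents`, doc["entities"], doc["triples"], and `score_document` are
    # hypothetical stand-ins for your own data pipeline and doc-RE model.
    split = []
    for doc in documents:
        n = len(doc["entities"])                      # N: number of entities
        logits = score_document(doc)                  # float tensor of size (N, N, R)
        labels = torch.zeros(n, n, num_relations, dtype=torch.bool)
        in_train = torch.zeros(n, n, num_relations, dtype=torch.bool)
        for head, tail, rel in doc["triples"]:        # gold (head, tail, relation) triples
            labels[head, tail, rel] = True
        # for dev/test examples, additionally set in_train[h, t, r] = True wherever
        # the same fact also occurs in the training split (used for Ign F1)
        split.append({"N": n, "logits": logits,
                      "labels": labels, "in_train": in_train})
    return split

# dump = {split: build_split(docs[split], score_document, R)
#         for split in ("train", "dev", "test")}
# torch.save(dump, "backbone_dump.pt")  # serialization is an assumption; see the repo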

Train

python3 main.py --mode train \
    --save_dir <the directory for saving logs and checkpoints> \
    --rel_num <the number of relation types (Na excluded)> \
    --ent_num <the number of entity types> \
    --n_iters <the number of iterations for optimization> \
    --max_depth <the max depth of the logic rules> \
    --data_dir <the directory of the docred-style data> \
    --backbone_path <the path of the backbone model dump>

Evaluation

python3 main.py --mode test \
    --save_dir <the directory for saving logs and checkpoints> \
    --rel_num <the number of relation types (Na excluded)> \
    --ent_num <the number of entity types> \
    --n_iters <the number of iterations for optimization> \
    --max_depth <the max depth of the logic rules> \
    --data_dir <the directory of the docred-style data> \
    --backbone_path <the path of the backbone model dump>

Results

  • The LogiRE framework outperforms strong baselines in both relation extraction performance and logical consistency.

    [Figure: comparison with baselines on relation extraction performance and logical consistency]
  • Injecting logic rules improves long-range dependency modeling. We report relation extraction performance over intervals of entity pair distance: LogiRE outperforms the baseline, and the gap widens as the entity pair distance increases. Logic rules effectively serve as shortcuts for capturing long-range semantics at the concept level rather than the token level.

    [Figure: relation extraction performance across entity pair distance intervals]

Acknowledgements

We sincerely thank RNNLogic, which largely inspired this work, and DWIE & DocRED for providing the benchmarks.

Reference

@inproceedings{ru-etal-2021-learning,
    title = "Learning Logic Rules for Document-Level Relation Extraction",
    author = "Ru, Dongyu  and
      Sun, Changzhi  and
      Feng, Jiangtao  and
      Qiu, Lin  and
      Zhou, Hao  and
      Zhang, Weinan  and
      Yu, Yong  and
      Li, Lei",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.95",
    pages = "1239--1250",
}