Data and Code for ACL 2021 Paper "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning"

Overview

Introduction

Code and data for ACL 2021 Paper "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning".

We construct a new large-scale benchmark, Geometry3K, which consists of 3,002 geometry problems with dense annotations in formal language. We define 91 predicates and their corresponding literal templates to describe each problem. All predicates are defined here. Four data examples from the Geometry3K dataset are shown below:

[Figure: four example problems from the Geometry3K dataset]
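
For instance, the conditions and goal of a problem might be annotated with literals such as the following (an illustrative set, not copied from a specific problem; see the predicate definitions for the full vocabulary):

PointLiesOnLine(D, Line(A, C))
Perpendicular(Line(B, D), Line(A, C))
Equals(LengthOf(Line(A, B)), 10)
Find(MeasureOf(Angle(B, C, A)))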

We further propose a novel geometry solving approach with formal language and symbolic reasoning, called Interpretable Geometry Problem Solver (Inter-GPS). Inter-GPS is the first geometry problem solver that achieves automatic program parsing and interpretable symbolic reasoning. Inter-GPS parses the problem text and diagram into formal language automatically via rule-based text parsing and neural object detection, respectively. Moreover, Inter-GPS incorporates theorem knowledge as conditional rules and performs symbolic reasoning step by step.

[Figure: overview of the Inter-GPS framework]
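
The reasoning loop can be pictured roughly as follows. This is a minimal sketch of the general idea in Python, not the code in symbolic_solver, and every name in it is illustrative:

def solve(relations, theorems, goal, max_rounds=10):
    # relations: set of known facts parsed from the text and diagram
    # theorems: list of (premise, apply_rule) pairs acting as conditional rules
    # goal: callable that returns the target value once it is derivable, else None
    for _ in range(max_rounds):
        answer = goal(relations)
        if answer is not None:
            return answer
        new_facts = set()
        for premise, apply_rule in theorems:
            if premise(relations):                   # theorem premises hold
                new_facts |= apply_rule(relations)   # derive new relations
        if new_facts <= relations:                   # fixed point: nothing new to add
            break
        relations |= new_facts
    return None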

Prepare the Dataset

First, unzip data files into data/geometry3k:

. data/unzip_data.sh

You can alternatively visit the Google Drive link to download the Geometry3K dataset and unzip it.
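
As a quick sanity check after unzipping, you can count the problems per split. The layout assumed below (train, val, and test subfolders with one folder per problem) is an assumption about the released data; adjust the paths if yours differ:

import os

root = "data/geometry3k"
for split in ("train", "val", "test"):
    path = os.path.join(root, split)
    count = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f"{split}: {count} problems")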

Requirements

Python 3.6+
torch 1.7.1
transformers 4.8.2
python3-pip

Install all required python dependencies:

pip3 install -r requirement.txt

Run Inter-GPS Directly

The Final Search Strategy

Run the final Inter-GPS symbolic solver, without any data preprocessing, with the following commands:

cd symbolic_solver
python test.py --label final --strategy final

It applies the final search strategy (predict + low-first) with the generated logic forms from the diagram parser and text parser. The result file and the log file will be saved in pred_results and logs, respectively.

It takes about 5 minutes to solve the 601 test problems on an Intel Core i9-10900K CPU with 20 threads. If you don't have a high-performance CPU, please use fewer threads and a larger search time limit per problem. For example:

python test.py --label final --strategy final --time_limit 200 --num_threads 4

Run the symbolic solver with annotated logic forms from the dataset:

python test.py --label final-gt --strategy final --use_annotated

Note that the results could differ slightly between individual experiments and across computing platforms. The differences mainly result from the randomness of the search process, dependency versions, CPU features, and running parameters. It is highly recommended to run the solver multiple times and report the average result over the runs.
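
For example, a small script like the one below (illustrative only, reusing the flags shown above) launches several runs with distinct labels; you can then compute the accuracy of each run and average the numbers:

import subprocess

# Run the solver three times, writing results under different labels.
for i in range(3):
    subprocess.run(
        ["python", "test.py", "--label", f"final-run{i}", "--strategy", "final"],
        check=True,
    )
# Afterwards, evaluate each result file (e.g. with sub_acc.py) and average the accuracies.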

Other Search Strategies

You can also run the solver with the other search strategies listed in Table 7 of the paper by running the following commands, respectively:

  • Predict-based search strategy (predict + random) with annotated logic forms:
python test.py --label predict --strategy predict --use_annotated
  • Random search strategy with annotated logic forms:
python test.py --label random --strategy random --use_annotated
  • Low-first search strategy with annotated logic forms:
python test.py --label low-first --strategy low-first --use_annotated

The result files reported in Table 7 are released in symbolic_solver/pred_results and symbolic_solver/logs, respectively.

Calculate Accuracies

You can obtain accuracies for different question types by running python sub_acc.py --result_file {result_json_file}. For example:

python sub_acc.py --result_file pred_results/final/logic_1612098244-predict_low-first_1.json

Run Inter-GPS from Scratch

Text Parser

Parse the problem text into literals (logic forms).

cd text_parser
python text_parser.py
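
As a toy illustration of the rule-based idea (this is not the repository's parser, just a sketch), a single pattern rule can map one sentence fragment to a literal:

import re

text = "In triangle ABC, AB = 5. Find the measure of angle ABC."
match = re.search(r"\b([A-Z])([A-Z]) = (\d+)", text)
if match:
    p1, p2, value = match.groups()
    print(f"Equals(LengthOf(Line({p1}, {p2})), {value})")
# Prints: Equals(LengthOf(Line(A, B)), 5)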

Diagram Parser

The diagram parser converts a problem diagram into literals (logic forms). Only the core commands are shown below; if you would like to know every detail, please refer to this README file.

Unzip our detection results of text regions and symbols:

cd detection_results
unzip -d box_results box_results.zip
unzip -d ocr_results ocr_results.zip

Generate diagram logic forms by running the following command:

cd parser
python diagram_parser.py \
--data_path ../../data/geometry3k \
--ocr_path ../detection_results/ocr_results \
--box_path ../detection_results/box_results \
--output_path ../diagram_logic_forms.json

Theorem Predictor

  1. Generate template-based and random-ordered theorem sequences:
cd theorem_predict/tools
python generate_random_seqs.py

It generates two files (a quick way to inspect them is sketched after these steps):

  • results/train/pred_seqs_train_l30_n100_template.json: 100 template-based sequences with a maximum length of 30 for each training problem
  • results/test/pred_seqs_test_l100_random.json: 1 random-order sequence with a maximum length of 100 for each test problem
  2. (Optional) Generate pseudo-optimal theorem sequences for each training problem:
python check_theorem_seq.py

It takes about 20 hours to run 100 attempts over all training data! If you want to save time, just skip this step and use our generated data in theorem_predict/results/train/splits instead.

  3. (Optional) Merge the 100 attempts of pseudo-optimal theorem sequences into one file:
python merge_all_correct_json.py
  4. (Optional) Train the theorem predictor from scratch:
python train_transformer.py

If you want to save time, you can skip the step above and download the checkpoint model directly:

cd theorem_predict/models
wget https://acl2021-intergps.s3.us-west-1.amazonaws.com/tp_model_best.pt
  5. Evaluate the theorem predictor to generate predicted theorem sequences:
cd theorem_predict
python eval_transformer.py
  6. Generate theorem sequences for the predict-based strategy (predict + random):
cd theorem_predict/tools
python add_random_seq_to_pred_seq.py
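
Before handing the sequences to the solver, you may want to peek at one of the generated files, for example the random-order sequences from step 1. The snippet below only assumes the file is standard JSON (the exact schema of each entry is not guaranteed here), so adjust it after looking at the real file:

import json

# Run from the theorem_predict directory; the path comes from step 1 above.
with open("results/test/pred_seqs_test_l100_random.json") as f:
    seqs = json.load(f)

# Print one entry regardless of whether the top level is a dict or a list.
first = next(iter(seqs.items())) if isinstance(seqs, dict) else seqs[0]
print(first)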

Symbolic Solver

Finally, run the symbolic solver with the final search strategy (predict + low-first) over the generated logic forms:

cd symbolic_solver
python test.py --label final_new \
--strategy final \
--text_logic_form_path ../text_parser/text_logic_forms.json \
--diagram_logic_form_path ../diagram_parser/diagram_logic_forms.json \
--predict_path ../theorem_predict/results/test/pred_seqs_test_bart_best.json

Data Annotation Tools

We release the data annotation tools, which may help you extend our dataset in future work.

Data Collection

The data collection tool is used to collect geometry problems and the corresponding logical forms.

cd annotation_tool/data_collection
python app.py


Symbol Labeling

LabelImg is a graphical image annotation tool for labeling object bounding boxes in images. We use it to label text regions and symbols in the diagrams. If you are using Linux, you can run the following commands to install the tool:

cd annotation_tool/labelImg
sudo apt-get install pyqt5-dev-tools
sudo pip3 install -r requirements/requirements-linux-python3.txt
make qt5py3

Run the labeling tool:

python labelImg.py

After running the tool, click the Open Dir button to open the data directory containing problem images, for example, InterGPS/data/geometry3k/symbols, and choose Use default label to use the pre-defined labels in data/predefined_classes.txt. Note that the pre-defined labels in data/predefined_classes.txt are consistent with the labels in diagram_parser/detection/classes.txt.

[Screenshot: the labelImg annotation tool]

Follow the LabelImg instructions to install the package on other systems or to learn more about its usage.

Citation

If the paper, the dataset, or the code helps you, please cite the paper in the following format:

@inproceedings{lu2021inter,
  title = {Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning},
  author = {Lu, Pan and Gong, Ran and Jiang, Shibiao and Qiu, Liang and Huang, Siyuan and Liang, Xiaodan and Zhu, Song-Chun},
  booktitle = {The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)},
  year = {2021}
}

Q&A

If you encounter any problem, feel free to contact the first authors directly or open an issue in the GitHub repo.

Comments
  • Sharing Training Details

    Hi Pan,

    Your paper "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning" is awesome.

    However, when reproducing results of this work, I have one problem. I trained the symbol detection model with the data you provided, but the model could not perform as well as the box_result you released. Could you please share more training details?

    Thank you very much and looking forward to your reply.

    opened by YusenZhang826 3
  • DataSet Generation

    Hi Pan, loved your work on InterGPS. We were planning to extend the dataset using the annotation tools you shared. We wanted to know how the point positions in logic_form.json (the ground truth) were added for each question: was this done manually or with some subroutine? Thanks, Akshat

    opened by Akshat188 1
  • Providing a pretrained object detection model for text and symbols

    Hello, Thanks for your work! Could you please provide a pretrained object detection model, e.g. the one mentioned in the documentation here: models/exp0/csv_retinanet_19.pt?

    Thank you in advance :)

    opened by supitalp 1
  • About datasets

    Hello, how can I expand the Geometry3K dataset? Where do the geometry problems in Geometry3K come from? Could you please provide more specific web links or other information? Thank you very much!

    opened by mingliangzhang2018 1
  • About the file of  “diagram_logic_forms_pred.json”

    Excuse me, could you tell me whether the content of the file “diagram_logic_forms_pred.json” is the predicted result of your diagram parser? Thanks very much!

    opened by mingliangzhang2018 1
  • Poor performance of theorem predictor

    Hello, Pan. Thank you for your open source.

    I downloaded the checkpoint model from https://acl2021-intergps.s3.us-west-1.amazonaws.com/tp_model_best.pt but the evaluation results are empty. How can I get it back to normal? Thanks.

    opened by ICanFlyGFC 9