Code Implementation of "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction".

Overview

Span-ASTE: Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

***** New March 31st, 2022: Scikit-Style API for Easy Usage *****


This repository implements our ACL 2021 research paper Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction. Our goal is to extract sentiment triplets of the form (aspect target, opinion expression, sentiment polarity).

Installation

Data Format

Our span-based model uses data files in which each line contains one input sentence and a list of output triplets:

sentence#### #### ####[triplet_0, ..., triplet_n]

Each triplet is a tuple of (span_a, span_b, label). Each span is a list: if the span covers a single word, the list contains only that word's index; if the span covers multiple words, the list contains the indices of the first and last words. For example:

It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]
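Here, [6, 7] is the two-word aspect span "Korean dishes" (word indices start from 0), while [10] and [14] are the single-word opinion spans "affordable" and "yummy".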

For prediction, the data can contain the input sentence only, with an empty list for triplets:

sentence#### #### ####[]
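As a minimal sketch (not part of this repository), a line in this format can be parsed with only the Python standard library; parse_line below is a hypothetical helper:

import ast

def parse_line(line: str):
    # Fields are separated by "####": the sentence comes first, the triplet list last.
    sentence, *_, triplets = line.split("####")
    return sentence.strip(), ast.literal_eval(triplets)

line = "It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]"
sentence, triplets = parse_line(line)
print(triplets)  # [([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]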

Predict Using Model Weights

  • First, download and extract the pre-trained weights to pretrained_dir.
  • The input data file path_in and the output data file path_out both use the data format described above.
from wrapper import SpanModel

model = SpanModel(save_dir=pretrained_dir, random_seed=0)
model.predict(path_in, path_out)
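Since path_out uses the same data format, the predicted triplets can be read back and mapped to words. A hypothetical snippet, reusing the parse_line sketch from the Data Format section:

with open(path_out) as f:
    for line in f:
        sentence, triplets = parse_line(line)
        words = sentence.split()
        for aspect_span, opinion_span, polarity in triplets:
            # A span lists its first (and, if multi-word, last) word index.
            aspect = " ".join(words[aspect_span[0] : aspect_span[-1] + 1])
            opinion = " ".join(words[opinion_span[0] : opinion_span[-1] + 1])
            print(aspect, opinion, polarity)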

Model Training

  • Configure the model with a save directory and a random seed.
  • Start training with the training and validation data files, which use the same data format.
model = SpanModel(save_dir=save_dir, random_seed=random_seed)
model.fit(path_train, path_dev)
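For example (hypothetical paths and values; both files follow the data format above):

from wrapper import SpanModel

# The save directory stores the trained weights; the seed makes runs reproducible.
model = SpanModel(save_dir="outputs/span_aste_demo", random_seed=42)
model.fit(path_train="data/train.txt", path_dev="data/dev.txt")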

Model Evaluation

  • Use the trained model to predict triplets for the test sentences, writing the output to path_pred.
  • The model includes a scoring function that provides F1 metric scores for triplet extraction.
model.predict(path_in=path_test, path_out=path_pred)
results = model.score(path_pred, path_test)
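For reference, triplet F1 conventionally counts a prediction as correct only when the aspect span, opinion span, and sentiment polarity all exactly match a gold triplet. A minimal sketch of such a scorer (an illustration using the hypothetical parse_line helper above, not the repository's own score implementation):

def to_key(triplet):
    aspect_span, opinion_span, polarity = triplet
    return tuple(aspect_span), tuple(opinion_span), polarity

def triplet_f1(path_pred: str, path_gold: str) -> float:
    # Micro-averaged F1 over exact-match triplets.
    n_pred = n_gold = n_correct = 0
    with open(path_pred) as f_pred, open(path_gold) as f_gold:
        for line_pred, line_gold in zip(f_pred, f_gold):
            pred = {to_key(t) for t in parse_line(line_pred)[1]}
            gold = {to_key(t) for t in parse_line(line_gold)[1]}
            n_pred += len(pred)
            n_gold += len(gold)
            n_correct += len(pred & gold)
    precision = n_correct / n_pred if n_pred else 0.0
    recall = n_correct / n_gold if n_gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0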

Research Citation

If the code is useful for your research project, we would appreciate it if you cite the following paper:

@inproceedings{xu-etal-2021-learning,
    title = "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction",
    author = "Xu, Lu  and
      Chia, Yew Ken  and
      Bing, Lidong",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.367",
    doi = "10.18653/v1/2021.acl-long.367",
    pages = "4755--4766",
    abstract = "Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word. Thereby, they cannot perform well on targets and opinions which contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation. Thus, it can make predictions with the semantics of whole spans, ensuring better sentiment consistency. To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy by incorporating supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks. This strategy not only improves computational efficiency but also distinguishes the opinion and target spans more properly. Our framework simultaneously achieves strong performance for the ASTE as well as ATE and OTE tasks. In particular, our analysis shows that our span-level approach achieves more significant improvements over the baselines on triplets with multi-word targets or opinions.",
}
Comments
  • Train model for new data collected from social media

    Hi, I would like to train this model on a new dataset in another language, Bahasa, since aspects and opinions there, especially in social media text, form spans of words with varying lengths. How do I run the code for this?

    opened by Lafandi 7
  • Command error

    {'command': 'cd /home/data2/yj/Span-ASTE && allennlp train outputs/14lap/seed_0/config.jsonnet --serialization-dir outputs/14lap/seed_0/weights --include-package span_model'}

    /bin/sh: allennlp: command not found

    Which file do I need to change to fix this? I haven't been able to find it.

    opened by lzf00 6
  • Retrain with new language

    Hi, I have some questions (sorry if this is a beginner question; I am new to this field). I want to change the word embedder to a BERT model pretrained on my language (Indonesian, using IndoBERT). Can you give some tips on how to change the embedder to my language? Thanks!

    opened by rdyzakya 5
  • Using the notebook when there is no GPU

    Hello! Thank you for sharing this work! I was wondering how I can use the demo notebook locally when there is no GPU?

    When running the cell under "# Use pretrained SpanModel weights for prediction", I got this error:

    2022-07-06 12:28:07,840 - INFO - allennlp.common.plugins - Plugin allennlp_models available
    Traceback (most recent call last):
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/bin/allennlp", line 8, in <module>
        sys.exit(run())
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/__main__.py", line 34, in run
        main(prog="allennlp")
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 118, in main
        args.func(args)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 205, in _predict
        predictor = _get_predictor(args)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 105, in _get_predictor
        check_for_gpu(args.cuda_device)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/common/checks.py", line 131, in check_for_gpu
        " 'trainer.cuda_device=-1' in the json config file." + torch_gpu_error
    allennlp.common.checks.ConfigurationError: Experiment specified a GPU but none is available; if you want to run on CPU use the override 'trainer.cuda_device=-1' in the json config file. module 'torch.cuda' has no attribute '_check_driver'

    I changed cuda_device to -1 in the jsonnet files in your training_config folder, but still no luck.

    opened by xiaoqingwan 5
  • Suggestions to run it against other datasets

    Hi! I'm pretty new to deep learning and ASTE.

    Can you please suggest the necessary steps to run this against another dataset? Do I need to follow this data structure (https://github.com/xuuuluuu/SemEval-Triplet-data/blob/master/README.md#data-description) when labeling my dataset? How can I modify the code on Colab for new datasets? Any other advice?

    Thank you

    opened by Jurys22 4
  • Running problem

    Hello, I have a question for you. I use PyCharm to run your project, but an error is reported in the main.py file: ModuleNotFoundError: No module named '_jsonnet'. I guess the main reason is the line import _jsonnet # noqa. Can you tell me a solution? Thank you very much.

    opened by FengLingCong13 4
  • Data format

    Excuse me, how do you label the data so that the input format is as follows:

    Exactly as posted plus a great value .####Exactly=O as=O posted=O plus=O a=O great=O value=T-POS .=O####Exactly=O as=O posted=O plus=O a=O great=S value=O .=O####[([6], [5], 'POS')]
    The specs are pretty good too .####The=O specs=T-POS are=O pretty=O good=O too=O .=O####The=O specs=O are=O pretty=O good=S too=O .=O####[([1], [4], 'POS')]

    opened by arroyoaaa 4
  • Interpretation of the results

    Hello, I was looking at the file in

    /content/Span-ASTE/model_outputs/aste_sample_c7b00b66bf7ec669d23b80879fda043d/predict_dev.jsonl

    I would like to know what the numbers in predicted_ner and predicted_relations are, such as:

    [[0, 0, 1, 1, 'NEG', 2.777, 0.971]]

    What are 2.777 and 0.971 referring to?

    Thank you

    opened by Jurys22 3
  • I installed the package according to the requirements. I wanted to use the pre-trained model to make predictions, but it failed to run.

    Two errors were reported:

    1. allennlp.common.checks.ConfigurationError: Extra parameters passed to SpanModel: {'relation_head_type': 'proper', 'use_bilstm_after_embedder': False, 'use_double_mix_embedder': False, 'use_ner_embeds': False}

    Traceback (most recent call last):
      File "X:\workspace\python\papercode\@aspect\Span-ASTE\aste\test.py", line 4, in <module>
        model.predict('test.txt', "pred.txt")
      File "X:\workspace\python\papercode\@aspect\Span-ASTE\aste\wrapper.py", line 83, in predict
        with open(path_temp_out) as f:

    2. FileNotFoundError: [Errno 2] No such file or directory: 'X:\workspace\python\papercode\@aspect\Span-ASTE\pretrained_dir\temp_data\pred_out.json'

    opened by SiriusXT 2
  • IndexError: List assignment index out of range

    I've annotated my own data and tried to train the model on it, but ran into the IndexError above. The command runs successfully, but the model doesn't train on the annotated data; looking at the out.log files, we see this error. The annotated data follows the correct format, since I'm able to preview it with the Data Exploration command. Any help would be appreciated please! :)

    opened by jasonhuynh83 2
  • No such file or directory: 'pretrained_14res/temp_data/pred_out.json'

    Installed it successfully on macOS but getting the error that pred_out.json is not found. Not sure why this works in Colab but not when I install it on my local machine. Can anyone help me? I have downloaded the folder correctly and it contains all the required files. I have tried with 14lap and 14res, but both have the same issue.

    opened by dipanmoy 2
  • python wrapper.py

    Hi, I'm puzzled: when running wrapper.py, the following appears, which I can't understand:

    NAME
        wrapper.py

    SYNOPSIS
        wrapper.py GROUP | COMMAND

    GROUPS
        GROUP is one of the following:

     json
       JSON (JavaScript Object Notation) <http://json.org> is a subset of JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data interchange format.
    
     os
       OS routines for NT or Posix depending on what system we're on.
    
     shutil
       Utility functions for copying and archiving files and directory trees.
    
     sys
       This module provides access to some objects used or maintained by the interpreter and to functions that interact strongly with the interpreter.
    
     List
       The central part of internal API.
    
     Tuple
       Tuple type; Tuple[X, Y] is the cross-product type of X and Y.
    
     Optional
       Internal indicator of special typing constructs. See _doc instance attribute for specific docs.
    

    COMMANDS
        COMMAND is one of the following:

     Namespace
       Simple object for storing attributes.
    
     Path
       PurePath subclass that can make system calls.
    
     train_model
       Trains the model specified in the given [`Params`](../common/params.md#params) object, using the data and training parameters also specified in that object, and saves the results in `serialization_dir`
    
    opened by xian-xian 2
  • ConfigurationError: key "dataset_reader" is required

    I was trying to replicate the same on Azure Databricks. While trying to train the model, I am getting the "ConfigurationError: key "dataset_reader" is required" error.

    Can this solution be implemented in the Databricks environment? @chiayewken

    opened by tsharisaravanan 1
  • Optional: Set up NLTK packages: what does this mean? Could you explain it?

    Optional: Set up NLTK packages

    if [[ -f punkt.zip ]]; then
      mkdir -p /home/admin/nltk_data/tokenizers
      cp punkt.zip /home/admin/nltk_data/tokenizers
    fi
    if [[ -f wordnet.zip ]]; then
      mkdir -p /home/admin/nltk_data/corpora
      cp wordnet.zip /home/admin/nltk_data/corpora
    fi

    I don't understand what this means; I'm a first-year graduate student, please help.

    opened by xian-xian 5
  • An error for PosixPath

    Hi, I have some questions to ask you.

    The params_file is a string type, but this error occurred, as follows:

    Traceback (most recent call last):
      File "/Span-ASTE-main/aste/wrapper.py", line 177, in <module>
        model.fit(path_train, path_dev)
      File "/Span-ASTE-main/aste/wrapper.py", line 54, in fit
        test_data_path=str(self.save_temp_data(path_dev, "dev")),
      File "/lib/python3.7/site-packages/allennlp/common/params.py", line 462, in from_file
        file_dict = json.loads(evaluate_file(params_file, ext_vars=ext_vars))
    TypeError: argument 1 must be str, not PosixPath

    By the way, which should I use to start your code: main.py or wrapper.py?

    opened by Chen-PengF 1
  • demo file not working, No module named 'data_utils'

    Hi,

    I tried to run the demo file, but it shows the error "No module named 'data_utils'".

    opened by qi-xia 1
Owner
Chia Yew Ken
Hi! I'm a 2nd year PhD Student with SUTD and Alibaba. My research interests currently include zero-shot learning, structured prediction and sentiment analysis.