A collection of Korean text datasets, ready to use with TensorFlow Datasets.

Overview

tfds-korean

A collection of Korean (Hangul) text datasets, ready to use with TensorFlow Datasets.

Dataset Catalog | PyPI

Usage

Installation

pip install tfds-korean

Loading a dataset

import tensorflow_datasets as tfds
import tfds_korean.nsmc # register nsmc dataset

ds = tfds.load('nsmc')

train_ds = ds['train'].batch(32)
test_ds = ds['test'].batch(128)

# define model
# ....
# ....

model.fit(train_ds)
model.evaluate(test_ds)
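
The snippet above leaves the model undefined. For a runnable end-to-end version, here is a minimal sketch; the feature names ('document', 'label') and the tiny Keras classifier are illustrative assumptions, so check the Dataset Catalog for the actual feature names:

import tensorflow as tf
import tensorflow_datasets as tfds
import tfds_korean.nsmc # register nsmc dataset

ds = tfds.load('nsmc')

# Map feature dicts to (text, label) pairs. The feature names
# 'document' and 'label' are assumptions; check the catalog page.
def to_pair(example):
    return example['document'], example['label']

train_ds = ds['train'].map(to_pair).batch(32)
test_ds = ds['test'].map(to_pair).batch(128)

# A deliberately small classifier: vectorize raw strings, embed,
# average-pool, and predict a binary sentiment label.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=20000)
vectorizer.adapt(train_ds.map(lambda text, label: text))

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(20000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_ds, epochs=1)
model.evaluate(test_ds)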

See the Dataset Catalog page for the full list of datasets and the details of each one.

Examples

Licenses

The license for this repository and the licenses for the individual datasets apply separately. Please check each dataset's license and website before using it, and note that this library does not host or distribute any of the datasets.

Comments
  • [Dataset Request] sae4k

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): sae4k
    • Dataset description:
    • Homepage: https://github.com/warnikchow/sae4k
    • Citation:

    Additional Context

    dataset request 
    opened by jeongukjae 2
  • [Dataset Request] namuwiki corpus

    Dataset Information

    • Dataset Name: namuwiki corpus
    • Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://github.com/jeongukjae/namuwiki-corpus
    • Citation:
    • License:

    Additional Context

    A Namuwiki corpus segmented at the sentence level

    dataset request 
    opened by jeongukjae 1
  • [Dataset Request] korean wikipedia corpus

    Dataset Information

    • Dataset Name: Korean Wikipedia corpus
    • Preferred code name (e.g. korean_chatbot_qa_data): korean_wikipedia_corpus
    • Dataset description:
    • Homepage: https://github.com/jeongukjae/korean-wikipedia-corpus
    • Citation:
    • License:

    Additional Context

    kowikitext is good enough, but it is inconvenient to use at the sentence level. So I built a corpus from the Korean Wikipedia dump that is already split into sentences (segmented with kss).

    FeaturesDict({
        'content': Sequence(Text(shape=(), dtype=tf.string)),
        'title': Text(shape=(), dtype=tf.string),
    })
    

    If content is given a tensor value of TensorSpec(shape=[None], dtype=tf.string) like this, it should be convenient for distillation or sentence-level unsupervised learning.
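
    As a quick illustration, here is a hypothetical sketch (assuming the dataset is registered under the proposed code name with the FeaturesDict above) that flattens the per-article sentence lists into a sentence-level dataset:

    import tensorflow as tf
    import tensorflow_datasets as tfds
    import tfds_korean.korean_wikipedia_corpus  # module name assumed from the proposed code name

    ds = tfds.load('korean_wikipedia_corpus', split='train')

    # 'content' is a variable-length list of sentences per article;
    # flat_map yields one scalar string sentence per element.
    sentences = ds.flat_map(
        lambda ex: tf.data.Dataset.from_tensor_slices(ex['content']))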

    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] KLUE

    Dataset Information

    • Dataset Name: KLUE
    • Preferred code name (e.g. korean_chatbot_qa_data): klue_dp, klue_mrc, ...
    • Dataset description:
    • Homepage:
    • Citation:
    • License:

    Additional Context

    https://github.com/KLUE-benchmark/KLUE https://arxiv.org/pdf/2105.09680v1.pdf

    • [x] dp @jeongukjae
    • [x] mrc @harrydrippin
    • [x] ner @jeongukjae
    • [x] nli @jeongukjae
    • [x] re @jeongukjae
    • [x] sts @jeongukjae
    • [x] wos @jeongukjae
    • [x] ynat @jeongukjae
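
    Once these land (they shipped in release 0.2.0 below), loading a single task should follow the module pattern from the Usage section. A minimal sketch, assuming the klue_nli module/dataset name implied by the proposed code names:

    import tensorflow_datasets as tfds
    import tfds_korean.klue_nli  # assumed module name; each KLUE task registers as its own dataset

    ds = tfds.load('klue_nli')
    train_ds = ds['train'].batch(32)
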
    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] namuwikitext

    Dataset Information

    • Dataset Name: Wikitext format dataset of Namuwiki
    • Preferred code name (e.g. korean_chatbot_qa_data): namuwikitext
    • Dataset description: A wikitext-format text file built from the Namuwiki dump data. For training and evaluation it is split by wiki page into train (99%), dev (0.5%), and test (0.5%).
    • Homepage: https://github.com/lovit/namuwikitext
    • Citation:

    Additional Context

    https://github.com/lovit/namuwikitext/issues/10

    The dataset counts don't match those in the README, so I filed the issue above, but there has been no reply. For now it's probably best to add it as it is in Korpora and fix it later.

    dataset request 
    opened by jeongukjae 1
  • [Dataset Request] KorQuAD

    Dataset Information

    • Dataset Name: KorQuAD 1.0
    • Preferred code name (e.g. korean_chatbot_qa_data): korquad_10
    • Dataset description: KorQuAD 1.0 is a dataset built for Korean machine reading comprehension. Every answer is a sub-span of the corresponding Wikipedia article paragraph. It is constructed in the same way as the Stanford Question Answering Dataset (SQuAD) v1.0.
    • Homepage: https://korquad.github.io/KorQuad%201.0/
    • Citation:

    Dataset Information

    • Dataset Name: KorQuAD 2.0
    • Preferred code name (e.g. korean_chatbot_qa_data): korquad_20
    • Dataset description: KorQuAD 2.0 is a Korean machine reading comprehension dataset consisting of 100,000+ question-answer pairs in total, including 20,000+ pairs from KorQuAD 1.0. Unlike KorQuAD 1.0, answers must be found in the whole Wikipedia article rather than in one or two paragraphs. Some documents are very long, so search time needs to be considered, and since tables and lists are included, understanding document structure through HTML tags is also required. This dataset should make machine reading comprehension possible on documents of varied forms and lengths.
    • Homepage: https://korquad.github.io
    • Citation:

    Additional Context

    It should be fine to add only KorQuAD 1.0 for now and add 2.0 later.

    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] Korea Maritime and Ocean University NER dataset

    Dataset Information

    • Dataset Name: Korea Maritime and Ocean University NLP Lab NER dataset
    • Preferred code name (e.g. korean_chatbot_qa_data): kmounlp_ner
    • Dataset description: A technical report standardizing Korean named-entity definitions and labels, and a named-entity morpheme corpus built on top of it
    • Homepage: https://github.com/kmounlp/NER
    • Citation:

    Additional Context

    Report: https://github.com/kmounlp/NER/blob/master/NER%20Guideline%20(ver%201.0).pdf

    dataset request 
    opened by jeongukjae 1
  • Add CONTRIBUTING.md

    • [ ] Explain the languages used in the project. Usage and dataset descriptions should be written in English where possible, but wouldn't it be better to communicate in Korean for issues/PRs?
    • [ ] How to add a dataset
    • [ ] A brief explanation of issues/PRs/Discussions
    • [ ] A note for people who would like to co-maintain the project
    • [ ] An explanation of dataset licensing issues
    documentation before-release 
    opened by jeongukjae 1
  • Note the current wikitext problems in the catalog

    https://github.com/jeongukjae/tfds-korean/issues/12#issuecomment-826358469

    For the reasons above, it would be good to at least note "filter before use" or "there are empty examples in between" in the catalog.
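
    For example, a minimal sketch of such a filter (the 'text' feature name is an assumption; check the catalog page for the actual one):

    import tensorflow as tf
    import tensorflow_datasets as tfds
    import tfds_korean.kowikitext  # register kowikitext dataset

    ds = tfds.load('kowikitext', split='train')

    # Drop the empty examples before downstream use.
    ds = ds.filter(lambda ex: tf.strings.length(ex['text']) > 0)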

    documentation 
    opened by jeongukjae 0
  • [Dataset Request] sci-news-sum-kr-50

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): sci_news_sum_kr_50
    • Dataset description:
    • Homepage: https://github.com/theeluwin/sci-news-sum-kr-50
    • Citation:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] kowikitext

    Dataset Information

    • Dataset Name: Korean wikitext
    • Preferred code name (e.g. korean_chatbot_qa_data): kowikitext
    • Dataset description: Wikitext format Korean corpus
    • Homepage: https://github.com/lovit/kowikitext
    • Citation:

    Additional Context

    This one appears to have the same problem as #12, but for now it follows the Korpora approach. In this dataset too, splitting on headings leaves rows like = 분류~~~ =, so documents cannot be reconstructed exactly.

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] korean_unsmile_dataset

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://github.com/smilegate-ai/korean_unsmile_dataset
    • Citation:
    • License:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • Update the dataset catalog builder to allow skipping specific datasets

    Right now every dataset has to exist locally to build the catalog, which is too much of a burden; as of develop, that already means keeping roughly 30GB locally.

    If dataset versions don't change, the catalog only needs rebuilding when the build_catalog.py script changes, so let's make it possible to build just a specific dataset page & the index page. Of course, building the catalog for all datasets should remain possible.

    documentation 
    opened by jeongukjae 0
  • [Dataset Request] Korean Single Speaker Speech Dataset

    Dataset Information

    • Dataset Name: Korean Single Speaker Speech Dataset
    • Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset
    • Citation:
    • License:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] Sejong Corpus

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): sejong_corpus
    • Dataset description:
    • Homepage: https://ithub.korean.go.kr/user/total/database/corpusManager.do
    • Citation:
    • License:

    Additional Context

    Sejong Corpus: https://ithub.korean.go.kr/user/total/database/corpusManager.do
    Sejong Corpus (parallel): https://ithub.korean.go.kr/user/total/database/etcManager.do

    Even if the license makes commercial use difficult, I think it's a good corpus to work with, so it seems worth adding for now.

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] kcbert

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): kcbert
    • Dataset description:
    • Homepage: https://github.com/Beomi/KcBERT
    • Citation:

    Additional Context

    If we add this, it will be extremely useful!!

    dataset request 
    opened by jeongukjae 4
  • [Dataset Request] KAIST Corpus

    Dataset Information

    • Dataset Name: kaist corpus
    • Preferred code name (e.g. korean_chatbot_qa_data): kaist_corpus
    • Dataset description:
    • Homepage: http://semanticweb.kaist.ac.kr/home/index.php/KAIST_Corpus
    • Citation:

    Additional Context

    wontfix dataset request 
    opened by jeongukjae 1
Releases (0.4.0)
  • 0.4.0 (Sep 19, 2021)

    • Update KLUE dataset to 1.1.0 https://github.com/jeongukjae/tfds-korean/commit/e954ec4550ec5db015d3f93750e6763aca5a9b48
    • Reorder ClassLabel names of NLI datasets. https://github.com/jeongukjae/tfds-korean/commit/be3e8cba7b9d537969b9c08738dd6df36b0145bc
  • 0.3.0 (Jun 16, 2021)

    • add korean_wikipedia_corpus (https://jeongukjae.github.io/tfds-korean/datasets/korean_wikipedia_corpus.html)
    • add namuwiki_corpus (https://jeongukjae.github.io/tfds-korean/datasets/namuwiki_corpus.html)
  • 0.2.0 (Jun 6, 2021)

    • add KLUE benchmark datasets
    • update dataset catalog (https://github.com/jeongukjae/tfds-korean/commit/eb1c72d0a716aba7326276e77e8e6f94976bb579, https://github.com/jeongukjae/tfds-korean/commit/614616b82d0bbdaecbc4ec50e0cfc67b78b646c2)
    • fix klue_ner supervised key bug (https://github.com/jeongukjae/tfds-korean/commit/10f765f01b9f3952e298395779dcf8efeefde93a)
  • 0.1.3 (May 29, 2021)

  • 0.1.2 (May 25, 2021)

  • 0.1.1 (Apr 30, 2021)

  • 0.1.0 (Apr 29, 2021)

    • Add kowikitext and namuwikitext datasets
    • Add missing licenses and BibTeX entries.
    • Add license section in catalog page.
    • Add example links in catalog page.
Owner
Jeong Ukjae
Machine Learning Engineer