HAN2HAN : Hangul Font Generation

Overview

Open In Colab

English |  한국어


Run Guide

git clone https://github.com/MINED30/HAN2HAN
cd HAN2HAN
mkdir targetimg
# Put your letter image in the targetimg directory. The file name must end with .jpg
bash download.sh
python generate.py

Results

Letter from Seodaemun Prison

The picture above is an example of creating a font by extracting two sentences, "Are you doing well at school? (그새 학교 잘다니냐)" and "I'm fine (나는 잘있다)," from the letter Lee Yeon-ho sent from Seodaemun Prison. The extracted sentences contain 13 characters, but only 11 are usable by the model because 'Jal (잘)' and 'Da (다)' appear twice. Lee Yeon-ho's handwriting is characterized by a gentle flow and a clean feel. In the generated 'Song of Cell No. 8', these characteristics are preserved well. In particular, 'ㅎ' and 'ㅇ' are rendered well as initial consonants, and 'ㄹ' and 'ㄴ' are also captured well in the medial and final positions. On the other hand, the initial consonant 'ㄷ' comes out blurred and partly cut off, and 'ㅗ' is not reproduced well because the glyph for '그' is unusual.

Generating a Hangul Font from the Alphabet

What is surprising is that the model captures the characteristics of English alphabet fonts well and transfers them onto Hangul. I obtained the fonts from a website. The upper font has snow piled on top of the letters, and the lower one is a horror-style font. The results are not perfect, but they follow the stroke thickness and softness of the source fonts well. In particular, it was very surprising to see the snow gradually take shape in the upper font as the number of epochs increased.

Other fonts

The four fonts above were not used for training. The model takes the 10 characters of '나랏말싸미 듕귁에달아' and generates the rest of the Hunminjeongeum. The results are generated so well that it is difficult for a human to tell them apart from the original fonts.

Architecture

Category Embedding

Category embedder

In this project, the source font plays the role of a 'condition' that tells the model which character it is. Therefore, the source font should be able to change to any style without losing its character identity. In the example picture above, '밝' should be able to change to a different style, and at the same time the identity of '밝' should not be lost; if that identity is lost, it cannot function as a condition. By reconstructing the style-transferred character back into the source font, the model can better maintain the character's identity. This structure was inspired by CycleGAN.

In general, only the encoder's last embedding is used as the category embedding. This model, however, uses all layers, not just the last embedding layer. That is, every value encoded from the source font (red in the image) is reflected when generating the font. For this purpose, the generator and encoder structures are designed identically.
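As an illustration, the sketch below (not the repository's actual code; layer widths and names are assumptions) shows how an encoder could return the feature map of every stage so that all of them, not only the last one, serve as category embeddings.

import torch
import torch.nn as nn

class FontEncoder(nn.Module):
    """Convolutional encoder for a 32x32 source-font glyph. Every stage's
    feature map is kept so the generator can condition on all of them,
    not only on the final embedding."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, base, 3, 2, 1), nn.LeakyReLU(0.2)),        # 32x32 -> 16x16
            nn.Sequential(nn.Conv2d(base, base * 2, 3, 2, 1), nn.LeakyReLU(0.2)),     # 16x16 -> 8x8
            nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, 2, 1), nn.LeakyReLU(0.2)), # 8x8  -> 4x4
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # keep every intermediate map (the "red" values in the figure)
        return feats         # list of feature maps used as category embeddings

# Encode one source-font glyph; all three feature maps go to the generator.
category_embeddings = FontEncoder()(torch.randn(1, 1, 32, 32))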

Pix2Pix - Generator part

The generator has the same form as U-Net, with the difference that category embedding is added. In the decoder, the encoder's feature maps are concatenated at the corresponding stages, and the category-embedded values are concatenated at the same time. The concatenated values are listed below; a minimal sketch of one decoder step follows the list.
  1. Feature map from the encoder
  2. Up-convolved feature map from the previous step
  3. Category-embedded feature map
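Below is a minimal sketch (channel counts are illustrative assumptions, not the repository's values) of one decoder step that concatenates the three feature maps above before the up-convolution.

import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, skip_ch, up_ch, cat_ch, out_ch):
        super().__init__()
        # up-convolution doubles the spatial size of the concatenated input
        self.upconv = nn.ConvTranspose2d(skip_ch + up_ch + cat_ch, out_ch,
                                         kernel_size=4, stride=2, padding=1)
        self.act = nn.ReLU()

    def forward(self, skip_feat, prev_up_feat, category_feat):
        # 1. feature map from the encoder (skip connection)
        # 2. up-convolved feature map from the previous decoder step
        # 3. category-embedded feature map from the source-font encoder
        x = torch.cat([skip_feat, prev_up_feat, category_feat], dim=1)
        return self.act(self.upconv(x))

# Example with 8x8 inputs: 64 + 64 + 64 channels in, 32 channels out at 16x16.
block = DecoderBlock(64, 64, 64, 32)
out = block(torch.randn(1, 64, 8, 8), torch.randn(1, 64, 8, 8), torch.randn(1, 64, 8, 8))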

Pix2Pix - GAN Part

The important thing is that the model learns to produce the font corresponding to the source font's embedding, even when a different font is fed in. In the picture above, when the source font shows '밝', the generator should create '밝' regardless of whether the content input is '창' or any other character. The discriminator simply classifies an image as 0 (real image) or 1 (fake image). The generator is trained to deceive the discriminator, and the discriminator is trained to pick out the real images. PatchGAN seemed to add little because the images are as small as 32x32. The pretrained model learned 138 Naver Nanum fonts: the generator was trained for 30 epochs, the discriminator was then trained independently for 30 epochs, and finally both were trained together for 30 epochs.
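The following is a hedged sketch of one adversarial update under the labelling described above (0 = real image, 1 = fake image); the generator/discriminator signatures and the assumption that the discriminator ends in a sigmoid are illustrative, not the repository's exact interface.

import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt,
             content_glyph, source_glyph, real_glyph):
    fake_glyph = generator(content_glyph, source_glyph)

    # Discriminator step: push real images toward label 0 and fakes toward 1.
    d_opt.zero_grad()
    real_pred = discriminator(real_glyph)
    fake_pred = discriminator(fake_glyph.detach())
    d_loss = (F.binary_cross_entropy(real_pred, torch.zeros_like(real_pred)) +
              F.binary_cross_entropy(fake_pred, torch.ones_like(fake_pred)))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call the fake "real" (0).
    g_opt.zero_grad()
    gen_pred = discriminator(fake_glyph)
    g_loss = F.binary_cross_entropy(gen_pred, torch.zeros_like(gen_pred))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()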

In finetuning, the model has to create many more characters than the 10 that are given. For example, when finetuning with 10 characters, each character is generated from each of the other 9 characters used as the source, so 10 characters yield 90 training pairs (10 x 9). L1 loss was used as the loss function, and results were good when the learning rate was set to 4e-5~5e-5 and training ran for 100~200 epochs.
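The pairing described above could be sketched as follows; the glyphs dictionary and the base_font_glyph helper are assumptions for illustration, not part of the repository's API.

import itertools
import torch
import torch.nn.functional as F

def finetune(generator, glyphs, base_font_glyph, epochs=150, lr=4.5e-5):
    """glyphs: dict mapping a character to its 1x32x32 handwritten glyph tensor."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)  # lr in the 4e-5 ~ 5e-5 range
    chars = list(glyphs)
    for _ in range(epochs):
        # every ordered pair of distinct characters: 10 * 9 = 90 pairs for 10 characters
        for src_ch, tgt_ch in itertools.permutations(chars, 2):
            source = glyphs[src_ch].unsqueeze(0)            # style condition
            target = glyphs[tgt_ch].unsqueeze(0)            # ground truth
            content = base_font_glyph(tgt_ch).unsqueeze(0)  # assumed: target char in a base font
            pred = generator(content, source)
            loss = F.l1_loss(pred, target)                  # L1 loss, as described above
            opt.zero_grad()
            loss.backward()
            opt.step()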

Character Embedding

For example, when 10 characters are given as input, there are 10 candidate source fonts for generating one character. The source could be chosen at random, but I hypothesize that results improve when a related character is used as the source font. An autoencoder was used to embed the characters.
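An illustrative sketch of this selection, assuming an embed function that returns a character's autoencoder embedding vector (not the repository's actual interface):

import torch.nn.functional as F

def pick_source_char(target_char, available_chars, embed):
    """Return the available character whose embedding is closest (by cosine
    similarity) to the target character's embedding."""
    target_vec = embed(target_char)
    sims = {c: F.cosine_similarity(target_vec, embed(c), dim=0).item()
            for c in available_chars}
    return max(sims, key=sims.get)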

Strength

  • A font is created with only a small number of characters, fewer than 10.
  • Even when the alphabet is given as input, the model generates a Hangul font well; as shown in the example, it can even express the snow-covered font.
  • Easy to try with Colab
  • The cost of designing fonts can be lowered.

Weakness

  1. Parts of some characters disappear.
    • Caused by the small number of input characters.
    • If the input deviates a lot from the previously trained data, including it has a negative effect. (For the letter sent from Seodaemun Prison, the result is better without '그'.)
      • The first and second reasons conflict with each other. In conclusion, data with a consistent font appears to be necessary.
  2. The number of characters that can be created is limited to 2,420.
    • Since training used only 2,420 characters from 138 fonts, the number of characters that can be generated is limited to 2,420.
      • Better performance is expected if pretraining is performed with all 11,172 characters and more fonts.
    • 2,420 characters from 138 fonts are too few to cluster by character. Training with more characters should make the character embeddings more effective.
    • The more characters there are, the harder category embedding becomes,
      • so it is more important to increase the number of fonts.
  3. Low quality, because images are learned and generated at a small size of 32x32.
    • Higher-quality fonts could be generated by training at a larger size with more resources in the future.
  4. The effectiveness of character embedding is not proven.
    • Although the matched results look better, this has not been verified numerically.
    • For now, the model matches with cosine similarity, but it may be more effective to refine this further (e.g., with a neural network) in the future.
  5. Discrepancy
    • Training uses digital fonts, so a discrepancy arises when generation starts from a camera image or similar input.
    • Most designed fonts have a consistent typeface, but real people sometimes write in a style that differs from their usual one, which also causes discrepancy.
      • However, as in the 'Letter from Seodaemun Prison' example, if the characters are converted to font-like images through preprocessing before generation, this does not significantly hinder generation quality.