Crawl BookCorpus

Overview

Homemade BookCorpus


Crawling could be difficult due to some issues with the website. Please also consider other options, such as using publicly available files, at your own risk.

For example, a community crawl from September 2020 is linked in the Comments section below.


These are scripts to reproduce BookCorpus by yourself.

BookCorpus is a popular large-scale text corpus, especially for unsupervised learning of sentence encoders/decoders. However, BookCorpus is no longer distributed...

This repository includes a crawler collecting data from smashwords.com, which is the original source of BookCorpus. The collected sentences may partially differ from the original corpus, but their number will be larger than or almost the same as the original. If you use the new corpus in your work, please specify that it is a replica.

How to use

Prepare URLs of available books. Note that this repository already includes such a list, url_list.jsonl, a snapshot I (@soskek) collected on Jan 19-20, 2019. You can use it if you'd like.

python -u download_list.py > url_list.jsonl &
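
The list is in JSON Lines format. Below is a minimal sketch for sanity-checking a freshly built or downloaded list; it deliberately makes no assumptions about which fields each record carries:

    import json

    # Parse the snapshot and count its entries; print one record to see
    # which fields (URL, word count, genres, ...) it actually contains.
    with open("url_list.jsonl", encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    print(len(records), "book entries")
    print(records[0])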

Download their files. Downloading fetches txt files when possible; otherwise it tries to extract text from epub files. The additional argument --trash-bad-count filters out epub files whose extracted word count differs greatly from the official count on the book's page (which may indicate a failed extraction).

python download_files.py --list url_list.jsonl --out out_txts --trash-bad-count

The results are saved into the directory of --out (here, out_txts).
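
For intuition, here is a hedged sketch of a --trash-bad-count style check; the threshold and the exact comparison are illustrative assumptions, not the script's actual logic:

    def word_count_suspicious(text, official_count, tolerance=0.5):
        # Flag a text whose word count deviates from the count advertised
        # on the book's page by more than the tolerance.
        n = len(text.split())
        return abs(n - official_count) > tolerance * max(official_count, 1)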

Postprocessing

Make a single concatenated text file in sentence-per-line format.

python make_sentlines.py out_txts > all.txt
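
A minimal sketch of this step, assuming BlingFire's sentence splitter; the repo's actual make_sentlines.py may differ in details:

    import os
    import sys

    from blingfire import text_to_sentences

    # Read every txt file in the given directory and print one sentence
    # per line; text_to_sentences returns newline-separated sentences.
    out_dir = sys.argv[1]
    for name in sorted(os.listdir(out_dir)):
        if name.endswith(".txt"):
            with open(os.path.join(out_dir, name), encoding="utf-8") as f:
                print(text_to_sentences(f.read().replace("\n", " ")))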

If you want to tokenize the sentences into segmented words with Microsoft's BlingFire, run the command below. You can substitute another tokenizer if you prefer.

python make_sentlines.py out_txts | python tokenize_sentlines.py > all.tokenized.txt
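
A hedged sketch of what the tokenization filter could boil down to (the repo's actual tokenize_sentlines.py may differ):

    import sys

    from blingfire import text_to_words

    # Tokenize each incoming sentence into space-separated words.
    for line in sys.stdin:
        line = line.strip()
        if line:
            print(text_to_words(line))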

Disclaimer

Please use the code responsibly and adhere to the respective copyright and related laws; for example, you can refer to the terms of service of smashwords.com. I am not responsible for any plagiarism or legal implications arising from this repository.

Requirements

  • python3 is recommended
  • beautifulsoup4
  • progressbar2
  • blingfire
  • html2text
  • lxml
pip install -r requirements.txt

Note on Errors

  • Some error messages are expected, e.g., Failed: epub and txt, File is not a zip file, or Failed to open. But the number of failures will be much smaller than the number of successes. Don't worry.

Acknowledgement

epub2txt.py is derived and modified from https://github.com/kevinxiong/epub2txt/blob/master/epub2txt.py

Citation

If you found this code useful, please cite it with the URL.

@misc{soskkobayashi2018bookcorpus,
    author = {Sosuke Kobayashi},
    title = {Homemade BookCorpus},
    howpublished = {\url{https://github.com/soskek/bookcorpus}},
    year = {2018}
}

Also, the original papers that built the original BookCorpus are as follows:

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler. "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books." arXiv preprint arXiv:1506.06724, ICCV 2015.

@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
@inproceedings{moviebook,
    title = {Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books},
    author = {Yukun Zhu and Ryan Kiros and Richard Zemel and Ruslan Salakhutdinov and Raquel Urtasun and Antonio Torralba and Sanja Fidler},
    booktitle = {arXiv preprint arXiv:1506.06724},
    year = {2015}
}

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. "Skip-Thought Vectors." arXiv preprint arXiv:1506.06726, NIPS 2015.

@article{kiros2015skip,
    title={Skip-Thought Vectors},
    author={Kiros, Ryan and Zhu, Yukun and Salakhutdinov, Ruslan and Zemel, Richard S and Torralba, Antonio and Urtasun, Raquel and Fidler, Sanja},
    journal={arXiv preprint arXiv:1506.06726},
    year={2015}
}
Comments
  • Could you share the processed all.txt?


    Hi Sosuke,

    Thanks a lot for the wonderful work! I hoped to obtain the BookCorpus dataset with your crawler, but I failed to crawl the articles owing to some network errors, and I am afraid I cannot build a complete dataset. Could you please share the dataset you have, e.g., the all.txt? My email address is [email protected]. Thanks!

    Zhijie

    opened by thudzj 9
  • Fix merging sentences in one paragraph


    This PR simply merges the sentences in the stack whenever it meets an empty line. I am not sure why blank was necessary in the first place, so let's discuss it if I'm missing something here.

    Consider one example from the starting section of out_txts/100021__three-plays.txt. The current implementation outputs:

    Three Plays Published by Mike Suttons at Smashwords Copyright 2011 Mike Sutton ISBN 978-1-4659-8486-9 Tripping on Nothing
    

    It obviously merged the section title Tripping on Nothing into the stack incorrectly. With this PR, the output is:

    Three Plays Published by Mike Suttons at Smashwords Copyright 2011 Mike Sutton ISBN 978-1-4659-8486-9
    
    
    Tripping on Nothing
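
    A hedged sketch of the flush-on-empty-line logic this PR describes (names are illustrative, not the script's actual code):

    paragraphs, stack = [], []
    for line in lines:                          # lines of one book file
        if line.strip():
            stack.append(line.strip())          # accumulate sentences
        elif stack:
            paragraphs.append(" ".join(stack))  # flush on empty line
            stack = []
    if stack:
        paragraphs.append(" ".join(stack))      # flush the remainder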
    
    opened by yoquankara 4
  • intermittent issues with connections and file names


    example:

    python3.6 download_files.py --list url_list.jsonl --out out_txts --trash-bad-count
    0 files had already been saved in out_txts.
    File is not a zip file | File is not a zip file
    File is not a zip file
    File is not a zip file
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/490185/8/latest/0/0/existence.epub HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/490185/8/latest/0/0/existence.epub
    File is not a zip file
    File is not a zip file | File is not a zip file
    File is not a zip file
    File is not a zip file | File is not a zip file
    File is not a zip file
    "There is no item named '' in the archive"
    File is not a zip file
    File is not a zip file
    "There is no item named 'OPS/' in the archive"
    File is not a zip file
    File is not a zip file | File is not a zip file
    Failed to open https://www.smashwords.com/books/download/793264/8/latest/0/0/jaynells-wolf.epub HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/793264/8/latest/0/0/jaynells-wolf.epub
    Failed to open https://www.smashwords.com/books/download/479710/6/latest/0/0/tainted-ava-delaney-lost-souls-1.txt HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/479710/6/latest/0/0/tainted-ava-delaney-lost-souls-1.txt
    File is not a zip file
    "There is no item named 'OPS/' in the archive"
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub HTTPError: HTTP Error 404: Not Found
    Failed to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub HTTPError: HTTP Error 404: Not Found
    Gave up to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub
    [Errno 2] No such file or directory: 'out_txts/royal-blood-royal-blood-1.epub'

    opened by David-Levinthal 3
  • Network Error


    Hi, thanks for your code; it's really useful for most NLP researchers. Thank you again.

    When I run this code, it is often interrupted by network errors after downloading a few files, which may be caused by my network. Could you please send me an email with the crawled BookCorpus dataset attached, if you have it?

    My email is: [email protected]. Thank you very much.

    Best,

    opened by SummmerSnow 3
  • HTTPError: HTTP Error 401: Authorization Required


    Thanks for your code, but I got some network trouble when I ran the download_list script. The full error message is: Failed to open https://www.smashwords.com/books/category/1/downloads/0/free/medium/0 HTTPError: HTTP Error 401: Authorization Required

    What's more, when I use your url_list.jsonl to download files, the download_files script gives the same error message: Failed to open https://www.smashwords.com/books/download/246580/6/latest/0/0/silence.txt HTTPError: HTTP Error 401: Authorization Required

    I tried to open the URL in Chrome and can see that page without a 401 error. Could you help find a solution? Thanks a lot~

    opened by NotToday 2
  • smashwords.com forbids this; readme should tell people to get permission first


    The code in this repo violates both the robots.txt of smashwords.com:

    $ curl -s https://www.smashwords.com/robots.txt | tail -4
    User-agent: *
    Disallow: /books/search?
    Disallow: /books/download/
    Crawl-delay: 4
    

    and their terms of service, as far as I can see: “Third parties are not authorized to download, host and otherwise redistribute Smashwords books without prior written agreement from Smashwords” (you could imagine that this only prohibits downloading for subsequent hosting or redistribution, but I think that would be an opportunistic interpretation :) ).

    The readme should tell people very clearly that they must get permission from smashwords.com before running this stuff against their site.
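
    For reference, a sketch of checking this programmatically with Python's standard urllib.robotparser (the example download path is made up):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://www.smashwords.com/robots.txt")
    rp.read()
    # Disallowed under the robots.txt quoted above, so this prints False.
    print(rp.can_fetch("*", "https://www.smashwords.com/books/download/0/x.txt"))
    print(rp.crawl_delay("*"))  # 4, per the Crawl-delay directive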

    opened by gthb 1
  • How to resolve URLError SSL: CERTIFICATE_VERIFY_FAILED


    If you get the following error:

    URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:748)>
    

    Adding this block of code at the top of download_files.py will resolve it:

    import os, ssl

    # Fall back to an unverified SSL context so HTTPS requests skip
    # certificate verification (use at your own risk).
    if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
        getattr(ssl, '_create_unverified_context', None)):
        ssl._create_default_https_context = ssl._create_unverified_context
    
    opened by delzac 1
  • add: utf8 encoding for all file opens


    First of all, thank you for sharing your work.

    There were some errors about encoding.

    They were resolved by adding encoding='utf8' to every open() call.

    Have a beautiful day.

    opened by YongWookHa 1
  • download_list.py not working due to title change.


    Apparently the link titles on smashwords changed: txt is now found under "Plain text; contains no formatting" and epub under "Supported by many apps and devices (e.g., Apple Books, Barnes and Noble Nook, Kobo, Google Play, etc.)".
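
    A hedged sketch of matching the new wording with BeautifulSoup; whether the text sits in the anchor's title attribute or its body is an assumption about the site's markup:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup(html, "lxml")  # html: a fetched book page (assumed)
    txt_link = soup.find("a", attrs={"title": "Plain text; contains no formatting"})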

    opened by 1227505 1
  • add strip for genre scraping


    The scraped genre strings were dirty, so I added a strip.

    "genres": ["\n                            Category: Fiction \u00bb Mystery & detective \u00bb Women Sleuths ", "\n                            Category: Fiction \u00bb Fantasy \u00bb Paranormal "]
    

    will be

    "genres": ["Category: Fiction \u00bb Mystery & detective \u00bb Women Sleuths", "Category: Fiction \u00bb Fantasy \u00bb Paranormal"]
    
    opened by soskek 0
  • Update on the `url_list.jsonl`


    Hello, on 2022-12-17 I ran the script download_list.py with the page number raised to 31430, which covers the last search page. Here is the updated url_list.jsonl.zip.

    Compared to the original file, 4544 entries were lost and 8475 were added.

    Hope this helps.

    opened by thipokKub 0
  • Here’s a download link for all of bookcorpus as of Sept 2020


    You can download it here: https://twitter.com/theshawwn/status/1301852133319294976?s=21

    it contains 18k plain text files. The results are very high quality. I spent about a week fixing the epub2txt script, which you can find at https://github.com/shawwn/scrap under the name "epub2txt-all" (not epub2txt).

    The new script:

    1. Correctly preserves structure, matching the table of contents very closely;

    2. Correctly renders tables of data (by default html2text produces mostly garbage-looking results for tables);

    3. Correctly preserves code structure, so that source code and similar things are visually coherent,

    4. Converts numbered lists from “1\.” to “1.”

    5. Runs the full text through ftfy.fix_text() (which is what OpenAI does for GPT), replacing Unicode apostrophes with ascii apostrophes;

    6. Expands Unicode ellipses to “...” (three separate ascii characters).

    The tarball download link (see tweet above) also includes the original ePub URLs, updated for September 2020, which ended up being about 2k more than the URLs in this repo. But they’re hard to crawl. I do have the epub files, but I’m reluctant to distribute them for obvious reasons.
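
    For points 5 and 6, a minimal sketch with ftfy (pip install ftfy); raw_text stands in for the extracted book text, and the new script's exact options are not shown here:

    import ftfy

    text = ftfy.fix_text(raw_text)        # also uncurls quotes by default
    text = text.replace("\u2026", "...")  # expand Unicode ellipsis (point 6)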

    opened by shawwn 13
  • epub2txt.py produces incorrect results for many epubs


    Specifically this line: https://github.com/soskek/bookcorpus/blob/05a3f227d9748c2ee7ccaf93819d0e0236b6f424/epub2txt.py#L149


    When I tried to convert a book on Tensorflow to text using this script, I noticed chapter 1 was being repeated multiple times.

    The reason is that the Table of Contents looks similar to this:

    ch1.html#section1
    ch1.html#section2
    ch1.html#section3
    ...
    ch2.html#section1
    ch2.html#section2
    ...

    The epub2txt script iterates over this table of contents, splits "ch1.html#section1" into "ch1.html", and converts that chapter to text. It then repeats for "ch1.html#section2", which converts the same chapter to text again.

    I have a fixed version here: https://github.com/shawwn/scrap/blob/afb699ee9c8181b3728b81fc410a31b66311f0d8/epub2txt#L158-L206
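
    That fix can be sketched as deduplicating on the file part of each entry (toc_entries and convert are hypothetical names):

    seen = set()
    for entry in toc_entries:             # e.g. "ch1.html#section2"
        chapter = entry.split("#", 1)[0]  # -> "ch1.html"
        if chapter not in seen:
            seen.add(chapter)
            convert(chapter)              # convert each chapter only once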

    opened by shawwn 1
  • Can anyone download all the files in the url list file?


    I tried to download the BookCorpus data, but so far I have only downloaded around 5000 books. Can anyone get all the books? I got a lot of HTTP Error: 403 Forbidden. How can I fix this? Or can I get all the BookCorpus data from somewhere?

    Thanks

    opened by wxp16 13