Async Python 3.6+ web scraping micro-framework based on asyncio

Overview

Ruia is an async web scraping micro-framework written with asyncio and aiohttp that aims to make crawling URLs as convenient as possible.

Write less, run faster:

Features

  • Easy: Declarative programming
  • Fast: Powered by asyncio
  • Extensible: Middlewares and plugins
  • Powerful: JavaScript support

Installation

# For Linux & Mac
pip install -U ruia[uvloop]

# For Windows
pip install -U ruia

# New features
pip install git+https://github.com/howie6879/ruia
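
To get a feel for the declarative style, here is a minimal spider sketch in the spirit of the official examples (the Hacker News URL and CSS selectors are illustrative assumptions, not part of this README):

from ruia import AttrField, Item, Spider, TextField


class HackerNewsItem(Item):
    # target_item marks the repeated node each item is built from
    target_item = TextField(css_select="tr.athing")
    title = TextField(css_select="a.storylink")
    url = AttrField(css_select="a.storylink", attr="href")


class HackerNewsSpider(Spider):
    start_urls = ["https://news.ycombinator.com/"]

    async def parse(self, response):
        html = await response.text()
        async for item in HackerNewsItem.get_items(html=html):
            print(item.title, item.url)


if __name__ == "__main__":
    # Spider.start() creates and manages its own event loop
    HackerNewsSpider.start()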

Tutorials

  1. Overview
  2. Installation
  3. Define Data Items
  4. Spider Control
  5. Request & Response
  6. Customize Middleware
  7. Write a Plugin

TODO

  • Cache responses for debugging, to reduce repeated requests: ruia-cache
  • Provide an easy way to debug scripts: ruia-shell
  • Distributed crawling/scraping

Contribution

Ruia is still under development; feel free to open issues and pull requests:

  • Report or fix bugs
  • Request or publish plugins
  • Write or fix documentation
  • Add test cases

Notice: We use black to format the code.

Thanks

Comments
  • Add rtds support.

    I noticed that you have tried to use mkdocs to generate the website.

    Here's an example at readthedocs.org, powered by Sphinx.

    There's a little bug, but it is still great.

    RTDs

    opened by panhaoyu 32
  • Log crucial information regardless of log-level

    I've reduced the log level of a Spider in my script as I find it too verbose; however, this also filters out crucial info, particularly the completion summary (number of requests, time, etc.) - https://github.com/howie6879/ruia/blob/651fac54540fe0030d3a3d5eefca6c67d0dcb3c3/ruia/spider.py#L280-L287

    This is code I currently use to reduce verbosity:

    import logging
    
    # Silence everything below ERROR (for speed)
    logging.root.setLevel(logging.ERROR)
    

    I'm thinking of changing the code so that it shows regardless of log level, but will there ever be a case where you wouldn't want to see it?
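
    For what it's worth, a stdlib-only workaround sketch (not ruia API): give the completion summary its own logger and handler so it survives a raised root level.

    import logging

    # Quiet everything by default.
    logging.root.setLevel(logging.ERROR)

    # Hypothetical pattern: a dedicated logger with its own level and
    # handler bypasses the silenced root logger entirely.
    summary = logging.getLogger("ruia.summary")
    summary.setLevel(logging.INFO)
    summary.addHandler(logging.StreamHandler())
    summary.propagate = False  # don't route through the quiet root

    summary.info("Total requests: %d, time: %.2fs", 100, 1.23)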

    opened by abmyii 13
  • `DELAY` attribute specifically for retries

    I assumed the DELAY attribute would set the delay for retries, but instead it applies to all requests. I would appreciate a DELAY attribute specifically for retries (RETRY_DELAY). I'd be happy to implement it if given the go-ahead.

    Thank you for this great library!
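
    For reference, a sketch of the shape this could take; RETRY_DELAY is the proposed key (an assumption here), shown alongside the existing RETRIES and DELAY:

    from ruia import Spider


    class RetrySpider(Spider):
        # Illustrative URL that reliably fails, to exercise retries
        start_urls = ["https://httpbin.org/status/500"]

        # DELAY throttles every request; the proposed RETRY_DELAY would
        # apply only to the wait before each retry attempt.
        request_config = {
            "RETRIES": 3,
            "DELAY": 0,
            "RETRY_DELAY": 1,
        }

        async def parse(self, response):
            print(response.url)


    if __name__ == "__main__":
        RetrySpider.start()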

    opened by abmyii 13
  • Calling `self.start` as an instance method for a `Spider`

    I have the following parent class, which holds reusable code for all the spiders in my project (this is just a basic example):

    class Downloader(Spider):
        concurrency = 15
        worker_numbers = 2
    
        # RETRY_DELAY (secs) is time between retries
        request_config = {
            "RETRIES": 10,
            "DELAY": 0,
            "RETRY_DELAY": 0.1
        }
    
        db_name = "DB"
        db_url = "postgresql://..."
        main_table = "test"
    
        def __init__(self, *args, **kwargs):
            # Initialise DB connection (the real code also calls
            # super().__init__, per the traceback below)
            super().__init__(*args, **kwargs)
            self.db = DB(self.db_url, self.db_name, self.main_table)
    
        def download(self):
            self.start()
            
            # After completion, commit to DB
            self.db.commit()
    

    I use it by sub-classing for each different spider. However, it seems that self.start cannot be called as an instance method (since it's a classmethod), giving this error:

    Traceback (most recent call last):
      File "src/scraper.py", line 107, in <module>
        scraper = Scraper()
      File "src/downloader.py", line 31, in __init__
        super(Downloader, self).__init__(*args, **kwargs)
      File "/usr/lib/python3.8/site-packages/ruia/spider.py", line 159, in __init__
        self.request_session = ClientSession()
      File "/usr/lib/python3.8/site-packages/aiohttp/client.py", line 210, in __init__
        loop = get_running_loop(loop)
      File "/usr/lib/python3.8/site-packages/aiohttp/helpers.py", line 269, in get_running_loop
        loop = asyncio.get_event_loop()
      File "/usr/lib/python3.8/asyncio/events.py", line 639, in get_event_loop
        raise RuntimeError('There is no current event loop in thread %r.'
    RuntimeError: There is no current event loop in thread 'MainThread'.
    Exception ignored in: <function ClientSession.__del__ at 0x7f28875e8b80>
    Traceback (most recent call last):
      File "/usr/lib/python3.8/site-packages/aiohttp/client.py", line 302, in __del__
        if not self.closed:
      File "/usr/lib/python3.8/site-packages/aiohttp/client.py", line 916, in closed
        return self._connector is None or self._connector.closed
    AttributeError: 'ClientSession' object has no attribute '_connector'
    

    Any idea how I can solve this issue whilst maintaining the structure I am trying to implement?
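
    One possible workaround, offered as an untested sketch rather than a confirmed ruia pattern: the traceback fails inside Spider.__init__ because aiohttp's ClientSession looks up a current event loop, so installing one before constructing the subclass may get past the RuntimeError:

    import asyncio

    # Create and register an event loop before Spider.__init__ runs,
    # since ClientSession() needs one at construction time.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    downloader = Downloader()  # the subclass defined above
    downloader.download()

    Note that self.start() still resolves to the Spider.start classmethod, which constructs a fresh instance of the class, so instance state set up in __init__ may not carry over.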

    opened by abmyii 11
  • asyncio `RuntimeError`

    ERROR asyncio Exception in callback BaseSelectorEventLoop._sock_write_done(150)(<Future finished result=None>)
    handle: <Handle BaseSelectorEventLoop._sock_write_done(150)(<Future finished result=None>)>
    Traceback (most recent call last):
      File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
        self._context.run(self._callback, *self._args)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 516, in _sock_write_done
        self.remove_writer(fd)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 346, in remove_writer
        self._ensure_fd_no_transport(fd)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 251, in _ensure_fd_no_transport
        raise RuntimeError(
    RuntimeError: File descriptor 150 is used by transport <_SelectorSocketTransport fd=150 read=idle write=<polling, bufsize=0>>
    

    Getting this quite a bit still. I don't think it's ruia directly, but aiohttp. Any ideas?

    One thing that may be causing it is that in clean functions I call other functions synchronously, i.e.:

        async def clean_<...>(self, value):
            return <function>(value)
    

    Could that be causing it? I tried doing return await ... but the error still persisted.

    opened by abmyii 11
  • Show URL in Error for easier debugging

    I think errors would be more useful if they also showed the URL of the page being parsed. Example:

    ERROR Spider <Item: extract ... error, please check selector or set parameter named default>, https://...

    I hacked a solution together by passing around the url parameter, but I can't think of a clean solution ATM. Any ideas? I can also push my changes if you would like to see them (very hacky).
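
    A call-site variant of the same idea, with hypothetical names (PageItem is made up for illustration, and the exception type ruia's field extraction raises may differ):

    from ruia import Item, Spider, TextField


    class PageItem(Item):  # hypothetical item
        title = TextField(css_select="title")


    class DebuggableSpider(Spider):
        start_urls = ["https://example.com"]  # illustrative

        async def parse(self, response):
            try:
                item = await PageItem.get_item(html=await response.text())
            except ValueError as e:
                # Re-raise the extraction error with the failing page's URL.
                raise ValueError(f"{e} | url: {response.url}") from e
            print(item.title)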

    opened by abmyii 11
  • Running the example code raises an error

    I'm using the code from https://github.com/howie6879/ruia/blob/master/docs/en/tutorials/item.md:

    import asyncio
    from ruia import Item, TextField, AttrField
    
    
    class PythonDocumentationItem(Item):
        title = TextField(css_select='title')
        tutorial_link = AttrField(xpath_select="//a[text()='Tutorial']", attr='href')
    
    
    async def main():
        url = 'https://docs.python.org/3/'
        item = await PythonDocumentationItem.get_item(url=url)
        print(item.title)
        print(item.tutorial_link)
    
    
    if __name__ == '__main__':
        # Python 3.7 required
        asyncio.run(main())
    

    Running it prints the expected results (3.9.5 Documentation and tutorial/index.html), but it then raises RuntimeError: Event loop is closed. The full output is shown below:

    [2021:05:06 14:11:55] INFO  Request <GET: https://docs.python.org/3/>
    3.9.5 Documentation
    tutorial/index.html
    Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000018664A679D0>
    Traceback (most recent call last):
      File "C:\Users\lenovo\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in __del__
        self.close()
      File "C:\Users\lenovo\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close
        self._loop.call_soon(self._call_connection_lost, None)
      File "C:\Users\lenovo\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 746, in call_soon
        self._check_closed()
      File "C:\Users\lenovo\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 510, in _check_closed
        raise RuntimeError('Event loop is closed')
    RuntimeError: Event loop is closed
    

    PC: Win10 64-bit; Python: 3.9.4 64-bit
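
    A common aiohttp-on-Windows workaround, offered as an assumption rather than a confirmed fix for this report: asyncio.run() closes the Proactor event loop before aiohttp's transports finish their cleanup, which produces this harmless but noisy traceback. Switching to the selector loop avoids it:

    import asyncio
    import sys

    # On Windows, use the selector event loop so transport cleanup does
    # not race with asyncio.run() closing the Proactor loop.
    if sys.platform == "win32":
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

    asyncio.run(main())  # main() as defined in the snippet above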

    opened by qgyhd1234 10
  • I'd like to pit my distributed function-scheduling framework against yours and see whose code is shorter and more flexible for crawling any site. Happy to compare notes.

    https://github.com/ydf0509/distributed_framework/blob/master/test_frame/car_home_crawler_sample/car_home_consumer.py

    You're welcome to compare. Or, if you don't want to test against Autohome, pick any two-level site to schedule a crawl for, and let's see whose code is shorter, whose style is faster and more flexible, who has more control options, and whose runs faster.

    opened by ydf0509 9
  • `TextField` strips strings which may not be desirable

    https://github.com/howie6879/ruia/blob/8a91c0129d38efd8fcd3bee10b78f694a1c37213/ruia/field.py#L120

    My use case is extracting paragraphs which have newlines between them, and these are stripped out by TextField. Should a new field be introduced (I have already made one for my scraper), or should stripping be made optional? Perhaps both.
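
    A sketch of the new-field option; it overrides _parse_element, which is a ruia internal at the linked commit and may change:

    from ruia import TextField


    class RawTextField(TextField):
        def _parse_element(self, element):
            # Join the element's text nodes without stripping, so the
            # newlines between paragraphs survive extraction.
            return "".join(element.itertext())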

    opened by abmyii 9
  • Trouble scraping deck.tk/deckstats.net

    For example:

    import asyncio
    from ruia import Request
    
    
    async def request_example():
        url = "https://deck.tk/07Pw8tfr"
        params = {
            'name': 'ruia',
        }
        headers = {
            'User-Agent': 'Python3.6',
        }
        request = Request(url=url, method='GET', params=params, headers=headers)
        response = await request.fetch()
        json_result = await response.json()
        print(json_result)
    
    
    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(request_example())
    

    This simply hangs without resolution. That is, the request is never resolved, and I must Ctrl-C out of it. Scrapy handles this without issue, but I was hoping to transition to ruia. Any ideas?
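
    An explicit timeout would at least turn the hang into a catchable error. A minimal sketch, assuming ruia's request_config accepts TIMEOUT and RETRIES keys:

    import asyncio

    from ruia import Request


    async def request_with_timeout():
        request = Request(
            url="https://deck.tk/07Pw8tfr",
            method="GET",
            # Fail after 10 seconds instead of hanging indefinitely.
            request_config={"TIMEOUT": 10, "RETRIES": 1},
        )
        response = await request.fetch()
        print(response.url)


    if __name__ == "__main__":
        asyncio.get_event_loop().run_until_complete(request_with_timeout())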

    bug enhancement 
    opened by Triquetra 7
  • Would be nice to be able to pass in "start_urls"

    Ruia seems like a brilliant way to write simple and elegant web scrapers, but I can't figure out how to use a different "start_urls" value. I want a web scraper that can check all links on any GIVEN web page, not just the pages start_urls leads me to, while keeping the simplicity and asynchronous power that Ruia provides. Maybe this feature exists, but I can't tell from the documentation or the code. One workaround is shown in the sketch below.
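
    Since start_urls is a class attribute, the sketch below shows one workaround (an assumption about Spider.start(), which reads the attribute from the class): set it just before starting.

    from ruia import Spider


    class LinkSpider(Spider):  # illustrative spider
        async def parse(self, response):
            html = await response.text()
            print(response.url, len(html))


    # Point the spider at any GIVEN page at run time.
    LinkSpider.start_urls = ["https://example.com/any-given-page"]
    LinkSpider.start()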

    opened by JacobJustice 7
  • Improve Chinese documentation

    Toc: Ruia Chinese documentation

    • [x] Quick start
    • [ ] Getting started
      • [ ] 1. Overview
      • [ ] 2. 爱美妆 (tutorial example site)
      • [ ] 3. Define Items
      • [ ] 4. Run a Spider
      • [ ] 5. Customization
      • [ ] 6. Plugins
      • [ ] 7. Help
    • [ ] Core concepts
      • [ ] 1. Request
      • [ ] 2. Response
      • [ ] 3. Item
      • [ ] 4. Field
      • [ ] 5. Spider
      • [ ] 6. Middleware
    • [ ] Development guide
      • [ ] 1. Set up the development environment
      • [ ] 2. Ruia architecture
      • [ ] 3. Write plugins for Ruia
      • [ ] 4. Contribute code
    • [ ] Practical guide
      • [ ] 1. Thoughts on Python web scraping
    enhancement 
    opened by howie6879 0
Releases(v0.8.0)
Owner
howie.hu ("We share and admire remarkable writings; doubtful points we analyze together.")
Web Scraping Framework

Grab Framework Documentation Installation $ pip install -U grab See details about installing Grab on different platforms here http://docs.grablib.

2.3k Jan 04, 2023
Dex-scrapper - Hobby project for scrapping dex data on VeChain

Folders /zumo_abis # abi extracted from zumo repo /zumo_pools # runtime e

3 Jan 20, 2022
An arxiv spider

An Arxiv Spider. As a CSer, this outstanding lad knows well that interrupts, not polling, are the kernel's efficient way to manage the hardware attached to a machine. Whenever a friend sends over a paper freshly posted on arXiv, he sighs: "Senior, you're on arXiv every day, so fast~". So he searched GitHub and borrowed a bit from…

Jie Liu 11 Sep 09, 2022
Libextract: extract data from websites

Libextract is a statistics-enabled data extraction library that works on HTML and XML documents and is written in Python

499 Dec 09, 2022
This code will be able to scrape movies from a movie website and also provide download links to newly uploaded movies.

Movies-Scraper You are probably tired of navigating through a movie website to get the right movie you'd want to watch during the weekend. There may e

1 Jan 31, 2022
Scrape and display grades onto the console

WebScrapeGrades About The Project This Project is a personal project where I learned how to webscrape using python requests. Being able to get request

Cyrus Baybay 1 Oct 23, 2021
robobrowser - A simple, Pythonic library for browsing the web without a standalone web browser.

RoboBrowser: Your friendly neighborhood web scraper Homepage: http://robobrowser.readthedocs.org/ RoboBrowser is a simple, Pythonic library for browsi

Joshua Carp 3.7k Dec 27, 2022
Minecraft Item Scraper

Minecraft Item Scraper To run, first ensure you have the BeautifulSoup module: pip install bs4 Then run, python minecraft_items.py folder-to-save-ima

Jaedan Calder 1 Dec 29, 2021
An IpVanish Proxies Scraper

EzProxies Tired of searching for good proxies for hours? Just get an IpVanish account and get thousands of good proxies in few seconds! Showcase Watch

11 Nov 13, 2022
Dailyiptvlist.com Scraper With Python

Dailyiptvlist.com scraper Info Made in python Linux only script Script requires to have wget installed Running script Clone repository with: git clone

1 Oct 16, 2021
Proxy scraper. Format: IP | PORT | COUNTRY | TYPE

proxy scraper 🔎 Installation: git clone https://github.com/ebankoff/proxy_scraper Required pip libraries (pip install library name): lxml beautifulso

Eban'ko 19 Dec 07, 2022
Crawl the career sites of several major Jiangsu universities with Python, notifying users in three ways: via WeChat, direct command-line output, or Windows toast notifications.

crawler_for_university: crawls the career sites of several major Jiangsu universities with Python and notifies users in three ways: via WeChat, direct command-line output, or Windows toast notifications. Dependencies: wxpy, requests, bs4, and other libraries. Description: based on Python, this project crawls each university's employment-information site, scraping recruitment…

8 Aug 16, 2021
A modern CSS selector implementation for BeautifulSoup

Soup Sieve Overview Soup Sieve is a CSS selector library designed to be used with Beautiful Soup 4. It aims to provide selecting, matching, and filter

Isaac Muse 151 Dec 23, 2022
A Powerful Spider(Web Crawler) System in Python.

pyspider A Powerful Spider(Web Crawler) System in Python. Write script in Python Powerful WebUI with script editor, task monitor, project manager and

Roy Binux 15.7k Jan 04, 2023
IGLS - Instagram Like Scraper CLI tool

IGLS - Instagram Like Scraper It's a web scraping command line tool based on python and selenium. Description This is a trial tool for learning purpos

Shreshth Goyal 5 Oct 29, 2021
A Python library for automating interaction with websites.

Home page https://mechanicalsoup.readthedocs.io/ Overview A Python library for automating interaction with websites. MechanicalSoup automatically stor

4.3k Jan 07, 2023
for those who dont want to pay $10/month for high school game footage with ads

nfhs-scraper Disclaimer: I am in no way responsible for what you choose to do with this script and guide. I do not endorse avoiding paywalls or any il

Conrad Crawford 5 Apr 12, 2022
CreamySoup - a helper script for automated SourceMod plugin updates management.

CreamySoup/"Creamy SourceMod Updater" (or just soup for short), a helper script for automated SourceMod plugin updates management.

3 Jan 03, 2022
a way to scrape a database of all of the isef projects

ISEF Database This is a simple web scraper which gets all of the projects and abstract information from here. My goal for this is for someone to get i

William Kaiser 1 Mar 18, 2022
Scrapping the data from each page of biocides listed on the BAUA website into a csv file

Scrapping the data from each page of biocides listed on the BAUA website into a csv file

Eric DE MARIA 1 Nov 30, 2021