gazpacho

🥫 The simple, fast, and modern web scraping library


About

gazpacho is a simple, fast, and modern web scraping library. The library is stable, actively maintained, and installed with zero dependencies.

Install

Install with pip at the command line:

pip install -U gazpacho

Quickstart

Give this a try:

from gazpacho import get, Soup

url = 'https://scrape.world/books'
html = get(url)
soup = Soup(html)
books = soup.find('div', {'class': 'book-'}, partial=True)

def parse(book):
    name = book.find('h4').text
    price = float(book.find('p').text[1:].split(' ')[0])
    return name, price

[parse(book) for book in books]

Tutorial

Import

Import gazpacho following the convention:

from gazpacho import get, Soup

get

Use the get function to download raw HTML:

url = 'https://scrape.world/soup'
html = get(url)
print(html[:50])
# '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <met'

Adjust get requests with optional params and headers:

get(
    url='https://httpbin.org/anything',
    params={'foo': 'bar', 'bar': 'baz'},
    headers={'User-Agent': 'gazpacho'}
)
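
For endpoints that return JSON, get automatically parses the response into a dictionary instead of returning a string (per the 0.9.4 release notes below):

data = get('https://httpbin.org/anything', params={'foo': 'bar'})
print(data['args'])
# {'foo': 'bar'}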

Soup

Use the Soup wrapper on raw html to enable parsing:

soup = Soup(html)

Soup objects can alternatively be initialized with the .get classmethod:

soup = Soup.get(url)

.find

Use the .find method to target and extract HTML tags:

h1 = soup.find('h1')
print(h1)
# <h1 id="firstHeading" class="firstHeading" lang="en">Soup</h1>

attrs=

Use the attrs argument to isolate tags that contain specific HTML element attributes:

soup.find('div', attrs={'class': 'section-'})

partial=

Element attributes are partially matched by default. Turn this off by setting partial to False:

soup.find('div', {'class': 'soup'}, partial=False)

mode=

Override the mode argument {'auto', 'first', 'all'} to guarantee return behaviour:

print(soup.find('span', mode='first'))
# <span class="navbar-toggler-icon"></span>
len(soup.find('span', mode='all'))
# 8

dir()

Soup objects have html, tag, attrs, and text attributes:

dir(h1)
# ['attrs', 'find', 'get', 'html', 'strip', 'tag', 'text']

Use them accordingly:

print(h1.html)
# '<h1 id="firstHeading" class="firstHeading" lang="en">Soup</h1>'
print(h1.tag)
# h1
print(h1.attrs)
# {'id': 'firstHeading', 'class': 'firstHeading', 'lang': 'en'}
print(h1.text)
# Soup

Support

If you use gazpacho, consider adding the scraper: gazpacho badge to your project README.md:

[![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho)

Contribute

For feature requests or bug reports, please use GitHub Issues.

For PRs, please read the CONTRIBUTING.md document.

Comments
  • .text is empty on Soup creation

    Describe the bug

    When I create a soup object...

    To Reproduce

    Calling .text returns an empty string:

    from gazpacho import Soup
    
    html = """<p>&pound;682m</p>"""
    
    soup = Soup(html)
    print(soup.text)
    ''
    

    Expected behavior

    Should output:

    print(soup.text)
    '£682m'
    

    Environment:

    • OS: macOS
    • Version: 1.1

    Additional context

    Inspired by this S/O question

    bug hacktoberfest 
    opened by maxhumber 15
  • API suggestion: soup.all("div") and soup.first("div")

    The default auto behavior of .find() doesn't work for me, because it means I can't trust my code not to start throwing errors if the page I am scraping adds another matching element, or drops the number of elements down to one (triggering a change in return type).

    I know I can do this:

    div = soup.find("div", mode="first")
    # Or this:
    divs = soup.find("div", mode="all")
    

    But having function parameters that change the return type is still a bit weird - not great for code hinting and suchlike.

    Changing how .find() works would be a backwards incompatible change, which isn't good now that you're past the 1.0 release. I suggest adding two new methods instead:

    div = soup.first("div") # Returns a single element
    # Or:
    divs = soup.all("div") # Returns a list of elements
    

    This would be consistent with your existing API design (promoting the mode arguments to first-class method names) and could be implemented without breaking existing code.
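
    A minimal sketch of the suggestion as free functions (hypothetical helpers, not part of gazpacho's API), built on the existing mode= argument:

    from typing import List, Optional

    from gazpacho import Soup

    def first(soup: Soup, tag: str, attrs: dict = None) -> Optional[Soup]:
        # always a single element (assumed None when nothing matches)
        return soup.find(tag, attrs, mode='first')

    def all_(soup: Soup, tag: str, attrs: dict = None) -> List[Soup]:
        # always a list, even for zero or one match
        return soup.find(tag, attrs, mode='all')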

    opened by simonw 6
  • A select function similar to soups.

    Is your feature request related to a problem? Please describe. It's great to be able to run find and then find within the initial result, but it seems more readable to be able to find based on CSS selectors.

    Describe the solution you'd like

    selector = '.foo img.bar'
    soup.select(selector) # this would return any img item with the class "bar" inside of an object with the class "foo"
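
    Until something like that exists, a rough approximation of the '.foo img.bar' selector with the current API, given a Soup instance soup (a hypothetical sketch; assumes the .foo element is a div):

    foo = soup.find('div', {'class': 'foo'}, mode='first')
    imgs = foo.find('img', {'class': 'bar'}, mode='all') if foo else []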
    
    opened by kjaymiller 5
  • separate find into find and find_one

    Is your feature request related to a problem? Please describe. Right now it's hard to reason about the behaviour of the find method. If it finds one element it will return a Soup object; if it finds more than one, it will return a list of Soup objects.

    Describe the solution you'd like Separate find into a find method and find_one method.

    Describe alternatives you've considered Keep it and YOLO?

    Additional context Conversation with Michael Kennedy:

    If I were designing the api, i'd have that always return a List[Node] (or whatever the class is). Then add two methods:

    • find() -> List[Node]
    • find_one() -> Optional[Node]
    • one() -> Node (exception if there are zero, or two or more, nodes)
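
    A sketch of the strictest variant, one(), on top of the current mode='all' behaviour (a hypothetical helper, not gazpacho API):

    from gazpacho import Soup

    def one(soup: Soup, tag: str) -> Soup:
        # exactly one matching node, otherwise raise
        matches = soup.find(tag, mode='all')
        if len(matches) != 1:
            raise ValueError(f'expected exactly one <{tag}>, found {len(matches)}')
        return matches[0]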
    enhancement hacktoberfest 
    opened by maxhumber 5
  • Format/Pretty Print can't handle void tags

    Describe the bug

    Soup can handle and format matched tags no problem:

    from gazpacho import Soup
    html = """<ul><li>Item 1</li><li>Item 2</li></ul>"""
    Soup(html)
    

    Which correctly formats to:

    <ul>
      <li>Item 1</li>
      <li>Item 2</li>
    </ul>
    

    But it can't handle void tags (like img)...

    To Reproduce

    For example, this bit of html:

    html = """<ul><li>Item 1</li><li>Item 2</li></ul><img src="image.png">"""
    Soup(html)
    

    Will fail to format on print:

    <ul><li>Item 1</li><li>Item 2</li></ul><img src="image.png">
    

    Expected behavior

    Ideally Soup formats it as:

    <ul>
      <li>Item 1</li>
      <li>Item 2</li>
    </ul>
    <img src="image.png">
    

    Environment:

    • OS: macOS
    • Version: 1.1

    Additional context

    The problem has to do with the underlying parseString function being unable to handle void tags:

    from xml.dom.minidom import parseString as string_to_dom
    string_to_dom(html)
    

    Possible solution: turn void tags into self-closing tags on input, and then transform them back to void tags on print. A rough sketch of that idea follows.
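
    (The VOID_TAGS list and regexes below are illustrative, not gazpacho internals.)

    import re

    # HTML5 void tags never take a closing tag
    VOID_TAGS = ['img', 'br', 'hr', 'input', 'link', 'meta']

    def to_self_closing(html):
        # <img src="..."> -> <img src="..."/> so the XML parser accepts it
        pattern = r'<(' + '|'.join(VOID_TAGS) + r')\b([^>]*?)\s*/?>'
        return re.sub(pattern, r'<\1\2/>', html)

    def to_void(html):
        # undo the transformation after pretty-printing
        pattern = r'<(' + '|'.join(VOID_TAGS) + r')\b([^>]*?)\s*/>'
        return re.sub(pattern, r'<\1\2>', html)

    print(to_self_closing('<img src="image.png">'))
    # <img src="image.png"/>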

    help wanted hacktoberfest 
    opened by maxhumber 4
  • Add release versions to GitHub?

    $ git tag v0.7.2 && git push --tags 🎉 🎈

    I really like this project. I think that adding releases to the repository can help the project grow in popularity. I'd like to see that!

    opened by naltun 4
  • User Agent Rotation / Faking

    Is your feature request related to a problem? Please describe.

    It might be nice if gazpacho had the ability to rotate/fake a user agent

    Describe the solution you'd like

    Sort of like this but more primitive. (Importantly gazpacho does not want to take on any dependencies)

    Additional context

    Right now gazpacho just spoofs the latest Firefox User Agent
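
    A dependency-free sketch of what rotation could look like (the UA strings here are illustrative placeholders):

    import random

    from gazpacho import get

    # an illustrative pool; a real implementation would keep these current
    USER_AGENTS = [
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:84.0) Gecko/20100101 Firefox/84.0',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
    ]

    html = get('https://httpbin.org/anything', headers={'User-Agent': random.choice(USER_AGENTS)})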

    enhancement hacktoberfest 
    opened by maxhumber 3
  • Enable strict matching for find

    Describe the bug Right now match has the ability to be strict. This functionality is presently not enabled for find.

    To Reproduce Code to reproduce the behaviour:

    from gazpacho import Soup, match
    
    match({'foo': 'bar'}, {'foo': 'bar baz'})
    # True
    
    match({'foo': 'bar'}, {'foo': 'bar baz'}, strict=True)
    # False
    

    Expected behavior The find method should be forgiving (partial match) to protect ease of use and maintain backwards compatibility, but there should be an argument to enable strict/exact matching that piggybacks on match.
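
    (This is what eventually shipped: v0.8 added find(..., strict=), which v1.0 renamed to find(..., partial=); see the release notes below. Exact matching with the current API:)

    from gazpacho import Soup

    soup = Soup('<div class="bar baz">spam</div>')
    soup.find('div', {'class': 'bar'}, partial=True)   # partial match: found
    soup.find('div', {'class': 'bar'}, partial=False)  # exact match: nothing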

    Environment:

    • OS: macOS
    • Version: 0.7.2
    hacktoberfest 
    opened by maxhumber 3
  • Get all the child elements of a Soup object

    Is your feature request related to a problem? Please describe. I would like to add a .children() method to the Soup object that can list all of its child elements.

    Describe the solution you'd like I would write a regex pattern to match each inner element and return a list of Soup() objects for those elements (see the sketch below). I might also add an option to recurse or not.

    Describe alternatives you've considered All that I can think of is doing the same thing mentioned above in the scraping code

    Additional context None
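
    A standard-library sketch of the idea (hypothetical and regex-free; void tags like <img> would need extra handling):

    from html.parser import HTMLParser

    class ChildLister(HTMLParser):
        # collects the tag names of top-level children in the fed fragment
        def __init__(self):
            super().__init__()
            self.depth = 0
            self.children = []

        def handle_starttag(self, tag, attrs):
            if self.depth == 0:
                self.children.append(tag)
            self.depth += 1

        def handle_endtag(self, tag):
            self.depth -= 1

    parser = ChildLister()
    parser.feed('<li>Item 1</li><li>Item <b>2</b></li>')
    print(parser.children)
    # ['li', 'li']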

    opened by Vthechamp22 2
  • Improve issue and feature request templates

    Is your feature request related to a problem? Please describe. Improve the .github issue template

    Describe the solution you'd like I would like a better issue and feature request template in the .github folder: the bolded headings should become proper sections, and the help lines below them should become comments.

    Describe alternatives you've considered None

    Additional context What I would like is instead of:

    ---
    name: Bug report
    about: Create a report to help gazpacho improve
    title: ''
    labels: ''
    assignees: ''
    ---
    
    **Describe the bug**
    A clear and concise description of what the bug is.
    
    **To Reproduce**
    Code to reproduce the behaviour:
    
    ```python
    
    \```
    
    **Expected behavior**
    A clear and concise description of what you expected to happen.
    
    **Environment:**
     - OS: [macOS, Linux, Windows]
     - Version: [e.g. 0.8.1]
    
    **Additional context**
    Add any other context about the problem here.
    

    It should be something like:

    ---
    name: Bug report
    about: Create a report to help gazpacho improve
    title: ''
    labels: ''
    assignees: ''
    ---
    
    ## Describe the bug
    <!-- A clear and concise description of what the bug is. -->
    
    ## To Reproduce
    <!-- Code to reproduce the behaviour: -->
    
    ```python
    # code
    \```
    
    ## Expected behavior
    <!-- A clear and concise description of what you expected to happen. -->
    
    **Environment:**
     - OS: [macOS, Linux, Windows]
     - Version: [e.g. 0.8.1]
    
    ## Additional context
    <!-- Add any other context about the problem here. Delete this section if not applicable -->
    

    Or something like this

    opened by Vthechamp22 2
  • Needs a Render method (like Requests-Html) to allow pulling text rendered by Javascript...

    Need support for dynamic text rendering...

    Need a method that triggers the Javascript on a page to fire (see https://github.com/psf/requests-html, r.html.render()).

    opened by jasonvogel 0
  • Can't parse some HTML entries

    Describe the bug

    Can't parse some entries, there are 40 entries for every page, but some are not being parsed correctly.

    Steps to reproduce the issue

    from gazpacho import get, Soup
    
    for i in range(1, 15):
        link = f'https://1337x.to/category-search/aladdin/Movies/{i}/'
        html = get(link)
        soup = Soup(html)
        body = soup.find("tbody")
    
        # extracting all the entries in the body,
        # there are 40 entries for every page, the last one can have less,
        entries = body.find("tr", mode='all')[::-1]
    
        # but for some pages it can't retrieve all the entries for some reason
        print(f'{len(entries)} entries -> {link}')
    

    Expected behavior

    See 40 entries for every page

    Environment:

    Arch Linux - 5.13.10-arch1-1 Python - 3.9.6 Gazpacho - 1.1

    opened by NicKoehler 0
  • Finding tags return entire html

    Describe the bug

    Using soup.find on particular website(s) returns the entire HTML instead of the matching tag(s).

    Steps to reproduce the issue

    Look for ul tag with attribute class="cves" (<ul class="cves">) on https://mariadb.com/kb/en/security/

    from gazpacho import get, Soup
    endpoint = "https://mariadb.com/kb/en/security/"
    html_dump = Soup.get(endpoint)
    sample = html_dump.find('ul', attrs={'class': 'cves'}, mode='all')
    

    sample contains the contents of an entire html

    Expected behavior

    sample should contain the contents of the tag <ul class="cves">, which in this case would be rows of <li>-s listing the CVEs and corresponding fixed versions in MariaDB, something like:

    <ul class="cves">
      <li>..</li>
      ...
      <li>..</li>
    </ul>
    

    Environment:

    • OS: Ubuntu Linux 18.04
    • Version: gazpacho 1.1, python 3.6.9

    Additional information

    Using BeautifulSoup on the same html_dump did get the job done, although the <li>-tags are weirdly nested together.

    from bs4 import BeautifulSoup
    # html_dump from above Soup.get(endpoint)
    bs_soup = BeautifulSoup(html_dump.html, 'html.parser')
    ul_cves = bs_soup.find_all('ul','cves')
    

    ul_cves contains strangely nested <li>-s, from which it was still possible to extract the rows of <li>-s I was looking for.

    <ul class="cves">
      <li>
        <li>
        ...
      </li></li>
    </ul>
    
    opened by jz-ang 0
  • Support non-utf-8 encodings

    Thank you for your nice project!

    Please add an encoding argument to decode pages that are not utf-8 encoded: https://github.com/maxhumber/gazpacho/blob/ecd53aff4e3d8bdf9eaaea4e0244a75cbabf6259/gazpacho/get.py#L51

    I tried EUC-KR encoded page and got an error message.

    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbd in position 95: invalid start byte
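
    A workaround sketch until such an argument exists: fetch the bytes yourself and decode before parsing (the URL and charset here are placeholders):

    from urllib.request import Request, urlopen

    from gazpacho import Soup

    request = Request('http://example.com', headers={'User-Agent': 'gazpacho'})
    with urlopen(request) as response:
        html = response.read().decode('euc-kr')  # the page's actual charset

    soup = Soup(html)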
    
    opened by KwangYeol 0
  • attrs method output is changed when using find

    attrs method output is changed when using find

    find changes the content of attrs

    When using the find method on a Soup object, the content of attrs is overwritten by the parameter attrs in find.

    Steps to reproduce the issue

    Try the following:

    from gazpacho import Soup
    
    div = Soup("<div id='my_id' />").find("div")
    print(div.attrs)
    div.find("span", {"id": "invalid_id"})
    print(div.attrs)
    

    The expected output would be the following, because we print the attributes of div twice:

    {'id': 'my_id'}
    {'id': 'my_id'}
    

    But instead you actually receive:

    {'id': 'my_id'}
    {'id': 'invalid_id'}
    

    which is wrong.

    Environment:

    • OS: Linux
    • Version: 1.1

    My current workaround is to save the attributes before I execute find.
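
    That workaround, sketched (snapshot the attributes before the mutating call):

    from gazpacho import Soup

    div = Soup("<div id='my_id' />").find("div")
    saved_attrs = dict(div.attrs)  # copy before .find overwrites them
    div.find("span", {"id": "invalid_id"})
    print(saved_attrs)
    # {'id': 'my_id'}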

    opened by cfrahnow 1
  • Can't install whl files

    Describe the bug

    Hi,

    There was a pull request (https://github.com/maxhumber/gazpacho/pull/48) to add whl publishing but it appears to have been lost somewhere in a merge on October 31st, 2020. (https://github.com/maxhumber/gazpacho/compare/v1.1...master). Therefore, no wheels have been published for 1.1.

    This causes the installation error on my system that the PR was meant to address.

    Expected behavior

    Install gazpacho with a wheel, not a tar.gz. Please re-add the whl publishing.

    Environment:

    • OS: Windows 10
    opened by daddycocoaman 0
Releases (v1.1)
  • v1.1 (Oct 9, 2020)

  • v1.0 (Sep 24, 2020)

    1.0 (2020-09-24)

    • Feature: gazpacho is now fully baked with type hints (thanks for the suggestion @ju-sh!)
    • Feature: Soup.get("url") alternative initializer
    • Fixed: .find is now able to capture malformed void tags (<img />, vs. <img>) (thanks for the Issue @mallegrini!)
    • Renamed: .find(..., strict=) is now find(..., partial=)
    • Renamed: .remove_tags is now .strip
  • v0.9.4 (Jul 7, 2020)

    0.9.4 (2020-07-07)

    • Feature: automagical json-to-dictionary return behaviour for get
    • Improvement: automatic missing URL protocol inference for get
    • Improvement: condensed HTTPError Exceptions
  • v0.9.3 (Apr 29, 2020)

  • v0.9.2 (Apr 21, 2020)

  • v0.9.1 (Feb 16, 2020)

  • v0.9 (Nov 25, 2019)

  • v0.8.1 (Oct 11, 2019)

  • v0.8 (Oct 7, 2019)

    Changelog

    • Added mode argument to the find method to adjust return behaviour (defaults to mode='auto')
    • Enabled strict attribute matching for the find method (defaults to strict=False)