A client interface for Scrapinghub's API

Overview

Client interface for Scrapinghub API


scrapinghub is a Python library for communicating with the Scrapinghub API.

Requirements

  • Python 2.7 or above

Installation

The quick way:

pip install scrapinghub

You can also install the library with MessagePack support, which provides better response times and lower bandwidth usage:

pip install scrapinghub[msgpack]

Documentation

Documentation is available online via Read the Docs or in the docs directory.

Comments
  • msgpack errors when using iter() with intervals between each batch call


    Good Day!

    I've encountered this peculiar issue when trying to save memory by processing the items in chunks. Here's a stripped-down version of the code to reproduce the issue:

    import pandas as pd
    
    from scrapinghub import ScrapinghubClient
    
    def read_job_items_by_chunk(jobkey, chunk=10000):
        """In order to prevent OOM issues, the jobs' data must be read in
        chunks.
    
        This will return a generator of pandas DataFrames.
        """
    
        client = ScrapinghubClient("APIKEY123")
    
        item_generator = client.get_job(jobkey).items.iter()
    
        while item_generator:
            yield pd.DataFrame(
                [next(item_generator) for _ in range(chunk)]
            )
    
    for df_chunk in read_job_items_by_chunk('123/123/123'):
        # having a small chunk-size like 10000 won't have any problems
        pass

    for df_chunk in read_job_items_by_chunk('123/123/123', chunk=25000):
        # having a big chunk-size like 25000 will throw errors like the one below
        pass
    

    Here's the common error it throws:

    <omitted stack trace above>
    
        [next(item_generator) for _ in range(chunk)]
      File "/usr/local/lib/python2.7/site-packages/scrapinghub/client/proxy.py", line 115, in iter
        _path, requests_params, **apiparams
      File "/usr/local/lib/python2.7/site-packages/scrapinghub/hubstorage/serialization.py", line 33, in mpdecode
        for obj in unpacker:
      File "msgpack/_unpacker.pyx", line 459, in msgpack._unpacker.Unpacker.__next__ (msgpack/_unpacker.cpp:459)
      File "msgpack/_unpacker.pyx", line 390, in msgpack._unpacker.Unpacker._unpack (msgpack/_unpacker.cpp:390)
      File "/usr/local/lib/python2.7/encodings/utf_8.py", line 16, in decode
        return codecs.utf_8_decode(input, errors, True)
    UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 67: invalid start byte
    

    Moreover, it throws a different error when using a much bigger chunk-size, like 50000:

    <omitted stack trace above>
    
        [next(item_generator) for _ in range(chunk)]
      File "/usr/local/lib/python2.7/site-packages/scrapinghub/client/proxy.py", line 115, in iter
        _path, requests_params, **apiparams
      File "/usr/local/lib/python2.7/site-packages/scrapinghub/hubstorage/serialization.py", line 33, in mpdecode
        for obj in unpacker:
      File "msgpack/_unpacker.pyx", line 459, in msgpack._unpacker.Unpacker.__next__ (msgpack/_unpacker.cpp:459)
      File "msgpack/_unpacker.pyx", line 390, in msgpack._unpacker.Unpacker._unpack (msgpack/_unpacker.cpp:390)
    TypeError: unhashable type: 'dict'
    

    I find that the workaround/solution for this is to use a lower value for chunk. So far, 1000 works great.
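
    For reference, here's roughly how the lower-chunk workaround looks with itertools.islice, which also stops cleanly once the iterator is exhausted (the API key, job key and chunk size are placeholders):

    import itertools

    import pandas as pd
    from scrapinghub import ScrapinghubClient

    def read_job_items_by_chunk(jobkey, chunk=1000):
        """Yield pandas DataFrames of at most `chunk` items each."""
        client = ScrapinghubClient("APIKEY123")
        item_generator = client.get_job(jobkey).items.iter()
        while True:
            # islice never calls next() past the end, so the loop ends cleanly
            batch = list(itertools.islice(item_generator, chunk))
            if not batch:
                break
            yield pd.DataFrame(batch)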

    This uses scrapy:1.5 stack in Scrapy Cloud.

    I'm guessing this might have something to do with the long wait that happens while each pandas DataFrame chunk is being processed: by the time the next batch of items is iterated, the server might have deallocated the pointer to it or something.

    May I ask if there might be a solution for this, since a much bigger chunk size would help with the speed of our jobs?

    I've marked it as bug for now as this is quite an unexpected/undocumented behavior.

    Cheers!

    bug 
    opened by BurnzZ 10
  • basic py3.3 compatibility while keeping py2.7 compatibility


    Makes the API callable from py2.x and py3.x. Since Scrapy itself is not yet Python 3 compatible, this might still be useful if one has a control application/API written in py3 that needs to be able to control Scrapy crawlers.

    opened by ms5 9
  • UnicodeDecodeError while fetching items


    It seems like I randomly get errors like this:

     UnicodeDecodeError: 'utf-8' codec can't decode byte 0xde in position 174: invalid continuation byte
    
            at msgpack._cmsgpack.Unpacker._unpack (_unpacker.pyx:443)
            at msgpack._cmsgpack.Unpacker.__next__ (_unpacker.pyx:518)
            at mpdecode (/usr/local/lib/python3.7/site-packages/scrapinghub/hubstorage/serialization.py:33)
            at iter (/usr/local/lib/python3.7/site-packages/scrapinghub/client/proxy.py:115) 
    

    This happens while iterating the items through last_job.items.iter(). It seems to happen about 50% of the time from what I see. I scrape the same website every day and run that function; sometimes it works fine, sometimes it raises that error. I am not sure whether this is an issue with this library or with the ScrapingHub API, but it is very problematic.

    This happens on the latest (2.3.1) version.
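
    As a stopgap, one could retry the whole iteration when the decode error shows up — a rough sketch (the attempt count is arbitrary; this just re-requests the items):

    def iter_items_with_retry(job, attempts=3):
        """Retry the full item iteration if decoding fails mid-stream."""
        for attempt in range(attempts):
            try:
                return list(job.items.iter())
            except UnicodeDecodeError:
                if attempt == attempts - 1:
                    raise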

    opened by mijamo 8
  • Use SHUB_JOBAUTH environment variable in utils.parse_auth method


    Currently, the parse_auth method tries to get the API key from the SH_APIKEY environment variable, which needs to be set manually either in the spider's code or in the Docker image's code. A common practice is to create dummy users and associate them with the project so that real contributors don't have to share their API keys.

    Another option is to use the credentials provided by the SHUB_JOBAUTH variable, which is defined at runtime when executing jobs on the Scrapy Cloud platform.

    Although it's possible to use Collections and Frontera, this is not a regular Dash API key but a JWT token generated at runtime by the JobQ service, which works only for a subset of our API endpoints (JobQ/Hubstorage).

    I'd like to contribute a Pull Request adding support for this ephemeral API key.
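
    A rough sketch of what I have in mind (SH_APIKEY stays preferred, SHUB_JOBAUTH is only the runtime fallback; the exact wiring inside parse_auth may differ):

    import os

    from scrapinghub import ScrapinghubClient

    # SH_APIKEY is a regular Dash API key set by the user; SHUB_JOBAUTH is the
    # ephemeral JWT token that Scrapy Cloud injects into running jobs and that
    # only works for the JobQ/Hubstorage endpoints.
    auth = os.environ.get('SH_APIKEY') or os.environ.get('SHUB_JOBAUTH')
    client = ScrapinghubClient(auth)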

    opened by victor-torres 8
  • Avoid races for hubstorage frontier tests


    Looks like there are races in the sh.hubstorage.frontier tests; it's relatively easy to reproduce by rerunning the Travis job (I can't reproduce it locally): https://travis-ci.org/scrapinghub/python-scrapinghub/jobs/172664296

    After checking the internals, my guess is that because Batchuploader works in a separate thread, trying to upload the next batch of messages from the queue, and the frontier.flush() operation only waits for the queue to become empty (by doing queue.join()), there's a chance that on a context switch the queue is already empty but Batchuploader hasn't called its callback yet, so the frontier.newcounter is not updated yet. In this case a simple short delay should fix it; at least I wasn't able to reproduce the issue after the fix.

    Could you please confirm or disprove my finding?
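
    An alternative to the fixed delay would be to poll for the expected state with a timeout — a hypothetical helper along these lines, not part of the current test suite:

    import time

    def wait_until(condition, timeout=5.0, interval=0.05):
        """Poll `condition()` until it returns True or `timeout` seconds pass."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return True
            time.sleep(interval)
        return False

    # e.g. after frontier.flush():
    # assert wait_until(lambda: new_counter_has_expected_value())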

    opened by vshlapakov 7
  • Add truncate method to collections


    This makes it possible to delete an entire Collection with a single API request, instead of having to iterate through records and make multiple API requests.
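
    Usage would look roughly like this (the store name is just an example):

    store = project.collections.get_store('my_pages')
    store.truncate()  # a single API request drops every record in the collection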

    opened by victor-torres 6
  • How to run a job?


    I can't see how to run a job. There are two examples in the docs. In the project section:

    For example, to schedule a spider run (it returns a job object):
    
    >>> project.jobs.run('spider1', job_args={'arg1':'val1'})
    <scrapinghub.client.Job at 0x106ee12e8>
    
    

    and in the spider section:

    Like project instance, spider instance has jobs field to work with the spider's jobs.
    
    To schedule a spider run:
    
    >>> spider.jobs.run(job_args={'arg1': 'val1'})
    <scrapinghub.client.Job at 0x106ee12e8>
    

    Neither works; both throw AttributeError: 'Jobs' object has no attribute 'run'.
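
    Jobs.run() is what the current docs describe, so an AttributeError here most likely means an older release of the library is installed; checking the installed version and retrying the documented call is the first thing to try (project id and spider name are placeholders):

    import pkg_resources
    print(pkg_resources.get_distribution('scrapinghub').version)

    from scrapinghub import ScrapinghubClient
    client = ScrapinghubClient('APIKEY')
    project = client.get_project(123)
    job = project.jobs.run('spider1', job_args={'arg1': 'val1'})  # as quoted from the docs above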

    opened by ollieglass 6
  • Some imports from standard lib collections are breaking on python 3.10


    Hi everyone,

    Based on an issue from another repo (https://github.com/okfn-brasil/querido-diario/issues/502), I noticed that scrapinghub uses some imports from the standard library collections module that are deprecated and no longer work on Python 3.10.

    In Python 3.8 I get these results in an IPython console:

    In [1]: from collections import Iterator
    <ipython-input-1-4fb967d2a9f8>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
      from collections import Iterator
    
    In [2]: from collections import Iterable
    <ipython-input-2-c0513a1e6784>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
      from collections import Iterable
    
    In [3]: from collections import MutableMapping
    <ipython-input-3-069a7babadbf>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
      from collections import MutableMapping
    

    According to this, it is necessary to change the imports of Iterable, Iterator and MutableMapping to get these items from "collections.abc" instead of just "collections".

    Here is the list of imports that I found (a compatibility shim is sketched after the list):

    • tests/client/test_job.py - from collections import Iterator
    • tests/client/test_frontiers.py - from collections import Iterable
    • tests/client/test_projects.py - from collections import defaultdict, Iterator
    • scrapinghub/hubstorage/resourcetype.py - from collections import MutableMapping
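
    A version-agnostic import along these lines would keep both old and new interpreters happy (a sketch of the usual shim, not the actual patch):

    try:
        # Python 3.3+ location; the only one that still works on 3.10
        from collections.abc import Iterable, Iterator, MutableMapping
    except ImportError:
        # fallback for Python 2.7
        from collections import Iterable, Iterator, MutableMapping
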
    opened by lbmendes 5
  • Collections key not found with library


    I'm curious about the difference between Collection.get() and Collection.iter(key=[KEY])

    >>> key = '456/789'
    >>> store = project.collections.get_store('trump')
    >>> store.set({'_key': key, 'value': 'abc'})
    >>> print(store.list(key=[key]))
    
    [{'value': 'abc', '_key': '456/789'}]  # https://storage.scrapinghub.com/collections/9328/s/trump?key=456%2F789&meta=_key
    
    >>> try:
    >>>     print(store.get(key))
    >>> except scrapinghub.client.exceptions.NotFound as e:
    >>>     print(getattr(e, 'http_error', e))
    
    404 Client Error: Not Found for url: https://storage.scrapinghub.com/collections/9328/s/trump/456/789
    

    I assume that Collection.get() is a handy shortcut for the key-filtered .iter() function, so I guess the point of my issue is that .get() will raise an exception if given bad input, for example keys containing slashes.
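
    In the meantime, the key-filtered .list()/.iter() call seems to be the safer path when keys may contain slashes (same store and key as above):

    key = '456/789'
    store = project.collections.get_store('trump')
    # .list(key=[...]) sends the key as a URL-encoded query parameter, so slashes are fine
    records = store.list(key=[key])
    record = records[0] if records else None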

    opened by stav 5
  • project.jobs close_reason support needed


    I would like to get the last "finished" job for a spider.

    But if I do:

    project.jobs(spider='myspider', state='finished', count=-1)
    

    I will only get jobs with a state of finished, but this may include jobs with a close_reason of shutdown or something other than "finished".

    I would like to be able to do:

    project.jobs(spider='myspider', close_reason='finished', count=-1)
    

    which would of course assume that state is finished as well.
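
    As a workaround until such a filter exists, one can filter client-side with the newer client — a rough sketch (it fetches each job's metadata, so it costs one extra request per job):

    def finished_jobs(project, spider):
        """Yield jobs whose close_reason is 'finished', not just state == 'finished'."""
        for summary in project.jobs.iter(spider=spider, state='finished'):
            job = project.jobs.get(summary['key'])
            if job.metadata.get('close_reason') == 'finished':
                yield job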

    opened by stav 5
  • Drop versions earlier than Python 3.7 and update requirements


    library upgrades

    This update nominally drops support for Python 2.7, 3.5, and 3.6, and adds testing for Python 3.10, to avoid libraries being pinned to very old versions, many of them with bugs or security issues.

    It's "nominally" because the code hasn't been changed except for deprecations enforced in Python 3.9 or 3.10.

    disabled tests

    Tests that required running test servers are disabled:

    • Running the servers locally is too complicated
    • There are no changes to the library's logic. Only required library versions were changed

    The tests can be re-enabled by someone with access to test servers.

    maintenance 
    opened by apalala 4
  • Add retry logic to Job Tag Update function


    Description

    An Internal Server Error pops up whenever a large number of tag updates run in parallel or sequentially.

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/site-packages/project-1.0-py3.10.egg/XX/utils/workflow/__init__.py", line 930, in run
        start_stage, active_stage, ran_stages = self.setup_continuation(
      File "/usr/local/lib/python3.10/site-packages/project-1.0-py3.10.egg/XX/utils/workflow/__init__.py", line 667, in setup_continuation
        self._discard_jobs(start_stage, ran_stages)
      File "/usr/local/lib/python3.10/site-packages/project-1.0-py3.10.egg/XX/utils/workflow/__init__.py", line 705, in _discard_jobs
        self.get_job(jobinfo["key"]).update_tags(
      File "/usr/local/lib/python3.10/site-packages/scrapinghub/client/[jobs.py](http://jobs.py/)", line 503, in update_tags
        self._client._connection._post('jobs_update', 'json', params)
      File "/usr/local/lib/python3.10/site-packages/scrapinghub/[legacy.py](http://legacy.py/)", line 120, in _post
        return self._request(url, params, headers, format, raw, files)
      File "/usr/local/lib/python3.10/site-packages/scrapinghub/client/[exceptions.py](http://exceptions.py/)", line 98, in wrapped
        raise ServerError(http_error=exc)
    scrapinghub.client.exceptions.ServerError: Internal server error
    

    This is not a problem if you are doing updates for a couple of jobs, but if you want to mass-update, this error will pop up eventually.

    Adding adaptable retry logic to the update_tags function around that ServerError exception would make it easier to debug and implement large-scale workflows.
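
    Until retries land in the library, a small wrapper around the call is a possible stopgap (sketch only; attempt count and backoff are arbitrary):

    import time

    from scrapinghub.client.exceptions import ServerError

    def update_tags_with_retry(job, attempts=5, backoff=2.0, **kwargs):
        """Retry job.update_tags() when a transient Internal Server Error occurs."""
        for attempt in range(1, attempts + 1):
            try:
                return job.update_tags(**kwargs)
            except ServerError:
                if attempt == attempts:
                    raise
                time.sleep(backoff * attempt)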

    opened by ftadao 0
  • Incorrect information for Samples in Job documentation


    At this link - https://python-scrapinghub.readthedocs.io/en/latest/client/overview.html#job-data-1 - the description for samples refers to the job stats, which is confusing and seems incorrect.

    I think it should be the runtime samples that the job uploaded.

    Please correct me if I have misinterpreted this.


    opened by gutsytechster 0
  • Jobs.iter() is unable to accept has_tag as a list.


    From the docs:

    >>> jobs_summary = project.jobs.iter(
    ...     has_tag=['new', 'verified'],
    ...     lacks_tag='obsolete')

    has_tag accepts a string but not a list. lacks_tag works perfectly fine with both.
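
    A possible workaround until lists are supported: issue one iter() call per tag and de-duplicate by job key — a sketch that assumes the tags should be OR-combined, as in the docs example:

    def iter_jobs_with_any_tag(project, tags, **kwargs):
        """Emulate has_tag=[...] by querying one tag at a time."""
        seen = set()
        for tag in tags:
            for summary in project.jobs.iter(has_tag=tag, lacks_tag='obsolete', **kwargs):
                if summary['key'] not in seen:
                    seen.add(summary['key'])
                    yield summary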

    opened by PeteRoyAlex 0
  • KeyError: 'status' when trying to schedule spider


    I am getting this error when trying to schedule a spider. This is happening with version 2.3.1.

    Traceback (most recent call last):
      File "/home/molveyra/.local/share/virtualenvs/mollie-AtuAN_AE/lib/python3.8/site-packages/scrapinghub/legacy.py", line 157, in _decode_response
        if data['status'] == 'ok':
    KeyError: 'status'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/molveyra/.local/share/virtualenvs/mollie-AtuAN_AE/lib/python3.8/site-packages/scrapinghub/client/exceptions.py", line 69, in wrapped
        return method(*args, **kwargs)
      File "/home/molveyra/.local/share/virtualenvs/mollie-AtuAN_AE/lib/python3.8/site-packages/scrapinghub/client/__init__.py", line 19, in _request
        return super(Connection, self)._request(*args, **kwargs)
      File "/home/molveyra/.local/share/virtualenvs/mollie-AtuAN_AE/lib/python3.8/site-packages/scrapinghub/legacy.py", line 143, in _request
        return self._decode_response(response, format, raw)
      File "/home/molveyra/.local/share/virtualenvs/mollie-AtuAN_AE/lib/python3.8/site-packages/scrapinghub/legacy.py", line 169, in _decode_response
        raise APIError("JSON response does not contain status")
    scrapinghub.legacy.APIError: JSON response does not contain status
    
    enhancement 
    opened by kalessin 6
  • collections.get_store is not working as documented


    Going through the collections docs, step 2 says: call .get_store(<somename>) to create or access the named collection you want (the collection will be created automatically if it doesn't exist); you get a "store" object back. But when you try this:

    >>> store = collections.get_store('store_which_does_not_exist')
    >>> store.get('key_which_does_not_exist')
    DEBUG:https://storage.scrapinghub.com:443 "GET /collections/462630/s/store_which_does_not_exist/key_which_does_not_exist HTTP/1.1" 404 46
    2021-02-04 13:33:20 [urllib3.connectionpool] DEBUG: https://storage.scrapinghub.com:443 "GET /collections/462630/s/store_which_does_not_exist/key_which_does_not_exist HTTP/1.1" 404 46
    DEBUG:<Response [404]>: b'unknown collection store_which_does_not_exist\n'
    2021-02-04 13:33:20 [HubstorageClient] DEBUG: <Response [404]>: b'unknown collection store_which_does_not_exist\n'
    *** scrapinghub.client.exceptions.NotFound: unknown collection store_which_does_not_exist
    

    When we .set some value on a store which doesn't exist, the store is created and then the values are stored.

    >>> store.set({'_key': 'some_key', 'value': 'some_value'})
    DEBUG:https://storage.scrapinghub.com:443 "POST /collections/462630/s/store_which_does_not_exist HTTP/1.1" 200 0
    2021-02-04 13:36:56 [urllib3.connectionpool] DEBUG: https://storage.scrapinghub.com:443 "POST /collections/462630/s/store_which_does_not_exist HTTP/1.1" 200 0

    According to the docs, shouldn't the store be created when we call .get_store?
    
    bug docs 
    opened by realslimshanky-sh 0
Releases(2.4.0)
  • 2.4.0(Mar 10, 2022)

    What's Changed

    • update iter() for better fallback in getting 'meta' argument by @BurnzZ in https://github.com/scrapinghub/python-scrapinghub/pull/146
    • switch from Travis to GH actions by @pawelmhm in https://github.com/scrapinghub/python-scrapinghub/pull/162
    • Python 3.10 compatibility by @elacuesta in https://github.com/scrapinghub/python-scrapinghub/pull/166

    New Contributors

    • @pawelmhm made their first contribution in https://github.com/scrapinghub/python-scrapinghub/pull/162

    Full Changelog: https://github.com/scrapinghub/python-scrapinghub/compare/2.3.1...2.4.0

  • 2.3.1(Mar 13, 2020)

  • 2.3.0(Dec 17, 2019)

  • 2.1.1(Apr 25, 2019)

    • add Python 3.7 support
    • update msgpack dependency
    • fix iter logic for items/requests/logs
    • add truncate method to collections
    • improve documentation
  • 2.2.1(Aug 7, 2019)

  • 2.2.0(Aug 7, 2019)

  • 2.1.0(Jan 14, 2019)

    • add an option to schedule jobs with custom environment variables
    • fallback to SHUB_JOBAUTH environment variable if SH_APIKEY is not set
    • provide a unified connection timeout used by both internal clients
    • increase a chunk size when working with the items stats endpoint

    Python 3.3 is considered unmaintained.

  • 2.0.0(Mar 29, 2017)

    We're very happy to finally announce the official major release of the new Scrapinghub Python client. Documentation is available online via Read the Docs: http://python-scrapinghub.readthedocs.io/

  • 1.9.0(Nov 29, 2016)

    python-scrapinghub 1.9.0

    • python-hubstorage merged into python-scrapinghub
    • all tests are improved and rewritten with py.test
    • hubstorage tests use vcrpy cassettes, work faster and don't require any external services to run

    python-hubstorage is going to be considered deprecated; its next version will contain a deprecation warning and a proposal to use python-scrapinghub >= 1.9.0 instead.

Owner
Scrapinghub
Turn web content into useful data