Web Content Retrieval for Humans™

Overview

Lassie


Lassie is a Python library for retrieving basic content from websites.

https://i.imgur.com/QrvNfAX.gif

Usage

>>> import lassie
>>> lassie.fetch('http://www.youtube.com/watch?v=dQw4w9WgXcQ')
{
    'description': u'Music video by Rick Astley performing Never Gonna Give You Up. YouTube view counts pre-VEVO: 2,573,462 (C) 1987 PWL',
    'videos': [{
        'src': u'http://www.youtube.com/v/dQw4w9WgXcQ?autohide=1&version=3',
        'height': 480,
        'type': u'application/x-shockwave-flash',
        'width': 640
    }, {
        'src': u'https://www.youtube.com/embed/dQw4w9WgXcQ',
        'height': 480,
        'width': 640
    }],
    'title': u'Rick Astley - Never Gonna Give You Up',
    'url': u'http://www.youtube.com/watch?v=dQw4w9WgXcQ',
    'keywords': [u'Rick', u'Astley', u'Sony', u'BMG', u'Music', u'UK', u'Pop'],
    'images': [{
        'src': u'http://i1.ytimg.com/vi/dQw4w9WgXcQ/hqdefault.jpg?feature=og',
        'type': u'og:image'
    }, {
        'src': u'http://i1.ytimg.com/vi/dQw4w9WgXcQ/hqdefault.jpg',
        'type': u'twitter:image'
    }, {
        'src': u'http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico',
        'type': u'favicon'
    }, {
        'src': u'http://s.ytimg.com/yts/img/favicon_32-vflWoMFGx.png',
        'type': u'favicon'
    }],
    'locale': u'en_US'
}

Install

Install Lassie via pip

$ pip install lassie

or, with easy_install

$ easy_install lassie

But, hey... that's up to you.

Documentation

Documentation can be found here: https://lassie.readthedocs.org/

Comments
  • Fix possible ValueError in convert_to_int caused by values like 1px

    When trying to parse http://www.wired.com/wiredscience/2013/09/rim-fire-map-color-scale/ a ValueError was raised in convert_to_int, because the page has image width and height values ending in px.

    I changed the function to be more liberal regarding dimension values, by extracting the digits before casting to int. I added a test for this.

    Not sure though if the value should be converted to int at all or kept as a string.

    opened by yaph 14
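The tolerant parsing described above can be sketched as follows. This is an illustrative stand-alone function, not lassie's actual convert_to_int: it keeps the leading digits of a dimension value before casting, so strings like '1px' no longer raise.

```python
import re

def convert_to_int(value):
    """Sketch of a tolerant dimension parser: keep only the leading
    digits, so values like '1px' convert instead of raising ValueError."""
    if value is None:
        return None
    match = re.match(r'\d+', str(value).strip())
    return int(match.group()) if match else None

print(convert_to_int('1px'))   # 1
print(convert_to_int('640'))   # 640
print(convert_to_int('auto'))  # None
```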
  • Import fails on Python3.5

    It appears something is seriously broken when trying to install lassie with Python 3.5. The install goes fine, but importing fails:

    Python 3.5.0 (default, Sep 23 2015, 04:41:38)
    [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import lassie
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/__init__.py", line 19, in <module>
        from .api import fetch
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/api.py", line 11, in <module>
        from .core import Lassie
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/core.py", line 13, in <module>
        from bs4 import BeautifulSoup
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/__init__.py", line 30, in <module>
        from .builder import builder_registry, ParserRejectedMarkup
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/builder/__init__.py", line 308, in <module>
        from . import _htmlparser
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/builder/_htmlparser.py", line 7, in <module>
        from html.parser import (
    ImportError: cannot import name 'HTMLParseError'
    
    opened by gnunicorn 6
  • Add optional structured properties for og:image and og:video

    From http://ogp.me/#structured.

    The og:video tag has the same structured properties as og:image.

    • og:image:url - Identical to og:image.
    • og:image:secure_url - An alternate url to use if the webpage requires HTTPS.
    • og:image:type - A MIME type for this image.
    • og:image:width - The number of pixels wide.
    • og:image:height - The number of pixels high.

    opened by jpadilla 6
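Collecting these structured properties can be sketched with the stdlib html.parser; this is an illustration of the idea, not lassie's implementation (which uses BeautifulSoup):

```python
from html.parser import HTMLParser

class OGImageParser(HTMLParser):
    """Collect the og:image structured properties from ogp.me/#structured."""
    PROPS = ('og:image', 'og:image:url', 'og:image:secure_url',
             'og:image:type', 'og:image:width', 'og:image:height')

    def __init__(self):
        super().__init__()
        self.image = {}

    def handle_starttag(self, tag, attrs):
        if tag != 'meta':
            return
        attrs = dict(attrs)
        if attrs.get('property') in self.PROPS:
            self.image[attrs['property']] = attrs.get('content')

html = '''<meta property="og:image" content="http://example.com/a.jpg" />
<meta property="og:image:width" content="640" />
<meta property="og:image:height" content="480" />'''
parser = OGImageParser()
parser.feed(html)
print(parser.image)
```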
  • Optional support for canonical URL meta tag.

    This is very roughed in, but it adds support for returning the URL as provided by the canonical link element.

    There isn't anything to determine precedence with og:url.

    Has passing tests, and is disabled by default.

    Needed this for a project, not sure if it would be useful upstream.

    enhancement 
    opened by jmhobbs 5
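Extracting the canonical link element itself is straightforward; here is a stand-alone sketch with the stdlib html.parser (not the PR's code, which plugs into lassie's filter machinery):

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Grab the href of <link rel="canonical" href="...">."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'link' and attrs.get('rel') == 'canonical':
            self.canonical = attrs.get('href')

p = CanonicalParser()
p.feed('<link rel="canonical" href="http://example.com/post" />')
print(p.canonical)  # http://example.com/post
```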
  • Possible relative URL in og:image

    I just came across a page with a relative path value for og:image. Adding a call to urljoin on the src attribute in line 186 of core.py would be a possibility, but maybe it's better to check for the src prop (and possibly the href prop too) in _filter_meta_data and do it there. What do you think about that?

    opened by yaph 5
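Resolving a relative og:image value against the page URL is exactly what urllib.parse.urljoin does. A small illustration, using hypothetical URLs:

```python
from urllib.parse import urljoin

page_url = 'http://example.com/articles/story.html'  # hypothetical page URL
og_image = 'images/cover.jpg'                        # relative og:image value

# urljoin resolves the relative path against the page's location.
absolute = urljoin(page_url, og_image)
print(absolute)  # http://example.com/articles/images/cover.jpg
```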
  • Can't get the full article.

    Hi, I want to extract the full article from the source URL, but I only get the title of the article and small parts of it under the "description" parameter.

    opened by yaseenox 4
  • Update requests==2.8 in setup.py, too

    The changelog for the last release states that requests is now pinned at version 2.8, yet when installing the latest version of lassie, it requires (and installs) version 2.6 – the setup.py hasn't been updated to reflect that change, which breaks the installation. This PR corrects that.

    opened by gnunicorn 4
  • Please allow to configure the requests session

    It would be useful to be able to configure the requests session used to retrieve the requested URL.

    You could perhaps initialize a default session object in the Lassie constructor, which the user could then configure, and/or add a parameter to Lassie.fetch() to override the default session.

    opened by tawmas 4
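The proposed pattern — a default session created in the constructor, overridable per call — can be sketched with a stand-in session class. Fetcher and FakeSession are hypothetical names for illustration, not lassie's API:

```python
class FakeSession:
    """Stand-in for requests.Session, so the pattern is runnable here."""
    def __init__(self):
        self.headers = {}

    def get(self, url):
        return '<html>%s</html>' % url

class Fetcher:
    def __init__(self, session=None):
        # Default session built here; the user can configure it afterwards.
        self.session = session or FakeSession()

    def fetch(self, url, session=None):
        # Per-call override takes precedence over the default session.
        return (session or self.session).get(url)

f = Fetcher()
f.session.headers['User-Agent'] = 'my-bot/1.0'  # configure the default
print(f.fetch('http://example.com'))
```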
  • Bump requests from 2.18.4 to 2.20.0


    Bumps requests from 2.18.4 to 2.20.0.

    Changelog

    Sourced from requests's changelog.

    2.20.0 (2018-10-18)

    Bugfixes

    • Content-Type header parsing is now case-insensitive (e.g. charset=utf8 vs Charset=utf8).
    • Fixed exception leak where certain redirect urls would raise uncaught urllib3 exceptions.
    • Requests removes Authorization header from requests redirected from https to http on the same hostname. (CVE-2018-18074)
    • should_bypass_proxies now handles URIs without hostnames (e.g. files).

    Dependencies

    • Requests now supports urllib3 v1.24.

    Deprecations

    • Requests has officially stopped support for Python 2.6.

    2.19.1 (2018-06-14)

    Bugfixes

    • Fixed issue where status_codes.py's init function failed trying to append to a __doc__ value of None.

    2.19.0 (2018-06-12)

    Improvements

    • Warn user about possible slowdown when using cryptography version < 1.3.4
    • Check for invalid host in proxy URL, before forwarding request to adapter.
    • Fragments are now properly maintained across redirects. (RFC7231 7.1.2)
    • Removed use of cgi module to expedite library load time.
    • Added support for SHA-256 and SHA-512 digest auth algorithms.
    • Minor performance improvement to Request.content.
    • Migrate to using collections.abc for 3.7 compatibility.

    Bugfixes

    • Parsing empty Link headers with parse_header_links() no longer returns a bogus entry.
    ... (truncated)
    Commits
    • bd84045 v2.20.0
    • 7fd9267 remove final remnants from 2.6
    • 6ae8a21 Add myself to AUTHORS
    • 89ab030 Use comprehensions whenever possible
    • 2c6a842 Merge pull request #4827 from webmaven/patch-1
    • 30be889 CVE URLs update: www sub-subdomain no longer valid
    • a6cd380 Merge pull request #4765 from requests/encapsulate_urllib3_exc
    • bbdbcc8 wrap url parsing exceptions from urllib3's PoolManager
    • ff0c325 Merge pull request #4805 from jdufresne/https
    • b0ad249 Prefer https:// for URLs throughout project
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 3
  • Added support for open graph optional property `site_name`.

    Hi, I added support for the open graph site_name property.

    This parses the following tag, <meta property="og:site_name" content="IMDb" />, into {"site_name": "IMDb"}.

    opened by cameronmaske 3
  • make image urls absolute and added mock to test_requirements

    I made a change so that when lassie.fetch is called with all_images=True, the images' src attributes contain absolute URLs. Since lassie already comes with a function that makes relative URLs absolute, I think this is better done inside lassie than in the application which imports it.

    When trying to run the tests after the changes the mock package was missing, so I added it to the test_requirements.txt file.

    opened by yaph 2
  • docs: Fix a few typos

    There are small typos in:

    • docs/usage/advanced_usage.rst

    Fixes:

    • Should read attributes rather than attibutes.
    • Should read actual rather than acutal.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Any reason to pindown upper version in requirements.txt

    Hi,

    Since lassie is a library, limiting upper versions for dependencies as in

    requests>=2.18.4,<3.0.0
    beautifulsoup4>=4.9.0,<4.10.0
    

    can lead to conflicts for software using it, e.g. on pip install:

    The conflict is caused by:
        The user requested beautifulsoup4==4.10.0
        lassie 0.11.11 depends on beautifulsoup4<4.10.0 and >=4.9.0
    

    Is there any reason for the pindown?

    opened by idlesign 1
  • Encoding issues with german umlauts

    Hi,

    when getting the description from a German website, the "ü", "ä", etc. end up as "Ã¼", "Ã¤", etc. Example: https://finanzguru.de/ Result:

    Finanzguru - Finanzen magisch einfach Finanzen magisch einfach. Verwalte deine Verträge, kündige per Fingertipp und spare Geld mit meinen Spartipps. Alles an einem Ort und komplett kostenfrei. Einfacher war es noch nie.

    I am using lassie within Django.

    opened by leugh 0
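This looks like the classic UTF-8-decoded-as-Latin-1 mojibake. The round trip (and, if the mangling happened exactly once, the repair) can be demonstrated in a few lines:

```python
# UTF-8 bytes decoded as Latin-1 produce the mojibake the issue describes.
text = 'Verwalte deine Verträge'
mangled = text.encode('utf-8').decode('latin-1')
print(mangled)  # Verwalte deine VertrÃ¤ge

# If the mangling happened exactly once, it can be reversed:
repaired = mangled.encode('latin-1').decode('utf-8')
assert repaired == text
```

The real fix is to make sure the response bytes are decoded with the encoding the page actually declares, rather than falling back to a Latin-1 default.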
  • Add new filters for embeddable items

    The idea is to return as much data as we can in the API so users can possibly embed media. (i.e. Spotify tracks)

    We'll probably add a new embed.py and return a new embed key in the lassie API response.

    enhancement 
    opened by michaelhelmick 0
Releases: 0.11.11