EBay-email-tracker - Scrapes an entire search page of a particular item on eBay and sends regular updates to an email address

Overview

Introduction

This is a project I built with the sole intent of learning more about scraping websites, manipulating data, and delivering it through a medium (in this case, email). It is not intended for commercial use.

The program tracks an entire eBay search page for a particular item and sends automated email updates with the link and price of each entry.
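As a rough illustration of the scraping step, a search page can be fetched with requests and parsed with BeautifulSoup and lxml (all dependencies of this package). The URL parameters and the s-item CSS classes below are assumptions about eBay's current markup rather than code taken from this project, and they may change at any time.

# Minimal sketch of scraping one eBay search page; selectors and URL
# parameters are assumptions, not this package's actual code.
import requests
from bs4 import BeautifulSoup

def scrape_search_page(item, min_price, max_price):
    url = (
        "https://www.ebay.com/sch/i.html"
        f"?_nkw={item.replace(' ', '+')}"
        f"&_udlo={min_price}&_udhi={max_price}"
    )
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "lxml")
    entries = []
    for listing in soup.select("li.s-item"):
        title = listing.select_one(".s-item__title")
        price = listing.select_one(".s-item__price")
        link = listing.select_one("a.s-item__link")
        if title and price and link:
            entries.append({"title": title.get_text(strip=True),
                            "price": price.get_text(strip=True),
                            "link": link["href"]})
    return entries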

The program asks you for the following (a rough sketch of collecting these inputs is shown after the list):

  • The item you want to track
  • The number of hours you want between updates
  • The price range
  • The email address that should receive the updates
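These prompts could be gathered with plain input() calls, roughly as sketched below; the variable names and prompt wording are illustrative rather than the package's actual code.

# Illustrative only: how the four prompts might be collected and parsed.
item = input("Item to track: ").strip()
hours_between_updates = float(input("Hours between updates: "))
min_price = float(input("Minimum price: "))
max_price = float(input("Maximum price: "))
recipient = input("Email address to receive the updates: ").strip()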

Installation

The program only works locally. It creates a CSV file containing all the data in the directory it is run from.
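Since Pandas is one of the dependencies, the data is presumably written with DataFrame.to_csv, along the lines of the sketch below. The file name and column layout are placeholders, not necessarily what the package actually uses.

# Sketch of persisting scraped entries to a CSV in the current working directory.
# The file name and columns are assumptions for illustration.
import pandas as pd

def save_entries(entries, path="ebay_tracker_data.csv"):
    df = pd.DataFrame(entries, columns=["title", "price", "link"])
    df.to_csv(path, index=False)
    return df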

If you would like to test the program, install it from PyPI and call the tracker from Python:

pip install eBay_email_tracker

from ebay_email_tracker import tracker
tracker()

The program only works with a Gmail account as the sender. You have to authorize access to less secure apps in your Gmail settings.

I recommend creating a dedicated app password for this, which you can do under "App passwords" in your Google account settings. App passwords require 2-factor authentication to be enabled on the account, and they allow a login attempt with the correct credentials to connect without going through the 2-factor prompt.

Set environment variables in your Windows operating system: (Control Panel -> View advanced system settings -> Environment Variables -> User variables for...)

Variable                 Value
EBAY_TRACKER_EMAIL       The email address you want to send the information from
EBAY_TRACKER_PASSWORD    The password for that email (preferably an app password)
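With those variables set, sending an update through Gmail typically looks like the sketch below. smtp.gmail.com over SSL on port 465 is Gmail's standard SMTP endpoint; the subject and body here are placeholders rather than the package's actual output.

# Sketch of reading the credentials and sending one update email through Gmail.
# Assumes the app password described above; subject and body are placeholders.
import os
import smtplib
from email.message import EmailMessage

sender = os.environ["EBAY_TRACKER_EMAIL"]
password = os.environ["EBAY_TRACKER_PASSWORD"]

def send_update(recipient, body):
    msg = EmailMessage()
    msg["Subject"] = "eBay tracker update"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, password)
        server.send_message(msg)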

The program depends on these packages:

  • Numpy
  • BeautifulSoup
  • lxml
  • requests
  • Pandas
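These should be pulled in automatically when the package itself is installed with pip; if you are working from source instead, the corresponding pip package names are presumably:

pip install numpy pandas beautifulsoup4 lxml requests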

Improvements

  • I realized that the program often sends entries that are no longer available. While the program sleeps, it could re-check each entry that was not seen in the most recent search and see whether it still has a price listed; if not, delete it from the data (a rough sketch of this check follows the list).
  • Currently, the only way to stop the program is to interrupt it. I could implement a way for the user to shut it down cleanly when desired.
  • While the program is running, nothing tells the user that it is working apart from the first email sent; the console stays unchanged for the whole duration. I could print status messages so the user knows the program is working as intended.
  • The program only works locally. I could implement a database that tracks the items of different users and sends that information without storing each entry locally, but that would be a completely different project.
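A rough sketch of the first improvement: re-fetch each stale entry's page and drop it when no price can be found. The price selector below is purely illustrative; eBay's item-page markup would need to be checked.

# Illustrative check: re-fetch an entry's page and keep it only if a price
# element is still present. The selector name is an assumption.
import requests
from bs4 import BeautifulSoup

def still_listed(url):
    page = requests.get(url, timeout=30)
    if page.status_code != 200:
        return False
    soup = BeautifulSoup(page.text, "lxml")
    return soup.select_one(".x-price-primary") is not None  # assumed price selector

# Usage: entries = [e for e in entries if still_listed(e["link"])]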
Owner
Luis M. Capdevielle
Business Analyst Graduate.
Scrapy-based cyber security news finder

Cyber-Security-News-Scraper Scrapy-based cyber security news finder Goal To keep up to date on the constant barrage of information within the field of

2 Nov 01, 2021
Twitter Eye is a Twitter Information Gathering Tool With Twitter Eye

Twitter Eye is a Twitter Information Gathering Tool With Twitter Eye, you can search with various keywords and usernames on Twitter.

Jolanda de Koff 19 Dec 12, 2022
A scalable frontier for web crawlers

Frontera Overview Frontera is a web crawling framework consisting of crawl frontier, and distribution/scaling primitives, allowing to build a large sc

Scrapinghub 1.2k Jan 02, 2023
Python framework to scrape Pastebin pastes and analyze them

pastepwn - Paste-Scraping Python Framework Pastebin is a very helpful tool to store or rather share ascii encoded data online. In the world of OSINT,

Rico 105 Dec 29, 2022
Automated data scraper for Thailand COVID-19 data

The Researcher COVID data Automated data scraper for Thailand COVID-19 data Accessing the Data 1st Dose Provincial Vaccination Data 2nd Dose Provincia

Porames Vatanaprasan 31 Apr 17, 2022
This tool can be used to extract information from any website

WEB-INFO- This tool can be used to extract information from any website Install Termux and run the command --- $ apt-get update $ apt-get upgrade $ pk

1 Oct 24, 2021
Python script to check if there are any differences in the responses of an application when the request comes from a search engine's crawler.

crawlersuseragents This Python script can be used to check if there are any differences in the responses of an application when the request comes from a se

Podalirius 13 Dec 27, 2022
A Web Scraping API for MDL (My Drama List) for Python.

PyMDL An API for MyDramaList (MDL) based on web scraping for Python. Description An API for MDL to make your life easier in retrieving and working on dat

6 Dec 10, 2022
Python script that crawls the first Shodan page and checks for the DBLTEK vulnerability

🐛 MASS DBLTEK EXPLOIT CHECKER USING SHODAN 🕸 Python script that crawls the first Shodan page and checks for the DBLTEK vulnerability

Divin 4 Jan 09, 2022
Scrapes all articles and their headlines from theonion.com

The Onion Article Scraper Scrapes all articles and their headlines from the satirical news website https://www.theonion.com Also see Clickhole Article

0 Nov 17, 2021
Crawler for the Fundamentus.com site using the Scrapy framework, covering both the detailed tab and the summary tab.

Crawler for the Fundamentus.com site using the Scrapy framework, covering both the detailed tab and the summary tab. (All the information)

Guilherme Silva Uchoa 3 Oct 04, 2022
simple http & https proxy scraper and checker

simple http & https proxy scraper and checker

Neospace 11 Nov 15, 2021
Binance harvester - A Python 3 script to harvest data from the Binance socket stream and calculate popular TA indicators and produce lists of top trending coins

Binance harvester - A Python 3 script to harvest data from the Binance socket stream and calculate popular TA indicators and produce lists of top trending coins

68 Oct 08, 2022
Web scraping library and command-line tool for text discovery and extraction (main content, metadata, comments)

trafilatura: Web scraping tool for text discovery and retrieval Description Trafilatura is a Python package and command-line tool which seamlessly dow

Adrien Barbaresi 704 Jan 06, 2023
High available distributed ip proxy pool, powerd by Scrapy and Redis

High-availability IP proxy pool. README | Chinese documentation. All of the IP resources collected by this project come from the internet; the vision is to provide a high-availability, low-latency, highly anonymous IP proxy pool for large crawler projects. Project highlights: rich proxy sources; precise proxy crawling and extraction; strict and reasonable proxy validation; complete monitoring and strong robustness; flexible architecture that is easy to extend; distributed deployment of every component. Quick start: note, please use the code from the release

SpiderClub 5.2k Jan 03, 2023
A repository with scraping code and soccer dataset from understat.com.

UNDERSTAT - SHOTS DATASET As many people interested in soccer analytics know, Understat is an amazing source of information. They provide Expected Goa

douglasbc 48 Jan 03, 2023
Creating Scrapy scrapers via the Django admin interface

django-dynamic-scraper Django Dynamic Scraper (DDS) is an app for Django which builds on top of the scraping framework Scrapy and lets you create and

Holger Drewes 1.1k Dec 17, 2022
CRI Scrape is a tool to get general info about the Italian Red Cross in the GAIA Platform

CRI Scrape CRI Scrape is a tool to get general info about the Italian Red Cross in the GAIA Platform Disclaimer This code is only for educational purposes. So

Vincenzo Cardone 0 Jul 23, 2022
Scrapy-soccer-games - Scraping information about soccer games from a few websites

scrapy-soccer-games The purpose of this project is to collect information from tables of

Caio Alves 2 Jul 20, 2022