Photon

Incredibly fast crawler designed for OSINT.

Overview

Photon Wiki • How To Use • Compatibility • Photon Library • Contribution • Roadmap

Key Features

Data Extraction

Photon can extract the following data while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, Amazon buckets etc.)
  • Files (pdf, png, xml etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & Endpoints present in them
  • Strings matching custom regex pattern
  • Subdomains & DNS related data

The extracted information is saved in an organized manner or can be exported as JSON.
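
For example, assuming a recent Photon version where the JSON export flag is available (the exact flag name may differ in your build; check python3 photon.py --help), a crawl whose results are exported as JSON might look like:

$ python3 photon.py -u "https://example.com" --export=json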


Flexible

Control timeout, delay, add seeds, exclude URLs matching a regex pattern and other cool stuff. The extensive range of options provided by Photon lets you crawl the web exactly the way you want.
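
As a rough example (only -u, -l, -t, --timeout and --exclude are flags referenced elsewhere on this page; treat the values as placeholders and check --help for your version), a polite crawl that skips logout links might look like:

$ python3 photon.py -u "https://example.com" -l 3 -t 10 --timeout 5 --exclude "logout"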

Genius

Photon's smart thread management & refined logic give you top-notch performance.

Still, crawling can be resource-intensive, but Photon has some tricks up its sleeve. For example, you can fetch URLs archived by archive.org and use them as seeds with the --wayback option.
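
A minimal sketch of that, with example.com standing in for the real target:

$ python3 photon.py -u "https://example.com" --wayback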

Plugins

Docker

Photon can be launched using a lightweight Python-Alpine (103 MB) Docker image.

$ git clone https://github.com/s0md3v/Photon.git
$ cd Photon
$ docker build -t photon .
$ docker run -it --name photon photon:latest -u google.com

To view the results, you can either head over to the local Docker volume, which you can find by running docker inspect photon, or mount the target loot folder:

$ docker run -it --name photon -v "$PWD:/Photon/google.com" photon:latest -u google.com
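
If you go the volume route instead, the standard Docker CLI can show where the container's mounts live on the host, for example:

$ docker inspect -f '{{ json .Mounts }}' photon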

Frequent & Seamless Updates

Photon is under heavy development, and updates for fixing bugs, optimizing performance & adding new features are rolled out regularly.

If you would like to see the features and issues that are being worked on, you can do that on the Development project board.

Updates can be installed & checked for with the --update option. Photon has seamless update capabilities which means you can update Photon without losing any of your saved data.

Contribution & License

You can contribute in the following ways:

  • Report bugs
  • Develop plugins
  • Add more "APIs" for ninja mode
  • Give suggestions to make it better
  • Fix issues & submit a pull request

Please read the guidelines before submitting a pull request or issue.

Do you want to have a conversation in private? Hit me up on Twitter, my inbox is open :)

Photon is licensed under the GPL v3.0 license.

Comments
  • error when scanning IP

    error when scanning IP

    Line 169 errors out if you run Photon against an IP. The easiest fix might be to just add a try/except, but there is probably a more elegant solution (a rough sketch is included after the traceback below).

    I'm pretty sure this was working before.

    [email protected]:/opt/Photon# python /opt/Photon/photon.py -u http://192.168.0.213:80
          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/ v1.1.1
    
    Traceback (most recent call last):
      File "/opt/Photon/photon.py", line 169, in <module>
        domain = get_fld(host, fix_protocol=True) # Extracts top level domain out of the host
      File "/usr/local/lib/python2.7/dist-packages/tld/utils.py", line 387, in get_fld
        search_private=search_private
      File "/usr/local/lib/python2.7/dist-packages/tld/utils.py", line 339, in process_url
        raise TldDomainNotFound(domain_name=domain_name)
    tld.exceptions.TldDomainNotFound: Domain 192.168.0.213 didn't match any existing TLD name!
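
    A minimal sketch of the try/except idea (not the project's actual fix; safe_get_fld is a hypothetical helper):

    from tld import get_fld
    from tld.exceptions import TldDomainNotFound

    def safe_get_fld(host):
        try:
            return get_fld(host, fix_protocol=True)
        except TldDomainNotFound:
            # Bare IPs (and unrecognised TLDs) have no registrable domain,
            # so fall back to using the host itself.
            return host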
    
    opened by sethsec 18
  • Option to skip crawling of URLs that match a regex pattern

    Option to skip crawling of URLs that match a regex pattern

    Added option to skip crawling of URLs that match a regex pattern, changed handling of seeds, and removed double spaces.

    I am assuming this is what you meant in the ideas column; tell me if I'm way off though :)
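
    For reference, a hypothetical sketch of what such a skip check could look like (should_skip is an illustrative helper, not the PR's actual code):

    import re

    def should_skip(url, exclude_pattern):
        """Return True when the URL matches the user-supplied exclude regex."""
        return bool(exclude_pattern) and re.search(exclude_pattern, url) is not None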

    opened by connorskees 17
  • Added two new options: -o/--output and --stdout

    Added two new options: -o/--output and --stdout

    Awesome tool. I've been looking for something like this for a while to integrate with something I am building! I added two new options for you to consider:

    1. -o/--output option that allows the user to specify an output directory (overriding the default).

    Command: python photon.py -u http://10.10.10.102:80 -l 2 -t100 -o /pentest/photontest

    In this case, all of the output files will be written to /pentest/photontest:

    [email protected]:/pentest/photontest# ls -ltr
    total 24
    -rw-r--r-- 1 root root    0 Jul 25 11:11 scripts.txt
    -rw-r--r-- 1 root root 3260 Jul 25 11:11 robots.txt
    -rw-r--r-- 1 root root 3260 Jul 25 11:11 links.txt
    -rw-r--r-- 1 root root   17 Jul 25 11:11 intel.txt
    -rw-r--r-- 1 root root  437 Jul 25 11:11 fuzzable.txt
    -rw-r--r-- 1 root root  146 Jul 25 11:11 files.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 failed.txt
    -rw-r--r-- 1 root root   96 Jul 25 11:11 external.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 endpoints.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 custom.txt
    
    2. A --stdout option that allows the user to print everything to stdout so they can pipe it into another tool or redirect all output to a file with an operating-system redirector.

    Command: [email protected]:/opt/dev/Photon# python photon.py -u http://10.10.10.9:80 -l 2 -t100 --stdout

    Output:

          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/
    
    [+] URLs retrieved from robots.txt: 68
    [~] Level 1: 69 URLs
    [!] Progress: 69/69
    [~] Level 2: 9 URLs
    [!] Progress: 9/9
    [~] Crawling 0 JavaScript files
    
    --------------------------------------------------
    [+] URLs: 78
    [+] Intel: 1
    [+] Files: 1
    [+] Endpoints: 0
    [+] Fuzzable URLs: 9
    [+] Custom strings: 0
    [+] JavaScript Files: 0
    [+] External References: 3
    --------------------------------------------------
    [!] Total time taken: 0:32
    [!] Average request time: 0.40
    [+] Results saved in 10.10.10.9:80 directory
    
    All Results:
    
    http://10.10.10.9:80/themes/*.gif
    http://10.10.10.9:80/modules/*.png
    http://10.10.10.9:80/INSTALL.mysql.txt
    http://10.10.10.9:80/install.php
    http://10.10.10.9:80/scripts/
    http://10.10.10.9:80/node/add/
    http://10.10.10.9:80/?q=admin/
    http://10.10.10.9:80/themes/*.png
    http://10.10.10.9:80/modules/*.gif
    http://10.10.10.9:80
    http://10.10.10.9:80/includes/
    http://10.10.10.9:80/?q=user/password/
    http://10.10.10.9:80/INSTALL.txt
    http://10.10.10.9:80/profiles/
    http://10.10.10.9:80/themes/bartik/css/ie6.css?on28x3
    http://10.10.10.9:80/MAINTAINERS.txt
    http://10.10.10.9:80/themes/bartik/css/ie.css?on28x3
    http://10.10.10.9:80/modules/*.jpeg
    http://10.10.10.9:80/misc/*.gif
    

    The last thing I changed is the way you were wiping the directory each time you ran the tool so that you would get clean output. If you accept the -o option which allows the user to specify the directory, you can't just blindly delete the directory anymore (can't trust user input ;)). So I added a cleaner way to just overwrite each file (replacing the w+ with w), which should accomplish the same thing without needing to delete directories.
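
    A small illustration of that overwrite-instead-of-delete idea (save_results and its arguments are hypothetical, not the PR's actual code):

    import os

    def save_results(output_dir, name, lines):
        # mode 'w' truncates any existing file, so old results are overwritten
        # without having to delete the whole output directory first
        with open(os.path.join(output_dir, name), 'w') as f:
            f.write('\n'.join(sorted(lines)))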

    enhancement 
    opened by sethsec 13
  • Added -v/--verbose option + fixed a few logic errors in detecting bad JS files

    Added -v/--verbose option + fixed a few logic errors in detecting bad JS files

    Changes:

    • Added the -v/--verbose option for verbose output.
    • Fixed 2 logical errors in detecting bad JS scripts. With this fix, the number of bad JS files has almost gone to zero.
    opened by 0xInfection 11
  • RuntimeError: Set changed size during iteration in Photon 1.0.7

    RuntimeError: Set changed size during iteration in Photon 1.0.7

    [!] Progress: 1/1
    [~] Level 2: 387 URLs
    [!] Progress: 387/387
    [~] Level 3: 18078 URLs
    [!] Progress: 18078/18078
    [~] Level 4: 90143 URLs
    [!] Progress: 39750/90143^C
    [~] Crawling 0 JavaScript files
    
    Traceback (most recent call last):
      File "photon.py", line 454, in <module>
        for url in external:
    RuntimeError: Set changed size during iteration
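
    Not necessarily how it was resolved here, but the usual way to avoid this error is to iterate over a snapshot of the set, so additions from crawler threads can't change its size mid-loop (external below is the set from the traceback):

    for url in list(external):  # list() copies the set before iterating
        print(url)              # stands in for the original loop body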
    
    invalid 
    opened by thistehneisen 10
  • A couple minor issues

    A couple minor issues

    SSL issues

    You're ignoring SSL verification:

    /usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:843: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
      InsecureRequestWarning)
    

    You can fix this by adding the following to the top of the file (or wherever you feel like putting it):

    import warnings
    warnings.filterwarnings("ignore")
    

    Now this is bad practice, for obvious reasons, however, it makes sense why you're doing it. The above solution is the quickest and simplest solution to solve the problem and keep the errors from being annoying.
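
    A slightly more targeted alternative (assuming the installed urllib3 version exposes this API) silences only the InsecureRequestWarning rather than every warning:

    import urllib3
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)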

    An example with and without the filter was shown in screenshots attached to the original issue.


    Dammit Unicode

    If I supply a URL that is in, let's say, Russian and try to extract all the data:

          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/ 
    
     URLs retrieved from robots.txt: 5
     Level 1: 6 URLs
     Progress: 6/6
     Level 2: 35 URLs
     Progress: 35/35
     Level 3: 7 URLs
     Progress: 7/7
     Crawling 7 JavaScript files
     Progress: 7/7
    Traceback (most recent call last):
      File "photon.py", line 429, in <module>
        f.write(x + '\n')
    UnicodeEncodeError: 'ascii' codec can't encode character u'\u200b' in position 7: ordinal not in range(128)
    

    Easiest solution would be to just ignore things like this and continue, with a warning to the user that it was ignored.
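
    One hedged way to handle it (not necessarily how Photon ended up fixing it; write_lines is an illustrative helper) is to open the file with an explicit UTF-8 encoding, which works on both Python 2 and 3:

    import io

    def write_lines(path, lines):
        # an explicit UTF-8 encoding avoids the ascii-codec error
        with io.open(path, 'w', encoding='utf-8') as f:
            for line in lines:
                f.write(line + u'\n')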

    opened by Ekultek 10
  • Adding code to detect and report broken links.

    Adding code to detect and report broken links.

    Current Issue: Broken links for which server returns 404 status (or similar error) are not reported as failed links, instead these erroneous pages are parsed for text content.
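
    An illustrative sketch of the status check being proposed (the function and names below are placeholders, and the real patch may differ):

    import requests

    def fetch(url, failed):
        """Fetch a page; record it as failed instead of parsing when the server errors."""
        response = requests.get(url)
        if response.status_code >= 400:
            failed.add(url)        # report it as a broken link
            return None
        return response.text       # only successful pages go on to extraction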

    opened by snehm 9
  • Multiple improvements

    Multiple improvements

    Intel is now searched only inside the page's plain text, to avoid picking up tokens or garbage JavaScript code. Better regular expressions could allow searching inside JavaScript code for intel, though.

    v1.3.0

    • Added more intels (GENERIC_URL, BRACKET_URL, BACKSLASH_URL, HEXENCODED_URL, URLENCODED_URL, B64ENCODED_URL, IPV4, IPV6, EMAIL, MD5, SHA1, SHA256, SHA512, YARA_PARSE, CREDIT_CARD)
    • Intel search only applied to text (not inside JavaScript or HTML tags)
    • Proxy support with the -p/--proxy option (HTTP proxy only)
    • Minor fixes and PEP 8 formatting

    Tested on

    os: Linux Mint 19.1
    python: 3.6.7
    
    opened by oXis 8
  • get_lfd

    get_lfd

    Hi, after using the --update option I get this error:

    Traceback (most recent call last):
      File "photon.py", line 187, in <module>
        domain = topLevel(main_url)
      File "photon.py", line 182, in topLevel
        toplevel = tld.get_fld(host, fix_protocol=True)
    AttributeError: 'module' object has no attribute 'get_fld'

    Any clue? I also installed it on another system and got the same error. Changing the target: same. Deleting everything and cloning again: same.

    OS: Kali3. Thanks
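
    Not part of the original thread, but this particular AttributeError usually means the installed tld package predates the get_fld API, so upgrading it is worth a try:

    $ pip install --upgrade tld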

    invalid 
    opened by psychomad 7
  • Exceptions in threads during scanning in Level 1 & 2

    Exceptions in threads during scanning in Level 1 & 2

    Exception in thread Thread-4694:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 754, in run
        self.__target(*self.__args, **self.__kwargs)
      File "photon.py", line 211, in extractor
        if is_link(link, processed, files):
      File "/usr/share/Photon/core/utils.py", line 41, in is_link
        is_file = url.endswith(BAD_TYPES)
    TypeError: endswith first arg must be str, unicode, or tuple, not list
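
    Not necessarily the project's actual fix, but str.endswith() accepts a tuple of suffixes (not a list), so converting BAD_TYPES is the usual remedy:

    is_file = url.endswith(tuple(BAD_TYPES))  # or declare BAD_TYPES as a tuple where it is defined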


    opened by 4k4xs4pH1r3 6
  • Improve dev-experience

    Improve dev-experience

    Add .gitignore and a requirements file just to improve the development experience.

    Doing this:

    • You won't need to remove compiled Python files or other unneeded files.
    • You can just "pip install -r requirements.txt" and you'll have all the third-party libraries needed to run the script.
    invalid 
    opened by dgarana 5
  • TLSCertVerificationDisabled

    TLSCertVerificationDisabled

    In core/requester.py, line 48:

    response = SESSION.get

    Certificate verification is disabled by setting verify to False in the get call. This may allow man-in-the-middle attacks.
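
    A hedged sketch of making verification opt-out rather than always off (the insecure flag and helper below are hypothetical, not existing Photon options):

    import requests

    def get_page(url, insecure=False):
        # certificate verification stays on unless the user explicitly opts out
        return requests.get(url, verify=not insecure, timeout=10)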

    opened by yonasuriv 0
  • .well-known files

    .well-known files

    I see that this project examines robots.txt and sitemap.xml. I was wondering if you could add some of the other .well-known files like ads.txt, security.txt and others listed at https://well-known.dev/resources/
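
    An illustrative sketch of probing a few such paths (the file list is only a sample and the code is not part of Photon):

    import requests

    main_url = 'https://example.com'  # target root, illustrative
    WELL_KNOWN = ['/.well-known/security.txt', '/ads.txt', '/robots.txt', '/sitemap.xml']
    for path in WELL_KNOWN:
        response = requests.get(main_url + path, timeout=5)
        if response.status_code == 200:
            print('found: ' + main_url + path)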

    opened by WebBreacher 0
  • No generating a DNS map

    No generating a DNS map

    I can't get a DNS map; it only gets generated when the target is google.com or example.com.

    I'm using Python 3 and the command: python3 photon.py -u "http://example.com" --dns

    opened by DEPSTRCZ 0
Releases(v1.3.0)
  • v1.3.0(Apr 5, 2019)

    • Dropped Python < 3.2 support
    • Removed Ninja mode
    • Fixed a bug in link parsing
    • Fixed Unicode output
    • Fixed a bug which caused URLs to be treated as files
    • Intel is now associated with the URL where it was found
    Source code(tar.gz)
    Source code(zip)
  • v1.2.1(Jan 26, 2019)

  • v1.1.6(Jan 25, 2019)

  • v1.1.5(Oct 24, 2018)

  • v1.1.4(Sep 18, 2018)

  • v1.1.3(Sep 6, 2018)

  • v1.1.2(Aug 7, 2018)

    • Code refactor
    • Better identification of external URLs
    • Fixed a major bug that made several intel URLs pass under the radar
    • Fixed a major bug that caused non-HTML content to be marked as a crawlable URL
    Source code(tar.gz)
    Source code(zip)
  • v1.1.1(Sep 4, 2018)

    • Added --wayback
    • Fixed progress bar for python > 3.2
    • Added /core/config.py for easy customization
    • --dns now saves subdomains in subdomains.txt
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Aug 29, 2018)

    • Use of ThreadPoolExecutor for x2 speed (for python > 3.2)
    • Fixed mishandling of urls starting with //
    • Removed a redundant try-except statement
    • Evaluate entropy of found keys to avoid false positives
    Source code(tar.gz)
    Source code(zip)
  • v1.0.9(Aug 19, 2018)

  • v1.0.8(Aug 4, 2018)

    • Added --exclude option
    • Better regex and code logic to favor performance
    • Fixed a bug that caused dnsdumpster to fail if target was a subdomain
    • Fixed a bug that caused a crash if run outside "Photon" directory
    • Fixed a bug in file saving (specific to python3)
    Source code(tar.gz)
    Source code(zip)
  • v1.0.7(Jul 28, 2018)

    • Added --timeout option
    • Added --output option
    • Added --user-agent option
    • Replaced lxml with regex
    • Better logic for favoring performance
    • Added bigger and separate file for user-agents
    Source code(tar.gz)
    Source code(zip)
  • v1.0.6(Jul 26, 2018)

  • v1.0.5(Jul 26, 2018)

  • v1.0.4(Jul 25, 2018)

    • Fixed an issue which caused regular links to be saved in robots.txt
    • Simplified flash function
    • Removed -n as an alias of --ninja
    • Added --only-urls option
    • Refactored code for readability
    • Skip saving files if the content is empty
    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Jul 24, 2018)

    • Introduced plugins
    • Added dnsdumpster plugin
    • Fixed non-ascii character handling, again
    • 404 pages are now added to failed list
    • Handling exceptions in jscanner
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Jul 23, 2018)

    • Proper handling of null response from robots.txt & sitemap.xml
    • Python2 compatibility
    • Proper handling of non-ascii chars
    • Added ability to specify custom regex pattern
    • Display total time taken and average time per request
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Jul 23, 2018)

Owner

Somdev Sangwan
I make things, I break things and I make things that break things.