✂️🕷️ Spider-Cut is a Network Mapper Framework (NMAP Framework)

Overview



Spider-Cut is a Network Mapper Framework (NMAP Framework)

Installation    |    Usage    |    Creators    |    Donate

Installation

# Kali Linux | WSL

# clone the repo
$ git clone https://github.com/XFORWORKS/SpiderCut

# change the working directory to SpiderCut
$ cd SpiderCut
  
# install the requirements  
$ python -m pip install -r requirements.txt

# run the main installation script
$ python setup.py
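
After the setup script finishes, you can sanity-check the install by printing the version (the -v flag is listed in the help output below):

# Kali Linux | WSL

# verify the installation by printing the current version
$ python spidercut.py -v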

Usage

# Kali Linux | WSL

$ python spidercut.py --help
usage: spidercut.py [-h] [-v] [-s] [-m] [-rH] [-sn] [-rh] [-eX]
                    [-ex] [-a]

Spider-Cut is a Network Mapper Framework (NMAP Framework)

optional arguments:
  -h, --help       show this help message and exit
  -v, --version    shows the current Version of Spider-Cut
  -s, --single     scan a single Target
  -m, --multiple   scan multiple Targets
  -rH, --rhost     scan a range of Hosts
  -sn, --subnet    scan an entire Subnet
  -rh, --ranhost   scan random Hosts
  -eX, --exclude   Excluding Targets from a scan
  -ex, --excllist  Excluding Targets using a list
  -a, --agscan     perform an Aggressive scan

Created By XFORWORKS (Retr0 & Cript0)
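
None of the scan flags take arguments in the synopsis above, so the target (host, range, subnet, or exclusion list) is presumably collected interactively after you pick a mode. A few example invocations, using modes taken straight from the help output (the interactive prompting is an assumption, not confirmed by the project):

# Kali Linux | WSL

# scan a single target (the script presumably prompts for the host)
$ python spidercut.py -s

# scan a range of hosts
$ python spidercut.py -rH

# run an aggressive scan against a target
$ python spidercut.py -a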

To run the script:

# Kali Linux | WSL

$ python spidercut.py
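
Since Spider-Cut is an Nmap framework, the nmap binary itself presumably has to be on your PATH. Kali ships with it preinstalled; on a fresh WSL distribution you may need to install it first (an assumption, the project's requirements are not listed here):

# WSL (Kali already ships with nmap)

# assumption: install the nmap binary if it is not already present
$ sudo apt update && sudo apt install -y nmap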

Creators

The script's design (the banners, the colors, etc.) is by Cript0.

Everything else (the commands, the framework idea, etc.) is by Retr0.

Donate

PayPal

Releases
  • 1.0 (Dec 25, 2021)

    Source code(tar.gz)
    Source code(zip)
    SpiderCut.rar(171.16 KB)
    SpiderCut.zip(226.92 KB)