
Overview

Introduction

This is a Python script. It works by using Selenium to locate the relevant page elements and then simulating click events to automate the browser.

This check-in script is intended for the daily COVID check-in on 青柠疫服, the campus epidemic prevention and control platform at https://wxyqfk.zhxy.net/#/poster.
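For illustration only (this is not the repository's actual code), a minimal Selenium sketch of the locate-and-click pattern the script relies on; the XPath locator and button text below are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Open the login page (school code 10646 is the example used later in this README).
driver = webdriver.Chrome()
driver.get("https://wxyqfk.zhxy.net/?yxdm=10646#/login")

# Locate an element and click it; the real script uses its own locators.
driver.find_element(By.XPATH, "//button[contains(., '登录')]").click()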


Usage

Download

You can clone this repository to your local machine with git:

git clone https://github.com/N0el4kLs/yqfkAutoCheck.git

Prerequisites

Install the script's dependencies:

pip install -r requirements.txt
#pip3 install -r requirements.txt

info.ini Settings

Before running the script, complete the info.ini file.

Set the school code

Select your school and click Confirm.

The xxxx in yxdm=xxxx in the URL is your school code; the school code in the example is 10646.

Complete the [Url] section of the configuration file:

[Url]
loginpage = https://wxyqfk.zhxy.net/?yxdm=10646#/login

Check-in user settings

Configuration example:

[PeopleList]
people = example
; The user identifier; it is used to look up the user's LoginId, SchoolCard, and PassWd below.
; For multiple users, separate the identifiers with commas: people = example1,example2,example3

[LoginId]
example = 测试


[SchoolCard]
example = 123456789

[PassWd]
example = 123456admin
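For reference, a minimal sketch (not the repository's code) of how these sections can be resolved per user with Python's standard configparser; the variable names here are illustrative:

import configparser

config = configparser.ConfigParser()
config.read('info.ini', encoding='utf-8')

login_page = config['Url']['loginpage']

# Each identifier listed in [PeopleList] indexes the other sections.
for person in config['PeopleList']['people'].split(','):
    person = person.strip()
    login_id = config['LoginId'][person]
    school_card = config['SchoolCard'][person]
    password = config['PassWd'][person]
    print(person, login_id, school_card)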

Code Supplement

Captcha

The image-captcha recognition in this script calls a third-party API of my own, so you have to supply this piece of code yourself.

Structure of the code to fill in: input is the path of the captcha image, and the return value is the recognized captcha text.

The code that needs to be modified is in QNcheck.py, lines 109-116:

def getveriycode(self, imagedata):
    '''
    Decode the base64 image data and return the captcha it contains.
    '''
    self.decodeImag(imagedata)

    # img_path = './verifycode.png'
    return USEROWNFUNC(r'./verifycode.png')  # replace USEROWNFUNC(r'./verifycode.png') with your own recognition code
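If you have no external recognition API, one possible local implementation (an illustrative sketch, not the author's code) uses pytesseract, which requires the Tesseract OCR engine to be installed; simple OCR may struggle with heavily distorted captchas:

from PIL import Image
import pytesseract

def recognize_captcha(img_path):
    """Return the text recognized in the captcha image at img_path."""
    image = Image.open(img_path).convert('L')  # grayscale often helps OCR
    return pytesseract.image_to_string(image).strip()

# In getveriycode(), replace USEROWNFUNC with:
#   return recognize_captcha(r'./verifycode.png')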

Email notification

The script can send an email to remind you of today's check-in result. It sends through a QQ mailbox, so you need to fill in the sender's information. The code to fill in is in e_mail.py, lines 7-8:

self.__my_sender = ''   # sender's email address
self.__my_pass = ''     # sender's email password (for QQ mail, the SMTP authorization code)

Fill in the recipient's email address at QNcheck.py, line 140:

login_email = LoginEmail('user email')  # replace 'user email' with the address that should receive the notification
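For reference, a minimal sketch of sending a notification through QQ's SMTP server with the standard library; this is an assumption about what e_mail.py roughly does, not its actual code:

import smtplib
from email.mime.text import MIMEText
from email.header import Header

def send_notice(sender, auth_code, receiver, content):
    """Send a plain-text check-in notification via QQ mail."""
    msg = MIMEText(content, 'plain', 'utf-8')
    msg['Subject'] = Header('Daily check-in result', 'utf-8')
    msg['From'] = sender
    msg['To'] = receiver

    # QQ mail uses SSL on port 465 and an SMTP authorization code as the password.
    with smtplib.SMTP_SSL('smtp.qq.com', 465) as server:
        server.login(sender, auth_code)
        server.sendmail(sender, [receiver], msg.as_string())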

Geolocation

Complete the latitude and longitude at QNcheck.py, lines 16-17:

self.latitude = xxx    # latitude
self.longitude = xxx   # longitude
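The script presumably feeds these coordinates to the browser. One common way to do that with Selenium and Chrome is the DevTools geolocation override; the sketch below is written under that assumption, with placeholder coordinates:

from selenium import webdriver

driver = webdriver.Chrome()

# Override the position the browser reports to the page.
driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
    "latitude": 30.0,      # your latitude
    "longitude": 120.0,    # your longitude
    "accuracy": 100,
})
driver.get("https://wxyqfk.zhxy.net/#/poster")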

Run

Run the script:

python QNcheck.py
#python3 QNcheck.py