Henan University of Technology (HAUT) 完美校园 (Wanmei Campus) automatic off-campus check-in

Overview

HAUT-checkin

Automatic off-campus check-in for Henan University of Technology (HAUT).
Since GitHub Actions schedules fire with noticeable delay, running this directly on Tencent Cloud Functions is recommended.

Features

  • Multi-user check-in
  • Simple to use: only each member's account, password, and the UID used for WeChat push are required
  • Automatically fetches the previous check-in record and reuses it for the next check-in
  • Pushes each member's check-in status to them individually over WeChat
  • When a check-in fails because the Wanmei Campus server is busy, it is retried automatically until every member has checked in successfully (see the sketch after this list)
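
The retry behavior in the last point is essentially a failure queue that is re-processed until it drains. A minimal sketch of that loop, with check_in and push_wechat as hypothetical stand-ins for the project's actual functions:

    import time

    def check_in_all(members):
        # Members whose check-in fails (e.g. the Wanmei Campus server is busy)
        # are re-queued and retried until everyone has succeeded.
        pending = list(members)
        while pending:
            failed = []
            for member in pending:
                try:
                    check_in(member)                    # hypothetical: perform one check-in
                    push_wechat(member, "check-in OK")  # hypothetical: per-member WeChat push
                except Exception:
                    failed.append(member)               # retry in the next round
            if failed:
                time.sleep(30)                          # back off while the server is busy
            pending = failed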

Changelog

2021-01-31: Solved the verification-code requirement for new Wanmei Campus devices, refactored the project, and dropped GitHub Actions.

Usage

Click here to download the source code archive.

Log in to the Tencent Cloud Serverless (SCF) console.

Click Function Service -> Create to create a new cloud function.

Choose Custom Function.

Any region will do.

Select Python 3.6 as the runtime.

For the submission method, choose local upload of a zip package.

Click Upload and select the zip file you just downloaded.

Expand the Advanced Configuration submenu.

Set the execution timeout to 900 seconds.

Then click this link to get the QR code.

[QR code]

Each user needs to scan this QR code and follow the 新消息服务 WeChat official account, which is used to push check-in status.

After following it, tap 我的 -> 我的UID (My -> My UID) inside the official account to get each user's UID.
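
The 新消息服务 account belongs to the WxPusher push service, which addresses subscribers by UID. A minimal sketch of sending one status message through WxPusher's HTTP API, where WXPUSHER_APP_TOKEN is a hypothetical placeholder for the project's app token:

    import requests

    def push_status(uid, text):
        # Push a check-in status message to a single member via WxPusher.
        resp = requests.post(
            "https://wxpusher.zjiecode.com/api/send/message",
            json={
                "appToken": "WXPUSHER_APP_TOKEN",  # hypothetical placeholder
                "content": text,
                "contentType": 1,                  # 1 = plain text
                "uids": [uid],
            },
            timeout=10,
        )
        resp.raise_for_status()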

In the environment variables section, fill in the check-in members' information in the following format.

device_seed can be any number.

Avoid giving multiple users the same device_seed.

Key      Value
user1    account password device_seed uid
user2    account password device_seed uid
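
Inside the function, each userN value can then be read back and split into its four fields. A minimal sketch of assembling the member list from the environment (the exact parsing in index.py may differ):

    import os

    def load_members():
        # Collect user1, user2, ... from the environment; each value is
        # "account password device_seed uid" separated by spaces.
        members = []
        i = 1
        while True:
            raw = os.environ.get(f"user{i}")
            if raw is None:
                break
            phone, password, device_seed, uid = raw.split()
            members.append((phone, password, device_seed, uid))
            i += 1
        return members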

Expand the Trigger Configuration submenu.

For the trigger period, select Custom.

Enter the cron expression: 0 10 0 * * * *

This checks in at 00:10 every night; the second field is the minute and the third field is the hour.

You can change the check-in time yourself.
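
Tencent Cloud cron expressions have seven fields (second, minute, hour, day of month, month, day of week, year), so shifting the schedule only means editing the second and third fields. For example:

    0 10 0 * * * *    # 00:10 every day (the default above)
    0 30 6 * * * *    # 06:30 every day
    0 0 22 * * * *    # 22:00 every day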

Click Finish.

Once the steps above are done:

Go to the Function Code page.

Click SMS.py in the file list on the left.

Then click the green triangle in the top-right corner to run this script,

which verifies the new virtual device.

At the command line, enter the username and then the device_seed you put in the environment variables,

then enter the SMS verification code you receive.
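
Conceptually, SMS.py performs a one-time device registration: it asks Wanmei Campus to text a code for the new virtual device, then submits the code you type in. A rough sketch of that interaction, with request_sms_code and submit_sms_code as hypothetical stand-ins for the script's actual requests:

    def verify_new_device():
        # One-time registration of the virtual device derived from device_seed.
        username = input("username: ")
        device_seed = input("device_seed: ")          # must match the environment variable
        request_sms_code(username, device_seed)       # hypothetical: ask the server to send an SMS
        code = input("verification code: ")
        submit_sms_code(username, device_seed, code)  # hypothetical: confirm the device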

That completes the setup.

You can click Test at the bottom of the page to check that nothing is broken.

After deployment, to add more check-in members, just edit the function configuration and add more environment variables.

The first time after a successful deployment, confirm at the scheduled time that the script runs correctly; by default, check-in starts at 00:10 every day.

Note: this project targets Henan University of Technology by default; for other schools, modify the code yourself.

Comments
  • Bug in index.py

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    device_seed is also never assigned here, so subsequent retries keep using the device seed of the last member from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]

    opened by tyu-t 1
  • Fix the uid handling bug in the error queue

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    device_seed is also never assigned here, so subsequent retries keep using the device seed of the last member from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]

    It should be:

                phone = error[i][0]
                password = error[i][1]
                device_seed = error[i][2]
                uid = error[i][3]

    opened by tyu-t 0
  • New field information

    The fields have changed. I can't be bothered to fork or open a PR for now, so here are a few of the fields for anyone reading this:

    {
    ...,
    "updatainfo": [
        {
            "propertyname": "temperature",
            "value": get_updatainfo(last_check_json['updatainfos'], "temperature")
        },
        {
            "propertyname": "symptom",
            "value": get_updatainfo(last_check_json['updatainfos'], "symptom")
        },
        {
            "propertyname": "isFFHasSymptom",
            # "value": get_updatainfo(last_check_json['updatainfos'], "isFFHasSymptom")  # this field can no longer be fetched
            "value": isFFHasSymptomDict[phone]
        },
        {
            "propertyname": "isContactFriendIn14",
            "value": "否"  # "No"
        },
        # { # field dead as of 2022-03-24
        #     "propertyname": "xinqing",
        #     # "value": "是,已接种二针剂型(灭活疫苗,科兴、国药等)满6个月"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xinqing")
        # },
        # { # field dead as of 2022-03-24
        #     "propertyname": "xndkrqzj",         # added in the 2021-12-08 update, vaccination date
        #     # "value": "2023-06-30"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xndkrqzj")
        # },
        # { # field dead as of 2022-03-24
        #     "propertyname": "zdyqdq0511",       # added in the 2021-12-08 update, vaccine manufacturer
        #     # "value": "科兴"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "zdyqdq0511")
        # }
        { # field added 2022-03-24 (really a rename): "did you take a nucleic acid test yesterday"
            "propertyname": "xinqing",
            "value": "否"  # "No"
        }
    ],
    ...
    }

    This was pasted straight from Python code, so it is not valid JSON. isFFHasSymptomDict[phone] is a dict defined elsewhere, along these lines:

    isFFHasSymptomDict = {
        '18666666666': '接种部分剂次',            # partially vaccinated
        '15555555555': '完成接种,待接种加强针',   # fully vaccinated, booster pending
        '17666666666': '未接种或不能接种',        # not vaccinated / unable to be vaccinated
        '15777777777': '已接种加强针'             # booster received
    }
    

    The field names are as baffling as ever.
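
    For reference, the get_updatainfo helper used above presumably just looks a field up by its propertyname in the previous check-in's updatainfos list; a minimal sketch:

        def get_updatainfo(updatainfos, propertyname):
            # Return the value submitted for this field in the last check-in.
            for item in updatainfos:
                if item.get('propertyname') == propertyname:
                    return item.get('value')
            return None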

    opened by CHxCOOH 1
Releases: v0.1.0