河南工业大学 (Henan University of Technology) 完美校园 (Wanmei Campus) automatic off-campus check-in

Overview

HAUT-checkin

Automatic off-campus check-in for Henan University of Technology (河南工业大学)
Because GitHub Actions has a noticeable scheduling delay, using Tencent Cloud Functions (SCF) directly is recommended.

Features

  • Multi-user check-in
  • Easy to use: only each account, password, and the UID used for WeChat push notifications are needed
  • Automatically fetches the previous check-in data and reuses it for the new check-in
  • Pushes each member's check-in status to them individually via WeChat
  • If a check-in fails because the Wanmei Campus server is busy, it automatically retries until every member has checked in successfully (a sketch of this retry loop follows the list)
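
A minimal sketch of that retry idea, with placeholder helpers rather than the project's actual functions:

```python
import time

def check_in_all(members, submit, notify, retry_delay=30):
    """Retry failed members until everyone has checked in.

    `submit` and `notify` are placeholders for the project's real
    check-in and WeChat-push helpers; they are assumptions here.
    """
    pending = list(members)
    while pending:
        failed = []
        for member in pending:
            try:
                submit(member)                   # one check-in attempt
                notify(member, "check-in ok")    # push the result to this member via WeChat
            except Exception:
                failed.append(member)            # server busy: retry this member next round
        if failed:
            time.sleep(retry_delay)              # brief pause before the next retry round
        pending = failed
```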

Changelog

2021.1.31 Solved the verification-code requirement for new devices in Wanmei Campus, refactored the project, and dropped GitHub Actions

Usage

Click here to download the source code archive to your computer

Log in to the Tencent Cloud SCF (Serverless Cloud Function) console

Click Function Service -> Create to create a new cloud function

Select Custom Function

Any region will do

Choose Python 3.6 as the runtime

For the submission method, choose upload a local zip package

Click Upload and select the zip file you just downloaded

Expand the Advanced Configuration submenu

Set the execution timeout to 900 seconds

Then click this link to get the QR code

(QR code image)

Every user needs to scan this QR code and follow the 新消息服务 WeChat official account, which is used to push check-in status

After following it, tap 我的 -> 我的UID inside the official account to get each user's UID

In the environment variables section, fill in each member's information in the following format

device_seed can be any number

It is best not to give multiple users the same device_seed

| key   | Value                            |
| ----- | -------------------------------- |
| user1 | account password device_seed uid |
| user2 | account password device_seed uid |
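
A minimal sketch of how the function could read these variables, assuming the space-separated format above and plain os.environ access (the names here are illustrative, not the project's actual code):

```python
import os

def load_members():
    """Collect user1, user2, ... environment variables into member records."""
    members = []
    i = 1
    while True:
        raw = os.environ.get(f"user{i}")
        if raw is None:
            break
        phone, password, device_seed, uid = raw.split()
        members.append({
            "phone": phone,
            "password": password,
            "device_seed": device_seed,
            "uid": uid,
        })
        i += 1
    return members
```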

Expand the Trigger Configuration submenu

For the trigger period, choose Custom trigger period

Enter 0 10 0 * * * * as the cron expression

This schedules check-in at 00:10 a.m.; the second field is the minute and the third field is the hour

You can change the check-in time as you like
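
For reference, the seven cron fields here are second, minute, hour, day, month, weekday, and year, so a few alternative schedules look like this (illustrative values only):

```
0 10 0 * * * *   # 00:10 every day (the value used above)
0 30 6 * * * *   # 06:30 every day
0 0 12 * * * *   # 12:00 every day
```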

Click Finish

Once the steps above are done

go to the Function Code page

Click SMS.py on the left

then click the green triangle in the upper-right corner to run this script

This verifies the simulated new device

In the command line, enter the username and then the device_seed you just put into the environment variables

Then enter the verification code you receive
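
Roughly speaking, this interactive step does something like the following; request_sms_code and submit_sms_code are hypothetical names standing in for the real Wanmei Campus calls made by SMS.py:

```python
def verify_new_device(request_sms_code, submit_sms_code):
    """Interactively bind a simulated new device (sketch only, not the real SMS.py)."""
    username = input("username: ")
    device_seed = input("device_seed (same value as in the environment variables): ")
    request_sms_code(username, device_seed)       # triggers the SMS verification code
    code = input("verification code: ")
    submit_sms_code(username, device_seed, code)  # completes the new-device verification
```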

At this point all steps are complete

You can click Test at the bottom of the page to confirm nothing went wrong

After deployment, if you need to add more check-in members, just edit the function configuration and add more environment variables

The first time you use it after a successful deployment, please confirm at check-in time that the script runs normally; by default check-in starts at 00:10 every day

Note: this project is set up for Henan University of Technology by default; for other schools, please modify the code yourself.

Comments
  • Bug in index.py

    Line 31 of index.py is wrong; it should be uid = error[i][3]

    Also, device_seed is never assigned, so each retry keeps using the device seed of the last person from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]
    
    opened by tyu-t 1
  • Fix the uid handling error in the error queue

    Line 31 of index.py is wrong; it should be uid = error[i][3]

    Also, device_seed is never assigned, so each retry keeps using the device seed of the last person from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]

    It should be:

                phone = error[i][0]
                password = error[i][1]
                device_seed = error[i][2]
                uid = error[i][3]
    
    opened by tyu-t 0
  • New field information

    The fields have changed. I'm too lazy to fork or open a PR for now, so here is some field information for anyone who reads this:

    {
    ...,
    "updatainfo": [
        {
            "propertyname": "temperature",
            "value": get_updatainfo(last_check_json['updatainfos'], "temperature")
        },
        {
            "propertyname": "symptom",
            "value": get_updatainfo(last_check_json['updatainfos'], "symptom")
        },
        {
            "propertyname": "isFFHasSymptom",
            # "value": get_updatainfo(last_check_json['updatainfos'], "isFFHasSymptom")  # this field can no longer be fetched
            "value": isFFHasSymptomDict[phone]
        },
        {
            "propertyname": "isContactFriendIn14",
            "value": "否"
        },
        # { # field no longer accepted as of 2022.03.24
        #     "propertyname": "xinqing",
        #     # "value": "是,已接种二针剂型(灭活疫苗,科兴、国药等)满6个月"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xinqing")
        # },
        # { # field no longer accepted as of 2022.03.24
        #     "propertyname": "xndkrqzj",        # added in the 2021/12/8 update: vaccination date
        #     # "value": "2023-06-30"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xndkrqzj")
        # },
        # { # field no longer accepted as of 2022.03.24
        #     "propertyname": "zdyqdq0511",      # added in the 2021/12/8 update: vaccine manufacturer
        #     # "value": "科兴"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "zdyqdq0511")
        # }
        { # new field as of 2022.03.24 (actually a rename): "Did you take a nucleic acid test yesterday?"
            "propertyname": "xinqing",
            "value": "否"
        }
    ],
    ...
    }
    

    This was copied straight from the Python code, so it is not strict JSON syntax. isFFHasSymptomDict[phone] looks up a dictionary defined elsewhere, something like this:

    isFFHasSymptomDict = {
        '18666666666': '接种部分剂次',             # partially vaccinated
        '15555555555': '完成接种,待接种加强针',     # fully vaccinated, booster pending
        '17666666666': '未接种或不能接种',          # not vaccinated or unable to be vaccinated
        '15777777777': '已接种加强针'               # booster dose received
    }
    

    The field names are as baffling as ever (

    opened by CHxCOOH 1
Releases (v0.1.0)