Automates tasks in the China Unicom mobile app (联通手机营业厅): daily tasks, sign-ins, claiming data, claiming points, and more.

Overview

UnicomTask

Introduction

👯 😄 📫

Automatically completes the daily tasks in the China Unicom mobile app: claims data, signs in for points, and more, so you are not short of data at the end of the month.

Features

  • Wo Tree (沃之树): water the tree and claim data (12 MB of daily data)
  • Daily sign-in (1 point + 4 points from doubling + a 1 GB one-day data pack on day 7)
  • Daily lottery, three free draws per day (random rewards)
  • Game Center daily check-in (points increase with consecutive check-ins up to a maximum of 7; 1 GB one-day data pack on day 7)
  • Game Center 100 MB treasure-chest task (100 MB of daily data + a random reward, doubled)
  • 4G data pack tasks for watching videos and downloading apps (90 MB + 150 MB of seven-day data)
  • Claim 100 directed points every day
  • Points lottery, up to 30 draws per day (the odds of winning are slim)
  • Winter Olympics points event (600 directed points on days 1 and 7, 300 directed points on the other days; valid until the end of the next month)
  • Email push of run results (see the sketch after this list)
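
The email push of run results can be implemented with Python's standard smtplib. The following is a minimal sketch under stated assumptions: the SMTP host, sender address, and authorization code are illustrative placeholders, not values taken from this project.

```python
import smtplib
from email.header import Header
from email.mime.text import MIMEText

def push_result(summary: str, receiver: str) -> None:
    """Send the run summary to the address configured in EMAIL_COVER."""
    sender = "bot@example.com"         # hypothetical sender address
    auth_code = "your-smtp-auth-code"  # hypothetical SMTP authorization code
    host = "smtp.example.com"          # hypothetical SMTP server

    msg = MIMEText(summary, "plain", "utf-8")
    msg["Subject"] = Header("UnicomTask run result", "utf-8")
    msg["From"] = sender
    msg["To"] = receiver

    # Connect over SSL and deliver the message.
    with smtplib.SMTP_SSL(host, 465) as server:
        server.login(sender, auth_code)
        server.sendmail(sender, [receiver], msg.as_string())
```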

GitHub Actions Deployment

1. Fork this repository

Repository: srcrs/UnicomTask

2. Prepare the required parameters

Phone number, service password, and appId.

How to obtain the appId:

  • Android users: open the file manager and read the Unicom/appid file.

  • iOS users: capture the client's login request and extract the appId from it.

https://m.client.10010.com/mobileService/login.htm
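
For reference, replaying that login endpoint in Python might look like the sketch below. The form field names (mobile, password, appId) and the plain-text credentials are assumptions made for illustration; the real client encrypts the credentials and sends additional device fields, so inspect a captured login request for the exact format.

```python
import requests

LOGIN_URL = "https://m.client.10010.com/mobileService/login.htm"

def login(mobile: str, password: str, app_id: str) -> requests.Session:
    """Hypothetical sketch of the login call; all field names are assumed."""
    session = requests.Session()
    payload = {
        "mobile": mobile,      # assumed field name
        "password": password,  # assumed field name; the real app encrypts this
        "appId": app_id,       # assumed field name
    }
    resp = session.post(LOGIN_URL, data=payload, timeout=10)
    resp.raise_for_status()
    return session
```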

3. Fill the required parameters into Secrets

The Name/Value pairs in Secrets are as follows:

Name            Value               Description
USERNAME_COVER  18566669999         Phone number (required)
PASSWORD_COVER  123456              Service password (required)
APPID_COVER     xxxxxxxxx           appId (required)
EMAIL_COVER     [email protected]     Email address (optional)
LOTTERY_NUM     a positive integer  Number of lottery draws (optional)
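
A typical workflow maps these Secrets to environment variables of the same names before running the script; the exact wiring depends on the workflow file in this repository. Under that assumption, reading them from Python looks like this minimal sketch:

```python
import os

# Values injected by the GitHub Actions workflow from repository Secrets.
# The assumption here is that each Secret is exposed as an environment
# variable of the same name; check the workflow file for the actual mapping.
username = os.environ["USERNAME_COVER"]                # phone number (required)
password = os.environ["PASSWORD_COVER"]                # service password (required)
app_id = os.environ["APPID_COVER"]                     # appId (required)
email = os.environ.get("EMAIL_COVER", "")              # email address (optional)
lottery_num = int(os.environ.get("LOTTERY_NUM", "0"))  # lottery draws (optional)
```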

4. Enable Actions

Actions are disabled by default on a fork. Open the Actions tab and enable them by clicking the long green button. If a workflow in the left sidebar still shows a yellow exclamation mark, it needs to be enabled as well.

5. Make a push

A push triggers the workflow run.

Just delete the 😄 in README.md. Once that is done, the daily tasks will run automatically at 7:30 every morning.

Syncing upstream code

The latest code already includes an action that automatically syncs upstream code; it runs on a schedule every Friday at 16:00, and the file is located at .github/workflows/auto_merge.yml.

Alternatively, you can install the Pull app, which also keeps your fork in sync with the upstream repository.

Disclaimer

This project is for learning purposes only.

Reference projects

mixool/HiCnUnicom, with thanks to that project for its approach to the login flow.
