Provably Rare Gem Miner

Overview

just another random project by yoyoismee.eth

useful thing you should know

  • read contract -> gems(gemID) to get useful info
  • write contract -> mine(kind, salt) to claim your NFT
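
For example, here's a minimal web3.py sketch of that read call. The ABI fragment and the struct field order below are assumptions based on the public Provably Rare Gem contract, so verify them against the deployed contract:

    # pip install web3
    from web3 import Web3

    GEM_ABI = [  # minimal, assumed ABI fragment for the two view calls we need
        {"name": "gems", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "", "type": "uint256"}],
         "outputs": [{"name": "name", "type": "string"},
                     {"name": "color", "type": "string"},
                     {"name": "entropy", "type": "bytes32"},
                     {"name": "difficulty", "type": "uint256"},
                     {"name": "gemsPerMine", "type": "uint256"},
                     {"name": "multiplier", "type": "uint256"},
                     {"name": "crafter", "type": "address"},
                     {"name": "manager", "type": "address"},
                     {"name": "pendingManager", "type": "address"}]},
        {"name": "nonce", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "", "type": "address"}],
         "outputs": [{"name": "", "type": "uint256"}]},
    ]

    gem_addr = "0x..."   # gem contract address from https://gems.alphafinance.io/
    user_addr = "0x..."  # your wallet address
    gem_id = 0           # the gem you target

    w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))
    gem = w3.eth.contract(address=gem_addr, abi=GEM_ABI)
    info = gem.functions.gems(gem_id).call()       # read contract
    entropy, diff = info[2], info[3]               # indices assume the field order above
    nonce = gem.functions.nonce(user_addr).call()  # your current mint count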

To run: just edit the Python file and run it.

pip install -r requirements.txt
python3 stick_the_miner.py

Or use the newer auto_mine.py for less manual input, but you'll need an Infura account.

P.S. too lazy to write docs, but it's ~50 LoC. Have fun.


Why "stick the miner"? Welp.. this is part of the "stick the BUIDLer" series.

TL;DR - I'm working on a series of open-source NFT-related projects, just for fun.

Key parameters to change if you are using the original version 'stick_the_miner.py' (cr. K Nattakit's FB post); a sketch of how they combine is shown after the list:

  • chain_id - eth:1, fantom:250
  • entropy - ??
  • gemAddr - gem contract address, can get from https://gems.alphafinance.io/ (loot/bloot/rarity)
  • userAddr - your wallet address
  • kind - the type of gem to mine; I recommend Emerald because it has the highest return/difficulty ratio, meaning, simply put, you'll turn a profit faster
  • nonce - the number of times you've minted a gem (check https://gems.alphafinance.io/ with your wallet connected)
  • diff - difficulty of the gem (from https://gems.alphafinance.io/); note that this changes every time someone mints that gem, so you need to update it too
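
Under the hood, all of these feed a single keccak check. Here's a minimal sketch of what the miner brute-forces, assuming the Provably Rare Gem contract's formula (a salt wins when uint256(keccak256(abi.encodePacked(chainId, entropy, gemAddr, userAddr, kind, nonce, salt))) <= type(uint256).max / diff); treat it as an illustration, not the repo's exact code:

    # pip install eth-utils
    import random
    from eth_utils import keccak

    def is_valid_salt(chain_id, entropy, gem_addr, user_addr, kind, nonce, salt, diff):
        # abi.encodePacked: uint256 -> 32 bytes big-endian, address -> 20 bytes
        packed = (chain_id.to_bytes(32, "big")
                  + entropy                        # bytes32 read from gems(gemID)
                  + bytes.fromhex(gem_addr[2:])    # 20-byte gem contract address
                  + bytes.fromhex(user_addr[2:])   # 20-byte wallet address
                  + kind.to_bytes(32, "big")
                  + nonce.to_bytes(32, "big")
                  + salt.to_bytes(32, "big"))
        luck = int.from_bytes(keccak(packed), "big")
        return luck <= (2**256 - 1) // diff        # higher diff -> smaller target

    def get_salt(chain_id, entropy, gem_addr, user_addr, kind, nonce, diff):
        # brute force: try random salts until one clears the target
        while True:
            salt = random.randint(1, 2**256 - 1)
            if is_valid_salt(chain_id, entropy, gem_addr, user_addr, kind, nonce, salt, diff):
                return salt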

How to use 'auto_mine.py', the updated version of stick_the_miner (more detail):

  • benefits: the manual version (stick_the_miner.py) requires you to update the 'diff' parameter every time someone mints the target gem's NFT, and 'nonce' whenever you successfully mint one. This version automates that, so you only have to rerun it to pick up the new values.
  • steps:
    1. update requirements: pip install -r requirements.txt
    2. create an account at https://infura.io/, select your chain (e.g. Ethereum), create a project, and obtain your project ID
    3. create a .env file in the same format as .env-example, filling in your information from step 2, your wallet address, and the gem ID
    4. python3 auto_mine.py
  • Note: although you don't have to adjust the 'diff' parameter manually anymore, you still need to restart the process every time someone mints the target gem's NFT
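
If it helps, here's a hedged sketch of reading that .env with python-dotenv; the key names are assumptions, so copy the real ones from .env-example:

    # pip install python-dotenv
    import os
    from dotenv import load_dotenv

    load_dotenv()  # loads key=value pairs from ./.env into os.environ
    INFURA_PROJECT_ID = os.environ["INFURA_PROJECT_ID"]  # from step 2
    USER_ADDR = os.environ["USER_ADDR"]                  # your wallet address
    GEM_ID = int(os.environ["GEM_ID"])                   # target gem ID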

Once you get the salt, submit it through the write contract's mine(kind, salt) to claim your NFT (see above).

Multicore version

  • The normal version uses only one CPU core; the multicore version should be ~8 times faster, depending on your CPU and the coreNumber variable
  • You can select the number of processes by changing the coreNumber variable (it should not exceed ~16, though); see the sketch after this list
  • "fantom_mining_pool_auto_multicore_line.py" is the multicore version of fantom_mining_pool.py
  • for mining by yourself with manual claiming, please use "fantom_multicore_line.py"
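
A rough sketch of the multicore idea with the standard library's multiprocessing (not the repo's exact code): each worker brute-forces its own disjoint slice of the salt space, reusing the is_valid_salt checker sketched earlier.

    import multiprocessing as mp
    import random

    coreNumber = 8  # keep at or below your physical core count

    def worker(args):
        lo, hi, chain_id, entropy, gem_addr, user_addr, kind, nonce, diff = args
        while True:  # loop until this worker finds a winning salt
            salt = random.randint(lo, hi)
            if is_valid_salt(chain_id, entropy, gem_addr, user_addr, kind, nonce, salt, diff):
                return salt

    def mine_multicore(chain_id, entropy, gem_addr, user_addr, kind, nonce, diff):
        span = (2**256 - 1) // coreNumber  # disjoint salt range per worker
        tasks = [(i * span + 1, (i + 1) * span,
                  chain_id, entropy, gem_addr, user_addr, kind, nonce, diff)
                 for i in range(coreNumber)]
        with mp.Pool(coreNumber) as pool:
            # imap_unordered yields as soon as any worker returns; stop the rest
            for salt in pool.imap_unordered(worker, tasks):
                pool.terminate()
                return salt

    # call mine_multicore(...) under an `if __name__ == "__main__":` guard,
    # which multiprocessing requires on Windows and macOS
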
Comments
  • 🎨Added colorlog package for output with colors

    I use the classic stick_the_miner.py for mining and had a hard time spotting the salt output in the monochrome logs, so I decided to make the salt output stand out with the colorlog package 😁

    opened by mickyngub 2
  • Multicore version of the miner for both pool mining and self mining

    Depending on your CPU and the coreNumber variable, it should be ~8 times faster than the original version but with the drawback of a tremendous increase in CPU utilization.

    opened by mickyngub 1
  • Lowering the priority of python.exe to reduce lags

    If a user is mining gems in the background while using other compute-intensive programs, they might experience lag due to 100% CPU utilization. Lowering the priority of the python.exe miner gives other programs higher priority, so users are less likely to experience lag.

    Under normal circumstances, when CPU utilization is below 100%, it should have no impact on iter/sec.

    (Before/after screenshots omitted.)

    opened by mickyngub 1
  • update fantom_mining_pool

    • edit .env-example: add NOTIFY_AUTH_TOKEN, DIFF and PRIVATE_KEY
    • rename the private_key variable to PRIVATE_KEY
    • insert a check: if PRIVATE_KEY != ''
    • read PRIVATE_KEY from .env for safety
    opened by NuttakitDW 0
  • why other people mint so quickly

    https://ftmscan.com/address/0x729d74098f6669541ed1b69403ae75f080ccf1e1

    This person mints level 4 gems very quickly; their salt is very low, yet the transaction succeeds.

    Do you know the reason?

    opened by sumrise 3
  • refactor to support multiple chain properly

    Some of our code is unnecessarily Ethereum-specific, e.g. infura_key, the hard-coded chain number, and more. TODO: refactor to something more generic that is valid across all EVM-compatible chains, e.g. infura_key -> rpc_provider (and fix other code to match this change), and more.

    Also TODO: remove the quick fix for the fantom file LOL

    opened by yoyoismee 0
  • Idea for sampling different range of int random on multiple workers

    Will probably do tmr: pass the worker count to the get_salt function so each worker can draw random ints from a different range, e.g. worker 1: 1 to 2^122, worker 2: 2^122 to 2^123

    opened by Duayt 1
Releases (v0.0.1d-test-build)