A super lightweight Lagrangian model for calculating millions of trajectories using ERA5 data

Overview

Easy-ERA5-Trck

Easy-ERA5-Trck is a super lightweight Lagrangian model for calculating thousands (even millions) of trajectories simultaneously and efficiently using ERA5 data sets. It implements highly simplified equations of 3-D motion to accelerate the integration and uses Python multiprocessing to parallelize the integration tasks. Thanks to this simplification and parallelization, Easy-ERA5-Trck traces massive numbers of air parcels very quickly, which makes area-wide tracing feasible.

Another version using WRF output to drive the model can be found here.

Caution: For maximum efficiency, the trajectory calculation is based on nearest-neighbor interpolation and a first-guess velocity. A more accurate calculation algorithm is described at http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-14-00110.1; alternatively, use a professional, full-featured model such as NOAA HYSPLIT.
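
For a sense of what this simplification means in practice, here is a minimal sketch of a single horizontal integration step using nearest-neighbor wind sampling and a first-guess displacement. It is only an illustration of the idea, not the code in ./core/lagrange.py; the function name, array layout, and the purely horizontal treatment are assumptions.

import numpy as np

EARTH_RADIUS = 6.371e6  # m

def advect_one_step(lat, lon, u, v, lats, lons, dt_s, direction=1):
    """One first-order step: nearest-neighbor wind, simple spherical displacement."""
    # Nearest grid point (no interpolation weights), as in the caution above.
    j = int(np.abs(lats - lat).argmin())
    i = int(np.abs(lons - lon).argmin())
    u0, v0 = u[j, i], v[j, i]  # wind components (m/s) at the nearest grid point

    # First-guess displacement over dt_s seconds (direction = 1 forward, -1 backward).
    dlat = direction * v0 * dt_s / EARTH_RADIUS
    dlon = direction * u0 * dt_s / (EARTH_RADIUS * np.cos(np.deg2rad(lat)))
    return lat + np.rad2deg(dlat), lon + np.rad2deg(dlon)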

For any questions, please contact Zhenning LI ([email protected]).

Galleries

Tibetan Plateau Air Source Tracers

tp_tracer

Tibetan Plateau Air Source Tracers (3D)

tp_tracer_3d

Install

If you wish to run easy-era5-trck with GRIB2 data, please first install ecCodes.

Please install Python 3 using the Anaconda3 distribution. Anaconda3 with Python 3.8 has been fully tested; lower versions of Python 3 may also work, although they have not been tested.

We recommend creating a new conda environment and installing the packages listed in requirements.txt:

conda create -n test_era5trck python=3.8
conda activate test_era5trck
pip install -r requirements.txt

If everything goes smoothly, cd to the repo root and run config.py:

python3 config.py

This writes the fundamental configuration parameters to ./conf/config_sys.ini.

Usage

test case

Once the package is installed, you may first want to try the test case. config.ini has already been set up for the test case, which is a very simple run:

[INPUT]
input_era5_case = ./testcase/
input_parcel_file=./input/input.csv

[CORE]
# timestep in min
time_step = 30
precession = 1-order
# 1 for forward, -1 for backward
forward_option = -1
# for forward, this is the initial time; otherwise, terminating time
start_ymdh = 2015080212
# integration length in hours
integration_length = 24
# how many processors are willing to work for you
ntasks = 4
# not used yet
boundary_check = False

[OUTPUT]
# output format, nc/csv, nc recommended for large-scale tracing
out_fmt = nc
out_prefix = testcase
# output frequency in min
out_frq = 60
# when out_fmt=csv, how many parcel tracks will be organized in a csv file.
sep_num = 5000
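
For reference, the options above can be read with Python's standard configparser. Below is a minimal sketch assuming the file sits at ./conf/config.ini (as listed in the repository structure later); the model's own reader lives in ./lib/cfgparser.py, so treat this only as an illustration of the file layout.

import configparser

cfg = configparser.ConfigParser()
cfg.read('./conf/config.ini')

time_step  = cfg.getint('CORE', 'time_step')           # minutes
forward    = cfg.getint('CORE', 'forward_option')      # 1 forward, -1 backward
start_ymdh = cfg.get('CORE', 'start_ymdh')              # e.g. 2015080212
length_hr  = cfg.getint('CORE', 'integration_length')   # hours
ntasks     = cfg.getint('CORE', 'ntasks')

print(time_step, forward, start_ymdh, length_hr, ntasks)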

When you run python3 run.py, Easy-ERA5-Trck will load the above configuration and import the ERA5 UVW data in ./testcase to drive the Lagrangian integration.

You will now see your worker processes dedicated to tracing the air parcels. After several seconds, if you see something like:

2021-05-31 17:32:14,015 - INFO : All subprocesses done.
2021-05-31 17:32:14,015 - INFO : Output...
2021-05-31 17:32:14,307 - INFO : Easy ERA5 Track Completed Successfully!

Congratulations! The testcase works smoothly on your machine!

Now you can check the output file in ./output, named testcase.I20150802120000.E20150801120000.nc|csv, which encodes the initial (I) and ending (E) times. For backward tracing, I > E, and vice versa.

You can choose to write output files in plain ASCII CSV format or in netCDF format (recommended). The netCDF output metadata looks like:

{
dimensions:
    time = 121 ;
    parcel_id = 413 ;
variables:
    double xlat(time, parcel_id) ;
        xlat:_FillValue = NaN ;
    double xlon(time, parcel_id) ;
        xlon:_FillValue = NaN ;
    double xh(time, parcel_id) ;
        xh:_FillValue = NaN ;
    int64 time(time) ;
        time:units = "hours since 1998-06-10 00:00:00" ;
        time:calendar = "proleptic_gregorian" ;
    int64 parcel_id(parcel_id) ;
}
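
As a quick example, this output can be explored with xarray. The sketch below assumes the test-case file name shown above and simply pulls out one parcel's track; adapt the path and indices to your own run.

import xarray as xr

# Open the test-case output (file name as produced above).
ds = xr.open_dataset('./output/testcase.I20150802120000.E20150801120000.nc')

# Extract the track of the first parcel along the parcel_id dimension.
track = ds.isel(parcel_id=0)
print(track['xlat'].values[:5], track['xlon'].values[:5], track['xh'].values[:5])

# Quick look at all parcel latitudes at the first output time.
print(ds['xlat'].isel(time=0).values)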

set up your case

Congratulations! After successfully running the test case, you are no doubt eager to set up your own case. First, create your own case directory, for example in the repo root dir:

mkdir mycase

Now please make sure you have configured the ECMWF CDS API correctly, both in your shell environment and in the Python interface.

Next, set the [DOWNLOAD] section in config.ini to your desired period, levels, and region for downloading.

[DOWNLOAD]
store_path=./mycase/
start_ymd = 20151220
end_ymd = 20160101
pres=[700, 750, 800, 850, 900, 925, 950, 975, 1000]

# area: [North, West, South, East]
area=[-10, 0, -90, 360]
# data frame frequency in hours: 1, 2, 3, or 6 recommended.
# a lower frequency downloads faster but is less accurate for tracing
freq_hr=3

Here we download 1000-700 hPa UVW data from 20151220 to 20160101 at 3-hour temporal frequency from the ERA5 CDS.

./utils/getERA5-UVW.py will help you download the ERA5 reanalysis data for your case, in daily files at freq_hr temporal frequency.

cd utils
python3 getERA5-UVW.py
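
If you prefer to script a request directly, the download goes through the ECMWF cdsapi package. The sketch below is a hedged, single-day example mirroring the [DOWNLOAD] settings above; the target file name is illustrative, and the exact request built by getERA5-UVW.py may differ (for example in the format key or the time list).

import cdsapi

c = cdsapi.Client()
c.retrieve(
    'reanalysis-era5-pressure-levels',
    {
        'product_type': 'reanalysis',
        'variable': ['u_component_of_wind', 'v_component_of_wind', 'vertical_velocity'],
        'pressure_level': ['700', '750', '800', '850', '900', '925', '950', '975', '1000'],
        'year': '2015',
        'month': '12',
        'day': '20',
        'time': ['00:00', '03:00', '06:00', '09:00', '12:00', '15:00', '18:00', '21:00'],
        'area': [-10, 0, -90, 360],  # North, West, South, East (as in config.ini above)
        'format': 'grib',
    },
    './mycase/era5-uvw-20151220.grib')  # illustrative target file name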

While the machine is downloading your data, you may want to determine the destinations or initial points of your target air parcels. ./input/input.csv is the default file prescribing the air parcels for the trajectory simulation; alternatively, you can point to a different file via input_parcel_file in config.ini.

The format of this file:

airp_id, init_lat, init_lon, init_h0 (hPa)

For forward trajectories, init_{lat|lon|h0} denote initial positions; for backward trajectories, they indicate ending positions. You can write this file yourself, or use the utility ./utils/take_box_grid.py, which will help you seed air parcels over a rectangular domain.
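
If you would rather generate the parcel list yourself, here is a small sketch that writes an input.csv for a regular box of seed points. It is only a stand-in for take_box_grid.py; the box bounds, grid spacing, starting level, and the absence of a header line are illustrative choices.

import csv
import numpy as np

lats = np.arange(25.0, 40.0 + 0.5, 0.5)   # box latitudes, deg (illustrative)
lons = np.arange(75.0, 105.0 + 0.5, 0.5)  # box longitudes, deg (illustrative)
h0 = 500.0                                 # starting pressure level, hPa

with open('./input/input.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    pid = 0
    for lat in lats:
        for lon in lons:
            # airp_id, init_lat, init_lon, init_h0 (hPa)
            writer.writerow([pid, f'{lat:.2f}', f'{lon:.2f}', f'{h0:.1f}'])
            pid += 1

Check the test-case ./input/input.csv to confirm whether a header line is expected and match its layout.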

Please also set the other sections in config.ini accordingly; these air parcels are now waiting for your command python3 run.py to travel the world!

In addition, ./utils/control_multi_run.py will help you run multiple series of simulations. There are also some post-processing scripts for visualization in post_process; you may need to modify them to fit your own visualization needs.
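
As a starting point for your own figures, the sketch below plots every traced parcel path on a map. It assumes matplotlib and cartopy are available and reuses the test-case output file name; the scripts in post_process may take a different approach.

import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ds = xr.open_dataset('./output/testcase.I20150802120000.E20150801120000.nc')

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
# One faint line per parcel: each column of xlon/xlat is an individual track.
ax.plot(ds['xlon'].values, ds['xlat'].values,
        color='tab:blue', linewidth=0.3, alpha=0.4,
        transform=ccrs.PlateCarree())
plt.savefig('trajectories.png', dpi=200)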

Repository Structure

run.py

./run.py: Main script to run Easy-ERA5-Trck.

conf

  • ./conf/config.ini: Configuration file for the model. You may set the ERA5 input files, input frequency, integration time steps, and other settings in this file.
  • ./conf/config_sys.ini: Configuration file for the system, generated by running config.py.
  • ./conf/logging_config.ini: Configuration file for the logging module.

core

  • ./core/lagrange.py: Core module for calculating the air parcels' Lagrangian trajectories.

lib

  • ./lib/cfgparser.py: Module containing the read/write methods for config.ini.
  • ./lib/air_parcel.py: Module containing the definition of the air parcel class and related methods such as march and output.
  • ./lib/preprocess_era5inp.py: Module that defines the field_hdl class, which holds the required field data (U, V, W, ...) and related methods, including ERA5 GRIB file I/O operations.
  • ./lib/utils.py: Utility functions for the model.

post_process

Some visualization scripts.

utils

Utilities for downloading data, generating input.csv, etc.

Version iteration

Oct 28, 2020

  • Fundamental pipeline design, multiprocessing, and I/O.
  • MVP v0.01

May 31, 2021

  • Major revision: logging module and exception handling
  • Test case
  • Major documentation update
  • Utility for data downloading
  • Utility for taking grids in a box
  • Basic functions done, v0.10

Jun 09, 2021

  • Automatic detection of the longitude range was added, allowing users to adopt either of two longitude conventions: [-180°, 180°] or [0°, 360°].
  • Currently, if you want to use data in the [-180°, 180°] convention, you can only set ntasks = 1 in the config.ini file.
Releases

  • v0.10-beta (Jun 2, 2021)

    This is a pre-release of Easy-ERA5-Trck. In this v0.10-beta pre-release, we establish the basic functions for forward/backward tracing of air parcels in massive numbers, exploiting Python multiprocessing. You can use the tracing output for visualization and for analysis that does not require very high precision/accuracy. Boundary checking has not been implemented yet, and exception handling is still under development, with no promise to cover your exceptional cases.
