Physics-informed Neural Operator for Learning Partial Differential Equations

Overview

PINO diagram (figure)

Results on the Navier Stokes equation (figure)

Physics-informed Neural Operator for Learning Partial Differential Equations

Abstract: Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: solution function approximation and operator learning. The Physics-Informed Neural Network (PINN) is an example of the former while the Fourier neural operator (FNO) is an example of the latter. Both these approaches have shortcomings. The optimization in PINN is challenging and prone to failure, especially on multi-scale dynamic systems. FNO does not suffer from this optimization issue since it carries out supervised learning on a given dataset, but obtaining such data may be too expensive or infeasible. In this work, we propose the physics-informed neural operator (PINO), where we combine the operator-learning and function-optimization frameworks, and this improves convergence rates and accuracy over both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of the parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for the querying instance of the PDE. Experiments show PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed-up of FNO compared to solvers. In particular, PINO accurately solves long temporal transient flows and Kolmogorov flows, while PINN and other methods fail to converge.
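
The two phases can be pictured with a short PyTorch sketch. The first phase is ordinary supervised operator training on many PDE instances; the second fine-tunes the pre-trained operator on a single query instance using only an equation-residual loss. The operator model, the toy finite-difference Burgers residual, and all names below are illustrative assumptions, not the repository's actual API.

    import torch

    # Toy finite-difference residual of the 1D Burgers equation
    # u_t + u u_x = nu u_xx on a periodic (T, X) grid.
    def burgers_residual(u, nu=0.01, dt=1.0 / 100, dx=1.0 / 128):
        u_t = (u[1:, :] - u[:-1, :]) / dt
        u_x = (torch.roll(u, -1, dims=1) - torch.roll(u, 1, dims=1)) / (2 * dx)
        u_xx = (torch.roll(u, -1, dims=1) - 2 * u + torch.roll(u, 1, dims=1)) / dx ** 2
        return u_t + (u * u_x - nu * u_xx)[:-1, :]

    # Test-time optimization: fine-tune a pre-trained operator on one query
    # instance by minimizing the PDE residual alone, so no solution data is
    # needed for that instance.
    def test_time_optimize(operator, a_query, steps=500, lr=1e-3):
        opt = torch.optim.Adam(operator.parameters(), lr=lr)
        for _ in range(steps):
            u = operator(a_query)  # predicted solution on the space-time grid
            loss = burgers_residual(u).pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return operator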

Requirements

  • PyTorch 1.8.0 or later
  • wandb
  • tqdm
  • scipy
  • h5py
  • numpy
  • DeepXDE (latest)
  • TensorFlow 2.4.0
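
One way to install these packages, assuming pip (pick the PyTorch wheel matching your CUDA setup), is

python3 -m pip install torch wandb tqdm scipy h5py numpy deepxde tensorflow==2.4.0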

Data description

Burgers equation

  • Data file: burgers_pino.mat

Darcy flow

  • spatial domain: $x\in (0,1)^2$
  • Data file: piececonst_r421_N1024_smooth1.mat, piececonst_r421_N1024_smooth2.mat
  • Raw data shape: 1024x421x421

Long roll out of Navier Stokes equation

  • spatial domain: $x\in (0, 1)^2$
  • temporal domain: $t\in [0, 49]$
  • forcing: $0.1(\sin(2\pi(x_1+x_2)) + \cos(2\pi(x_1+x_2)))$
  • viscosity = 0.001

Data file: nv_V1e-3_N5000_T50.mat, with shape 50 x 64 x 64 x 5000

  • train set: instances 0-4799
  • test set: instances 4799-4999
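
A minimal loading sketch for the .mat files above (burgers_pino.mat, piececonst_r421_N1024_smooth*.mat, nv_V1e-3_N5000_T50.mat); the path below is illustrative. Older MATLAB files open with scipy.io, while MATLAB v7.3 files need h5py, so the helper tries both.

    import numpy as np
    import scipy.io
    import h5py

    def load_mat(path):
        try:
            data = scipy.io.loadmat(path)
            return {k: np.asarray(v) for k, v in data.items() if not k.startswith('__')}
        except NotImplementedError:
            # scipy cannot read MATLAB v7.3 (HDF5) files; fall back to h5py
            with h5py.File(path, 'r') as f:
                return {k: np.asarray(v) for k, v in f.items() if isinstance(v, h5py.Dataset)}

    fields = load_mat('nv_V1e-3_N5000_T50.mat')
    for name, arr in fields.items():
        print(name, arr.shape)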

Navier Stokes with Reynolds number 500

  • spatial domain: $x\in (0, 2\pi)^2$
  • temporal domain: $t \in [0, 0.5]$
  • forcing: $-4\cos(4x_2)$
  • Reynolds number: 500

Train set: data of shape (N, T, X, Y), where N is the number of instances, T is the temporal resolution, and X, Y are the spatial resolutions.

  1. NS_fft_Re500_T4000.npy: 4000x64x64x65
  2. NS_fine_Re500_T128_part0.npy: 100x129x128x128
  3. NS_fine_Re500_T128_part1.npy: 100x129x128x128

Test set: data of shape (N, T, X, Y), where N is the number of instances, T is the temporal resolution, and X, Y are the spatial resolutions.

  1. NS_Re500_s256_T100_test.npy: 100x129x256x256
  2. NS_fine_Re500_T128_part2.npy: 100x129x128x128
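
A quick shape check for the .npy files listed above (the paths assume the files sit in the working directory); memory-mapping avoids reading several gigabytes at once.

    import numpy as np

    for path in ['NS_fft_Re500_T4000.npy',
                 'NS_fine_Re500_T128_part0.npy',
                 'NS_Re500_s256_T100_test.npy']:
        u = np.load(path, mmap_mode='r')  # memory-map instead of loading the full array
        print(path, u.shape)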

Configuration file format: see the .yaml files under the configs folder for details.
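
To inspect a configuration before launching a run, a small sketch like the following works (assuming PyYAML is installed; the exact keys depend on the chosen file):

    import yaml

    with open('configs/pretrain/burgers-pretrain.yaml') as f:
        config = yaml.safe_load(f)
    print(config)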

Code for Burgers equation

Train PINO

To train PINO on the Burgers equation, use, e.g.,

python3 train_burgers.py --config_path configs/pretrain/burgers-pretrain.yaml --mode train

To test PINO on the Burgers equation, use, e.g.,

python3 train_burgers.py --config_path configs/test/burgers.yaml --mode test

Code for Darcy Flow

Operator learning

To train the PINO operator for Darcy flow, use, e.g.,

python3 train_operator.py --config_path configs/pretrain/Darcy-pretrain.yaml

To evaluate the trained operator for Darcy flow, use, e.g.,

python3 eval_operator.py --config_path configs/test/darcy.yaml

Test-time optimization

To run test-time optimization for Darcy flow, use, e.g.,

python3 run_pino2d.py --config_path configs/finetune/Darcy-finetune.yaml --start [starting index] --stop [stopping index]

Baseline

To train the DeepONet baseline, use, e.g.,

python3 deeponet.py --config_path configs/pretrain/Darcy-pretrain-deeponet.yaml --mode train 

To test DeepONet, use, e.g.,

python3 deeponet.py --config_path configs/test/darcy.yaml --mode test

Code for Navier Stokes equation

Train PINO for short time period

To run operator learning, use, e.g.,

python3 train_operator.py --config_path configs/pretrain/Re500-pretrain-05s-4C0.yaml

To evaluate the trained operator, use

python3 eval_operator.py --config_path configs/test/Re500-05s.yaml

To run test-time optimization, use

python3 train_PINO3d.py --config_path configs/***.yaml 

To run test-time optimization on multiple Navier Stokes instances sequentially, without invoking train_PINO3d.py once per instance, use

python3 run_pino3d.py --config_path configs/[configuration file name].yaml --start [starting index] --stop [stopping index]

Baseline for short time period

To train DeepONet, use

python3 deeponet.py --config_path configs/[configuration file].yaml --mode train

To test DeepONet, use

python3 deeponet.py --config_path configs/[configuration file].yaml --mode test

To train and test PINNs, use, e.g.,

python3 nsfnet.py --config_path configs/Re500-pinns-05s.yaml --start [starting index] --stop [stopping index]

Baseline for long roll out

To train and test PINNs, use

python3 nsfnet.py --config_path configs/scratch/NS-50s.yaml --long --start [starting index] --stop [stopping index]

Pseudospectral solver for Navier Stokes equation

To run solver, use

python3 run_solver.py --config_path configs/Re500-0.5s.yaml