Efficient and Scalable Physics-Informed Deep Learning and Scientific Machine Learning on top of TensorFlow for multi-worker distributed computing

Overview



Notice: Support for Python 3.6 will be dropped in v0.2.1; please plan accordingly!

Efficient and Scalable Physics-Informed Deep Learning

Collocation-based PINN PDE solvers for prediction and discovery methods on top of TensorFlow 2.x for multi-worker distributed computing.

Use TensorDiffEq if you require:

  • A meshless PINN solver that can distribute over multiple workers (GPUs) for forward problems (inference) and inverse problems (discovery)
  • Scalable domains - Iterated solver construction allows for N-D spatio-temporal support
    • support for N-D spatial domains with no time element is included
  • Self-Adaptive Collocation methods for forward and inverse PINNs
  • Intuitive user interface allowing for explicit definitions of variable domains, boundary conditions, initial conditions, and strong-form PDEs

What makes TensorDiffEq different?

  • Completely open-source

  • Self-Adaptive Solvers for forward and inverse problems, leading to increased accuracy of the solution and stability in training, resulting in less overall training time

  • Multi-GPU distributed training for large or fine-grain spatio-temporal domains

  • Built on top of TensorFlow 2.x for increased support of new functionality exclusive to recent TF releases, such as XLA support, autograph for efficient graph-building, and grappler support for graph optimization* - with no chance of the source code being sunset in a future TensorFlow release

  • Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain English" (see the sketch below)

*In development
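
As an illustration of the interface, here is a minimal sketch of a 1D Burgers setup. It is adapted from the 2D examples in the issues below; the 1D specialization and the -sin(pi*x) initial condition follow the classic Burgers benchmark, and exact signatures may vary between releases:

    import math
    import numpy as np
    import tensorflow as tf
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND

    # Define a 1D spatio-temporal domain and sample collocation points
    Domain = DomainND(["x", "t"], time_var='t')
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    Domain.generate_collocation_points(10000)

    # Initial and boundary conditions, stated explicitly
    init = IC(Domain, [lambda x: -np.sin(math.pi * x)], var=[['x']])
    upper = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower = dirichletBC(Domain, val=0.0, var='x', target="lower")

    # Strong-form PDE residual: u_t + u*u_x - (0.01/pi)*u_xx = 0
    def f_model(u_model, x, t):
        u = u_model(tf.concat([x, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_t = tf.gradients(u, t)
        return u_t + u * u_x - (0.01 / tf.constant(math.pi)) * u_xx

    # Two inputs (x, t), one output u
    model = CollocationSolverND()
    model.compile([2, 20, 20, 20, 1], f_model, Domain, [init, upper, lower])
    model.fit(tf_iter=10000)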

If you use TensorDiffEq in your work, please cite it via:

@article{mcclenny2021tensordiffeq,
  title={TensorDiffEq: Scalable Multi-GPU Forward and Inverse Solvers for Physics Informed Neural Networks},
  author={McClenny, Levi D and Haile, Mulugeta A and Braga-Neto, Ulisses M},
  journal={arXiv preprint arXiv:2103.16034},
  year={2021}
}

Thanks to our additional contributors:

@marcelodallaqua, @ragusa, @emiliocoutinho

Comments
  • Latest version of package

    The examples in the docs use the latest code from the master branch, but the library on PyPI is still the version from May. Could you build the library and update the version on PyPI?

    opened by devzhk 5
  • ADAM training on batches

    Would it be possible to define a batch size that is applied to the calculation of the residual loss function, splitting the collocation points into batches during training?
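
    A sketch of what that might look like (hypothetical, not the current TensorDiffEq API; the collocation points, network, and loss below are stand-ins):

    import numpy as np
    import tensorflow as tf

    # Stand-in collocation points and network, for illustration only
    X_f = np.random.uniform(-1.0, 1.0, (10000, 2)).astype(np.float32)
    u_model = tf.keras.Sequential([
        tf.keras.layers.Dense(20, activation="tanh", input_shape=(2,)),
        tf.keras.layers.Dense(1),
    ])
    optimizer = tf.keras.optimizers.Adam()

    def residual_loss(X_batch):
        # Stand-in for the PDE residual loss; a real f_model would go here
        return tf.reduce_mean(tf.square(u_model(X_batch)))

    # Shuffle and batch the collocation points so the residual loss is
    # evaluated on one mini-batch per optimizer step
    dataset = tf.data.Dataset.from_tensor_slices(X_f).shuffle(len(X_f)).batch(1024)
    for epoch in range(10):
        for X_batch in dataset:
            with tf.GradientTape() as tape:
                loss = residual_loss(X_batch)
            grads = tape.gradient(loss, u_model.trainable_variables)
            optimizer.apply_gradients(zip(grads, u_model.trainable_variables))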

    opened by emiliocoutinho 3
  • Pull Request using PyCharm

    Dear Levi,

    I tried to make a Pull Request on this repository using PyCharm, and I received the following message:

    Although you appear to have the correct authorization credentials, the tensordiffeq organization has enabled OAuth App access restrictions, meaning that data access to third-parties is limited. For more information on these restrictions, including how to whitelist this app, visit https://help.github.com/articles/restricting-access-to-your-organization-s-data/

    I would kindly ask you to authorize PyCharm to access your organization data to use the GUI to make future pull requests.

    Best Regards

    opened by emiliocoutinho 1
  • Update method def get_sizes of utils.py

    Fixes a bug in the method get_sizes(layer_sizes) of utils.py. The method only allowed neural nets with an identical number of nodes in each hidden layer, which caused the L-BFGS optimization to crash.
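
    For illustration, a shape-agnostic version might look like this (a hypothetical reconstruction, not the actual patch):

    def get_sizes(layer_sizes):
        # Compute per-layer weight and bias sizes from consecutive entries,
        # without assuming every hidden layer has the same width
        sizes_w = [layer_sizes[i] * layer_sizes[i + 1] for i in range(len(layer_sizes) - 1)]
        sizes_b = layer_sizes[1:]
        return sizes_w, sizes_b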

    opened by marcelodallaqua 1
  • model.save ?

    Sometimes it's useful to save the model for later use. I couldn't find a .save method, and pickle (and dill) didn't let me dump the object for later reuse (example of error with pickle: Can't pickle local object 'make_gradient_clipnorm_fn..').

    Is it currently possible to save the model? Thanks!

    opened by ragusa 1
  • add model.save and model.load_model

    Adds model.save and model.load_model to the CollocationSolverND class, ref #3

    Will be released in the next stable.

    Currently this can be done by using the Keras integration, i.e. running model.u_model.save("path/to/file"). This change will allow a direct save by calling model.save() on the CollocationSolverND class, and likewise for load_model().

    The docs will be updated to reflect this change.
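
    Until then, a minimal sketch of the workaround and the reload path (the reload step is an assumption based on the standard Keras API, not a documented TensorDiffEq feature):

    import tensorflow as tf

    # Save the underlying Keras network held by the solver
    model.u_model.save("path/to/file")

    # Later: reload the network and reattach it to a compiled solver
    reloaded = tf.keras.models.load_model("path/to/file")
    model.u_model = reloaded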

    opened by levimcclenny 0
  • 2D Burgers Equation

    Hello @levimcclenny and thanks for recommending this library!

    I have modified the 1D Burgers example to be 2D, but I did not get good comparison results. Any suggestions?

    import math
    import scipy.io
    import numpy as np
    import tensorflow as tf
    import tensordiffeq as tdq
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND
    
    Domain = DomainND(["x", "y", "t"], time_var='t')
    
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("y", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    
    N_f = 10000
    Domain.generate_collocation_points(N_f)
    
    
    def func_ic(x, y):
        p = 2
        q = 1
        return np.sin(p * math.pi * x) * np.sin(q * math.pi * y)
        
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
    
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    
    
    def f_model(u_model, x, y, t):
        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        f_u = u_t + u * (u_x + u_y) - (0.01 / tf.constant(math.pi)) * (u_xx+u_yy)
        return f_u
    
    
    layer_sizes = [3, 20, 20, 20, 20, 20, 20, 20, 20, 1]
    
    model = CollocationSolverND()
    model.compile(layer_sizes, f_model, Domain, BCs)
    
    # to reproduce results from Raissi and the SA-PINNs paper, train for 10k newton and 10k adam
    model.fit(tf_iter=10000, newton_iter=10000)
    
    model.save("burger2D_Training_Model")
    #model.load("burger2D_Training_Model")
    
    #######################################################
    #################### PLOTTING #########################
    #######################################################
    
    data = np.load('py-pde_2D_burger_data.npz')
    
    Exact = data['u_output']
    Exact_u = np.real(Exact)
    
    x = Domain.domaindict[0]['xlinspace']
    y = Domain.domaindict[1]['ylinspace']
    t = Domain.domaindict[2]["tlinspace"]
    
    X, Y, T = np.meshgrid(x, y, t)
    
    X_star = np.hstack((X.flatten()[:, None], Y.flatten()[:, None], T.flatten()[:, None]))
    u_star = Exact_u.T.flatten()[:, None]
    
    u_pred, f_u_pred = model.predict(X_star)
    
    error_u = tdq.helpers.find_L2_error(u_pred, u_star)
    print('Error u: %e' % (error_u))
    
    lb = np.array([-1.0, -1.0, 0.0])
    ub = np.array([1.0, 1.0, 1])
    
    tdq.plotting.plot_solution_domain2D(model, [x, y, t], ub=ub, lb=lb, Exact_u=Exact_u.T)
    
    
    (Three screenshots of the results were attached.)
    opened by engsbk 3
  • 2D Wave Equation

    Thank you for the great contribution!

    I'm trying to extend the 1D example problems to 2D, but I want to make sure my changes are in the correct place:

    1. Dimension variables. I changed them like so:

    Domain = DomainND(["x", "y", "t"], time_var='t')

    Domain.add("x", [0.0, 5.0], 100)
    Domain.add("y", [0.0, 5.0], 100)
    Domain.add("t", [0.0, 5.0], 100)

    2. My IC is zero, but for the BCs I'm not sure how to define the left and right borders; please let me know if my implementation is correct:
    
    def func_ic(x,y):
        return 0
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
            
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    

    All of my BCs and ICs are zero, and my equation has a time-dependent (forcing) source term, as follows:

    
    def f_model(u_model, x, y, t):
        c = tf.constant(1, dtype=tf.float32)
        Amp = tf.constant(2, dtype=tf.float32)
        freq = tf.constant(1, dtype=tf.float32)
        sigma = tf.constant(0.2, dtype=tf.float32)

        source_x = tf.constant(0.5, dtype=tf.float32)
        source_y = tf.constant(2.5, dtype=tf.float32)

        # Gaussian pulse centered at the source, modulated by a sinusoid in time
        GP = Amp * tf.exp(-0.5 * (((x - source_x) / sigma) ** 2 + ((y - source_y) / sigma) ** 2))
        S = GP * tf.sin(2 * tf.constant(math.pi) * freq * t)

        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        u_tt = tf.gradients(u_t, t)

        f_u = u_xx + u_yy - (1 / c**2) * u_tt + S
        return f_u
    

    Please advise.

    Looking forward to your reply!

    opened by engsbk 13
  • Reproducibility

    Dear @levimcclenny,

    Have you considered adapting TensorDiffEq to be deterministic? As the code is currently implemented, there are two sources of randomness:

    • The function Domain.generate_collocation_points uses random number generation
    • The TensorFlow training procedure (weight initialization and the possible use of random batches)

    Both sources of randomness can be addressed with little effort. For the first, we can define a random state that is passed to the function Domain.generate_collocation_points. For the second, we can use the implementation provided in Framework Determinism; I have used the procedures suggested by that code, and the TensorFlow results are always reproducible (CPU or GPU, serial or distributed).
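
    For the seeding part, a minimal sketch (the seed argument to generate_collocation_points is hypothetical, and TF_DETERMINISTIC_OPS follows the Framework Determinism recommendations; Domain and N_f are as in the examples above):

    import os
    import random
    import numpy as np
    import tensorflow as tf

    SEED = 42
    os.environ["TF_DETERMINISTIC_OPS"] = "1"  # request deterministic GPU kernels

    # Seed every RNG the training procedure touches
    random.seed(SEED)
    np.random.seed(SEED)
    tf.random.set_seed(SEED)

    # Hypothetical API: pass a random state into collocation-point sampling
    Domain.generate_collocation_points(N_f, seed=SEED)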

    If you want, I can implement these two features.

    Best Regards

    opened by emiliocoutinho 3