Code for sound field predictions in domains with Neumann and impedance boundaries. Used for generating the results from the paper cited in the overview below.

Overview

Authors: N. Borrel-Jensen, A. P. Engsig-Karup, and C. Jeong

Code for sound field predictions in domains with Neumann and impedance boundaries. Used for generating results from the paper "Physics-informed neural networks for 1D sound field predictions with parameterized sources and impedance boundaries" by N. Borrel-Jensen, A. P. Engsig-Karup, and C. Jeong.

Run

Train

Run

python3 main_train.py --path_settings="path/to/script.json"

Settings scripts for models with Neumann, frequency-independent, and frequency-dependent impedance boundaries can be found in scripts/settings (see JSON settings).
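
For example, to train the Neumann model with the provided settings (assuming the command is run from the repository root):

python3 main_train.py --path_settings="scripts/settings/neumann.json"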

Evaluate

Run

python3 main_evaluate.py

The settings are

id_dir = <unique id>
settings_filename = 'settings.json'
base_dir = "path/to/base/dir"

do_plots_for_paper = <bool>
do_animations = <bool>
do_side_by_side_plot = <bool>

The id_dir is the name of the output directory generated after training; settings_filename is the name of the settings file used for training (located inside the id_dir directory); base_dir is the path to the base directory (see Input/output directory structure).
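
For example, to evaluate the Neumann model trained with scripts/settings/neumann.json, the settings might look as follows (this assumes the generated output directory is named after the id in that settings file; use the actual directory name produced on your machine):

id_dir = "neumann_srcs3_sine_3_256_7sources_loss02"
settings_filename = 'settings.json'
base_dir = "../data/pinn"

do_plots_for_paper = False
do_animations = True
do_side_by_side_plot = True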

Evaluate model execution time

To evaluate the execution time of the surrogate model, run

python3 main_evaluate_timings.py --path_settings="path/to/script.json" --trained_model_tag="trained-model-dir"

The trained_model_tag is the name of the directory containing the trained model weights, produced by training with the settings script given in path_settings.
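
For example (hypothetical directory name; it must match a folder under trained_models/ in the base directory):

python3 main_evaluate_timings.py --path_settings="scripts/settings/neumann.json" --trained_model_tag="neumann_srcs3_sine_3_256_7sources_loss02"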

Settings

Input/output directory structure

The input data should be organized in the following relative directory structure (the data used for the paper can be downloaded here)

base_path/
    trained_models/
        trained_model_tag/
            checkpoint
            cp.ckpt.data-00000-of-00001
            cp.ckpt.index
    training_data/
        freq_dep_1D_2000.00Hz_sigma0.2_c1_d0.02_srcs3.hdf5
        ...
        freq_indep_1D_2000.00Hz_sigma0.2_c1_xi5.83_srcs3.hdf5
        ...
        neumann_1D_2000.00Hz_sigma0.2_c1_srcs3.hdf5
        ...

The reference data are located inside the training_data/ directory. The data for the impedance boundaries were generated using our SEM simulator; for the Neumann boundaries, the Python script main_generate_analytical_data.py was used.
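
The internal layout of the HDF5 files is not documented here; they can be inspected with h5py before training. A minimal sketch (the file name is one of those listed above; adjust base_path accordingly):

import h5py

# List every dataset in one of the reference-data files together with its shape.
path = "base_path/training_data/neumann_1D_2000.00Hz_sigma0.2_c1_srcs3.hdf5"
with h5py.File(path, "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))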

Output result data are located inside the results folder

base_path/
    results/
        id_folder/
            figs/
            models/
                LossType.PINN/
                    checkpoint
                    cp.ckpt.data-00000-of-00001
                    cp.ckpt.index
            settings.json

The settings.json file is a copy of the settings file used for training (the file passed via the --path_settings argument). The directory LossType.PINN contains the trained model weights.
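
The checkpoint files are in the TensorFlow/Keras save_weights format, so the weights can be restored into a model with matching architecture. A minimal hypothetical sketch, assuming a plain fully connected network with inputs (x, t, source position) and the layer sizes from the JSON settings (the repository's own code defines the actual architecture):

import tensorflow as tf

# Hypothetical reconstruction: 3 hidden layers of 256 neurons with sine activation
# (activation/num_layers/num_neurons from the settings), 3 inputs, 1 output (pressure).
model = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(3,))]
    + [tf.keras.layers.Dense(256, activation=tf.sin) for _ in range(3)]
    + [tf.keras.layers.Dense(1)]
)

# Restore the weights written during training (path follows the results layout above).
model.load_weights("base_path/results/<id_folder>/models/LossType.PINN/cp.ckpt")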

JSON settings

The script scripts/settings/neumann.json was used for training the Neumann model from the paper

{
    "id": "neumann_srcs3_sine_3_256_7sources_loss02",
    "base_dir": "../data/pinn",
    
    "c": 1,
    "c_phys": 343,
    "___COMMENT_fmax___": "2000Hz*c/343 = 5.8309 for c=1, =23.3236 for c=4",
    "fmax": 5.8309,

    "tmax": 4,
    "xmin": -1,
    "xmax": 1,
    "source_pos": [-0.3,-0.2,-0.1,0.0,0.1,0.2,0.3],
    
    "sigma0": 0.2,
    "rho": 1.2,
    "ppw": 5,

    "epochs": 25000,
    "stop_loss_value": 0.0002,
    
    "boundary_type": "NEUMANN",
    "data_filename": "neumann_1D_2000.00Hz_sigma0.2_c1_srcs7.hdf5",
    
    "batch_size": 512,
    "learning_rate": 0.0001,
    "optimizer": "adam",

    "__comment0__": "NN setting for the PDE",
    "activation": "sin",
    "num_layers": 3,
    "num_neurons": 256,

    "ic_points_distr": 0.25,
    "bc_points_distr": 0.45,

    "loss_penalties": {
        "pde":1,
        "ic":20,
        "bc":1
    },

    "verbose_out": false,
    "show_plots": false
}
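
The fmax entry is the physical upper frequency (2000 Hz) rescaled by the normalized speed of sound, as the ___COMMENT_fmax___ line states. A quick check:

# Normalized maximum frequency: f_phys * c / c_phys (see the comment in the JSON).
f_phys, c_phys = 2000.0, 343.0
for c in (1, 4):
    print(c, round(f_phys * c / c_phys, 4))   # 5.8309 for c=1, 23.3236 for c=4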

The script scripts/settings/freq_indep.json was used for training the frequency-independent impedance model from the paper

{
    "id": "freq_indep_sine_3_256_7sources_loss02",
    "base_dir": "../data/pinn",

    "c": 1,
    "c_phys": 343,
    "___COMMENT_fmax___": "2000Hz*c/343 = 5.8309 for c=1, =23.3236 for c=4",
    "fmax": 5.8309,

    "tmax": 4,
    "xmin": -1,
    "xmax": 1,
    "source_pos": [-0.3,-0.2,-0.1,0.0,0.1,0.2,0.3],
    
    "sigma0": 0.2,
    "rho": 1.2,
    "ppw": 5,

    "epochs": 25000,
    "stop_loss_value": 0.0002,
    
    "batch_size": 512,
    "learning_rate": 0.0001,
    "optimizer": "adam",

    "boundary_type": "IMPEDANCE_FREQ_INDEP",
    "data_filename": "freq_indep_1D_2000.00Hz_sigma0.2_c1_xi5.83_srcs7.hdf5",

    "__comment0__": "NN setting for the PDE",
    "activation": "sin",
    "num_layers": 3,
    "num_neurons": 256,

    "impedance_data": {
        "__comment1__": "xi is the acoustic impedance ONLY for freq. indep. boundaries",
        "xi": 5.83
    },

    "ic_points_distr": 0.25,
    "bc_points_distr": 0.45,
    
    "loss_penalties": {
        "pde":1,
        "ic":20,
        "bc":1
    },

    "verbose_out": false,
    "show_plots": false
}
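
The xi value enters the boundary loss through the frequency-independent impedance condition p_t + c·ξ·∂p/∂n = 0, with ξ the normalized specific impedance and ∂p/∂n the derivative along the outward normal. Below is a minimal sketch of how such a residual could be formed with automatic differentiation; it illustrates the standard condition, not the repository's exact implementation, and it omits the source-position input that the parameterized network also takes:

import tensorflow as tf

c, xi = 1.0, 5.83   # values from freq_indep.json

def impedance_bc_residual(model, x, t, normal):
    # normal = +1.0 at x = xmax, -1.0 at x = xmin (outward normal direction).
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, t])
        p = model(tf.concat([x, t], axis=1))
    p_x = tape.gradient(p, x)
    p_t = tape.gradient(p, t)
    return p_t + c * xi * normal * p_x   # should vanish at the boundary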

The script scripts/settings/freq_dep.json was used for training the frequency-dependent impedance model from the paper

{
    "id": "freq_dep_sine_3_256_7sources_d01",
    "base_dir": "../data/pinn",

    "c": 1,
    "c_phys": 343,
    "___COMMENT_fmax___": "2000Hz*c/343 = 5.8309 for c=1, =23.3236 for c=4",
    "fmax": 5.8309,

    "tmax": 4,
    "xmin": -1,
    "xmax": 1,
    "source_pos": [-0.3,-0.2,-0.1,0.0,0.1,0.2,0.3],
    
    "sigma0": 0.2,
    "rho": 1.2,
    "ppw": 5,

    "epochs": 50000,
    "stop_loss_value": 0.0002,

    "do_transfer_learning": false,

    "boundary_type": "IMPEDANCE_FREQ_DEP",
    "data_filename": "freq_dep_1D_2000.00Hz_sigma0.2_c1_d0.10_srcs7.hdf5",
    
    "batch_size": 512,
    "learning_rate": 0.0001,
    "optimizer": "adam",

    "__comment0__": "NN setting for the PDE",
    "activation": "sin",
    "num_layers": 3,
    "num_neurons": 256,

    "__comment1__": "NN setting for the auxillary differential ODE",
    "activation_ade": "tanh",
    "num_layers_ade": 3,
    "num_neurons_ade": 20,

    "impedance_data": {
        "d": 0.1,
        "type": "IMPEDANCE_FREQ_DEP",
        "lambdas": [7.1109025021758407,205.64002739443146],
        "alpha": [6.1969460587749818],
        "beta": [-15.797795759219973],
        "Yinf": 0.76935257750377573,
        "A": [-7.7594660571346719,0.0096108036858666163],
        "B": [-0.016951521199665469],
        "C": [-2.4690553703530442]
      },

    "accumulator_factors": [10.26, 261.37, 45.88, 21.99],

    "ic_points_distr": 0.25,
    "bc_points_distr": 0.45,

    "loss_penalties": {
        "pde":1,
        "ic":20,
        "bc":1,
        "ade":[10,10,10,10]
    },

    "verbose_out": false,
    "show_plots": false
}
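
The three settings files share the same common keys and differ mainly in boundary_type and the boundary-specific entries (impedance_data, the ADE network settings, and accumulator_factors). A small hypothetical helper, not part of the repository, for sanity-checking a new settings file against that pattern (the training code may enforce different requirements):

import json

def check_settings(path):
    # Compare a settings file against the key pattern of the three examples above.
    with open(path) as f:
        s = json.load(f)
    btype = s.get("boundary_type")
    missing = []
    if btype == "IMPEDANCE_FREQ_INDEP" and "xi" not in s.get("impedance_data", {}):
        missing.append("impedance_data.xi")
    if btype == "IMPEDANCE_FREQ_DEP":
        missing += [k for k in ("impedance_data", "accumulator_factors",
                                "activation_ade", "num_layers_ade", "num_neurons_ade")
                    if k not in s]
    print(f"{path}: boundary_type={btype}, missing: {missing or 'none'}")

check_settings("scripts/settings/freq_dep.json")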

HPC (DTU)

The scripts for training the models on the GPULAB clusters at DTU are located at scripts/settings/run_*.sh.

VSCode

Launch configurations for VS Code are located inside .vscode. To run the settings script local_train.json in debug mode, select the Python: TRAIN configuration (open pinn-acoustics.code-workspace to enable the workspace).

License

See LICENSE

Owner
DTU Acoustic Technology Group