Thermal Control of Laser Powder Bed Fusion using Deep Reinforcement Learning

Overview

This repository is the implementation of the paper "Thermal Control of Laser Powder Bed Fusion Using Deep Reinforcement Learning", linked here. The project uses the deep reinforcement learning library stable-baselines3 to derive a control policy that maximizes melt pool depth consistency.

Simulation Framework

The Repeated Usage of Stored Line Solutions (RUSLS) method proposed by Wolfer et al. is used to simulate the temperature dynamics in this work. More detail can be found in the following paper:

  • Fast solution strategy for transient heat conduction for arbitrary scan paths in additive manufacturing, Additive Manufacturing, Volume 30, 2019 (link)

Prerequisites

The following packages are required in order to run the associated code:

  • gym==0.17.3
  • torch==1.5.0
  • stable_baselines3==0.7.0
  • numba==0.50.1

These packages can be installed independently, or all at once by running pip install -r requirements.txt. We recommend installing these packages in a new conda environment to avoid clashes with existing package installations. Instructions on defining a new conda environment can be found here.
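
For example, a fresh environment can be created and the dependencies installed as follows (the environment name drl_lpbf and the Python version are arbitrary choices, not requirements of this repository):

conda create -n drl_lpbf python=3.7
conda activate drl_lpbf
pip install -r requirements.txt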

Usage

The overall workflow for this project first defines a gym environment based on the desired scan path, then performs Proximal Policy Optimization to derive a suitable control policy for that environment. This is done through the following files:

Overview

  • EagarTsaiModel.py: implements the RUSLS solution to the Rosenthal equation, as proposed by Wolfer et al.
  • power_square_gym.py, power_triangle_gym.py, velocity_square_gym.py, velocity_triangle_gym.py: define custom gym environments for the respective scan paths and control variables. square is used as shorthand for the predefined horizontal cross-hatching path, and triangle is used as shorthand for the predefined concentric triangular path.
  • RL_learn_square.py, RL_learn_triangle.py: perform Proximal Policy Optimization on the respective scan paths, with command line arguments to select which control parameter is varied (a minimal training sketch follows this list).
  • evaluate_learned_policy.py: runs a derived control policy on a specific environment. The environment is specified using command line arguments detailed below.
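
For orientation, a minimal sketch of how these pieces fit together with stable-baselines3 is shown below. The environment class name and timestep budget are assumptions for illustration only; the actual names and settings live in the RL_learn_*.py scripts.

# Minimal sketch, assuming power_square_gym.py exposes an environment class
# named PowerSquareEnv (hypothetical name; check the script for the real one).
from stable_baselines3 import PPO
from power_square_gym import PowerSquareEnv  # hypothetical import

env = PowerSquareEnv()                    # custom gym environment for the cross-hatching path
model = PPO("MlpPolicy", env, verbose=1)  # PPO with a feedforward policy network
model.learn(total_timesteps=100000)       # illustrative budget; the scripts set their own
model.save("trained_models/ppo_square_power")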

Testing a trained model

To test a trained model on a specific combination of scan path and control parameter, enter this command:

python evaluate_learned_policy.py --path [scan_path] --param [parameter]

Note: [scan_path] should be replaced by square for the horizontal cross-hatching scan path and triangle for the concentric triangular path. [parameter] should be replaced by power or velocity to select the corresponding control parameter.
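
For example, to evaluate a policy that controls laser power on the concentric triangular path:

python evaluate_learned_policy.py --path triangle --param power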

Upon running this command, you will be prompted to enter the path to the .zip file for the trained model.

Once the evaluation is complete, the results are stored in the folder results/[scan_path]_[parameter]_control/. This folder will contain plots of the variation of the melt depth and control parameters over time, as well as their raw values for later analysis.

Pre-trained models for each of the four possible combinations of scan path and control parameter can be found in pretrained_models.

Training a new model

In order to train a new model based on the predefined horizontal cross-hatching scan path, enter the command:

python RL_learn_square.py --param [parameter]

Here, [parameter] should be replaced by the control parameter desired. The possible options are power and velocity.

The process is similar for the predefined concentric triangular scan path. To train a new model, enter the command:

python RL_learn_triangle.py --param [parameter]

Again, [parameter] should be replaced by the control parameter desired. The possible options are power and velocity.
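
For example, to train a model that controls scan velocity on the concentric triangular path:

python RL_learn_triangle.py --param velocity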

During training, intermediate model checkpoints will be saved at

training_checkpoints/ppo_[scan_path]_[parameter]/best_model.zip

At the conclusion of training, the finished model is stored at

trained_models/ppo_[scan_path]_[parameter].zip

Defining a custom domain

Changing the powder bed features

In order to define a custom domain for a different problem configuration, the EagarTsaiModel.py file should be edited directly. Within the EagarTsai() class instantiation, the thermodynamic properties and domain dimensions can be specified. Additionally, the resolution and boundary conditions can be provided as arguments to the EagarTsai class: bc = 'flux' and bc = 'temp' implement an adiabatic and a constant-temperature boundary condition, respectively.
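
As a rough illustration, instantiating the solver with a custom resolution and an adiabatic boundary condition might look like the sketch below. Only the bc values are taken from the description above; the keyword name resolution and its value are placeholders, so the actual signature in EagarTsaiModel.py should be checked.

# Sketch only: argument names other than bc are assumptions and may differ
# from the actual EagarTsai() signature in EagarTsaiModel.py.
from EagarTsaiModel import EagarTsai

solver = EagarTsai(resolution=20e-6,  # hypothetical grid spacing
                   bc='flux')         # 'flux' = adiabatic, 'temp' = constant temperature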

Changing the scan path

A new scan path can be defined by creating a custom gym environment with a step() function that implements the desired scan path, similar to the [parameter]_[scan_path]_gym.py scripts in this repository. This function should account both for how the laser moves within a single segment and for where each segment is placed within the overall path. More detail on the gym framework for defining custom environments can be found here.
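
A bare-bones skeleton of such an environment is sketched below. The class name, observation and action spaces, and reward are placeholders rather than the repository's actual definitions; the real environments also wrap the RUSLS thermal solver.

import gym
import numpy as np
from gym import spaces

class CustomScanPathEnv(gym.Env):
    # Skeleton only: spaces, observation, and reward are placeholders.
    def __init__(self):
        super().__init__()
        # One normalized control input (e.g. power or velocity) and a scalar
        # observation (e.g. a measure of the current melt pool depth).
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)

    def step(self, action):
        # 1) Map the action to the control parameter for the next scan segment.
        # 2) Advance the laser along the segment and place the segment within the overall path.
        # 3) Compute the reward from melt pool depth consistency.
        obs = np.zeros(1, dtype=np.float32)  # placeholder observation
        reward = 0.0                         # placeholder reward
        done = False                         # True once the scan path is complete
        return obs, reward, done, {}

    def reset(self):
        # Reset the thermal field and laser position to the start of the scan path.
        return np.zeros(1, dtype=np.float32)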

Monitoring the training process with TensorBoard

TensorBoard provides resources for monitoring various metrics of the PPO training process, and can be installed using pip install tensorboard. To open the TensorBoard dashboard, enter the command:

tensorboard --logdir ./tensorboard_logs/ppo_[scan_path]_[parameter]/ppo_[scan_path]_[parameter]_[run_ID]

TensorBoard log files are saved periodically during training and record the cumulative reward as well as various loss metrics.
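
For reference, stable-baselines3 writes these logs when the PPO constructor is given a tensorboard_log directory, roughly as sketched below; the directory, run name, and environment class are assumptions chosen to match the paths used in this repository.

from stable_baselines3 import PPO
from power_square_gym import PowerSquareEnv  # hypothetical import, as in the earlier sketch

env = PowerSquareEnv()
model = PPO("MlpPolicy", env,
            tensorboard_log="./tensorboard_logs/ppo_square_power")  # log directory assumed from the paths above
model.learn(total_timesteps=100000, tb_log_name="ppo_square_power")  # run folders are suffixed with a run ID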
