Image Segmentation with U-Net Algorithm on Carvana Dataset using AWS Sagemaker

Overview

This is a complete image segmentation project built with the U-Net architecture on the Carvana competition dataset from Kaggle, trained on AWS SageMaker as Udacity's ML Nanodegree capstone project.

Image Segmentation with U-Net Algorithm

We use AWS SageMaker to train a model built with the U-Net algorithm/architecture that performs image segmentation on the Carvana dataset from the Kaggle competition.

Project Set Up and Installation

Enter AWS through the gateway and create a SageMaker notebook instance of your choice. ml.t2.medium is a sweet spot for this project, since we will not use a GPU in the notebook itself and will train the model in a SageMaker training container. Wait for the instance to launch, then create a Jupyter notebook with the conda_pytorch_latest_p36 kernel, which comes preinstalled with the PyTorch modules we will use throughout the project. Finally, set up your SageMaker role and region.
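
Once the notebook is running, the role, region, and a working bucket can be picked up with the SageMaker Python SDK. A minimal setup sketch:

```python
# Minimal setup sketch: grab the session, execution role, region, and a bucket
# inside the SageMaker notebook.
import sagemaker

session = sagemaker.Session()
role = sagemaker.get_execution_role()
region = session.boto_region_name
bucket = session.default_bucket()  # or substitute your own S3 bucket name

print(f"Role: {role}\nRegion: {region}\nBucket: {bucket}")
```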

Dataset

We use the Carvana dataset from the Kaggle competition as the training data for the model. To get the dataset, register or log in to your Kaggle account, create a new API token in your user settings, and put the downloaded API key file (kaggle.json) in the root of your SageMaker environment. After that, !kaggle competitions download carvana-image-masking-challenge -f train.zip and !kaggle competitions download carvana-image-masking-challenge -f train_masks.zip will download the necessary files to your notebook environment. We then unzip the data and upload it to an S3 bucket with the !aws s3 sync command.
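
The notebook cells for this step look roughly like the following sketch; the unzip target folders and the bucket name are placeholders, not the exact paths used in the project.

```python
# Sketch of the notebook cells for fetching and staging the data.
# Assumes kaggle.json is already in place so the Kaggle CLI can authenticate;
# <your-bucket> is a placeholder.
!kaggle competitions download carvana-image-masking-challenge -f train.zip
!kaggle competitions download carvana-image-masking-challenge -f train_masks.zip
!unzip -q train.zip -d train
!unzip -q train_masks.zip -d train_masks
!aws s3 sync train s3://<your-bucket>/carvana/train/
!aws s3 sync train_masks s3://<your-bucket>/carvana/train_masks/
```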

Script Files used

  1. hpo.py is used for the hyperparameter tuning jobs, where we train the model multiple times with different hyperparameters and search for the best combination based on the loss metric.
  2. training.py is used for the final training of the model with the best hyperparameters obtained from the previous tuning job; it also registers debugger and profiler hooks so we can debug the job and capture the tensors emitted during training.
  3. inference.py is used to serve the trained model for inference; it pre-processes and serializes the data before it is passed to the model for segmentation, so the endpoint can be used conveniently. A minimal sketch of the handler interface this script implements is shown after this list.
  4. Note: at the time of writing, the SageMaker endpoint returns an error and cannot make predictions, so I created a new SageMaker instance (ml.g4dn.xlarge, to utilize its GPU) and used the endpoint_local.ipynb notebook to get the inference results.
  5. requirements.txt is used to install the dependencies in the training container; these include Albumentations and a newer version of the torch packages used by the training scripts.
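
For reference, the SageMaker PyTorch serving stack expects inference.py to expose handler functions such as model_fn, input_fn, and predict_fn. Below is a minimal sketch of that interface; the input resolution, the preprocessing, and the saved-model format are assumptions that would need to match the training script.

```python
# Minimal sketch of the handler interface an inference.py for the SageMaker
# PyTorch serving container implements. Resolution, preprocessing, and the
# saved-model format are assumptions.
import io

import torch
from PIL import Image
from torchvision import transforms


def model_fn(model_dir):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Assumption: the full model object was saved as model.pth in the model dir.
    model = torch.load(f"{model_dir}/model.pth", map_location=device)
    model.eval()
    return model


def input_fn(request_body, content_type):
    # Decode the raw image bytes and apply the same transforms as training.
    image = Image.open(io.BytesIO(request_body)).convert("RGB")
    preprocess = transforms.Compose([
        transforms.Resize((160, 240)),  # assumed training resolution
        transforms.ToTensor(),
    ])
    return preprocess(image).unsqueeze(0)


def predict_fn(input_data, model):
    device = next(model.parameters()).device
    with torch.no_grad():
        logits = model(input_data.to(device))
    # Threshold the sigmoid output to get a binary car/background mask.
    return (torch.sigmoid(logits) > 0.5).float().cpu()
```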

Hyperparameter Tuning

I used the U-Net algorithm to create the image segmentation model. The hyperparameter search spaces are the learning rate, the number of epochs, and the batch size. Note: batch sizes of 128 and above cannot be used, as the GPU may run out of memory during training. We deploy a hyperparameter tuning job on SageMaker and wait for the combination of hyperparameters with the best metric to emerge.
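
A hedged sketch of how such a tuning job can be launched with the SageMaker Python SDK is shown below; the search ranges, the metric regex, the framework version, and the S3 path are assumptions rather than the exact values used in the project.

```python
# Sketch of launching the hyperparameter tuning job with the SageMaker SDK.
# Ranges, metric regex, framework version, and S3 paths are assumptions.
from sagemaker.pytorch import PyTorch
from sagemaker.tuner import (
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
)

estimator = PyTorch(
    entry_point="hpo.py",
    role=role,                      # from sagemaker.get_execution_role()
    framework_version="1.9",
    py_version="py38",
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

hyperparameter_ranges = {
    "lr": ContinuousParameter(1e-4, 1e-1),
    "epochs": CategoricalParameter([2, 4, 6]),
    "batch-size": CategoricalParameter([16, 32, 64]),  # keep below 128 to avoid GPU OOM
}

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="average test loss",
    objective_type="Minimize",
    metric_definitions=[
        {"Name": "average test loss", "Regex": "Test set: Average loss: ([0-9\\.]+)"},
    ],
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=4,
    max_parallel_jobs=2,
)

tuner.fit({"training": "s3://<your-bucket>/carvana/"})
```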

(Screenshot: hyperparameter tuning job)

We pick the hyperparameters from the best training job to train the final model.

(Screenshot: best job's hyperparameters)
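
The winning combination can also be pulled out of the completed tuner programmatically; a small sketch using the SageMaker SDK:

```python
# Retrieve the best training job's estimator and its hyperparameters (sketch).
best_estimator = tuner.best_estimator()
best_hyperparameters = best_estimator.hyperparameters()
print(best_hyperparameters)  # e.g. lr, epochs, batch-size of the best job
```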

Debugging and Profiling

The debugger hook is set to record the loss criterion during both the training and validation/testing phases. The plot of the Dice coefficient is shown below.

(Plot: Dice coefficient)

We can see that the validation curve is high, which means our model has entered a state of overfitting. We can reduce this by adding dropout or L1/L2 regularization, adding more varied training data, or stopping the training early before the model overfits. By adding metric definitions, I was also able to get the average accuracy and loss during the validation phase in AWS CloudWatch (a powerful tool for monitoring metrics of any kind).

(Screenshot: CloudWatch metrics)
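
A hedged sketch of how the final training job can wire up the debugger/profiler hooks and the CloudWatch metric definitions is shown below; the collection parameters and the log regexes are assumptions that must match what training.py actually prints.

```python
# Sketch: final training job with debugger/profiler hooks and CloudWatch
# metric definitions. Save intervals and regexes are assumptions.
from sagemaker.debugger import (
    CollectionConfig,
    DebuggerHookConfig,
    FrameworkProfile,
    ProfilerConfig,
)
from sagemaker.pytorch import PyTorch

hook_config = DebuggerHookConfig(
    collection_configs=[
        CollectionConfig(name="losses", parameters={"train.save_interval": "10"}),
    ]
)
profiler_config = ProfilerConfig(framework_profile_params=FrameworkProfile())

metric_definitions = [
    {"Name": "validation:loss", "Regex": "Validation Loss: ([0-9\\.]+)"},
    {"Name": "validation:accuracy", "Regex": "Validation Accuracy: ([0-9\\.]+)"},
]

estimator = PyTorch(
    entry_point="training.py",
    role=role,
    framework_version="1.9",
    py_version="py38",
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    hyperparameters=best_hyperparameters,  # from the tuning job above
    debugger_hook_config=hook_config,
    profiler_config=profiler_config,
    metric_definitions=metric_definitions,
)
estimator.fit({"training": "s3://<your-bucket>/carvana/"})
```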

Results

The results are pretty good; since I was using an ml.g4dn.xlarge instance to utilize its GPU, neither the hyperparameter tuning jobs nor the final training job took too much time.

Running Inference on Your Data

The SageMaker endpoint returned a 500 status code error, so I used another SageMaker instance with a GPU (ml.g4dn.xlarge); running the endpoint_local.ipynb notebook will get you the desired output for your images.

(Screenshot: inference result)
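
For reference, a minimal sketch of the kind of local inference loop endpoint_local.ipynb performs is shown below; the file names, input resolution, and the assumption that the full model object was saved to disk are placeholders.

```python
# Sketch of local GPU inference, mirroring what endpoint_local.ipynb does.
# File names, resolution, and saved-model format are assumptions.
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth", map_location=device)  # assumption: full model object saved
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((160, 240)),  # assumed training resolution
    transforms.ToTensor(),
])

image = Image.open("sample_car.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    mask = (torch.sigmoid(model(batch)) > 0.5).float().cpu().squeeze()

# Save the predicted binary mask as an image next to the input.
Image.fromarray((mask.numpy() * 255).astype(np.uint8)).save("predicted_mask.png")
```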

Thank You So Much For Your Time! Please don't hesitate to contribute.

Ref: GitHub repo of neirinzaralwin

Owner
Htin Aung Lu
I am a Machine Learning engineer. I like to work on various machine learning projects. I have more experience with the AWS SageMaker platform than with others.