ZF_UNET_224 Pretrained Model

Overview

A modification of the "UNET" convolutional neural network for image segmentation, implemented in the Keras framework.

Requirements

Python 3.x, Keras 2.1, TensorFlow 1.4

Usage

from zf_unet_224_model import ZF_UNET_224, dice_coef_loss, dice_coef
from keras.optimizers import Adam

# weights='generator' loads the weights pretrained on the random image generator
model = ZF_UNET_224(weights='generator')
optim = Adam()
model.compile(optimizer=optim, loss=dice_coef_loss, metrics=[dice_coef])

model.fit(...)

Notes

Pretrained weights

Download: weights for the TensorFlow backend, ~123 MB (Keras 2.1, dice coef: 0.998)

The weights were obtained with a random image generator (the generator code is available here: train_infinite_generator.py). See example images from the generator below.

Example images from the generator

Dice coefficient for the pretrained weights: ~0.998. See the training history below:

Log of the dice coefficient during the training process

Comments
  • Extended example

    Hi, I have created an extended example based on your repository: https://github.com/mrgloom/keras-semantic-segmentation-example

    It also uses random colors for the foreground and background (rather than just lighter and darker regions, as here: https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/train_infinite_generator.py#L24). One idea behind this is that with random colors the network has to learn the 'shape of the object' rather than just thresholding to separate background from foreground. It also looks like using random colors makes the problem harder, and the network converges more slowly.
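
    For reference, a minimal sketch of what such random-color sampling might look like (hypothetical NumPy helpers, not the code from either repository):

    import numpy as np

    def random_color_pair():
        # Sample two independent RGB colors so the foreground and background
        # are not constrained to a lighter/darker relationship.
        fg = np.random.randint(0, 256, size=3)
        bg = np.random.randint(0, 256, size=3)
        return fg, bg

    def colorize(mask, fg, bg):
        # mask: (H, W) binary array -> (H, W, 3) uint8 image with the
        # foreground color where mask > 0 and the background color elsewhere.
        return np.where(mask[..., None] > 0, fg, bg).astype(np.uint8)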

    I have also run into some problems:

    1. The network does not always converge on a second run with fixed params, even for this toy problem; it looks like it depends on the random seed.
    2. Dice loss and Jaccard loss are harder to train than binary cross-entropy - any ideas why? The network architecture is the same, only the loss differs. I even tried loading trained weights from the binary cross-entropy network into the dice-loss network, which then showed a high dice coef.
    opened by mrgloom 8
  • Deeper network

    I know this is not an issue, but I wanted to contact you to ask how you made the network deeper in Keras for the DSTL competition using this model.

    opened by nassarofficial 6
  • Tensorflow problem

    When I use tensorflow-1.3.0 as the backend, I get this kind of error:

    builtins.ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32 for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
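
    A shape mismatch like this ([3,3,3,32] vs. [3,3,32,3]) usually points to the Keras image data format setting; a minimal check, assuming the weights were saved from a channels_last model:

    from keras import backend as K

    # The pretrained kernels are stored as (3, 3, in_channels, filters),
    # i.e. the channels_last convention; the model must be built with the
    # same setting before load_weights() is called.
    print(K.image_data_format())          # expect 'channels_last'
    K.set_image_data_format('channels_last')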
    
    opened by lawlite19 5
  • preprocess_batch for real data

    Here is the preprocessing for the batch (looks like 256 should be 255 ;) ): https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/zf_unet_224_model.py#L27

    Is it OK for real images to use code like this, or should the statistics be calculated over the entire dataset?

    # per-batch normalization: zero mean, unit variance
    batch = batch - np.mean(batch)
    batch = batch / np.std(batch)
    

    Also, how crucial is the impact of data normalization for U-Net? In my tests, even on this simple synthetic data, the network doesn't converge if the input is not normalized.
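
    For comparison, a dataset-level variant would compute the statistics once over the training set and reuse them for every batch (a sketch; fit_normalizer is a hypothetical helper, not part of the repository):

    import numpy as np

    def fit_normalizer(train_images):
        # Compute the statistics once, over the entire training set.
        mean = float(np.mean(train_images))
        std = float(np.std(train_images))

        def preprocess(batch):
            # Apply the same fixed normalization to every batch,
            # including validation and test data.
            return (batch.astype(np.float32) - mean) / std

        return preprocess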

    opened by mrgloom 2
  • Applying pretrained weights to 128*128 size image

    You have generated pretrained weights for a 224×224 input size, but I have 128×128 images. How can we use such weights in this situation, but without padding/upsampling the 128×128 images? Sorry for the silly question - is it worth trying in the Kaggle salt competition?
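
    Since the network is fully convolutional, the kernel weights themselves do not depend on the spatial size (128 is still divisible by 2^5, so the pooling stages line up). One hedged approach, assuming a hypothetical build_unet(input_shape) that recreates the same architecture with matching layer names:

    # build_unet is a hypothetical re-creation of the ZF_UNET_224
    # architecture with a configurable input shape; the layer names must
    # match the pretrained model for by_name loading to work.
    model_128 = build_unet(input_shape=(128, 128, 3))
    model_128.load_weights('zf_unet_224.h5', by_name=True)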

    opened by Diyago 1
  • Attribute Error

    Traceback (most recent call last):
      File "train.py", line 11, in <module>
        import segmentation_models as sm
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 98, in <module>
        set_framework(_framework)
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 68, in set_framework
        import efficientnet.keras  # init custom objects
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/keras.py", line 17, in <module>
        init_keras_custom_objects()
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/__init__.py", line 71, in init_keras_custom_objects
        keras.utils.generic_utils.get_custom_objects().update(custom_objects)
    AttributeError: module 'keras.utils' has no attribute 'generic_utils'

    When I run the code I get the traceback above, but I don't know why there is no generic_utils attribute in the library, since it does exist in keras.

    opened by melih1996 0
  • How to run the model for 6 input channels?

    Is it possible to run the model with 6 input channels? Three of the inputs are RGB values and the other three are metrics I want to pass into the architecture for my use case.
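
    In principle yes: only the first convolution's kernel shape depends on the channel count, so its pretrained weights no longer fit, while all other layers are unaffected. A sketch, again assuming a hypothetical build_unet(input_shape) with layer names matching the pretrained model:

    # The first convolution now has kernel shape (3, 3, 6, 32) instead of
    # (3, 3, 3, 32), so it starts from random init; the remaining layers
    # can still pick up the pretrained weights via by_name matching
    # (skip_mismatch requires a reasonably recent Keras version).
    model_6ch = build_unet(input_shape=(224, 224, 6))
    model_6ch.load_weights('zf_unet_224.h5', by_name=True, skip_mismatch=True)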

    opened by ShreyaPandita01 2
  • dice and jaccard metrics

    Thanks for the repo. I am wondering why you use a smoothing factor of 1.0 in both the dice and Jaccard coefficients? Where does this value come from? And what about using another, smaller value close to zero, e.g. K.epsilon()?
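
    For context, the commonly used form of the smoothed dice coefficient looks roughly like this (a sketch of the usual Keras implementation; the repository's own version lives in zf_unet_224_model.py):

    from keras import backend as K

    def dice_coef(y_true, y_pred, smooth=1.0):
        # smooth keeps the ratio defined when both masks are empty (0/0)
        # and damps the loss for very small objects; 1.0 is a convention
        # rather than a derived value, and K.epsilon() would give nearly
        # unsmoothed behavior.
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)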

    opened by tinalegre 3
  • model.fit step

    Hi! I would like to know how I should perform the model.fit instruction:

    model.fit(trainSet, mask_trainSet, batch_size=20, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True, callbacks=[model_checkpoint])

    What do I write in the callback?

    And how should I use the weights if I want to use the pretrained weights?

    Thank you very much, and sorry for the inconvenience!
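
    A minimal sketch of the usual callback setup (the file name and monitored metric are just examples; trainSet and mask_trainSet are assumed to be your arrays):

    from keras.callbacks import ModelCheckpoint

    # Save the weights whenever the validation loss improves.
    model_checkpoint = ModelCheckpoint('unet_best.h5', monitor='val_loss',
                                       save_best_only=True, verbose=1)

    model.fit(trainSet, mask_trainSet,
              batch_size=20, epochs=1, verbose=1,  # epochs replaces the deprecated nb_epoch
              validation_split=0.2, shuffle=True,
              callbacks=[model_checkpoint])

    For the pretrained weights, the Usage section above shows the intended route: ZF_UNET_224(weights='generator').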

    opened by AmericaBG 7
  • How to generate img and mask correctly

    I ran your code and found that the img batch has shape (16, 224, 224, 3), but the mask batch has shape (16, 1, 224, 224). I don't understand it - can you explain it to me? I also used my own dataset to train the U-Net; the dice coef is high, but the real effect is bad.
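
    Those two shapes mix conventions: the images are channels_last while the masks are channels_first. If that is really what the generator yields, a transpose puts them in the same layout (a small NumPy sketch):

    import numpy as np

    def to_channels_last(mask_batch):
        # (batch, 1, H, W) -> (batch, H, W, 1), matching the
        # (batch, H, W, 3) channels_last layout of the image batch.
        return np.transpose(mask_batch, (0, 2, 3, 1))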

    opened by wong-way 6
Releases (v1.0)