ZF_UNET_224 Pretrained Model

Overview

Modification of the convolutional neural net "UNET" for image segmentation in the Keras framework.

Requirements

Python 3.*, Keras 2.1, TensorFlow 1.4

Usage

from zf_unet_224_model import ZF_UNET_224, dice_coef_loss, dice_coef
from keras.optimizers import Adam

# 'generator' loads the weights pretrained on the synthetic image generator
model = ZF_UNET_224(weights='generator')
optim = Adam()
model.compile(optimizer=optim, loss=dice_coef_loss, metrics=[dice_coef])

model.fit(...)
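
A minimal inference sketch may also help (an illustrative example, not from the original README; it assumes a channels_last configuration and that preprocess_batch, the helper referenced in the issues below, takes a raw image batch):

import numpy as np
from zf_unet_224_model import ZF_UNET_224, preprocess_batch

# Build the model with the pretrained "generator" weights
model = ZF_UNET_224(weights='generator')

# Placeholder batch of four 224x224 RGB images; replace with real, resized data
images = np.random.randint(0, 256, size=(4, 224, 224, 3)).astype(np.float32)
images = preprocess_batch(images)  # normalization helper from zf_unet_224_model.py

# Predicted masks; the exact output layout depends on the configured image_data_format
masks = model.predict(images, batch_size=4)
print(masks.shape)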

Notes

Pretrained weights

Download: weights for the TensorFlow backend, ~123 MB (Keras 2.1, Dice coefficient: 0.998)

Weights were obtained with a random image generator (generator code is available in train_infinite_generator.py). See examples of generated images below.

Example of images from generator

Dice coefficient for the pretrained weights: ~0.998. See the training history below:

Log of dice coefficient during training process

Comments
  • Extended example

    Hi, I have created an extended example based on your repository: https://github.com/mrgloom/keras-semantic-segmentation-example

    It also uses random colors for the foreground and background (rather than just lighter and darker shades, as in https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/train_infinite_generator.py#L24). One idea behind this is that the network then has to learn the 'shape of the object' rather than just thresholding to separate foreground from background; it also looks like random colors make the problem harder, so the network converges more slowly. A rough sketch of such a generator is shown below.
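
    For illustration, a minimal sketch of a random-color generator (hypothetical code, not from either repository):

    import numpy as np

    def gen_random_color_sample(size=224):
        # Background and foreground get independent random RGB colors
        bg = np.random.randint(0, 256, size=3)
        fg = np.random.randint(0, 256, size=3)
        img = np.zeros((size, size, 3), dtype=np.uint8)
        img[:, :] = bg
        mask = np.zeros((size, size), dtype=np.uint8)
        # A random filled rectangle stands in for the "object"
        x0, y0 = np.random.randint(0, size // 2, size=2)
        w, h = np.random.randint(10, size // 2, size=2)
        img[y0:y0 + h, x0:x0 + w] = fg
        mask[y0:y0 + h, x0:x0 + w] = 1
        return img, mask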

    I have also run into some problems:

    1. The network does not always converge on a second run with fixed parameters, even for this toy problem; it looks like it depends on the random seed.
    2. Dice loss and Jaccard loss are harder to train with than binary cross-entropy; any ideas why? The network architecture is the same and only the loss differs. I even tried loading trained weights from the binary cross-entropy network into the Dice-loss network, which then showed a high Dice coefficient. (A sketch of a combined loss follows this list.)
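
    One common workaround (a sketch, assuming binary masks and a sigmoid output; not code from this repository) is to combine binary cross-entropy with the Dice term, so that cross-entropy provides well-behaved gradients early in training while the Dice term directly optimizes overlap:

    from keras.losses import binary_crossentropy
    from zf_unet_224_model import dice_coef  # Dice coefficient from this repository

    def bce_dice_loss(y_true, y_pred):
        # BCE gives smooth gradients from the start; the Dice term rewards mask overlap
        return binary_crossentropy(y_true, y_pred) + (1.0 - dice_coef(y_true, y_pred))

    # Usage: model.compile(optimizer=Adam(), loss=bce_dice_loss, metrics=[dice_coef])
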
    opened by mrgloom 8
  • Deeper network

    I know this is not an issue, but I wanted to ask how you made the network deeper in Keras for the DSTL competition using this model?
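
    For what it's worth, a deeper U-Net usually just adds one more encoder/decoder level; below is a rough Keras sketch of a single extra level (a hypothetical helper, not the actual DSTL code; it assumes channels_last):

    from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, concatenate

    def extra_level(encoder_output, filters=512):
        # Additional downsampling step appended to the existing encoder
        pool = MaxPooling2D(pool_size=(2, 2))(encoder_output)
        deep = Conv2D(filters, (3, 3), activation='relu', padding='same')(pool)
        deep = Conv2D(filters, (3, 3), activation='relu', padding='same')(deep)
        # Matching upsampling step with a skip connection back to the encoder output
        up = concatenate([UpSampling2D(size=(2, 2))(deep), encoder_output], axis=-1)
        up = Conv2D(filters // 2, (3, 3), activation='relu', padding='same')(up)
        up = Conv2D(filters // 2, (3, 3), activation='relu', padding='same')(up)
        return up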

    opened by nassarofficial 6
  • Tensorflow problem

    When I use tensorflow-1.3.0 as the backend, I get this kind of error:

    builtins.ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32 for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
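
    One possible (unconfirmed) cause is a channels_first/channels_last mismatch between the saved weights and the local Keras configuration; a quick check, assuming the Keras 2 backend API:

    from keras import backend as K

    # A guess only: the TF-backend weights most likely expect 'channels_last'
    print(K.image_data_format())
    K.set_image_data_format('channels_last')  # set this before building the model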
    
    opened by lawlite19 5
  • preprocess_batch for real data

    Here is the preprocessing for the batch (looks like 256 should be 255 ;)): https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/zf_unet_224_model.py#L27

    Is it OK for real images to use code like this, or should the statistics be calculated over the entire dataset?

    batch = batch - np.mean(batch)
    batch = batch / np.std(batch)
    

    Also, how crucial is the impact of data normalization for U-Net? In my tests, even on this simple synthetic data the network does not converge if the input is not normalized.
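
    If per-batch statistics feel fragile, one common alternative (a sketch assuming the training set fits in memory; train_images is a placeholder name) is to compute the mean and std once over the training data and reuse them for every batch:

    import numpy as np

    # Placeholder training set; replace with the real images (float32, values 0..255)
    train_images = np.random.randint(0, 256, size=(100, 224, 224, 3)).astype(np.float32)

    # Dataset-wide statistics, computed once over the training data
    mean = train_images.mean()
    std = train_images.std()

    def normalize(batch):
        # Reuse the same statistics for train, validation and test batches
        return (batch - mean) / (std + 1e-8)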

    opened by mrgloom 2
  • Applying pretrained weights to 128*128 size image

    You have generated pretrained weights for a 224x224 input size, but I have 128x128 images. How can we use such weights in this situation without padding/upsampling the 128x128 images? Sorry for the silly question: is it worth trying in the Kaggle salt competition?
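
    Since the U-Net is fully convolutional, the weights depend only on channel counts, not on the spatial size. One option (a sketch, where build_zf_unet and the weights filename are hypothetical placeholders for a copy of the model-building code that accepts an input shape) is to rebuild the architecture at 128x128 and load the 224x224 weights by layer name:

    # Sketch: rebuild the same architecture with a 128x128 input and reuse the 224x224 weights.
    # build_zf_unet is a hypothetical copy of the ZF_UNET_224 building code that takes an
    # input shape argument, and 'weights_224.h5' is a placeholder file name.
    model_128 = build_zf_unet(input_shape=(128, 128, 3))
    model_128.load_weights('weights_224.h5', by_name=True)  # conv kernels are size-agnostic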

    opened by Diyago 1
  • Attribute Error

    Traceback (most recent call last):
      File "train.py", line 11, in <module>
        import segmentation_models as sm
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 98, in <module>
        set_framework(_framework)
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 68, in set_framework
        import efficientnet.keras  # init custom objects
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/keras.py", line 17, in <module>
        init_keras_custom_objects()
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/__init__.py", line 71, in init_keras_custom_objects
        keras.utils.generic_utils.get_custom_objects().update(custom_objects)
    AttributeError: module 'keras.utils' has no attribute 'generic_utils'

    When I run the code I get the traceback above, but I don't know why there is no generic_utils attribute in keras.utils, since it is present in the Keras package.

    opened by melih1996 0
  • How to run the model for 6 input channels?

    Is it possible to run the model with 6 input channels? Three of the channels are RGB values and the other three are metrics I want to pass into the architecture for my use case.
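
    One common pattern (a sketch, not specific to this repository; layer and variable names are hypothetical) is to rebuild the network with a 6-channel input and optionally seed the RGB slice of the first convolution from a pretrained 3-channel kernel:

    import numpy as np
    from keras.layers import Input, Conv2D
    from keras.models import Model

    # Minimal illustration: a first conv layer that accepts 6 channels
    inp = Input(shape=(224, 224, 6))
    x = Conv2D(32, (3, 3), padding='same', activation='relu', name='conv_6ch')(inp)
    model_6ch = Model(inp, x)

    # Optionally copy pretrained 3-channel filters into the RGB part of the new kernel
    # (pretrained_kernel/pretrained_bias are placeholders with shapes (3, 3, 3, 32) and (32,))
    pretrained_kernel = np.random.randn(3, 3, 3, 32).astype(np.float32)
    pretrained_bias = np.zeros(32, dtype=np.float32)
    kernel_6ch = np.zeros((3, 3, 6, 32), dtype=np.float32)
    kernel_6ch[:, :, :3, :] = pretrained_kernel  # RGB channels reuse the pretrained filters
    model_6ch.get_layer('conv_6ch').set_weights([kernel_6ch, pretrained_bias])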

    opened by ShreyaPandita01 2
  • dice and jaccard metrics

    Thanks for the repo. I am wondering why you use a smoothing factor of 1.0 in both the Dice and Jaccard coefficients? Where does this value come from? And what about using a smaller value close to zero, e.g. K.epsilon()?
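
    For reference, the smoothing term sits in both the numerator and the denominator, so it mainly keeps the ratio (and its gradient) finite when prediction and ground truth are both empty; a sketch of the usual definition (not necessarily the exact code in this repository):

    from keras import backend as K

    def dice_coef(y_true, y_pred, smooth=1.0):
        # smooth keeps the ratio finite when both masks are empty;
        # smooth=1.0 also damps the loss for very small masks (gentler gradients),
        # while a tiny value such as K.epsilon() only guards against 0/0.
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)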

    opened by tinalegre 3
  • model.fit step

    Hi! I would like to know how I should call model.fit. model.fit(trainSet, mask_trainSet, batch_size=20, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True, callbacks=[model_checkpoint]) - what should I pass in callbacks?

    And how should I use the weights if I want to use the pretrained weights?

    Thank you very much and sorry for the inconvenience!
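
    A minimal sketch covering both questions (placeholder data and file names; note that Keras 2 uses epochs instead of the old nb_epoch argument, and a channels_last mask layout is assumed):

    import numpy as np
    from keras.callbacks import ModelCheckpoint
    from keras.optimizers import Adam
    from zf_unet_224_model import ZF_UNET_224, dice_coef_loss, dice_coef

    # Pretrained weights: weights='generator' loads them; an .h5 file could also be
    # loaded explicitly with model.load_weights(...)
    model = ZF_UNET_224(weights='generator')
    model.compile(optimizer=Adam(), loss=dice_coef_loss, metrics=[dice_coef])

    # Placeholder arrays with the expected shapes; replace with the real dataset
    trainSet = np.zeros((40, 224, 224, 3), dtype=np.float32)
    mask_trainSet = np.zeros((40, 224, 224, 1), dtype=np.float32)

    # ModelCheckpoint saves the best weights seen so far during training
    model_checkpoint = ModelCheckpoint('unet_best.h5', monitor='val_loss',
                                       save_best_only=True, verbose=1)

    model.fit(trainSet, mask_trainSet, batch_size=20, epochs=1, verbose=1,
              validation_split=0.2, shuffle=True, callbacks=[model_checkpoint])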

    opened by AmericaBG 7
  • How to generate img and mask correctly

    I ran your code and found that the image batch has shape (16, 224, 224, 3), but the mask batch has shape (16, 1, 224, 224). I don't understand this; can you explain it to me? I use my own dataset to train the U-Net and the Dice coefficient is high, but the real results are bad.
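
    The (16, 1, 224, 224) layout is a channels_first mask batch (batch, channels, height, width); if the images are channels_last, one fix (a sketch, assuming the layout is the only mismatch) is to move the channel axis to the end:

    import numpy as np

    # Placeholder mask batch in channels_first layout: (batch, 1, height, width)
    masks = np.zeros((16, 1, 224, 224), dtype=np.float32)

    # Move the channel axis to the end so the masks match the (16, 224, 224, 3) images
    masks_channels_last = np.transpose(masks, (0, 2, 3, 1))
    print(masks_channels_last.shape)  # (16, 224, 224, 1)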

    opened by wong-way 6
Releases (v1.0)