Effect of Deep Transfer and Multi task Learning on Sperm Abnormality Detection


License: GPL v3

Introduction

This repository contains the code and models of the paper "Effect of Deep Transfer and Multi task Learning on Sperm Abnormality Detection": https://doi.org/10.1016/j.compbiomed.2020.104121

Dataset

First, download the MHSMA dataset:

git clone https://github.com/soroushj/mhsma-dataset.git
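
Once the clone finishes, you can sanity-check the data with NumPy. This is a minimal sketch; the file names below assume the standard MHSMA layout (x_128_train.npy and per-label files such as y_acrosome_train.npy inside the mhsma/ folder), so adjust the paths if your copy differs.

import numpy as np

# Assumed MHSMA file layout; adjust the paths to wherever the clone lives.
x_train = np.load("mhsma-dataset/mhsma/x_128_train.npy")          # 128x128 grayscale sperm head crops
y_acrosome = np.load("mhsma-dataset/mhsma/y_acrosome_train.npy")  # binary acrosome labels

print(x_train.shape, y_acrosome.shape)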

Usage

First of all, set up the configuration file: open dtl.txt or dmtl.txt and adjust the settings you want. These files contain the parameters of the model you are going to train.

  • dtl.txt has a single line containing the parameters used to train a DTL model.

  • dmtl.txt contains two lines: the first line holds the parameters of stage 1 and the second line holds the parameters of stage 2.
    Some parameters are arrays of three values, one per label; they follow the order [Acrosome, Vacuole, Head].

  • To train a DTL model, use the following command and arguments:

python train.py -t dtl [-e epochs] [-label label] [-model model] [-w file]

Arguments:

Argument        Description
-t              type of network (dtl or dmtl)
-e              number of epochs
-label          label (a, v, or h)
-model          pre-trained model
-w              name of the best-weights file
--phase         stage of DMTL training (1 or 2)
--second_model  base model for the second stage of DMTL

1. Train

  • To choose a pre-trained model, you can pass one of the following values to the -model argument:

    model argument   Description
    vgg_19           VGG 19
    vgg_16           VGG 16
    resnet_50        ResNet 50
    resnet_101       ResNet 101
    resnet_502       ResNet 502

  • To train a DMTL model, use the following command and arguments:

python train.py -t dmtl [--phase phase] [-e epochs] [-label label] [-model model] [-w file]

You can also use your own pre-trained model by passing the path to your model file instead of one of the values listed in the table above.

Example:
python train.py -t dmtl --phase 1 -e 100 -label a -model C:\model.h5 -w w.h5
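
A comparable DTL run, with illustrative values only, could look like this:

python train.py -t dtl -e 100 -label v -model vgg_19 -w w_v.h5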

2. K-Fold

  • To perform k-fold cross-validation on a model, add the "-k_fold True" argument:

python train.py -k_fold True [-t type] [-e epochs] [-label label] [-model model] [-w file]
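
For example (the values here are illustrative, not a recommended setting):

python train.py -k_fold True -t dtl -e 50 -label h -model resnet_50 -w w_h.h5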

3. Threshold Search

  • To find a good decision threshold for your model, run:
python threshold.py [-t type] [-addr model address] [-l label]
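
For example, assuming w.h5 is a weight file produced by a previous training run:

python threshold.py -t dmtl -addr w.h5 -l a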

Models

The CNN models that were introduced and evaluated in our research paper can be found in the v1.0 release of this repository.
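
If the released models use the same Keras .h5 format referenced elsewhere in this README, they can typically be loaded as shown below. This is a minimal sketch under two assumptions you should verify against the release notes: the file name dtl_acrosome.h5 is hypothetical, and the expected input is the 128x128 MHSMA crop.

import numpy as np
from tensorflow.keras.models import load_model

# Both the file name and the preprocessing below are assumptions made for
# illustration; check the v1.0 release notes for the actual details.
model = load_model("dtl_acrosome.h5")   # hypothetical file name from the release
model.summary()

x_test = np.load("mhsma-dataset/mhsma/x_128_test.npy")  # assumed MHSMA layout
x_test = x_test[..., np.newaxis] / 255.0                # add a channel axis, scale to [0, 1]
print(model.predict(x_test[:8]))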
