Implementation of neonatal seizure detection using EEG signals, targeting deployment on edge devices such as the Raspberry Pi.

NeonatalSeizureDetection

Description

Link: https://arxiv.org/abs/2111.15569

Citation:

@misc{nagarajan2021scalable,
      title={Scalable Machine Learning Architecture for Neonatal Seizure Detection on Ultra-Edge Devices}, 
      author={Vishal Nagarajan and Ashwini Muralidharan and Deekshitha Sriraman and Pravin Kumar S},
      year={2021},
      eprint={2111.15569},
      archivePrefix={arXiv},
      primaryClass={eess.SP}
}

This repository contains code for the implementation of the paper titled "Scalable Machine Learning Architecture for Neonatal Seizure Detection on Ultra-Edge Devices", which was accepted at AISP '22: 2nd International Conference on Artificial Intelligence and Signal Processing. A typical neonatal seizure and non-seizure event is illustrated below. Continuous EEG signals are filtered and segmented with varying window lengths of 1, 2, 4, 8, and 16 seconds. The data used here for experimentation can be downloaded from Zenodo [1].
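
As a rough illustration of these preprocessing steps, the sketch below band-pass filters a multichannel recording and cuts it into fixed-length windows. It assumes a NumPy array eeg of shape (n_channels, n_samples) and a sampling rate of 256 Hz; the band edges and overlap are placeholders, not the exact values used in the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(eeg, fs=256.0, low=0.5, high=32.0, order=4):
        # Zero-phase Butterworth band-pass filter, applied per channel.
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, eeg, axis=-1)

    def segment(eeg, fs=256.0, window_s=8, overlap=0.5):
        # Split the recording into fixed-length windows with fractional overlap.
        win = int(window_s * fs)
        step = int(win * (1 - overlap))
        starts = range(0, eeg.shape[-1] - win + 1, step)
        return np.stack([eeg[:, s:s + win] for s in starts])  # (n_windows, n_channels, win)

    # Window lengths of 1, 2, 4, 8, and 16 seconds, as in the paper:
    # segments = {w: segment(bandpass(eeg), window_s=w) for w in (1, 2, 4, 8, 16)}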

Figure: Example seizure event and non-seizure event.

This end-to-end architecture receives a raw EEG signal, processes it, and classifies it as ictal or normal activity. After preprocessing, the signal is passed to a feature extraction engine that extracts the necessary feature set Fd. This is followed by a scalable machine learning (ML) classifier that performs the prediction, as illustrated in the figure below.

Figure: Pipeline architecture.
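
A minimal sketch of this flow is shown below. The simple statistical features are stand-ins for the actual feature set Fd (which the repository computes with antropy and the envelope derivative operator), and a kNN classifier is used here for brevity in place of ProtoNN; names such as X_train and y_train are placeholders.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def extract_features(window):
        # Simple per-channel features for one (n_channels, n_samples) window.
        return np.concatenate([
            window.mean(axis=-1),
            window.std(axis=-1),
            np.abs(np.diff(window, axis=-1)).sum(axis=-1),  # line length
        ])

    # X_train: (n_windows, n_channels, n_samples), y_train: 0 = normal, 1 = ictal
    # F_train = np.stack([extract_features(w) for w in X_train])
    # clf = KNeighborsClassifier(n_neighbors=5).fit(F_train, y_train)
    # F_test = np.stack([extract_features(w) for w in X_test])
    # y_pred = clf.predict(F_test)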

Files description

  1. dataprocessing.ipynb -> Notebook for converting EDF files to CSV files (see the sketch after this list).
  2. filtering.ipynb -> Notebook for filtering the input EEG signals to retain the frequency bands of interest.
  3. segmentation.ipynb -> Notebook for segmenting the input into appropriate window lengths and overlaps.
  4. features_final.ipynb -> Notebook for extracting relevant features from the segmented data.
  5. protoNN_example.py -> Script used for running the ProtoNN model using .npy files.
  6. inference_time.py -> Script used to record and report inference times.
  7. knn.ipynb -> Notebook used to compare results of the ProtoNN and kNN models.
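
The following is a minimal sketch of the EDF-to-CSV conversion performed by dataprocessing.ipynb. It assumes the MNE library is installed; the file and channel names are placeholders, and the actual notebook may use a different EDF reader.

    import mne
    import pandas as pd

    # Read one EDF recording and keep the raw signal values in memory.
    raw = mne.io.read_raw_edf("eeg1.edf", preload=True, verbose=False)
    data = raw.get_data()                      # shape: (n_channels, n_samples)

    # One column per channel, one row per sample.
    df = pd.DataFrame(data.T, columns=raw.ch_names)
    df.to_csv("eeg1.csv", index=False)
    print(f"{df.shape[0]} samples x {df.shape[1]} channels at {raw.info['sfreq']} Hz")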

Dependencies

If you are using conda, it is recommended to create and switch to a new environment:

    $ conda create -n myenv
    $ conda activate myenv
    $ conda install pip
    $ pip install -r requirements.txt

If you wish to use a virtual environment instead:

    $ pip install virtualenv
    $ virtualenv myenv
    $ source myenv/bin/activate
    $ pip install -r requirements.txt

Usage

  1. Clone the ProtoNN package (from Microsoft's EdgeML repository), the antropy package, and the envelope_derivative_operator package (from the otoolej repository).

  2. Replace the protoNN_example.py in the cloned ProtoNN package with the protoNN_example.py provided in this repository.

  3. Prepare the train and test data as .npy files and save them in a DATA_DIR directory (see the data-preparation sketch after these steps).

  4. After preparing the data files, execute the following command in a terminal. If you wish to save the weights of the trained ProtoNN object, create an output directory and pass it as OUT_DIR.

        $ python protoNN_example.py -d DATA_DIR -e 500 -o OUT_DIR
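
For step 3, a hedged sketch of the data preparation is shown below. It assumes DATA_DIR follows the layout used by EdgeML's ProtoNN examples, with train.npy and test.npy holding the label in column 0 and the feature vector in the remaining columns; check the cloned protoNN_example.py for the format it actually expects.

    import numpy as np

    # F_train, F_test: feature matrices from features_final.ipynb
    # y_train, y_test: window labels (0 = normal, 1 = ictal)
    train = np.concatenate([y_train.reshape(-1, 1), F_train], axis=1)
    test = np.concatenate([y_test.reshape(-1, 1), F_test], axis=1)
    np.save("DATA_DIR/train.npy", train)
    np.save("DATA_DIR/test.npy", test)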
    

Authors

Vishal Nagarajan

Ashwini Muralidharan

Deekshitha Sriraman

Acknowledgements

ProtoNN was built using the EdgeML library provided by Microsoft. Features were extracted using the antropy and otoolej repositories.

References

[1] Nathan Stevenson, Karoliina Tapani, Leena Lauronen, & Sampsa Vanhatalo. (2018). A dataset of neonatal EEG recordings with seizures annotations [Data set]. Zenodo. https://doi.org/10.5281/zenodo.1280684.

[2] Gupta, Ankit, et al. "ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices." Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017.
