Do Smart Glasses Dream of Sentimental Visions? Deep Emotionship Analysis for Eyewear Devices

Overview

EMOShip

This repository contains the EMO-Film dataset described in the paper "Do Smart Glasses Dream of Sentimental Visions? Deep Emotionship Analysis for Eyewear Devices".

If you use this dataset in your work, please cite our paper:

@article{chang2021memx,
  title   = {MemX: An Attention-Aware Smart Eyewear System for Personalized Moment Auto-capture},
  author  = {Chang, Yuhu and Zhao, Yingying and Dong, Mingzhi and Wang, Yujiang and Lu, Yutian and Lv, Qin and Dick, Robert P. and Lu, Tun and Gu, Ning and Shang, Li},
  journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
  year    = {2021},
  doi     = {10.1145/3463509}
}

TBD

Dataset

The EMO-Film dataset was collected in a controlled laboratory environment. The video clips were selected from the FilmStim dataset, one of the most widely used emotion-eliciting video datasets. We divided all 64 video clips of FilmStim into 7 categories based on the provided sentiment labels, each category corresponding to one emotional class (neutral plus the six basic emotions). A detailed description is given in Section 4.1 of the paper.

Due to privacy concerns raised by some volunteers, we cannot release the full dataset covering all 25 subjects. However, following the outcomes of the privacy survey, we are able to make public a filtered version of the dataset containing the 16 subjects who gave permission to release their data. The videos from the remaining 9 participants are omitted to protect their privacy.

The dataset can be downloaded here (TBD).

Data Format

EMO-Film consists of two compressed archives and a CSV file (a short extraction sketch follows this list):

eye.tar.gz: This archive contains the eye images captured while each participant watched the video segments. It holds 16 folders, one per participant. Each participant folder has two subfolders, corresponding to the two video clips watched by that participant, and each subfolder contains the eye images in JPG format.

filmstim.tar.gz: This archive contains the 64 video clips mentioned above. It holds 64 folders, one per video clip, and each folder contains the frames of that clip in JPG format.

label.csv: This CSV file maps each eye image to the corresponding filmstim frame, and also records the gaze position of the eyes and the participant's emotion annotation.
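A minimal sketch for unpacking the two archives next to label.csv is shown below; the archive names come from the list above, while extracting into the current directory (and the resulting folder layout) is an assumption:

import tarfile

# Unpack both archives next to label.csv; the destination layout is assumed.
for archive in ("eye.tar.gz", "filmstim.tar.gz"):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(".")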

label.csv contains the following attributes (a loading sketch follows this list):

user: The participant number.

eye_frame_path: The relative path of the eye image frame. The frame has been cropped to preserve only the eye region.

world_frame_path: The relative path of the filmstim image frame. Note that participants watched the video clips on a monitor while wearing the glasses; after post-processing, the area outside the monitor was excluded, so this frame is exactly the content shown on the monitor, i.e., the corresponding frame of the FilmStim dataset.

gaze_x and gaze_y: The gaze position in the coordinate space of the scene frame. These are floating-point numbers with the origin (0, 0) at the bottom left and (1, 1) at the top right. As above, the areas outside the screen have been excluded.

PD_x and PD_y: The pupil diameter in pixels along the two axes.

confidence: The confidence of pupil position. A value of 0 indicates no confidence and 1 indicates perfect confidence.

label: The emotion category annotated by the participant; 0-6 denote angry, disgust, fear, happy, sad, surprise, and neutral, respectively.
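As a usage sketch, the snippet below loads label.csv with pandas, filters out low-confidence samples, and converts the normalized gaze coordinates into pixel coordinates of a scene frame. The column names follow the attribute list above; the 0.8 confidence threshold, the OpenCV-based frame loading, and running from the dataset root (so the relative paths resolve) are assumptions:

import cv2            # opencv-python, used here only to read the JPG frames
import pandas as pd

labels = pd.read_csv("label.csv")

# Drop low-confidence pupil detections; the 0.8 threshold is an illustrative choice.
labels = labels[labels["confidence"] >= 0.8]

# Integer emotion labels to names, in the order given above.
emotions = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

row = labels.iloc[0]
frame = cv2.imread(row["world_frame_path"])  # scene (FilmStim) frame
h, w = frame.shape[:2]

# Gaze is normalized with (0, 0) at the bottom left and (1, 1) at the top right,
# while image pixels have their origin at the top left, so flip the y axis.
gaze_px = (int(row["gaze_x"] * w), int((1.0 - row["gaze_y"]) * h))

print("user", row["user"], "emotion:", emotions[int(row["label"])], "gaze:", gaze_px)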
