TeST: Temporal-Stable Thresholding for Semi-supervised Learning

Overview


TeST Illustration

Semi-supervised learning (SSL) is an effective approach for large-scale scenarios because it can exploit large amounts of unlabeled data. Mainstream SSL methods rely solely on a fixed confidence threshold to decide whether a prediction is of high enough quality to serve as a pseudo-label. This simple quality criterion ignores both how well the model has learned a sample and the uncertainty inherent in the sample itself, so it fails to exploit the many correct predictions that fall below the threshold. We propose a novel pseudo-label quality assessment method, TeST (Temporal-Stable Thresholding), which designs an adaptive threshold for each instance to recall high-quality samples that are likely correct but would be discarded by a fixed threshold. We first record the predictions of every instance over a continuous time series. We then compute the mean and standard deviation of these predictions, which reflect a sample's learning status and temporal uncertainty respectively, and use them to select pseudo-labels dynamically. In addition, we feed TeST more diverse samples supervised by high-quality pseudo-labels, which reduces the uncertainty of the samples overall. Our method achieves state-of-the-art performance on various SSL benchmarks, including accuracy improvements of 5.33% on CIFAR-10 with 40 labels and 4.52% on Mini-ImageNet with 4000 labels. An ablation study further shows that TeST extends the set of high-quality pseudo-labels with more temporally stable and correct pseudo-labels.
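
The paragraph above describes the mechanism only in prose. The sketch below illustrates one way the temporal statistics and instance-adaptive thresholds could be tracked; it is not the released implementation, and the class name `TemporalStats`, the history window size, and the exact threshold rule (`tau_base`, `sigma_max`) are illustrative assumptions.

```python
# Minimal sketch of temporal-stable thresholding for pseudo-label selection.
# NOT the authors' code: names and the adaptive-threshold rule are assumptions.
from collections import deque

import torch


class TemporalStats:
    """Keeps the last `window` softmax predictions for every unlabeled sample."""

    def __init__(self, num_samples, window=5):
        self.history = [deque(maxlen=window) for _ in range(num_samples)]

    def update(self, indices, probs):
        # indices: (B,) dataset indices of the unlabeled batch
        # probs:   (B, C) softmax outputs for the current iteration
        for i, p in zip(indices.tolist(), probs.detach().cpu()):
            self.history[i].append(p)

    def select(self, indices, tau_base=0.95, sigma_max=0.05):
        """Return pseudo-labels and an acceptance mask for the current batch."""
        labels, mask = [], []
        for i in indices.tolist():
            h = torch.stack(list(self.history[i]))       # (T, C) recent predictions
            mean_p = h.mean(dim=0)                       # learning status
            std_p = h.std(dim=0, unbiased=False)         # temporal uncertainty
            conf, cls = mean_p.max(dim=0)
            sigma = std_p[cls]
            # Temporally stable samples (small sigma) get a lower, instance-specific
            # threshold, recalling predictions a fixed threshold would discard.
            tau_i = tau_base - (sigma_max - sigma).clamp(min=0)
            labels.append(cls)
            mask.append((conf >= tau_i) & (sigma <= sigma_max))
        return torch.stack(labels), torch.stack(mask)
```

In an SSL training loop one would call `update` with the weak-view probabilities at each iteration and `select` to obtain the pseudo-labels and the mask that gates the unlabeled loss.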

Requirements

All experiments are run with Python 3.7, torch==1.7.1, and torchvision==0.8.2.

Prepare environment

  1. Create a conda virtual environment and activate it.
conda create -n tst python=3.7 -y
conda activate tst
  2. Install PyTorch and torchvision following the official instructions.
conda install pytorch==1.7.1 torchvision==0.8.2 -c pytorch

Installation

git clone https://github.com/Harry887/TeST.git
cd TeST
pip install -r requirements.txt
pip install -v -e .  # or "python setup.py develop"
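
If the editable install succeeds, the package should be importable from Python; a quick sanity check (assuming the package is exposed as `tst`, as the source layout suggests):

python -c "import tst"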

Training

FixMatch for CIFAR10 with 250 labels

python tst/tools/train_semi.py -d 0-3 -b 64 -f tst/exps/fixmatch/fixmatch_cifar10_exp.py --exp-options out=outputs/exp/cifar10/250/[email protected]_4x16

TeST for Mini-ImageNet with 4000 labels

python tst/tools/train_semi_tst_dual.py -d 0-3 -b 64 -f tst/exps/tst/tst_miniimagenet_dual_exp.py --exp-options out=outputs/exp/miniimagenet/4000/[email protected]_4x16

Development

pre-commit code check

pip install -r requirements-dev.txt
pre-commit install

Owner

Xiong Weiyu