Learning an Adaptive Meta Model-Generator for Incrementally Updating Recommender Systems

This is our experimental code for RecSys 2021 paper "Learning an Adaptive Meta Model-Generator for Incrementally Updating Recommender Systems".

The paper is available here.
The video is available here.
The slides are available here.

Requirements

tensorflow 1.4.0
pandas
numpy

GPUs with memory >= 10GB

Data Preprocessing

The raw data can be obtained from:
Tmall Data: data_format1
Sobazaar Data: Data > Sobazaar-hashID.csv.gz
MovieLens Data: ml-25m

To preprocess the above raw data, save the files in the raw_data folder under the root directory and run:

cd preproc
python tmall_preproc.py
python soba_preproc.py
python ml_preproc.py

The preprocessed datasets will be saved in the datasets folder for later use.
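
As a quick sanity check, the preprocessed data can be inspected with pandas. This is an illustrative sketch, not part of the repo; the file name and columns are hypothetical and depend on what the preprocessing scripts actually write.

import pandas as pd

# Hypothetical file name: substitute the actual file written to the datasets folder.
df = pd.read_csv('datasets/tmall_preprocessed.csv')

print(df.shape)   # number of interaction records and fields
print(df.head())  # first few user-item interactions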

Pretraining

To simulate real-world applications, the first 10 periods of each dataset are used to pretrain an initial Embedding&MLP base model, and all the compared model updating methods restore from the same pretrained model.

To pretrain a model for Tmall, Sobazaar or MovieLens, run the corresponding commands:

cd Tmall/pretrain
python train_tmall.py

cd Sobazaar/pretrain
python train_soba.py

cd MovieLens/pretrain
python train_ml.py

The pretrained base model will be saved in Tmall/pretrain/ckpts, Sobazaar/pretrain/ckpts and MovieLens/pretrain/ckpts respectively.

All the hyper-parameters can be easily configured in train_config at the beginning of each entry file (i.e., train_xxx.py).
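
For illustration, train_config is a plain Python dict at the top of the entry file; the keys below are hypothetical examples of the kind of settings it may hold, not the exact names used in this repo.

train_config = {
    'dataset': 'Tmall',        # which preprocessed dataset to use
    'pretrain_periods': 10,    # number of periods used for pretraining
    'train_start_period': 11,  # first period used for incremental updating
    'test_start_period': 24,   # first period reserved for evaluation
    'num_epochs': 1,           # training epochs per period
    'learning_rate': 1e-3,     # optimizer learning rate
    'batch_size': 1024,        # mini-batch size
}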

Note: pretraining must be done before conducting any model updating method.

Baselines and Variants

All the compared model updating methods for a specific dataset are contained in the folder named after that dataset.

Our proposed method:
ASMGgru_multi

Baseline methods:
IU
BU
SPMF
IncCTR
SML
SMLmf

Variants of ASMGgru_multi:
ASMGgru_zero
ASMGgru_full
ASMGgru_single
(we do not create a separate folder for ASMGgru_uniform, as it can be easily implemented within ASMGgru_multi; see the code for more details)

To perform any of the ASMGgru methods, we first need to conduct a run of IU to generate the input model sequence.

For example, to perform a run of IU experiment for Tmall, do

cd Tmall/IU
python train_tmall.py

Then we can proceed to perform any of the ASMGgru methods:

cd Tmall/ASMGgru_multi
python train_tmall.py

Other model updating methods can be conducted on their own without any prerequisite.

Note that SMLmf is based on a different base model (i.e., Matrix Factorization), so additional pretraining needs to be performed for this method:

cd Tmall/SMLmf/pretrain
python train_tmall.py

Then run:

cd Tmall/SMLmf/SML
python train_tmall.py

All the hyper-parameters can be easily configured in train_config at the beginning of each entry file (i.e., train_xxx.py).

The evaluation results can be found at a path of the following format:

<dataset>/<method>/ckpts/<model alias>/<test date or period>/test_metrics.txt

where <model alias> is configured in train_config of the entry file and encodes some essential hyper-parameter settings, and <test date or period> by default is date20141030 for Tmall and period30 for MovieLens and Sobazaar.

Here are some examples of possible paths where the evaluation results may reside:

Tmall/ASMGgru_multi/ckpts/ASMGgru_multi_linear_train11-23_test24-30_4emb_4mlp_1epoch_3_0.01/date20141030/test_metrics.txt

MovieLens/IU/ckpts/IU_train11-23_test24-30_1epoch_0.001/period30/test_metrics.txt
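
To gather all results at once, a small helper like the following can be used. It is an illustrative sketch, not part of the repo; it simply globs the directory layout described above and prints each metrics file.

import glob

# Collect every test_metrics.txt written under the per-method ckpts folders
# (SMLmf results live one directory level deeper, hence the second pattern).
paths = glob.glob('*/*/ckpts/*/*/test_metrics.txt') + \
        glob.glob('*/*/*/ckpts/*/*/test_metrics.txt')
for path in sorted(paths):
    with open(path) as f:
        print(path)
        print(f.read())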

Citation

If you find this repo useful in your research, please cite the following:

@inproceedings{peng2021learning,
  title={Learning an Adaptive Meta Model-Generator for Incrementally Updating Recommender Systems},
  author={Peng, Danni and Pan, Sinno Jialin and Zhang, Jie and Zeng, Anxiang},
  booktitle={Fifteenth ACM Conference on Recommender Systems},
  pages={411--421},
  year={2021}
}