Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW 2021

Overview

Learning Intents behind Interactions with Knowledge Graph for Recommendation

This is our PyTorch implementation for the paper:

Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He and Tat-Seng Chua (2021). Learning Intents behind Interactions with Knowledge Graph for Recommendation. Paper in arXiv. In WWW'2021, Ljubljana, Slovenia, April 19-23, 2021.

Authors: Dr. Xiang Wang (xiangwang at u.nus.edu) and Mr. Tinglin Huang (tinglin.huang at zju.edu.cn)

Introduction

Knowledge Graph-based Intent Network (KGIN) is a recommendation framework that consists of three components: (1) user intent modeling, (2) relational path-aware aggregation, and (3) independence modeling.
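
As a quick orientation, below is a minimal, self-contained PyTorch sketch of how these three components could fit together. It is an illustration only: the class, method names, and simplifications are ours, not the implementation in this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyKGIN(nn.Module):
    """Toy sketch of the three KGIN components (illustration, not the official code)."""
    def __init__(self, n_users, n_entities, n_relations, n_intents=4, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.relation_emb = nn.Embedding(n_relations, dim)
        # (1) user intent modeling: each intent is a soft attention over KG relations
        self.intent_att = nn.Parameter(torch.randn(n_intents, n_relations))

    def intent_embeddings(self):
        # combine relation embeddings with attention weights to form intent embeddings
        att = torch.softmax(self.intent_att, dim=-1)   # (n_intents, n_relations)
        return att @ self.relation_emb.weight          # (n_intents, dim)

    def aggregate(self, head_emb, rel_ids, tail_ids):
        # (2) relational path-aware aggregation (single hop, mean pooling):
        # messages from tail entities are modulated by their relation embeddings
        msg = self.entity_emb(tail_ids) * self.relation_emb(rel_ids)
        return head_emb + msg.mean(dim=0)

    def independence_loss(self):
        # (3) independence modeling: penalize pairwise similarity between intent
        # embeddings so that different intents capture distinct semantics
        z = F.normalize(self.intent_embeddings(), dim=-1)
        sim = z @ z.t()
        off_diag = sim - torch.diag(torch.diag(sim))
        return off_diag.pow(2).sum()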

Citation

If you use our code or datasets in your research, please cite:

@inproceedings{KGIN2020,
  author    = {Xiang Wang and
              Tinglin Huang and 
              Dingxian Wang and
              Yancheng Yuan and
              Zhenguang Liu and
              Xiangnan He and
              Tat{-}Seng Chua},
  title     = {Learning Intents behind Interactions with Knowledge Graph for Recommendation},
  booktitle = {{WWW}},
  year      = {2021}
}

Environment Requirement

The code has been tested under Python 3.6.5. The required packages are as follows:

  • pytorch == 1.5.0
  • numpy == 1.15.4
  • scipy == 1.1.0
  • sklearn == 0.20.0
  • torch_scatter == 2.0.5
  • networkx == 2.5
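
Assuming a compatible Python and CUDA setup, one way to install the pinned versions above is shown below; note that the package listed as pytorch is installed as torch, sklearn as scikit-learn, and torch_scatter wheels must match your PyTorch/CUDA build, so you may need to follow the official torch_scatter installation instructions instead:

pip install torch==1.5.0 numpy==1.15.4 scipy==1.1.0 scikit-learn==0.20.0 networkx==2.5 torch-scatter==2.0.5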

Reproducibility & Example to Run the Codes

To demonstrate the reproducibility of the best performance reported in our paper and to help researchers verify that their model status is consistent with ours, we provide the best parameter settings (which might differ for customized datasets) in the scripts, along with our training logs.

The available command-line arguments are documented in the code (see the parser function in utils/parser.py).

  • Last-fm dataset
python main.py --dataset last-fm --dim 64 --lr 0.0001 --sim_regularity 0.0001 --batch_size 1024 --node_dropout True --node_dropout_rate 0.5 --mess_dropout True --mess_dropout_rate 0.1 --gpu_id 0 --context_hops 3
  • Amazon-book dataset
python main.py --dataset amazon-book --dim 64 --lr 0.0001 --sim_regularity 0.00001 --batch_size 1024 --node_dropout True --node_dropout_rate 0.5 --mess_dropout True --mess_dropout_rate 0.1 --gpu_id 0 --context_hops 3
  • Alibaba-iFashion dataset
python main.py --dataset alibaba-fashion --dim 64 --lr 0.0001 --sim_regularity 0.0001 --batch_size 1024 --node_dropout True --node_dropout_rate 0.5 --mess_dropout True --mess_dropout_rate 0.1 --gpu_id 0 --context_hops 3

Important argument:

  • sim_regularity
    • The weight of the independence loss, i.e., how strongly correlations among the learned intents are penalized.
    • Default: 1e-4 (0.0001). See the sketch below for how this weight enters the training objective.
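
As a rough, hedged illustration of the role of this flag (placeholder loss values, not taken from the actual training code), sim_regularity simply scales the independence penalty before it is added to the recommendation loss:

import torch

rec_loss = torch.tensor(0.85)           # placeholder recommendation (e.g. BPR) loss
independence_loss = torch.tensor(2.30)  # placeholder intent-correlation penalty

sim_regularity = 1e-4                   # value passed via --sim_regularity
total_loss = rec_loss + sim_regularity * independence_loss
print(total_loss)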

Dataset

We provide three processed datasets: Amazon-book, Last-FM, and Alibaba-iFashion.

  • The full versions of the recommendation datasets are available via Amazon-book, Last-FM, and Alibaba-iFashion.
  • We follow KB4Rec to preprocess Amazon-book and Last-FM datasets, mapping items into Freebase entities via title matching if there is a mapping available.
                                        Amazon-book   Last-FM     Alibaba-iFashion
User-Item Interaction   #Users          70,679        23,566      114,737
                        #Items          24,915        48,123      30,040
                        #Interactions   847,733       3,034,796   1,781,093
Knowledge Graph         #Entities       88,572        58,266      59,156
                        #Relations      39            9           51
                        #Triplets       2,557,746     464,567     279,155
  • train.txt
    • Train file.
    • Each line is a user with her/his positive interactions with items: a userID followed by a list of itemIDs (a loading sketch is given after this file list).
  • test.txt
    • Test file (positive instances).
    • Each line is a user with her/his positive interactions with items: a userID followed by a list of itemIDs.
    • Note that here we treat all unobserved interactions as the negative instances when reporting performance.
  • user_list.txt
    • User file.
    • Each line is a pair (org_id, remap_id) for one user, where org_id and remap_id represent the ID of that user in the original dataset and in our datasets, respectively.
  • item_list.txt
    • Item file.
    • Each line is a triplet (org_id, remap_id, freebase_id) for one item, where org_id, remap_id, and freebase_id represent the ID of that item in the original dataset, our datasets, and Freebase, respectively.
  • entity_list.txt
    • Entity file.
    • Each line is a pair (freebase_id, remap_id) for one entity in the knowledge graph, where freebase_id and remap_id represent the ID of that entity in Freebase and in our datasets, respectively.
  • relation_list.txt
    • Relation file.
    • Each line is a pair (freebase_id, remap_id) for one relation in the knowledge graph, where freebase_id and remap_id represent the ID of that relation in Freebase and in our datasets, respectively.
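
As referenced above, here is a minimal loading sketch for train.txt / test.txt, assuming each line is whitespace-separated as described; the function name and example path are ours, for illustration only:

def load_interactions(path):
    # Parse "userID itemID itemID ..." lines into a {user: [items]} dictionary.
    user_items = {}
    with open(path) as f:
        for line in f:
            ids = line.strip().split()
            if not ids:
                continue
            user_items[int(ids[0])] = [int(i) for i in ids[1:]]
    return user_items

# Hypothetical usage: interactions = load_interactions("data/last-fm/train.txt")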

Acknowledgement

Any scientific publications that use our datasets should cite the following paper as the reference:

@inproceedings{KGIN2020,
  author    = {Xiang Wang and
              Tinglin Huang and 
              Dingxian Wang and
              Yancheng Yuan and
              Zhenguang Liu and
              Xiangnan He and
              Tat{-}Seng Chua},
  title     = {Learning Intents behind Interactions with Knowledge Graph for Recommendation},
  booktitle = {{WWW}},
  year      = {2021}
}

Nobody guarantees the correctness of the data, its suitability for any particular purpose, or the validity of results based on the use of the data set. The data set may be used for any research purposes under the following conditions:

  • The user must acknowledge the use of the data set in publications resulting from the use of the data set.
  • The user may not redistribute the data without separate permission.
  • The user may not try to deanonymise the data.
  • The user may not use this information for any commercial or revenue-bearing purposes without first obtaining permission from us.