Group-Buying Recommendation for Social E-Commerce

Overview

This is the official implementation of the paper Group-Buying Recommendation for Social E-Commerce (PDF) accepted by ICDE'2021.

Group-Buying Dataset

Group buying, an emerging form of purchase on social e-commerce websites such as Pinduoduo.com, has recently achieved great success. In this business model, a user (the initiator) launches a group and shares products with their social network; when enough friends (the participants) join, the deal is clinched. Group-buying recommendation for social e-commerce, which recommends an item list when a user wants to launch a group, plays an important role in the group success ratio and sales.
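
To make the setting concrete, the sketch below models a single group-buying record with an initiator, the shared item, and the participants who joined. It is purely illustrative and not the dataset's actual schema (see BeiBei/readme.txt for the real format):

from dataclasses import dataclass, field
from typing import List

@dataclass
class GroupBuyingRecord:
    # Illustrative structure only; the real data format is described in BeiBei/readme.txt
    initiator: int                 # user who launches the group
    item: int                      # item shared to the initiator's social network
    participants: List[int] = field(default_factory=list)  # friends who join the group

    def is_successful(self, required_size: int) -> bool:
        # the deal is clinched once enough participants have joined
        return len(self.participants) >= required_size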

The information about the dataset can be found in BeiBei/readme.txt.

Code

We separate the model definitions from the framework librecframework for easier understanding.

You can find the framework librecframework at https://github.com/Sweetnow/librecframework.

Both the packages listed in requirements.txt and librecframework should be installed before running the code.
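
If you use pip, the packages listed in requirements.txt can typically be installed with pip install -r requirements.txt; installing librecframework itself is covered in the Usage section below.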

More details about our code will be added soon.

Usage

  1. Download both librecframework and this repo
git clone [email protected]:Sweetnow/librecframework.git
git clone [email protected]:Sweetnow/group-buying-recommendation.git
  2. Install librecframework (Python >= 3.8)
cd librecframework/
bash install.sh
  3. Install dgl (for example via pip install dgl; choose the build that matches your CUDA setup, see the DGL installation guide)

  4. Download negative.zip from the Releases page, unzip it, and copy *.negative.txt to datasets/BeiBei/

wget https://github.com/Sweetnow/group-buying-recommendation/releases/download/v1.0/negative.zip
unzip negative.zip
cp negative/* ${PATH-TO-GROUP-BUYING-RECOMMENDATION}/datasets/BeiBei

PS: The negative sampling files are used for testing. More details can be found in the dataset README.

  5. Set config/config.json and config/pretrain.json following the docs.

  6. Run the following command to see the CLI usage and check your Python environment:

python3 GBGCN train -h
# or
# python3 GBGCN test -h

PS: If you pass multiple values to hyperparameters that support multiple inputs, the framework will automatically run a grid search over your input, i.e., it trains and tests on the Cartesian product of the hyperparameter values. For example, with --lr 0.1 0.01 -L 1 2, the code will train and test the model with the hyperparameter combinations [(0.1, 1), (0.1, 2), (0.01, 1), (0.01, 2)].
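
To illustrate that behaviour (a minimal sketch, not the framework's actual code), the combinations are simply the Cartesian product of the hyperparameter lists:

from itertools import product

# Example values mirroring the PS above: --lr 0.1 0.01 -L 1 2
lrs = [0.1, 0.01]
layers = [1, 2]

for lr, layer in product(lrs, layers):
    # one training/testing run per combination:
    # (0.1, 1), (0.1, 2), (0.01, 1), (0.01, 2)
    print(f"train/test with lr={lr}, layer={layer}")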

Citation

If you use our code or dataset in your research, please cite:

@inproceedings{zhang2021group,
  title={Group-Buying Recommendation for Social E-Commerce},
  author={Zhang, Jun and Gao, Chen and Jin, Depeng and Li, Yong},
  booktitle={2021 IEEE 37th International Conference on Data Engineering (ICDE)},
  year={2021},
  organization={IEEE}
}

Acknowledgement

Comments
  • About Testing

    Hi,

    Since I always fail to run the testing mode (for both GBMF and GBGCN) due to the lack of "model.json", I'm wondering how to save a pretrained (GBMF) model as a JSON file and how to run the testing mode. Thanks.

    opened by vincenttsai2015 16
  • About negative samples for testing

    Hi,

    After resolving the issues with running the testing mode, I'm wondering if the following error is due to the missing test.negative.txt.

    (screenshot of the error omitted)

    If so, how can I generate negative samples? Thanks.

    opened by vincenttsai2015 14
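
For reference, test-time negative sampling in this kind of evaluation usually means drawing, for each test user, a fixed number of items the user has never interacted with and ranking the ground-truth item against them; the released negative.zip presumably contains such files. The sketch below only illustrates that common procedure and is not the script used to generate the released files:

import random

def sample_test_negatives(interacted_items, all_items, num_neg=500, seed=123):
    # interacted_items: set of item ids the user has already interacted with
    # all_items: list of every item id in the dataset
    # num_neg and seed are placeholder values, not the settings behind negative.zip
    rng = random.Random(seed)
    negatives = set()
    while len(negatives) < num_neg:
        candidate = rng.choice(all_items)
        if candidate not in interacted_items:
            negatives.add(candidate)
    return sorted(negatives)
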
  • It was killed before the training process started when the code was reproduced

    Hello, I want to know what hardware configuration is needed to reproduce this work. I can't reproduce it on a 2080 Ti with 11 GB of memory, and reducing the batch size does not help. Could you specify the hyperparameter settings?

    opened by ZQSong1997 6
  • Implementing the GBGCN in google colab

    Hi, after installing the framework via its setup.py as mentioned on GitHub, I tried to run GBGCN.py in Google Colab with the command "! python GBGCN.py train --tag 'true' --SL2 0.001 --L2 0.001 --lr 1e-2 --layer 2 --alpha 0.6 --beta 0.01". The errors below appeared; any help to solve them and run the file properly would be appreciated! Note that running GBGCN.py on the whole dataset was also unsuccessful. I think insufficient RAM on Colab (about 12 GB) was the problem, so I reduced the BeiBei dataset to 0.01 of its original size to work around it, and then these errors appeared.

    INFO:root:Environment Arguments(OrderedDict([('dataset', 'BeiBei'), ('device', [0]), ('sample_epoch', 500), ('sample_worker', 16), ('epoch', 500), ('tag', 'true')])) INFO:root:Dataloader Arguments(OrderedDict([('batch_size', 4096), ('batch_worker', 2), ('test_batch_size', 128), ('test_batch_worker', 2)])) INFO:root:Hyperparameter Arguments(OrderedDict([('embedding_size', 32), ('act', 'sigmoid'), ('pretrain', True), ('SL2', [0.001]), ('L2', [0.001]), ('lr', [0.01]), ('layer', [2]), ('alpha', [0.6]), ('beta', [0.01])])) INFO:root:{'comment': '固定参数', 'user': 'user', 'visdom': {'server': '127.0.0.1', 'port': {'BeiBei_itemrec': 16670, 'BeiBei_grouprec': 16670, 'BeiBei_SIGR': 16670, 'BeiBei': 16670, 'comment': '16671 is temporary'}}, 'training': {'test_interval': 5, 'early_stop': 50, 'overfit': {'protected_epoch': 10, 'threshold': 1}}, 'dataset': {'path': './BeiBei', 'seed': 123, 'use_backup': True}, 'logger': {'path': './log', 'policy': 'best'}, 'metric': {'target': {'type': 'NDCG', 'topk': 10}, 'metrics': [{'type': 'Recall', 'topk': 3}, {'type': 'Recall', 'topk': 5}, {'type': 'Recall', 'topk': 10}, {'type': 'Recall', 'topk': 20}, {'type': 'NDCG', 'topk': 3}, {'type': 'NDCG', 'topk': 5}, {'type': 'NDCG', 'topk': 10}, {'type': 'NDCG', 'topk': 20}]}} INFO:root:{'BeiBei': {'GBMF': ''}} DEBUG:root:Load BeiBei/BeiBei/BeiBei-neg-500-123-default.pkl DEBUG:root:finish loading neg sample INFO:root:GPU search space: [0] INFO:root:Auto select GPU 0 WARNING:visdom:Setting up a new session... Exception in user code:

    Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 159, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw) File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 80, in create_connection raise err File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 70, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 600, in urlopen chunked=chunked) File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 354, in _make_request conn.request(method, url, **httplib_request_kw) File "/usr/lib/python3.7/http/client.py", line 1277, in request self._send_request(method, url, body, headers, encode_chunked) File "/usr/lib/python3.7/http/client.py", line 1323, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/usr/lib/python3.7/http/client.py", line 1272, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.7/http/client.py", line 1032, in _send_output self.send(msg) File "/usr/lib/python3.7/http/client.py", line 972, in send self.connect() File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 181, in connect conn = self._new_conn() File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 168, in _new_conn self, "Failed to establish a new connection: %s" % e) urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fd8435d5c50>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send timeout=timeout File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 638, in urlopen _stacktrace=sys.exc_info()[2]) File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 399, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=16670): Max retries exceeded with url: /env/GBGCN_true-32-0.01-0.001-0.001-2-0.6-0.01-sigmoid-True-True (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd8435d5c50>: Failed to establish a new connection: [Errno 111] Connection refused'))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/visdom/init.py", line 711, in _send data=json.dumps(msg), File "/usr/local/lib/python3.7/dist-packages/visdom/init.py", line 677, in _handle_post r = self.session.post(url, data=data) File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 578, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=16670): Max retries exceeded with url: /env/GBGCN_true-32-0.01-0.001-0.001-2-0.6-0.01-sigmoid-True-True (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd8435d5c50>: Failed to establish a new connection: [Errno 111] Connection refused')) INFO:visdom:Socket refused connection, running socketless ERROR:visdom:[Errno 111] Connection refused ERROR:websocket:error from callback <function Visdom.setup_socket..on_close at 0x7fd84346ec20>: on_close() takes 1 positional argument but 3 were given File "/usr/local/lib/python3.7/dist-packages/websocket/_app.py", line 407, in _callback callback(self, *args) Traceback (most recent call last): File "GBGCN.py", line 556, in torch.optim.SGD) File "/usr/local/lib/python3.7/dist-packages/librecframework-1.3.0-py3.7.egg/librecframework/pipeline.py", line 633, in during_running model_class, other_args, trainhooks, optim_type) File "/usr/local/lib/python3.7/dist-packages/librecframework-1.3.0-py3.7.egg/librecframework/pipeline.py", line 239, in during_running model.load_pretrain(self._pretrain[self._eam['dataset']]) File "GBGCN.py", line 102, in load_pretrain pretrain = torch.load(path, map_location='cpu') File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 381, in load f = open(f, 'rb') FileNotFoundError: [Errno 2] No such file or directory: ''

    opened by Ali-khn 5
  • About ranking metric evaluation

    Hi,

    I'm wondering if it is possible to evaluate the ranking metrics MAP (mean average precision)@K and HR (hit ratio)@K for GBMF/GBGCN under librecframework. If yes, how can I modify the code? Thanks.

    opened by vincenttsai2015 2
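
For reference, both metrics are standard ranking measures and can be computed from a ranked candidate list as in the sketch below; this is an illustrative, framework-independent implementation, not code from librecframework:

def hit_ratio_at_k(ranked_items, relevant_items, k):
    # HR@K: 1.0 if any ground-truth item appears in the top-k positions, else 0.0
    return 1.0 if any(item in relevant_items for item in ranked_items[:k]) else 0.0

def average_precision_at_k(ranked_items, relevant_items, k):
    # AP@K: precision@i averaged over the ranks i (within the top-k) where a hit occurs;
    # MAP@K is the mean of this value over all test cases
    hits, score = 0, 0.0
    for i, item in enumerate(ranked_items[:k], start=1):
        if item in relevant_items:
            hits += 1
            score += hits / i
    return score / min(len(relevant_items), k) if relevant_items else 0.0
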
  • Hello, I ran into the following problem while reproducing your code and would like to ask about it.

    Command: python GBGCN.py train [-h] Error: Using backend: pytorch usage: GBGCN.py train [-h] [-DS DATASET] [-D DEVICE [DEVICE ...]] -T TAG [-SEP SAMPLE_EPOCH] [-SW SAMPLE_WORKER] [-EP EPOCH] [-BS BATCH_SIZE] [-BW BATCH_WORKER] [-TBS TEST_BATCH_SIZE] [-TBW TEST_BATCH_WORKER] [-EB EMBEDDING_SIZE] [--lr LR [LR ...]] --L2 L2 [L2 ...] --SL2 SL2 [SL2 ...] -L LAYER [LAYER ...] -A ALPHA [ALPHA ...] -B BETA [BETA ...] [--act ACT] [--pretrain | --no-pretrain] GBGCN.py train: error: the following arguments are required: -T/--tag, --L2, --SL2, -L/--layer, -A/--alpha, -B/--beta

    opened by Ganoder 2
  • Negative sample files

    Hi, my question is how to use the negative sample file in order to run the whole model correctly. Should I copy the file into the BeiBei folder? Can I run the model correctly without the negative sample file? Any instructions from scratch would help. Thanks.

    opened by Ali-khn 2
  • TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

    Hello, after installing the packages in requirements.txt and librecframework, I ran GBGCN.py and hit the following error: Traceback (most recent call last): File "GBGCN.py", line 14, in from librecframework.argument.manager import HyperparamManager File "C:\Users\ZSX\AppData\Roaming\Python\Python36\site-packages\librecframework\argument_init_.py", line 11, in class Argument(NamedTuple, Generic[T]): TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

    Could you tell me what is going on here?

    opened by zanshuxun 2
Owner
Jun Zhang
EE, Tsinghua University
RecList is an open source library providing behavioral, "black-box" testing for recommender systems.

RecList is an open source library providing behavioral, "black-box" testing for recommender systems.

Jacopo Tagliabue 375 Dec 30, 2022
Pytorch domain library for recommendation systems

TorchRec (Experimental Release) TorchRec is a PyTorch domain library built to provide common sparsity & parallelism primitives needed for large-scale

Meta Research 1.3k Jan 05, 2023
Accuracy-Diversity Trade-off in Recommender Systems via Graph Convolutions

Accuracy-Diversity Trade-off in Recommender Systems via Graph Convolutions This repository contains the code of the paper "Accuracy-Diversity Trade-of

2 Sep 16, 2022
QRec: A Python Framework for quick implementation of recommender systems (TensorFlow Based)

QRec is a Python framework for recommender systems (Supported by Python 3.7.4 and Tensorflow 1.14+) in which a number of influential and newly state-of-the-art recommendation models are implemented.

Yu 1.4k Dec 27, 2022
Code for my ORSUM, ACM RecSys 2020, HeroGRAPH: A Heterogeneous Graph Framework for Multi-Target Cross-Domain Recommendation

HeroGRAPH Code for my ORSUM @ RecSys 2020, HeroGRAPH: A Heterogeneous Graph Framework for Multi-Target Cross-Domain Recommendation Paper, workshop pro

Qiang Cui 9 Sep 14, 2022
The implementation of the submitted paper "Deep Multi-Behaviors Graph Network for Voucher Redemption Rate Prediction" in SIGKDD 2021 Applied Data Science Track.

DMBGN: Deep Multi-Behaviors Graph Networks for Voucher Redemption Rate Prediction The implementation of the accepted paper "Deep Multi-Behaviors Graph

10 Jul 12, 2022
Code for KHGT model, AAAI2021

KHGT Code for KHGT accepted by AAAI2021 Please unzip the data files in Datasets/ first. To run KHGT on Yelp data, use python labcode_yelp.py For Movi

32 Nov 29, 2022
Jointly Learning Explainable Rules for Recommendation with Knowledge Graph

Jointly Learning Explainable Rules for Recommendation with Knowledge Graph

57 Nov 03, 2022
Deep recommender models using PyTorch.

Spotlight uses PyTorch to build both deep and shallow recommender models. By providing both a slew of building blocks for loss functions (various poin

Maciej Kula 2.8k Dec 29, 2022
Implementation of a hadoop based movie recommendation system

Implementation-of-a-hadoop-based-movie-recommendation-system Design a Hadoop-based movie recommendation system by writing code, and through building it, master file operations and data processing skills on the Hadoop platform. windows 10 hadoop 2.8.3 p

汝聪(Ricardo) 5 Oct 02, 2022
A PyTorch implementation of "Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information" (WSDM 2021)

FairGNN A PyTorch implementation of "Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information" (

31 Jan 04, 2023
Attentive Social Recommendation: Towards User And Item Diversities

ASR This is a Tensorflow implementation of the paper: Attentive Social Recommendation: Towards User And Item Diversities Preprint, https://arxiv.org/a

Dongsheng Luo 1 Nov 14, 2021
It is a movie recommender web application which is developed using the Python.

Movie Recommendation 🍿 System Watch Tutorial for this project Source IMDB Movie 5000 Dataset Inspired from this original repository. Features Simple

Kushal Bhavsar 10 Dec 26, 2022
Codes for AAAI'21 paper 'Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation'

DHCN Codes for AAAI 2021 paper 'Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation'. Please note that the default link

Xin Xia 124 Dec 14, 2022
Graph Neural Networks for Recommender Systems

This repository contains code to train and test GNN models for recommendation, mainly using the Deep Graph Library (DGL).

217 Jan 04, 2023
Recommendation Systems for IBM Watson Studio platform

Recommendation-Systems-for-IBM-Watson-Studio-platform Project Overview In this project, I analyze the interactions that users have with articles on th

Milad Sadat-Mohammadi 1 Jan 21, 2022
Plex-recommender - Get movie recommendations based on your current PleX library

plex-recommender Description: Get movie/tv recommendations based on your current

5 Jul 19, 2022
Hierarchical Fashion Graph Network for Personalized Outfit Recommendation, SIGIR 2020

hierarchical_fashion_graph_network This is our Tensorflow implementation for the paper: Xingchen Li, Xiang Wang, Xiangnan He, Long Chen, Jun Xiao, and

LI Xingchen 70 Dec 05, 2022
Incorporating User Micro-behaviors and Item Knowledge into Multi-task Learning for Session-based Recommendation

MKM-SR Incorporating User Micro-behaviors and Item Knowledge into Multi-task Learning for Session-based Recommendation Paper data and code This is the

ciecus 38 Dec 05, 2022
Persine is an automated tool to study and reverse-engineer algorithmic recommendation systems.

Persine, the Persona Engine Persine is an automated tool to study and reverse-engineer algorithmic recommendation systems. It has a simple interface a

Jonathan Soma 87 Nov 29, 2022