Easy-to-use, modular and extendible package of deep-learning based CTR models.

Overview

DeepCTR


DeepCTR is an easy-to-use, modular and extendible package of deep-learning based CTR models, along with lots of core component layers that can be used to easily build custom models. You can train any complex model with model.fit() and make predictions with model.predict().

  • Provides a tf.keras.Model-like interface for quick experiments. example
  • Provides a TensorFlow estimator interface for large-scale data and distributed training. example
  • Compatible with both tf 1.x and tf 2.x.
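
For orientation, here is a minimal, self-contained sketch of that workflow. It is a hypothetical example with made-up feature names and random data, and it assumes a recent DeepCTR release where feature columns live in deepctr.feature_column:

```python
import numpy as np
from deepctr.models import DeepFM
from deepctr.feature_column import SparseFeat, DenseFeat, get_feature_names

# Declare the features: two categorical (embedded) and one numeric.
feature_columns = [SparseFeat('user_id', vocabulary_size=100, embedding_dim=4),
                   SparseFeat('item_id', vocabulary_size=200, embedding_dim=4),
                   DenseFeat('price', 1)]
feature_names = get_feature_names(feature_columns)

# Random stand-in data; in practice these arrays come from a DataFrame.
n = 1024
data = {'user_id': np.random.randint(0, 100, n),
        'item_id': np.random.randint(0, 200, n),
        'price': np.random.rand(n)}
y = np.random.randint(0, 2, n)

# Every DeepCTR model is a tf.keras.Model, so fit/predict work as usual.
model = DeepFM(feature_columns, feature_columns, task='binary')
model.compile('adam', 'binary_crossentropy', metrics=['AUC'])
model.fit({name: data[name] for name in feature_names}, y,
          batch_size=256, epochs=1, validation_split=0.2)
pred = model.predict({name: data[name] for name in feature_names}, batch_size=256)
```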


Let's Get Started! (Chinese introduction) and welcome to join us!

Models List

| Model | Paper |
| :---- | :---- |
| Convolutional Click Prediction Model | [CIKM 2015] A Convolutional Click Prediction Model |
| Factorization-supported Neural Network | [ECIR 2016] Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction |
| Product-based Neural Network | [ICDM 2016] Product-based Neural Networks for User Response Prediction |
| Wide & Deep | [DLRS 2016] Wide & Deep Learning for Recommender Systems |
| DeepFM | [IJCAI 2017] DeepFM: A Factorization-Machine based Neural Network for CTR Prediction |
| Piece-wise Linear Model | [arXiv 2017] Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction |
| Deep & Cross Network | [ADKDD 2017] Deep & Cross Network for Ad Click Predictions |
| Attentional Factorization Machine | [IJCAI 2017] Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks |
| Neural Factorization Machine | [SIGIR 2017] Neural Factorization Machines for Sparse Predictive Analytics |
| xDeepFM | [KDD 2018] xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems |
| Deep Interest Network | [KDD 2018] Deep Interest Network for Click-Through Rate Prediction |
| AutoInt | [CIKM 2019] AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks |
| Deep Interest Evolution Network | [AAAI 2019] Deep Interest Evolution Network for Click-Through Rate Prediction |
| FwFM | [WWW 2018] Field-weighted Factorization Machines for Click-Through Rate Prediction in Display Advertising |
| ONN | [arXiv 2019] Operation-aware Neural Networks for User Response Prediction |
| FGCNN | [WWW 2019] Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction |
| Deep Session Interest Network | [IJCAI 2019] Deep Session Interest Network for Click-Through Rate Prediction |
| FiBiNET | [RecSys 2019] FiBiNET: Combining Feature Importance and Bilinear Feature Interaction for Click-Through Rate Prediction |
| FLEN | [arXiv 2019] FLEN: Leveraging Field for Scalable CTR Prediction |
| BST | [DLP-KDD 2019] Behavior Sequence Transformer for E-commerce Recommendation in Alibaba |
| IFM | [IJCAI 2019] An Input-aware Factorization Machine for Sparse Prediction |
| DCN V2 | [arXiv 2020] DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems |
| DIFM | [IJCAI 2020] A Dual Input-aware Factorization Machine for CTR Prediction |
| FEFM and DeepFEFM | [arXiv 2020] Field-Embedded Factorization Machines for Click-through Rate Prediction |
| SharedBottom | [arXiv 2017] An Overview of Multi-Task Learning in Deep Neural Networks |
| ESMM | [SIGIR 2018] Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate |
| MMOE | [KDD 2018] Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts |
| PLE | [RecSys 2020] Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations |

Citation

If you find this code useful in your research, please cite it using the following BibTeX:

@misc{shen2017deepctr,
  author = {Weichen Shen},
  title = {DeepCTR: Easy-to-use, Modular and Extendible package of deep-learning based CTR models},
  year = {2017},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/shenweichen/deepctr}},
}

Discussion Group

  • Discussions

  • WeChat official account: 浅梦学习笔记

  • WeChat ID: deepctrbot


Main contributors (welcome to join us!)

  • Shen Weichen, Alibaba Group
  • Zan Shuxun, Alibaba Group
  • Harshit Pande, Amazon
  • Lai Mincai, ShanghaiTech University
  • Li Zichao, Peking University
  • Tan Tingyi, Chongqing University of Posts and Telecommunications

Comments
  • Error with TensorFlow 2.7

    WARNING:tensorflow: The following Variables were used in a Lambda layer's call (tf.linalg.matmul_1), but are not present in its tracked objects: <tf.Variable 'dense_1/kernel:0' shape=(64, 1) dtype=float32>. It is possible that this is intended behavior, but it is more likely an omission. This is a strong indication that this layer should be formulated as a subclassed Layer rather than a Lambda layer.

    ```
    AttributeError                            Traceback (most recent call last)
    <ipython-input> in <module>()
         31
         32 # 4.Define Model,train,predict and evaluate
    ---> 33 model = DeepFM(linear_feature_columns, dnn_feature_columns, task='regression')
         34 model.compile("adam", "mse", metrics=['mse'], )
         35

    4 frames
    /usr/local/lib/python3.7/dist-packages/keras/engine/functional_utils.py in is_input_keras_tensor(tensor)
         46     if not node_module.is_keras_tensor(tensor):
         47         raise ValueError(_KERAS_TENSOR_TYPE_CHECK_ERROR_MSG.format(tensor))
    ---> 48     return tensor.node.is_input
         49
         50

    AttributeError: 'KerasTensor' object has no attribute 'node'
    ```

    Is this a TensorFlow version problem?

    bug to be solved question 
    opened by moseshu 16
  • Is this caused by the wrong Keras version?

    ```
    Traceback (most recent call last):
      File "E:/testings/DeepCTR-master/examples/run_classification_criteo.py", line 44, in <module>
        model = DeepFM(linear_feature_columns, dnn_feature_columns, task='binary')
      File "E:\testings\DeepCTR-master\deepctr\models\deepfm.py", line 64, in DeepFM
        model = tf.keras.models.Model(inputs=inputs_list, outputs=output)
      File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\tensorflow\python\training\tracking\base.py", line 629, in _method_wrapper
        result = method(self, *args, **kwargs)
      File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\keras\engine\functional.py", line 144, in __init__
        for t in tf.nest.flatten(inputs)]):
      File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\keras\engine\functional_utils.py", line 48, in is_input_keras_tensor
        return tensor.node.is_input
    AttributeError: 'KerasTensor' object has no attribute 'node'
    ```

    to be solved question 
    opened by Dimitri666 13
  • Error running the DIN example run_din.py

    Running https://github.com/shenweichen/DeepCTR/blob/master/examples/run_din.py raises:

    ```
    ..../site-packages/deepctr/layers/sequence.py:198 call *
        outputs._uses_learning_phase = attention_score._uses_learning_phase
    AttributeError: 'Tensor' object has no attribute '_uses_learning_phase'
    ```

    How can this be solved? Operating environment:

    • python version 3.6
    • tensorflow version 1.4.0
    • deepctr version 0.5.2
    opened by waterbeach 12
  • Error running the Criteo classification example

    When running the example from the documentation, it fails with the error below; a sketch of a common cause follows this issue.

    ```
    tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 104 is not in [0, 14)
      [[{{node sparse_emb_18-C14/embedding_lookup}} = ResourceGather[Tindices=DT_INT32, _class=["loc:@training/Adam/gradients/sparse_emb_18-C14/embedding_lookup_grad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](sparse_emb_18-C14/embeddings, linear_emb_18-C14/Cast)]]
    ```

    Environment: tensorflow 1.11.0, keras 2.2.4, deepctr 0.2.2.

    opened by somTian 12
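    A hedged sketch of a common cause (an assumption, not confirmed in the thread): this error usually means an encoded feature index exceeds the embedding vocabulary size, so sizing each embedding table from the encoded data keeps every lookup in range. SparseFeat below is the current deepctr.feature_column API, which is newer than the deepctr 0.2.2 in the report; the toy DataFrame is purely illustrative:

    ```python
    import pandas as pd
    from sklearn.preprocessing import LabelEncoder
    from deepctr.feature_column import SparseFeat

    # Toy stand-in for the Criteo frame from the report.
    df = pd.DataFrame({'C14': ['a', 'b', 'c', 'a']})
    sparse_features = ['C14']

    # Encode every categorical column to contiguous integer ids first ...
    for feat in sparse_features:
        df[feat] = LabelEncoder().fit_transform(df[feat])

    # ... then size each embedding table from the encoded values, so every
    # index is guaranteed to fall inside [0, vocabulary_size).
    fixlen_feature_columns = [
        SparseFeat(feat, vocabulary_size=df[feat].max() + 1, embedding_dim=4)
        for feat in sparse_features]
    ```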
  • New feature: modify the Hash layer to support a lookup table

    This PR modifies the Hash layer to support table lookups. Two hash techniques are now supported:

    1. Lookup table: when vocabulary_path is set, the layer looks up input keys in a table and outputs the corresponding values. Missing keys always return the default value, e.g. 0.
    2. Bucket hash: when vocabulary_path is not set, Hash hashes the input keys into [0, num_buckets). If mask_zero is set to True, input keys equal to 0 or 0.0 are given hash value 0, and all other values are hashed into [1, num_buckets).

    The vocabulary_path CSV file must follow this convention: the first column holds the keys and the second column holds the values, separated by a comma, e.g.:

    ```
    1,emerson
    2,lake
    3,palmer
    ```

    Example snippet:

    ```python
    >>> hash = Hash(
    ...     num_buckets=3 + 1,
    ...     vocabulary_path=filename,
    ...     default_value=0)
    >>> hash(tf.constant('lake')).numpy()
    2
    >>> hash(tf.constant('lakeemerson')).numpy()
    0
    ```
    opened by dengc367 9
  • Add a CSV hash table to the Hash layer and fix a bug

    • Delete the Lambda sublayer in the LocalActivationUnit layer class

    • Add vocabulary_path to SparseFeat to support the CSV HashTable functionality

    • Update the docs and add examples to them

    • Remove trailing whitespace

    opened by dengc367 9
  • Can't concat when embedding_size is set to "auto"

    Describe the bug: When the embedding size is set to "auto", the Concatenate layer can't merge input embeddings of different sizes at axis=2:

    ```python
    def concat_fun(inputs, axis=-1):
        if len(inputs) == 1:
            return inputs[0]
        else:
            return Concatenate(axis=axis)(inputs)
    ```

    To Reproduce (steps to reproduce the behavior):

    1. Go to '...'
    2. Click on '....'
    3. Scroll down to '....'
    4. ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 1, 36), (None, 1, 30), (None, 1, 6), (None, 1, 12), (None, 1, 12), (None, 1, 30), (None, 1, 12)]

    Operating environment:

    • python version [e.g. 3.4, 3.6]
    • tensorflow version [e.g. 1.4.0, 1.12.0]
    • deepctr version [e.g. 0.2.3,]


    opened by dev-wei 8
  • Very poor inference performance; unsure of the cause

    Describe the question: When using the DeepFM model for inference, a single inference takes about 300 ms. Is such high latency caused by something in my setup, or by the framework itself? Testing the bundled examples gives the same result. If there is a solution, please let me know. Thank you!


    Operating environment:

    • python version [e.g. 3.6]
    • tensorflow version [e.g. 2.1.0,]
    • deepctr version [e.g. 0.7.4,]
    question 
    opened by rickyhuw 7
  • DIEN error: ValueError: The name "seq_length" is used 2 times in the model. All layer names should be unique.

    Describe the bug: Running run_dien.py raises ValueError: The name "seq_length" is used 2 times in the model. All layer names should be unique.

    To Reproduce (steps to reproduce the behavior):

    1. Run examples/run_dien.py
    2. See error

    Operating environment:

    • python version [3.7.4]
    • tensorflow version [2.0.0]
    • deepctr version [0.7.4]


    bug help wanted 
    opened by MildAdam 7
  • How do I use gradient clipping?

    Describe the question: With the same data, different models behave differently: DeepFM trains normally, but NFFM keeps hitting exploding gradients and the loss shows NaN. How should I deal with this? Can gradient clipping be used here, and how? (A sketch of one approach follows this issue.)


    Operating environment:

    • python version 3.6
    • tensorflow version [e.g. 1.3.0,]
    • deepctr version [e.g. 0.4,]
    question 
    opened by zhuangjiayue 7
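    A sketch of one way to enable gradient clipping (an illustrative answer, not from the thread): every DeepCTR model is a tf.keras.Model, and tf.keras optimizers accept clipnorm / clipvalue, so compiling with a clipping optimizer is enough. The tiny Sequential model below only stands in for the NFFM model from the report:

    ```python
    import tensorflow as tf

    # clipnorm=1.0 rescales each variable's gradient so its L2 norm is at
    # most 1.0; clipvalue=0.5 would instead clamp individual gradient values.
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

    # Works with any tf.keras.Model, which every DeepCTR model is.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer, 'binary_crossentropy', metrics=['AUC'])
    ```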
  • DeepFMEstimator on a physical GPU machine gets slower rather than faster (about 40 minutes per epoch); training data is roughly 35 GB

    ```python
    train_df = pd.read_parquet("parquet_train_20201219_110000", engine='pyarrow')
    test_df = pd.read_parquet("parquet_test_20201220_110000", engine='pyarrow')
    df = pd.concat([train_df, test_df], axis=0)

    train_size = len(train_df)
    test_size = len(test_df)

    target = ['label']
    dense_features = ["c1", "c2", "c3", "c4", "c5"]
    sparse_features = [x for x in df.columns if x not in dense_features + target]

    for feat in sparse_features:
        lbe = LabelEncoder()
        df[feat] = lbe.fit_transform(df[feat])
    mms = MinMaxScaler(feature_range=(0, 1))

    df[sparse_features] = df[sparse_features].fillna('-1', )
    df[dense_features] = mms.fit_transform(df[dense_features])

    dnn_feature_columns = []
    linear_feature_columns = []

    for i, feat in enumerate(sparse_features):
        dnn_feature_columns.append(tf.feature_column.embedding_column(
            tf.feature_column.categorical_column_with_identity(feat, df[feat].nunique()), 4))
        linear_feature_columns.append(
            tf.feature_column.categorical_column_with_identity(feat, df[feat].nunique()))
    for feat in dense_features:
        dnn_feature_columns.append(tf.feature_column.numeric_column(feat))
        linear_feature_columns.append(tf.feature_column.numeric_column(feat))

    train = df[0:train_size]
    test = df[train_size:]

    train_model_input = input_fn_pandas(train, sparse_features + dense_features, 'label', shuffle=True)
    test_model_input = input_fn_pandas(test, sparse_features + dense_features, None, shuffle=False)

    model = DeepFMEstimator(linear_feature_columns, dnn_feature_columns, task='binary')

    model.train(train_model_input)
    pred_ans_iter = model.predict(test_model_input)
    pred_ans = list(map(lambda x: x['pred'], pred_ans_iter))
    print("test AUC", round(roc_auc_score(test[target].values, pred_ans), 4))
    ```

    Are there any other settings to tune? Running with multiple GPUs is not much faster either; GPU utilization sits around 20%. (A distribution-strategy sketch follows this issue.)

    question 
    opened by whk6688 6
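    A hedged sketch of one thing to check (an assumption, not a confirmed fix): DeepCTR's estimator models wrap tf.estimator.Estimator, and multi-GPU training for estimators is normally enabled through a tf.estimator.RunConfig carrying a MirroredStrategy. The import path and the config argument below are assumed to match the installed DeepCTR version; linear_feature_columns and dnn_feature_columns come from the snippet above:

    ```python
    import tensorflow as tf
    from deepctr.estimator import DeepFMEstimator  # import path is an assumption

    # Mirror the graph across all visible GPUs; per-GPU batch size and input
    # pipeline throughput (e.g. dataset prefetching) often matter just as much.
    strategy = tf.distribute.MirroredStrategy()
    run_config = tf.estimator.RunConfig(train_distribute=strategy)

    model = DeepFMEstimator(linear_feature_columns, dnn_feature_columns,
                            task='binary', config=run_config)  # config support assumed
    ```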
  • DNN's dropout and batchnorm layers already receive training; what does outputs._uses_learning_phase = training is not None do?

    Describe the question: Since the bn_layers and dropout_layers inside DNN already receive the training flag to distinguish training from inference, what is the purpose of manually setting _uses_learning_phase? My TF version is 2.10.0, which should execute outputs._uses_learning_phase = training is not None: https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/sequence.py#L266-L288

    When computing attention_score, isn't the training flag already passed to the dropout and batchnorm layers inside DNN? https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/sequence.py#L266

    https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/core.py#L104 https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/core.py#L198 https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/core.py#L205

    Why does _uses_learning_phase still need to be set manually? https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/sequence.py#L283-L286

    Does manually setting _uses_learning_phase here have any real effect, and what happens if it is removed? A tf.Tensor does not seem to have a _uses_learning_phase attribute, so this appears to just create a new attribute and assign to it:

    ```python
    outputs._uses_learning_phase = xxx
    ```

    Any guidance would be appreciated.


    Operating environment:

    • python version 3.9.12
    • tensorflow version 2.10.0
    • deepctr version 0.9.3
    question 
    opened by Daemoonn 0
  • Why does Linear layer mode 0 keep the dimension for sparse features while mode 2 does not?

    In the deepctr TensorFlow package, the output of Linear when only sparse features are present is reduce_sum(sparse_input, axis=-1, keep_dims=True), but when both sparse and dense features are present it is reduce_sum(sparse_input, axis=-1, keep_dims=False). What's the rationale for that? Thanks. (A small shape demonstration follows this issue.)

    question 
    opened by fengyinyang 0
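    For reference, a small plain-TensorFlow demonstration of the shape difference being asked about (keep_dims is the TF1 spelling of TF2's keepdims; this is not DeepCTR internals):

    ```python
    import tensorflow as tf

    x = tf.ones((2, 1, 8))  # (batch_size, 1, embedding_dim)

    # keepdims=True preserves the summed axis as size 1 ...
    print(tf.reduce_sum(x, axis=-1, keepdims=True).shape)   # (2, 1, 1)

    # ... while keepdims=False drops that axis entirely.
    print(tf.reduce_sum(x, axis=-1, keepdims=False).shape)  # (2, 1)
    ```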
  • Autopilot Updating Notes

    Hello, the content of the repository is very comprehensive and very beneficial. Could you introduce my notes and share my understanding of autopilot with others? I hope you can continue to improve the relevant content with me. Thank you!

    Autopilot-Updating-Notes

    enhancement&feature request 
    opened by nwaysir 0
  • Bump tensorflow from 2.6.2 to 2.9.3 in /docs

    Bumps tensorflow from 2.6.2 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • How to get multiple outputs from DeepFM?

    I'm trying to use DeepFM to predict scores for the World Cup, which has 2 outputs: [left score, right score]. For the y values the model accepts a (35245, 2) shape and runs, but only produces a single output value. Basically, I'm asking how to set the number of nodes in the output layer to 2 instead of 1. (A possible workaround is sketched after this issue.)

    Y_train looks like this:

       home_score  away_score
             0.0         1.0
             1.0         1.0
             0.0         1.0
             1.0         2.0
    

    The model is:

    ```python
    model = DeepFM(linear_feature_columns, dnn_feature_columns, task='regression')
    model.compile("adam", "mean_squared_error",
                  metrics=['mean_squared_error'], )
    history = model.fit(train_model_input, Y_train.values,
                        batch_size=256, epochs=10, verbose=2, validation_split=0.2, )
    pred_ans = model.predict(test_model_input, batch_size=256)
    ```

    Using Google Colab.

    question 
    opened by jonathonbird 0
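    A hedged workaround sketch (not an official DeepCTR feature): since each DeepCTR model ends in a single prediction unit, the simplest route is to train one model per target column, reusing the feature columns and model inputs from the snippet above:

    ```python
    from deepctr.models import DeepFM

    # Train an independent DeepFM regressor for each score column; the
    # remaining names come from the setup shown in the question.
    models, preds = {}, {}
    for col in ['home_score', 'away_score']:
        m = DeepFM(linear_feature_columns, dnn_feature_columns, task='regression')
        m.compile("adam", "mean_squared_error", metrics=['mean_squared_error'])
        m.fit(train_model_input, Y_train[col].values,
              batch_size=256, epochs=10, verbose=2, validation_split=0.2)
        models[col] = m
        preds[col] = m.predict(test_model_input, batch_size=256)
    ```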
Releases (v0.9.3)