METER: Multimodal End-to-end TransformER

Related tags

Deep Learning, METER
Overview

METER

METER (Multimodal End-to-end TransformER) is a framework for training end-to-end vision-and-language transformers, introduced in the paper "An Empirical Study of Training End-to-End Vision-and-Language Transformers".

Code and pre-trained models will be released soon.

Citation

@article{dou2021meter,
  title={An Empirical Study of Training End-to-End Vision-and-Language Transformers},
  author={Dou, Zi-Yi and Xu, Yichong and Gan, Zhe and Wang, Jianfeng and Wang, Shuohang and Wang, Lijuan and Zhu, Chenguang and Peng, Nanyun and Liu, Zicheng and Zeng, Michael},
  journal={arXiv preprint arXiv:2111.02387},
  year={2021},
  url={https://arxiv.org/abs/2111.02387},
}

Acknowledgements

The code is based on ViLT, and parts of it are borrowed from CLIP and Swin-Transformer.

Comments
  • questions about VQA

    Hi, could you share the VQAv2 result when fine-tuning with an image resolution of 384? The result I obtained is 76.52, based on your checkpoint pre-trained on COCO, SBU, VG, and CC3M.

    opened by Henry9805 20
  • Some questions for the paper

    What is the difference between the scores in Table 5 and Table 8? Table 5 reports 77.19 on the VQAv2 test-dev set, while Table 8 reports 77.68 on the same set.

    opened by wanng-ide 17
  • How much is the per gpu batch size?

    What is the per-GPU batch size? The total batch size is 4096 and the GPU count is 8, so is the per-GPU batch size 512? On my A100 GPUs, the batch size can only be set to 16. (See the gradient-accumulation sketch below.)

    opened by qiao1025566574 5
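
    A minimal sketch of the batch-size arithmetic, assuming ViLT-style config keys matching this repo's commands (batch_size, per_gpu_batchsize, num_gpus, num_nodes): the global batch of 4096 is typically reached through gradient accumulation, so a per-GPU batch of 16 is workable.

    # Sketch: reaching a global batch of 4096 with small per-GPU batches
    # via gradient accumulation (config keys assumed, not verified).
    def grad_accum_steps(batch_size: int, per_gpu_batchsize: int,
                         num_gpus: int, num_nodes: int) -> int:
        world_size = num_gpus * num_nodes
        samples_per_step = per_gpu_batchsize * world_size
        assert batch_size % samples_per_step == 0
        return batch_size // samples_per_step

    # 4096 total with 16 per GPU on 8 GPUs -> accumulate over 32 steps
    print(grad_accum_steps(4096, per_gpu_batchsize=16, num_gpus=8, num_nodes=1))  # 32
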
  • pretraining task

    Hello author, great work! I'm curious whether you have tried adding image-text contrastive (ITC) learning as a pre-training task. In the ALBEF paper, the authors reported that the ITC task had a large impact on their results.

    opened by mactavish91 4
  • Inference with Fine-tuned SNLI Model

    Hi,

    Thank you for the great work and the fine-tuned models, but I just wanted to ask how I should go about running inference with the fine-tuned model. Currently, I run into this error in my notebook:

    1 model = METERTransformerSS(cfg)
    ----> 2 model.load_state_dict(torch.load("/content/meter_clip16_288_roberta_snli.ckpt")['state_dict'])
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
       1050         if len(error_msgs) > 0:
       1051             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    -> 1052                                self.__class__.__name__, "\n\t".join(error_msgs)))
       1053         return _IncompatibleKeys(missing_keys, unexpected_keys)
       1054 
    
    RuntimeError: Error(s) in loading state_dict for METERTransformerSS:
    	Unexpected key(s) in state_dict: "vit_model.token_embedding.weight". 
    	size mismatch for vit_model.visual.positional_embedding: copying a param with shape torch.Size([577, 768]) from checkpoint, the shape in current model is torch.Size([197, 768]).
    

    I wonder whether this is due to how I configured the model. Is there a specific way I should create the config for inference? Thank you in advance. (A sketch of the shape mismatch follows below.)

    opened by sramshetty 4
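
    A hedged sketch of where the 577-vs-197 mismatch comes from: a ViT's positional-embedding table has (image_size / patch_size)^2 + 1 entries, so the checkpoint was fine-tuned at resolution 384 while the default config builds the model for 224. Setting image_size=384 in the config (the key used by the run.py commands in this repo) should align the shapes, and strict=False skips the stray "vit_model.token_embedding.weight" key.

    # Positional-embedding length for a ViT with a CLS token.
    def num_positions(image_size: int, patch_size: int = 16) -> int:
        return (image_size // patch_size) ** 2 + 1

    print(num_positions(384))  # 577 -> the checkpoint's resolution
    print(num_positions(224))  # 197 -> the default config's resolution

    # So build the model with image_size=384 and load non-strictly, e.g.:
    #   model.load_state_dict(
    #       torch.load("meter_clip16_288_roberta_snli.ckpt")["state_dict"],
    #       strict=False,
    #   )
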
  • The model meter_clip16_288_roberta_flickr.ckpt is inconsistent with the network weight parameter dimension

    Hi, thank you for your excellent work. May I use the model "METER-CLIP16-RoBERTa fine-tuned on Flickr30k IR/TR (resolution: 384^2)" as meter_clip16_288_roberta_flickr.ckpt? Why does the code report an error about inconsistent dimensions? Thank you for answering my question.

    opened by attutude 4
  • Unable to train models faster with more gpus

    Hi, I am facing an issue where increasing the number of GPUs and nodes does not change the number of steps per epoch. For example, if I run

    python run.py with data_root=/data/datasets/meter_data_combined num_gpus=4 num_nodes=8 task_mlm_itm_clip_bert per_gpu_batchsize=64 clip16 text_roberta image_size=224 precision=16 datasets='["vg"]'

    the number of steps per epoch is nearly 150k. It is 150k both when num_gpus=1 num_nodes=1 and when num_gpus=4 num_nodes=8, even though I made sure all GPUs were being utilized in the num_gpus=4 num_nodes=8 setting. I also observe that with num_gpus=4 num_nodes=8 the time per epoch is ~160 hours in my case, while it is ~30 hours with num_gpus=1 num_nodes=1.

    Do you have any suggestions for this problem? (A sketch of the expected step count under DDP follows below.)

    opened by HarmanDotpy 3
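
    A sketch of the expected behavior, assuming PyTorch Lightning's usual DDP setup: a DistributedSampler gives each rank len(dataset)/world_size samples, so steps per epoch should shrink as GPUs are added; if they don't, the distributed sampler is likely not being applied to the DataLoader. The dataset size below is hypothetical, chosen only to match the reported ~150k steps.

    import math

    def steps_per_epoch(dataset_len: int, per_gpu_batchsize: int,
                        num_gpus: int, num_nodes: int) -> int:
        world_size = num_gpus * num_nodes
        per_rank = math.ceil(dataset_len / world_size)  # DistributedSampler shard
        return math.ceil(per_rank / per_gpu_batchsize)

    n = 9_600_000  # hypothetical dataset size (~150k steps at batch 64 on 1 GPU)
    print(steps_per_epoch(n, 64, num_gpus=1, num_nodes=1))  # 150000
    print(steps_per_epoch(n, 64, num_gpus=4, num_nodes=8))  # 4688 expected
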
  • GPU OOM when pretraining

    Hi, I'm trying to pre-train METER on 8 A100 GPUs with the recommended config:

    python run.py with num_gpus=8 num_nodes=1 task_mlm_itm_clip_bert per_gpu_batchsize=32 clip16 text_roberta image_size=288
    

    but the GPU OOM occurred.

    So what is the exact per_gpu_batchsize? And how can I pre-train the model in about 8 days, as mentioned in the paper?

    By the way, will the mixed precision training (precision=16) cause a performance drop?

    Many thanks!

    opened by hi-zhenyu 3
  • The training set of using different pretraining datasets.

    When I tried to reproduce the results in Table 17, I found that using the default learning rate with only the COCO pre-training dataset worked extremely poorly on downstream tasks.

    So I would like to ask: do you set different training parameters (e.g., learning rate, batch size, max epochs) for different pre-training datasets?

    opened by ShiYaya 2
  • question about the pre-trained weights

    Dear authors, thanks for the great work! I have downloaded the pre-trained weights of the ViT-B-16(224)+RoBERTa checkpoint from https://github.com/zdou0830/METER/releases/download/checkpoint2/meter_clip16_224_roberta_pretrain.ckpt, and found that the last layer of the visual encoder ("vit_model.visual.transformer.resblocks.11...") is not included in the ckpt file. Did I miss something? Could you please help me check it?

    opened by Junction4Nako 2
  • About license

    Thanks for the great work! The codebase is released under an MIT license (https://github.com/zdou0830/METER/blob/main/LICENSE) and an Apache License (https://github.com/zdou0830/METER/blob/main/ViLT_LICENSE).

    Are the pre-trained models also released under the same licenses? Thanks.

    opened by WangWenhao0716 2
  • Pretrained weights of CLIP-ViT-224/32

    Hi,

    Thanks for the code! I wonder if you plan to release the pretrained weights of CLIP-ViT-224/32 (e.g., METER-CLIP32-RoBERTa (resolution: 224^2) pre-trained on GCC+SBU+COCO+VG)? It would be helpful for those who want to play with your model but don't have enough computational resources. Thanks!

    opened by bfshi 0
  • The last checkpoint or the best one on the Val split?

    Hi, I'm confused about which checkpoint to use for testing on the downstream tasks.

    Should I evaluate the last checkpoint, or the top-1 checkpoint saved on the val split? (A PyTorch Lightning checkpoint sketch follows below.)

    opened by hi-zhenyu 3
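
    A minimal PyTorch Lightning sketch (not necessarily the repo's exact callback) showing how both candidates can be kept, so either convention can be evaluated; "val/the_metric" is a placeholder for whatever metric the task logs.

    from pytorch_lightning.callbacks import ModelCheckpoint

    checkpoint_callback = ModelCheckpoint(
        monitor="val/the_metric",  # hypothetical metric name
        mode="max",
        save_top_k=1,    # keep the best checkpoint on the val split
        save_last=True,  # also keep last.ckpt
    )
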
  • Why the test results are different using same data?

    I used pl.seed_everything to set the seed,

    pl.seed_everything(_config["seed"], workers=True)
    

    but I still got different results when testing the Flickr30k image-to-text retrieval task on a model I trained myself. First run:

    (tensor(0.7382), tensor(0.9274), tensor(0.9638), tensor(0.8965), tensor(0.9814), tensor(0.9941)) 0
    

    Second run:

    (tensor(0.7366), tensor(0.9294), tensor(0.9656), tensor(0.8975), tensor(0.9814), tensor(0.9941)) 0
    

    I made sure the config files are the same. Have you encountered this problem? (A determinism sketch follows below.)

    opened by qiao1025566574 1
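
    A minimal sketch of a stricter determinism setup: seed_everything alone does not make CUDA kernels deterministic, which is a common source of small run-to-run drift in retrieval scores.

    import torch
    import pytorch_lightning as pl

    pl.seed_everything(0, workers=True)
    torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # disable cuDNN autotuning
    # Error out if any op lacks a deterministic implementation (some ops also
    # require the CUBLAS_WORKSPACE_CONFIG environment variable to be set):
    torch.use_deterministic_algorithms(True)
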
  • ValueError and AttributeError

    Hi, I'm trying to make "run.py" work for pre-training, but I got a ValueError and an AttributeError and couldn't find a solution. Can you help me check it? Thank you very much! (A tokenizer-download workaround sketch follows below.)

    Traceback (most recent call last):
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 312, in run_commandline
        return self.run(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 276, in run
        run()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/run.py", line 238, in __call__
        self.result = self.main_function(*args)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/config/captured_function.py", line 42, in captured_function
        result = wrapped(*args, **kwargs)
      File "run.py", line 20, in main
        dm = MTDataModule(_config, dist=True)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/multitask_datamodule.py", line 19, in __init__
        self.dm_dicts = {key: _datamodules[key](_config) for key in datamodule_keys}
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/multitask_datamodule.py", line 19, in <dictcomp>
        self.dm_dicts = {key: _datamodules[key](_config) for key in datamodule_keys}
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/coco_caption_karpathy_datamodule.py", line 7, in __init__
        super().__init__(*args, **kwargs)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/datamodule_base.py", line 60, in __init__
        self.tokenizer = get_pretrained_tokenizer(tokenizer)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/datamodule_base.py", line 25, in get_pretrained_tokenizer
        return RobertaTokenizer.from_pretrained(from_pretrained)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1672, in from_pretrained
        resolved_vocab_files[file_id] = cached_path(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/file_utils.py", line 1271, in cached_path
        output_path = get_from_cache(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/file_utils.py", line 1494, in get_from_cache
        raise ValueError(
    ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "run.py", line 16, in <module>
        def main(_config):
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 190, in automain
        self.run_commandline()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 347, in run_commandline
        print_filtered_stacktrace()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 493, in print_filtered_stacktrace
        print(format_filtered_stacktrace(filter_traceback), file=sys.stderr)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 528, in format_filtered_stacktrace
        return "".join(filtered_traceback_format(tb_exception))
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 568, in filtered_traceback_format
        current_tb = tb_exception.exc_traceback
    AttributeError: 'TracebackException' object has no attribute 'exc_traceback'

    opened by huhuhud 3
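
    The first traceback is a failed download of the RoBERTa tokenizer; the AttributeError comes from sacred's own traceback filtering while printing that first error, so the ValueError is the one to fix. A hedged workaround sketch: cache the tokenizer once on a machine with internet access, then load from the local copy ("./roberta-base-local" is an arbitrary directory name).

    from transformers import RobertaTokenizer

    # On a connected machine, download and save the tokenizer files locally:
    tok = RobertaTokenizer.from_pretrained("roberta-base")
    tok.save_pretrained("./roberta-base-local")

    # On the training machine, load from the local copy (no network needed):
    tok = RobertaTokenizer.from_pretrained("./roberta-base-local")
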
  • Pre-trained models for the Merged Attention Model?

    Thanks for the amazing repository. The code is really clean. If I understand correctly, the current implementation is the co-attention model, and the same goes for the pre-trained weights. I wanted to know if you have plans to release the merged-attention model weights as well! Thanks in advance!

    opened by TheShadow29 1
Owner

Zi-Yi Dou (窦子轶)