METER: Multimodal End-to-end TransformER

Related tags

Deep Learning, METER
Overview

METER is a framework for training end-to-end vision-and-language transformers, presented in the paper "An Empirical Study of Training End-to-End Vision-and-Language Transformers".

Code and pre-trained models will be released soon.

Citation

@article{dou2021meter,
  title={An Empirical Study of Training End-to-End Vision-and-Language Transformers},
  author={Dou, Zi-Yi and Xu, Yichong and Gan, Zhe and Wang, Jianfeng and Wang, Shuohang and Wang, Lijuan and Zhu, Chenguang and Peng, Nanyun and Liu, Zicheng and Zeng, Michael},
  journal={arXiv preprint arXiv:2111.02387},
  year={2021},
  url={https://arxiv.org/abs/2111.02387},
}

Acknowledgements

The code is based on ViLT, and some of it is borrowed from CLIP and Swin-Transformer.

Comments
  • questions about VQA

    Hi, could you share the VQAv2 result for fine-tuning at an image resolution of 384? The result I obtained is 76.52, using your checkpoint pre-trained on COCO, SBU, VG, and CC3M.

    opened by Henry9805 20
  • Some questions for the paper

    What is the difference between the scores in Table 5 and Table 8? Table 5 reports 77.19 on the VQAv2 test-dev set, while Table 8 reports 77.68 on the same split.

    opened by wanng-ide 17
  • How much is the per gpu batch size?

    The total batch size is 4096 and the number of GPUs is 8, so should the per-GPU batch size be 512? I am using A100 GPUs, but the batch size can only be set to 16 (see the batch-size sketch after this list).

    opened by qiao1025566574 5
  • pretraining task

    Hello authors, great work! I'm curious whether you have tried adding image-text contrastive (ITC) learning as a pre-training task, since the ALBEF paper reports that the ITC objective has a large impact on results.

    opened by mactavish91 4
  • Inference with Fine-tuned SNLI Model

    Hi,

    Thank you for the great work and the fine-tuned models. I wanted to ask how I should run inference with a fine-tuned model. Currently, I run into this error in my notebook:

    1 model = METERTransformerSS(cfg)
    ----> 2 model.load_state_dict(torch.load("/content/meter_clip16_288_roberta_snli.ckpt")['state_dict'])
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
       1050         if len(error_msgs) > 0:
       1051             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    -> 1052                                self.__class__.__name__, "\n\t".join(error_msgs)))
       1053         return _IncompatibleKeys(missing_keys, unexpected_keys)
       1054 
    
    RuntimeError: Error(s) in loading state_dict for METERTransformerSS:
    	Unexpected key(s) in state_dict: "vit_model.token_embedding.weight". 
    	size mismatch for vit_model.visual.positional_embedding: copying a param with shape torch.Size([577, 768]) from checkpoint, the shape in current model is torch.Size([197, 768]).
    

    I wonder if this is due to how I configured the model. Is there a specific way I should create the config for inference? Thank you in advance. (See the positional-embedding sketch after this list.)

    opened by sramshetty 4
  • The model meter_clip16_288_roberta_flickr.ckpt is inconsistent with the network weight parameter dimension

    Hi, thank you for your excellent work. I used the model "METER-CLIP16-RoBERTa fine-tuned on Flickr30k IR/TR (resolution: 384^2)" as meter_clip16_288_roberta_flickr.ckpt; why does the code report an error about inconsistent weight dimensions? Thank you for answering my question. (See the positional-embedding sketch after this list.)

    opened by attutude 4
  • Unable to train models faster with more gpus

    Hi, I am facing an issue where, on increasing the number of GPUs and nodes, the number of steps per epoch does not change. For example, if I run

    python run.py with data_root=/data/datasets/meter_data_combined num_gpus=4 num_nodes=8 task_mlm_itm_clip_bert per_gpu_batchsize=64 clip16 text_roberta image_size=224 precision=16 datasets='["vg"]'

    the number of steps per epoch is nearly 150k. I observe that the number of steps is 150k both when num_gpus=1 num_nodes=1 and when num_gpus=4 num_nodes=8, and I made sure that all GPUs were being utilized in the latter case. I also observe that with num_gpus=4 num_nodes=8 each epoch takes ~160 hours, while it takes ~30 hours with num_gpus=1 num_nodes=1.

    Do you have any suggestions for this problem? (See the steps-per-epoch sketch after this list.)

    opened by HarmanDotpy 3
  • GPU OOM when pretraining

    Hi, I'm trying to pre-train METER on 8 A100 GPUs with the recommended config:

    python run.py with num_gpus=8 num_nodes=1 task_mlm_itm_clip_bert per_gpu_batchsize=32 clip16 text_roberta image_size=288
    

    but the GPU OOM occurred.

    So what is the exact per_gpu_batchsize? And how can I pre-train the model in about 8 days, as mentioned in the paper? (The batch-size sketch after this list shows the accumulation arithmetic.)

    By the way, will mixed-precision training (precision=16) cause a performance drop?

    Many thanks!

    opened by hi-zhenyu 3
  • Training settings for different pre-training datasets

    When I tried to reproduce the results in Table 17, I found that using the default learning rate with only the COCO pre-training dataset performed extremely poorly on downstream tasks.

    So, I would like to ask: do you use different training hyperparameters (e.g., learning rate, batch size, max epochs) for different pre-training datasets?

    opened by ShiYaya 2
  • question about the pre-trained weights

    Dear authors, thanks for the great work! I downloaded the pre-trained ViT-B-16(224)+RoBERTa checkpoint from https://github.com/zdou0830/METER/releases/download/checkpoint2/meter_clip16_224_roberta_pretrain.ckpt and found that the last layer of the visual encoder, "vit_model.visual.transformer.resblocks.11...", is not included in the ckpt file. Did I miss something? Could you please help me check?

    opened by Junction4Nako 2
  • About license

    Thanks for the great work! The codebase is released under an MIT license (https://github.com/zdou0830/METER/blob/main/LICENSE) and an Apache License (https://github.com/zdou0830/METER/blob/main/ViLT_LICENSE).

    I would like to know whether the pre-trained models are also released under the same licenses. Thanks.

    opened by WangWenhao0716 2
  • Pretrained weights of CLIP-ViT-224/32

    Hi,

    Thanks for the code! I wonder if you plan to release the pretrained weights of CLIP-ViT-224/32 (e.g., METER-CLIP32-RoBERTa (resolution: 224^2) pre-trained on GCC+SBU+COCO+VG)? It would be helpful for those who want to play with your model but don't have enough computational resources. Thanks!

    opened by bfshi 0
  • The last checkpoint or the best one on the Val split?

    Hi, I'm confused about which checkpoint to test with on the downstream tasks.

    Should I evaluate the last checkpoint or the saved top-1 checkpoint on the val split?

    opened by hi-zhenyu 3
  • Why are the test results different when using the same data?

    I used pl.seed_everything to set the seed,

    pl.seed_everything(_config["seed"], workers=True)
    

    but I still got different results when I tested the Flickr30k image-to-text retrieval task on a model I trained myself. First:

    (tensor(0.7382), tensor(0.9274), tensor(0.9638), tensor(0.8965), tensor(0.9814), tensor(0.9941)) 0
    

    Second:

    (tensor(0.7366), tensor(0.9294), tensor(0.9656), tensor(0.8975), tensor(0.9814), tensor(0.9941)) 0
    

    I made sure the config files are the same. Have you encountered this problem? (See the determinism sketch after this list.)

    opened by qiao1025566574 1
  • ValueError and AttributeError

    Hi, I'm trying to make run.py work for pre-training, but I got a ValueError and an AttributeError and couldn't find a solution. Can you help me check? Thank you very much! (See the offline-tokenizer sketch after this list.)

    Traceback (most recent call last):
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 312, in run_commandline
        return self.run(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 276, in run
        run()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/run.py", line 238, in __call__
        self.result = self.main_function(*args)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/config/captured_function.py", line 42, in captured_function
        result = wrapped(*args, **kwargs)
      File "run.py", line 20, in main
        dm = MTDataModule(_config, dist=True)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/multitask_datamodule.py", line 19, in __init__
        self.dm_dicts = {key: _datamodules[key](_config) for key in datamodule_keys}
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/multitask_datamodule.py", line 19, in <dictcomp>
        self.dm_dicts = {key: _datamodules[key](_config) for key in datamodule_keys}
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/coco_caption_karpathy_datamodule.py", line 7, in __init__
        super().__init__(*args, **kwargs)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/datamodule_base.py", line 60, in __init__
        self.tokenizer = get_pretrained_tokenizer(tokenizer)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/datamodule_base.py", line 25, in get_pretrained_tokenizer
        return RobertaTokenizer.from_pretrained(from_pretrained)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1672, in from_pretrained
        resolved_vocab_files[file_id] = cached_path(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/file_utils.py", line 1271, in cached_path
        output_path = get_from_cache(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/file_utils.py", line 1494, in get_from_cache
        raise ValueError(
    ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "run.py", line 16, in <module>
        def main(_config):
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 190, in automain
        self.run_commandline()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 347, in run_commandline
        print_filtered_stacktrace()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 493, in print_filtered_stacktrace
        print(format_filtered_stacktrace(filter_traceback), file=sys.stderr)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 528, in format_filtered_stacktrace
        return "".join(filtered_traceback_format(tb_exception))
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 568, in filtered_traceback_format
        current_tb = tb_exception.exc_traceback
    AttributeError: 'TracebackException' object has no attribute 'exc_traceback'

    opened by huhuhud 3
  • Pre-trained models for the Merged Attention Model?

    Thanks for the amazing repository; the code is really clean. If I understand correctly, the current implementation is the co-attention model, and the same goes for the pre-trained weights. I wanted to know if you have plans to release the merged-attention model weights as well! Thanks in advance!

    opened by TheShadow29 1
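
A few of the recurring questions above can be pinned down with short sketches. First, on the per-GPU batch size and the pre-training OOM reports: in the ViLT-style configs that METER builds on, per_gpu_batchsize is only the micro-batch that has to fit on one GPU, and the trainer is expected to reach the total batch_size through gradient accumulation. A minimal sketch of that arithmetic, assuming this accumulation behavior (the helper name grad_accum_steps is ours, not part of the codebase):

    # Effective-batch-size arithmetic under the assumed accumulation scheme:
    # total batch = per-GPU micro-batch x world size x accumulation steps.
    def grad_accum_steps(batch_size, per_gpu_batchsize, num_gpus, num_nodes):
        """Gradient-accumulation steps needed to reach the target batch size."""
        micro_total = per_gpu_batchsize * num_gpus * num_nodes
        return max(batch_size // micro_total, 1)

    # Total batch 4096 on 8 GPUs: a micro-batch of 16 (what fits on one A100
    # in the report above) implies 32 accumulation steps, so setting the
    # per-GPU batch size to 512 directly is expected to run out of memory.
    print(grad_accum_steps(4096, 16, 8, 1))  # -> 32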
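
On the two state_dict mismatches ("Inference with Fine-tuned SNLI Model" and the Flickr30k checkpoint): the 577-vs-197 conflict is the positional-embedding length at different input resolutions with 16x16 patches, since (384/16)^2 + 1 = 577 tokens while (224/16)^2 + 1 = 197. The simplest fix is to build the model with the image_size the checkpoint was fine-tuned at; alternatively, the embedding can be resized before loading. A sketch under those assumptions (resize_pos_embed is a hypothetical helper, not part of the METER codebase; the extra vit_model.token_embedding.weight key can be skipped with load_state_dict(..., strict=False)):

    import torch
    import torch.nn.functional as F

    def resize_pos_embed(pos_embed, new_grid):
        """Bilinearly resize a ViT positional embedding of shape (1 + N, D)
        to a new patch grid, keeping the leading [CLS] position."""
        cls_pos, patch_pos = pos_embed[:1], pos_embed[1:]
        old_grid = int(patch_pos.shape[0] ** 0.5)  # 24 for 577 tokens, 14 for 197
        dim = patch_pos.shape[1]
        grid = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
        grid = F.interpolate(grid, size=(new_grid, new_grid),
                             mode="bilinear", align_corners=False)
        grid = grid.permute(0, 2, 3, 1).reshape(new_grid * new_grid, dim)
        return torch.cat([cls_pos, grid], dim=0)

    # Example: adapt the 384^2 checkpoint's embedding (577 x 768) to a 224^2 model.
    sd = torch.load("meter_clip16_288_roberta_snli.ckpt")["state_dict"]
    key = "vit_model.visual.positional_embedding"
    sd[key] = resize_pos_embed(sd[key], new_grid=224 // 16)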
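
On "Unable to train models faster with more gpus": when data-parallel sharding is working, the number of steps per epoch should shrink linearly with the world size, so an unchanged ~150k step count from 1 GPU to 32 suggests each rank is iterating over the full dataset rather than its shard. A sketch of the expected count (the dataset size below is a hypothetical figure chosen only to reproduce ~150k single-GPU steps at per_gpu_batchsize=64):

    import math

    def expected_steps_per_epoch(dataset_size, per_gpu_batchsize, num_gpus, num_nodes):
        """Steps per epoch when every rank sees a distinct shard of the data."""
        world_size = num_gpus * num_nodes
        return math.ceil(dataset_size / (per_gpu_batchsize * world_size))

    print(expected_steps_per_epoch(9_600_000, 64, 1, 1))  # -> 150000
    print(expected_steps_per_epoch(9_600_000, 64, 4, 8))  # -> 4688, not 150k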
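
On "Why are the test results different when using the same data?": pl.seed_everything fixes the random seeds, but some CUDA kernels are nondeterministic regardless of seeding, and a spread of a few tenths of a point between two retrieval runs is consistent with that. A sketch of the usual switches, assuming recent PyTorch and PyTorch Lightning versions (exact flag names vary across versions):

    import torch
    import pytorch_lightning as pl

    pl.seed_everything(0, workers=True)       # seeds Python, NumPy, and torch RNGs
    torch.backends.cudnn.benchmark = False    # no autotuned, run-dependent conv algos
    torch.use_deterministic_algorithms(True)  # raise on nondeterministic CUDA kernels
    # Note: full CUDA determinism may also need the CUBLAS_WORKSPACE_CONFIG
    # environment variable (e.g. ":4096:8") set before the process starts.

    trainer = pl.Trainer(deterministic=True)  # Lightning's own determinism switch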
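
On "ValueError and AttributeError": the first traceback is the root cause; transformers cannot reach the Hugging Face hub to download the roberta-base tokenizer, and the AttributeError is only raised inside sacred's stack-trace printing while handling that connection error. A minimal workaround sketch (the local directory path is hypothetical): cache the tokenizer once on a machine with internet access, then load it from disk when training offline.

    from transformers import RobertaTokenizer

    # One-time, on a machine with internet access:
    tok = RobertaTokenizer.from_pretrained("roberta-base")
    tok.save_pretrained("/data/hf/roberta-base")  # hypothetical local path

    # On the offline machine, point the tokenizer at the local copy instead of
    # the hub name (this is the value that reaches get_pretrained_tokenizer
    # in datamodule_base.py):
    tok = RobertaTokenizer.from_pretrained("/data/hf/roberta-base")
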
Owner

Zi-Yi Dou (窦子轶)