[paper]
OmniMAE: Single Model Masked Pretraining on Images and Videos. Transformer-based architectures have become competitive across a variety of visual domains, most notably images and videos. While prior work has studied these modalities in isolation, having a common architecture suggests that one can train a single unified model for multiple visual modalities. Prior attempts at unified modeling typically use architectures tailored for vision tasks, or obtain worse performance compared to single modality models. In this work, we show that masked autoencoding can be used to train a simple Vision Transformer on images and videos, without requiring any labeled data. This single model learns visual representations that are comparable to or better than single-modality representations on both image and video benchmarks, while using a much simpler architecture. In particular, our single pretrained model can be finetuned to achieve 86.5% on ImageNet and 75.3% on the challenging Something Something-v2 video benchmark. Furthermore, this model can be learned by dropping 90% of the image and 95% of the video patches, enabling extremely fast training.
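The high masking ratios are what make joint pretraining fast: with 16×16 patches, a 224×224 image yields 196 tokens, and only about 20 visible tokens are passed to the encoder. The snippet below is a minimal, illustrative sketch of MAE-style random masking, not the code in this repository; the tensor sizes and embedding dimension are placeholders.

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float):
    """tokens: (B, N, D) patch embeddings. Returns visible tokens and a binary mask."""
    B, N, D = tokens.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=tokens.device)            # random score per token
    ids_shuffle = noise.argsort(dim=1)                        # lowest scores are kept
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=tokens.device)             # 1 = masked, 0 = visible
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask

# 90% masking for images (196 patches); videos use 95% over their spatio-temporal tokens.
img_tokens = torch.randn(4, 196, 768)                         # placeholder patch embeddings
visible, mask = random_masking(img_tokens, mask_ratio=0.9)    # visible: (4, 19, 768)
```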
We share checkpoints for the models in the OmniMAE paper.
Pre-training checkpoints
The models in this table are jointly pretrained on SSv2 and IN1k for 1600 epochs.
Name | Model / Checkpoint |
---|---|
OmniMAE ViT-B | vit_base_mae_pretraining |
OmniMAE ViT-L | vit_large_mae_pretraining |
OmniMAE ViT-H | vit_huge_mae_pretraining |
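The sketch below shows one way to load a pretraining checkpoint. It assumes the names in the table are exposed as torch.hub entrypoints of the facebookresearch/omnivore repository, which hosts OmniMAE; if the entrypoints differ, download the checkpoint and load its state_dict manually.

```python
import torch

# Assumption: the checkpoint name from the table is a torch.hub entrypoint of the
# facebookresearch/omnivore repo; adjust the name if the hubconf differs.
model = torch.hub.load("facebookresearch/omnivore:main", model="vit_base_mae_pretraining")
model.eval()
```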
Finetuned SSv2 checkpoints
Name | SSv2 (Top-1) | Model / Checkpoint |
---|---|---|
OmniMAE ViT-B | 69.5 | vit_base_mae_finetune_ssv2 |
OmniMAE ViT-L | 74.2 | vit_large_mae_finetune_ssv2 |
OmniMAE ViT-H | 75.3 | vit_huge_mae_finetune_ssv2 |
Finetuned IN1k checkpoints
Name | IN1k (Top-1) | Model / Checkpoint |
---|---|---|
OmniMAE ViT-B | 83.0 | vit_base_mae_finetune_in1k |
OmniMAE ViT-L | 85.1 | vit_large_mae_finetune_in1k |
OmniMAE ViT-H | 86.5 | vit_huge_mae_finetune_in1k |
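Below is a hedged sketch of image classification with a finetuned IN1k checkpoint. It assumes the same torch.hub entrypoints as above and that the model consumes 5D clips of shape (B, C, T, H, W), with an image fed as a repeated single-frame clip; check the repository's inference code for the exact preprocessing.

```python
import torch

# Assumptions: the entrypoint name comes from the table above; the (B, C, T, H, W)
# input layout and frame repetition are illustrative, not the documented API.
model = torch.hub.load("facebookresearch/omnivore:main", model="vit_base_mae_finetune_in1k")
model.eval()

image = torch.randn(3, 224, 224)                         # stand-in for a preprocessed RGB image
clip = image[None, :, None, :, :].repeat(1, 1, 2, 1, 1)  # (1, 3, 2, 224, 224)
with torch.no_grad():
    logits = model(clip)
print(logits.argmax(dim=-1))                             # predicted IN1k class index
```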
If this work is helpful in your research, please consider starring ⭐ us and citing:
@article{girdhar2022omnimae,
title={OmniMAE: Single Model Masked Pretraining on Images and Videos},
author={Girdhar, Rohit and El-Nouby, Alaaeldin and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan},
journal={arXiv preprint arXiv:2206.08356},
year={2022}
}
We welcome your pull requests! Please see CONTRIBUTING and CODE_OF_CONDUCT for more information.
OmniMAE is released under the CC-BY-NC 4.0 license. See LICENSE for additional details. However, the Swin Transformer implementation is additionally licensed under the Apache 2.0 license (see NOTICE for additional details).