- Sample-efficiency: Zero-, Few-, and Full-shot. Due to the high cost of data annotation, downstream datasets often provide only a small number of image-label pairs. Transferable models should reach high performance in this data-limited scenario.
- - Parameter-efficiency: Frozen Model Inference, Prompting Tuning, Linear Probing vs Full Model Fine-tuning.. A smaller number of trainable parameter in model adaptation typically means a small training cost in a new task.
+ - Parameter-efficiency: Frozen Model Inference, Prompt Tuning, Linear Probing vs. Full Model Fine-tuning. A smaller number of trainable parameters in model adaptation typically means a lower training cost on a new task (see the sketch below).
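
To make these two axes concrete, here is a minimal PyTorch sketch of few-shot linear probing: the pre-trained image encoder stays frozen (parameter-efficient) and only a linear head is trained on K labeled examples per class (sample-efficient). The `image_encoder` below is a hypothetical stand-in; in practice it would be a pre-trained backbone such as a CLIP image tower.

```
import torch
import torch.nn as nn

feat_dim, num_classes, shots_per_class = 512, 10, 16

# Stand-in for a pre-trained image encoder (hypothetical); in practice this
# would be, e.g., a CLIP/ViT image tower loaded from a checkpoint.
image_encoder = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                              nn.Linear(3 * 8 * 8, feat_dim))
for p in image_encoder.parameters():
    p.requires_grad = False            # frozen backbone: no backbone updates

linear_head = nn.Linear(feat_dim, num_classes)   # the only trainable parameters
optimizer = torch.optim.AdamW(linear_head.parameters(), lr=1e-3)

# Dummy few-shot data: K labeled images per class (sample-limited regime).
images = torch.randn(num_classes * shots_per_class, 3, 224, 224)
labels = torch.arange(num_classes).repeat_interleave(shots_per_class)

for epoch in range(10):
    with torch.no_grad():              # no gradients flow through the frozen encoder
        feats = image_encoder(images)
    loss = nn.functional.cross_entropy(linear_head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Prompt tuning follows the same pattern, except that the trainable parameters are a handful of prompt vectors prepended to the frozen encoder's input rather than a classifier head.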
@@ -98,7 +98,7 @@ One major advantage of pre-trained models is the promise that they can transfer
### :loudspeaker: News
-* [09/2023] 🔥 Discover the fascinating journey of "[Multimodal Foundation Models: From Specialists to General-Purpose Assistants](https://arxiv.org/abs/2309.10020)" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! This is based on our [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/2023/), where you could find videos and slides of the core chapters. For its preceding paper, please check out [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263)
+* [09/2023] 🔥 Discover the fascinating journey of "[Multimodal Foundation Models: From Specialists to General-Purpose Assistants](https://arxiv.org/abs/2309.10020)" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! This is based on our [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/2023/), where you can find videos and slides of the core chapters. For its preceding paper, please check out [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263)
@@ -512,7 +512,7 @@ Open-vocabulary Object Detection via Vision and Language Knowledge Distillation.
Class-agnostic Object Detection with Multi-modal Transformer.
-Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer and Ming-Hsuan Yang.
+Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, and Ming-Hsuan Yang.
ECCV 2022.
[paper] [code]
@@ -926,7 +926,7 @@ NeurIPS 2023 (Spotlight). [[paper](https://arxiv.org/abs/2306.09347)] [[code](ht
## :orange_book: Grounded Image Generation in the Wild
-:new: This is a new research topic: grounded image generation based on any open-set concept, include text and visual prompt. All the text-to-image pre-trained generation models allow open-set prompting at the image-level, and thus belong to ``Grounded Image Generation in the Wild'' by default. This paper collection focuses on more fine-grained controlability in the image generation, such as specifying new concept at the the level of bounding box, masks, edge/depth maps etc.
+:new: This is a new research topic: grounded image generation based on any open-set concept, including text and visual prompts. All pre-trained text-to-image generation models allow open-set prompting at the image level and thus belong to ``Grounded Image Generation in the Wild'' by default. This paper collection focuses on more fine-grained controllability in image generation, such as specifying new concepts at the level of bounding boxes, masks, edge/depth maps, etc.
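
As a rough illustration of what box-level grounding involves (in the spirit of GLIGEN, listed below), the sketch fuses each (phrase, bounding box) pair into a grounding token via a Fourier embedding of the box coordinates; the dimensions and the random `phrase_embed` stand-in are assumptions for illustration, not the exact implementation of any paper in this list.

```
import math
import torch
import torch.nn as nn

def fourier_embed(coords, num_freqs=8):
    # coords: (N, 4) normalized boxes (x1, y1, x2, y2) in [0, 1]
    freqs = 2.0 ** torch.arange(num_freqs) * math.pi                 # (F,)
    angles = coords.unsqueeze(-1) * freqs                            # (N, 4, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)  # (N, 8F)

text_dim, token_dim, num_freqs = 768, 768, 8
grounding_mlp = nn.Sequential(
    nn.Linear(text_dim + 8 * num_freqs, token_dim),
    nn.SiLU(),
    nn.Linear(token_dim, token_dim),
)

# Two grounded concepts: a phrase embedding (stand-in) plus its box.
phrase_embed = torch.randn(2, text_dim)          # e.g., CLIP text features in practice
boxes = torch.tensor([[0.1, 0.2, 0.5, 0.8],
                      [0.6, 0.3, 0.9, 0.7]])
grounding_tokens = grounding_mlp(
    torch.cat([phrase_embed, fourier_embed(boxes, num_freqs)], dim=-1))
print(grounding_tokens.shape)  # torch.Size([2, 768]); injected into the generator, e.g. via gated attention
```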
GLIGEN: Open-Set Grounded Text-to-Image Generation.
@@ -996,7 +996,7 @@ NeurIPS 2023 (Spotlight). [[paper](https://arxiv.org/abs/2306.09347)] [[code](ht
## :orange_book: Large Multimodal Models
-:new: This is a new research topic: build general-purpose multimodal assistants based on large language models (LLM). One prominent example is OpenAI Multimodal GPT-4. A comphrensive list paper list is compiled at [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models). Our collection maintains the a brief list for the completenes of CVinW.
+:new: This is a new research topic: building general-purpose multimodal assistants based on large language models (LLMs). One prominent example is OpenAI's multimodal GPT-4. A comprehensive paper list is compiled at [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models). Our collection maintains a brief list for the completeness of CVinW.
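
As a hedged sketch of the common architectural recipe behind many of these models (e.g., LLaVA-style systems), the snippet below projects visual features into the LLM's token-embedding space and prepends them to the text tokens; all modules are small stand-ins for illustration rather than real pre-trained components.

```
import torch
import torch.nn as nn

vis_dim, llm_dim, vocab_size = 256, 512, 32000

# Stand-in vision tower: a single patchify conv producing a 7x7 grid of features.
vision_encoder = nn.Conv2d(3, vis_dim, kernel_size=32, stride=32)
projector = nn.Linear(vis_dim, llm_dim)               # the bridge that is typically trained
token_embedding = nn.Embedding(vocab_size, llm_dim)   # stand-in for the LLM's embedding table

image = torch.randn(1, 3, 224, 224)
text_ids = torch.randint(0, vocab_size, (1, 16))      # a tokenized instruction

patch_feats = vision_encoder(image).flatten(2).transpose(1, 2)   # (1, 49, vis_dim)
image_tokens = projector(patch_feats)                            # (1, 49, llm_dim)
text_tokens = token_embedding(text_ids)                          # (1, 16, llm_dim)

llm_inputs = torch.cat([image_tokens, text_tokens], dim=1)       # fed to the LLM decoder
print(llm_inputs.shape)  # torch.Size([1, 65, 512])
```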
@@ -1665,17 +1665,17 @@ $\colorbox{powderblue}{Prompt}$ $\colorbox{tomato}{Adapter}$
# :beers: Acknowledgements
-We thank all the authors above for their great works! Related Reading List
+We thank all the authors above for their great work!
+
+Related Reading List:
- [[Awesome Detection Transformer]](https://github.com/IDEACVR/awesome-detection-transformer)
- [[Awesome Prompting Papers in Computer Vision]](https://github.com/ttengwang/Awesome_Prompting_Papers_in_Computer_Vision)
-If you find this repository useful, please consider giving a star :star: and cite the related papers :beer::
+If you find this repository useful, please consider giving a star :star: and citing the related papers :beer::
```
@article{li2022elevater,
title={ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models},
- author={Li, Chunyuan and Liu, Haotian and Li, Liunian Harold and Zhang, Pengchuan and Aneja, Jyoti and Yang, Jianwei and Jin, Ping and Hu, Houdong and Liu, Zicheng and Lee, Yong Jae and Gao, Jianfeng},
+ author={Li, Chunyuan and Liu, Haotian and Li, Liunian Harold and Zhang, Pengchuan and Aneja, Jyoti and Yang, Jianwei and Jin, Ping and Hu, Houdong and Liu, Zicheng and Lee, Yong Jae and Gao, Jianfeng},
journal={Neural Information Processing Systems},
year={2022}
}
@@ -1689,7 +1689,7 @@ If you find this repository useful, please consider giving a star :star: and c
@article{gan2022vision,
title={Vision-language pre-training: Basics, recent advances, and future trends},
- author={Gan, Zhe and Li, Linjie and Li, Chunyuan and Wang, Lijuan and Liu, Zicheng and Gao, Jianfeng},
+ author={Gan, Zhe and Li, Linjie and Li, Chunyuan and Wang, Lijuan and Liu, Zicheng and Gao, Jianfeng},
journal={Foundations and Trends{\textregistered} in Computer Graphics and Vision},
volume={14},
number={3--4},