diff --git a/README.md b/README.md
index f6034dd..05ace8d 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,13 @@
 # CVinW Readings [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings)
-``[Computer Vision in the Wild (CVinW)](https://computer-vision-in-the-wild.github.io/eccv-2022/)'' is an emerging research field. This writeup provides a quick introduction of CVinW and maintains a collection of papers on the topic. If you find some missing papers or resources, please open issues or pull requests (recommended).
+``[Computer Vision in the Wild (CVinW)](https://computer-vision-in-the-wild.github.io/eccv-2022/)'' is an emerging research field. This write-up provides a quick introduction to CVinW and maintains a collection of papers on the topic. If you find any missing papers or resources, please open issues or pull requests (recommended).
 # Table of Contents
 - [What is Computer Vision in the Wild (CVinW)?](#what-is-computer-vision-in-the-wild)
 - [Goals of CVinW](#star-goals-of-cvinw)
 - [Task Transfer Scenarios are Broad](#one-task-transfer-scenarios-are-broad)
 - [Task Transfer Cost is Low](#two-task-transfer-cost-is-low )
 - [Benchmarks](#cinema-benchmarks)
 - [News](#loudspeaker-news)
@@ -32,21 +32,21 @@ # What is Computer Vision in the Wild?
 ### :star: Goals of CVinW
-Developing a transferable foundation model/system that can *effortlessly* adapt to *a large range of visual tasks* in the wild. It comes with two key factors: (i) The task transfer scenarios are broad, and (ii) The task transfer cost is low. The main idea is illustrated as follows, please see the detailed description in [ELEVATER paper](https://arxiv.org/abs/2204.08790).
+Developing a transferable foundation model/system that can *effortlessly* adapt to *a large range of visual tasks* in the wild. It comes with two key factors: (i) the task transfer scenarios are broad, and (ii) the task transfer cost is low. The main idea is illustrated as follows; please see the detailed description in the [ELEVATER paper](https://arxiv.org/abs/2204.08790).
 ### :one: Task Transfer Scenarios are Broad
 We illustrate and compare CVinW with other settings using a 2D chart in Figure 1, where the space is constructed with two orthogonal dimensions:
-input image distribution and output concept set. The 2D chart is divided into four quadrants, based on how the model evaluation stage is different from model development stage. For any visual recognition problems at different granularity such as image classification, object detection and segmentation, the modeling setup cann be categorized into one of the four settings. We see an emerging trend on moving towards CVinW. Interested in the various pre-trained vision models that move towards CVinW? please check out Section :fire:[``Papers on Task-level Transfer with Pre-trained Models''](#fire-papers-on-task-level-transfer-with-pre-trained-models).
+input image distribution and output concept set. The 2D chart is divided into four quadrants based on how the model evaluation stage differs from the model development stage. For visual recognition problems at any granularity, such as image classification, object detection, and segmentation, the modeling setup can be categorized into one of the four settings. We see an emerging trend of moving towards CVinW. Interested in the various pre-trained vision models that move towards CVinW? Please check out Section :fire:[``Papers on Task-level Transfer with Pre-trained Models''](#fire-papers-on-task-level-transfer-with-pre-trained-models).
@@ -63,14 +63,14 @@ input image distribution and output concept set. The 2D chart is divided into fo
 ### :two: Task Transfer Cost is Low
-One major advantage of pre-trained models is the promise that they can transfer to downstream tasks *effortlessly*. The model adaptation cost is considered in two orthogonal dimensions: *sample-efficiency* and *parameter-efficiency*, as illustrated in Figure 2. The bottom-left corner and top-right corner is the most inexpensive and expensive adaptation strategy, respectively. One may interpolate and make combinations in the 2D space, to get different model adaptation methods with different cost. To efficient adapt large vision models of the gradaully increaseing size, we see an emerging need on efficient model adaptation. Interested in contributing your smart efficient adaptation algorithms and see how it differs from existing papers? please check out Section :snowflake:[``Papers on Efficient Model Adaptation''](#snowflake-papers-on-efficient-model-adaptation) .
+One major advantage of pre-trained models is the promise that they can transfer to downstream tasks *effortlessly*. The model adaptation cost is considered along two orthogonal dimensions: *sample-efficiency* and *parameter-efficiency*, as illustrated in Figure 2. The bottom-left and top-right corners are the least and most expensive adaptation strategies, respectively. One may interpolate and combine methods in this 2D space to obtain model adaptation strategies with different costs. To efficiently adapt large vision models of gradually increasing size, we see an emerging need for efficient model adaptation. Interested in contributing your smart, efficient adaptation algorithms and seeing how they differ from existing papers? Please check out Section :snowflake:[``Papers on Efficient Model Adaptation''](#snowflake-papers-on-efficient-model-adaptation).
 • The Close-Set Setting. Both training and evaluation distributions are consistent in both dimensions, a typical setting in ML/CV textbooks.
-• Open-Set/Vocabulary/World Setting. It allows new concepts in evaluation, while typically remains the same visual domain. Please see examples in image classification and object detection.
-• Domain Generalization Setting. Domain shift allows new visual domain in evaluation, while typically remains the same concept pool. Please see examples such as DomainBed and DomainNet.
-• Computer Vision in the Wild Setting. CVinW allows the flexibility in both dimensions, where any new tasks/datasets in the wild essentially fall into.
+• Open-Set/Vocabulary/World Setting. It allows new concepts in evaluation, while typically remaining within the same visual domain. Please see examples in image classification and object detection.
+• Domain Generalization Setting. Domain shift allows a new visual domain in evaluation, while typically keeping the same concept pool. Please see examples such as DomainBed and DomainNet.
+• Computer Vision in the Wild Setting. CVinW allows flexibility in both dimensions; any new tasks/datasets in the wild essentially fall into this setting.
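To make the ``Open-Set/Vocabulary/World'' quadrant concrete, below is a minimal zero-shot classification sketch using the CLIP checkpoints exposed through Hugging Face `transformers`; the checkpoint name, image path, and label prompts are illustrative placeholders rather than anything prescribed by this repository. The point is that the label set is an arbitrary list of text prompts chosen at evaluation time, which is exactly the flexibility the open-vocabulary setting assumes.

```python
# Minimal zero-shot (open-vocabulary) image classification sketch with CLIP.
# Checkpoint, image path, and label prompts are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any concept names can be supplied at evaluation time -- no retraining needed.
labels = ["a photo of a golden retriever", "a photo of a tabby cat", "a photo of a red panda"]
image = Image.open("example.jpg")  # placeholder image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)[0]
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```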
@@ -98,7 +98,7 @@ One major advantage of pre-trained models is the promise that they can transfer
 ### :loudspeaker: News
-* [09/2023] 🔥 Discover the fascinating journey of "[Multimodal Foundation Models: From Specialists to General-Purpose Assistants](https://arxiv.org/abs/2309.10020)" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! This is based on our [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/2023/), where you could find videos and slides of the core chapters. For its preceding paper, please check out [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263)
+* [09/2023] 🔥 Discover the fascinating journey of "[Multimodal Foundation Models: From Specialists to General-Purpose Assistants](https://arxiv.org/abs/2309.10020)" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! This is based on our [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/2023/), where you can find videos and slides of the core chapters. For its preceding paper, please check out [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263).
@@ -512,7 +512,7 @@ Open-vocabulary Object Detection via Vision and Language Knowledge Distillation.

Class-agnostic Object Detection with Multi-modal Transformer.
-Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer and Ming-Hsuan Yang.
+Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, and Ming-Hsuan Yang.
 ECCV 2022. [paper] [code]
@@ -926,7 +926,7 @@ NeurIPS 2023 (Spotlight). [[paper](https://arxiv.org/abs/2306.09347)] [[code](ht

 ## :orange_book: Grounded Image Generation in the Wild
-:new: This is a new research topic: grounded image generation based on any open-set concept, include text and visual prompt. All the text-to-image pre-trained generation models allow open-set prompting at the image-level, and thus belong to ``Grounded Image Generation in the Wild'' by default. This paper collection focuses on more fine-grained controlability in the image generation, such as specifying new concept at the the level of bounding box, masks, edge/depth maps etc.
+:new: This is a new research topic: grounded image generation based on any open-set concept, including text and visual prompts. All pre-trained text-to-image generation models allow open-set prompting at the image level and thus belong to ``Grounded Image Generation in the Wild'' by default. This paper collection focuses on more fine-grained controllability in image generation, such as specifying new concepts at the level of bounding boxes, masks, edge/depth maps, etc.
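For readers who want to try box-level grounding, here is a rough sketch built on the GLIGEN pipeline shipped in Hugging Face `diffusers` (not the authors' original training code). The checkpoint name follows the `diffusers` documentation, the prompt, phrases, and normalized boxes are made-up placeholders, and exact argument names may differ across `diffusers` versions.

```python
# Sketch of box-grounded text-to-image generation via the GLIGEN pipeline in
# Hugging Face diffusers; checkpoint and arguments follow the diffusers docs
# and may vary by version. Prompt, phrases, and boxes are placeholders.
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

# Each phrase is paired with a normalized [x0, y0, x1, y1] box, so new concepts
# are specified at the bounding-box level rather than only in the global prompt.
image = pipe(
    prompt="a birthday cake and a teapot on a wooden table",
    gligen_phrases=["a birthday cake", "a teapot"],
    gligen_boxes=[[0.10, 0.40, 0.45, 0.90], [0.55, 0.45, 0.90, 0.85]],
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
image.save("gligen_box_grounded.png")
```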

 GLIGEN: Open-Set Grounded Text-to-Image Generation.
@@ -996,7 +996,7 @@ NeurIPS 2023 (Spotlight). [[paper](https://arxiv.org/abs/2306.09347)] [[code](ht

 ## :orange_book: Large Multimodal Models
-:new: This is a new research topic: build general-purpose multimodal assistants based on large language models (LLM). One prominent example is OpenAI Multimodal GPT-4. A comphrensive list paper list is compiled at [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models). Our collection maintains the a brief list for the completenes of CVinW.
+:new: This is a new research topic: building general-purpose multimodal assistants based on large language models (LLMs). One prominent example is OpenAI's multimodal GPT-4. A comprehensive paper list is compiled at [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models); our collection maintains a brief list for the completeness of CVinW.
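As one concrete example of such an assistant, the sketch below queries an open-source LLaVA-style model through Hugging Face `transformers`; the checkpoint name, image path, and prompt template are illustrative and follow the `llava-hf` model cards, not anything defined in this repository.

```python
# Sketch of querying an open-source multimodal assistant (LLaVA-1.5) through
# Hugging Face transformers; checkpoint, image path, and prompt template are
# illustrative and follow the llava-hf model card.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

prompt = "USER: <image>\nWhat objects are on the table? ASSISTANT:"
image = Image.open("example.jpg")  # placeholder image

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```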

@@ -1665,17 +1665,17 @@ $\colorbox{powderblue}{Prompt}$   $\colorbox{tomato}{Adapter}$
 # :beers: Acknowledgements
-We thank all the authors above for their great works! Related Reading List
+We thank all the authors above for their great work! Related Reading List
 - [[Awesome Detection Transformer]](https://github.com/IDEACVR/awesome-detection-transformer)
 - [[Awesome Prompting Papers in Computer Vision]](https://github.com/ttengwang/Awesome_Prompting_Papers_in_Computer_Vision)
-If you find this repository useful, please consider giving a star :star: and cite the related papers :beer::
+If you find this repository useful, please consider giving a star :star: and citing the related papers :beer::
 ```
 @article{li2022elevater,
   title={ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models},
   author={Li, Chunyuan and Liu, Haotian and Li, Liunian Harold and Zhang, Pengchuan and Aneja, Jyoti and Yang, Jianwei and Jin, Ping and Hu, Houdong and Liu, Zicheng and Lee, Yong Jae and Gao, Jianfeng},
   journal={Neural Information Processing Systems},
   year={2022}
 }
@@ -1689,7 +1689,7 @@ If you find this repository useful, please consider giving a star :star: and c
 @article{gan2022vision,
   title={Vision-language pre-training: Basics, recent advances, and future trends},
   author={Gan, Zhe and Li, Linjie and Li, Chunyuan and Wang, Lijuan and Liu, Zicheng and Gao, Jianfeng},
   journal={Foundations and Trends{\textregistered} in Computer Graphics and Vision},
   volume={14},
   number={3--4},

 • Sample-efficiency: Zero-, Few-, and Full-shot. Due to the high cost of annotating data, it is often desired to provide a small number of labeled image-label pairs in downstream datasets. Transferable models should be able to reach high performance in this data-limited scenario..
-• Parameter-efficiency: Frozen Model Inference, Prompting Tuning, Linear Probing vs Full Model Fine-tuning.. A smaller number of trainable parameter in model adaptation typically means a small training cost in a new task.
+• Parameter-efficiency: Frozen Model Inference, Prompt Tuning, Linear Probing vs. Full Model Fine-tuning. A smaller number of trainable parameters in model adaptation typically means a lower training cost on a new task.
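To ground the *parameter-efficiency* axis above, here is a minimal PyTorch sketch of the cheap corner of that 2D cost space: a frozen pre-trained backbone with only a new linear head trained on a handful of labeled examples (few-shot linear probing). The torchvision ResNet-50 backbone, the 20-class head, and the random tensors standing in for a small labeled set are placeholders, not anything prescribed by ELEVATER; full fine-tuning would simply pass `model.parameters()` to the optimizer instead.

```python
# Minimal sketch of parameter-efficient adaptation via linear probing:
# freeze a pre-trained backbone and train only a new linear head.
# Backbone, class count, and the random "few-shot" batch are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

num_classes = 20  # hypothetical downstream task
model = resnet50(weights=ResNet50_Weights.DEFAULT)

for p in model.parameters():              # freeze every pre-trained weight
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # only the head trains

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # tens of K vs. tens of M

# Only the head reaches the optimizer; full fine-tuning would pass
# model.parameters() here, trading much higher cost for (often) higher accuracy.
optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)               # stand-in few-shot batch
targets = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```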