docs: fixed all the typos and grammatical errors in README.md #17

Open · wants to merge 1 commit into main
README.md: 34 changes (17 additions, 17 deletions)
@@ -1,13 +1,13 @@
# CVinW Readings [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings)

``[Computer Vision in the Wild (CVinW)](https://computer-vision-in-the-wild.github.io/eccv-2022/)'' is an emerging research field. This writeup provides a quick introduction of CVinW and maintains a collection of papers on the topic. If you find some missing papers or resources, please open issues or pull requests (recommended).
``[Computer Vision in the Wild (CVinW)](https://computer-vision-in-the-wild.github.io/eccv-2022/)'' is an emerging research field. This write-up provides a quick introduction to CVinW and maintains a collection of papers on the topic. If you find some missing papers or resources, please open issues or pull requests (recommended).


# Table of Contents

- [What is Computer Vision in the Wild (CVinW)?](#what-is-computer-vision-in-the-wild)
- [Goals of CVinW](#star-goals-of-cvinw)
- [Task Transfer Scenarios are Broad](#one-task-transfer-scenarios-are-broad)
- [Task Transfer Scenarios are Broad](#x-task-transfer-scenarios-are-broad)
- [Task Transfer Cost is Low](#two-task-transfer-cost-is-low )
- [Benchmarks](#cinema-benchmarks)
- [News](#loudspeaker-news)
@@ -32,21 +32,21 @@
# What is Computer Vision in the Wild?

### :star: Goals of CVinW
Developing a transferable foundation model/system that can *effortlessly* adapt to *a large range of visual tasks* in the wild. It comes with two key factors: (i) The task transfer scenarios are broad, and (ii) The task transfer cost is low. The main idea is illustrated as follows, please see the detailed description in [ELEVATER paper](https://arxiv.org/abs/2204.08790).
Developing a transferable foundation model/system that can *effortlessly* adapt to *a large range of visual tasks* in the wild. It comes with two key factors: (i) The task transfer scenarios are broad, and (ii) The task transfer cost is low. The main idea is illustrated as follows: please see the detailed description in [ELEVATER paper](https://arxiv.org/abs/2204.08790).

### :one: Task Transfer Scenarios are Broad

We illustrate and compare CVinW with other settings using a 2D chart in Figure 1, where the space is constructed with two orthogonal dimensions:
input image distribution and output concept set. The 2D chart is divided into four quadrants, based on how the model evaluation stage is different from model development stage. For any visual recognition problems at different granularity such as image classification, object detection and segmentation, the modeling setup cann be categorized into one of the four settings. We see an emerging trend on moving towards CVinW. Interested in the various pre-trained vision models that move towards CVinW? please check out Section :fire:[``Papers on Task-level Transfer with Pre-trained Models''](#fire-papers-on-task-level-transfer-with-pre-trained-models).
Input image distribution and output concept set. The 2D chart is divided into four quadrants based on how the model evaluation stage differs from the model development stage. For any visual recognition problems at different granularity, such as image classification, object detection, and segmentation, the modeling setup can be categorized into one of the four settings. We see an emerging trend in moving towards CVinW. Interested in the various pre-trained vision models that move towards CVinW? please check out Section :fire:[``Papers on Task-level Transfer with Pre-trained Models''](#fire-papers-on-task-level-transfer-with-pre-trained-models).

<table>
<tr>
<td width="50%">
<ul>
<li><b>The Close-Set Setting. </b> Both training and evaluation distributions are consistent in both dimensions, a typical setting in ML/CV textbooks.</li>
<li><b>Open-Set/Vocabulary/World Setting.</b> It allows new concepts in evaluation, while typically remains the same visual domain. Please see examples in <a href='https://arxiv.org/abs/1707.00600'>image classification</a> and <a href='https://arxiv.org/abs/2011.10678'>object detection</a>. </li>
<li><b>Domain Generalization Setting.</b> Domain shift allows new visual domain in evaluation, while typically remains the same concept pool. Please see examples such as <a href='https://arxiv.org/abs/2007.01434'>DomainBed</a> and <a href='http://ai.bu.edu/M3SDA/'>DomainNet</a>. </li>
<li style="background-color:powderblue;"><b>Computer Vision in the Wild Setting. </b> CVinW allows the flexibility in both dimensions, where any new tasks/datasets in the wild essentially fall into.</li>
<li><b>Open-Set/Vocabulary/World Setting.</b> It allows new concepts in evaluation while typically remaining in the same visual domain. Please see examples in <a href='https://arxiv.org/abs/1707.00600'>image classification</a> and <a href='https://arxiv.org/abs/2011.10678'>object detection</a>. </li>
<li><b>Domain Generalization Setting.</b> Domain shift allows new visual domain in evaluation while typically remaining in the same concept pool. Please see examples such as <a href='https://arxiv.org/abs/2007.01434'>DomainBed</a> and <a href='http://ai.bu.edu/M3SDA/'>DomainNet</a>. </li>
<li style="background-color:powderblue;"><b>Computer Vision in the Wild Setting. </b> CVinW allows flexibility in both dimensions, where any new tasks/datasets in the wild essentially fall into.</li>
</ul>

</td>
@@ -63,14 +63,14 @@ input image distribution and output concept set. The 2D chart is divided into fo

### :two: Task Transfer Cost is Low

One major advantage of pre-trained models is the promise that they can transfer to downstream tasks *effortlessly*. The model adaptation cost is considered in two orthogonal dimensions: *sample-efficiency* and *parameter-efficiency*, as illustrated in Figure 2. The bottom-left corner and top-right corner is the most inexpensive and expensive adaptation strategy, respectively. One may interpolate and make combinations in the 2D space, to get different model adaptation methods with different cost. To efficient adapt large vision models of the gradaully increaseing size, we see an emerging need on efficient model adaptation. Interested in contributing your smart efficient adaptation algorithms and see how it differs from existing papers? please check out Section :snowflake:[``Papers on Efficient Model Adaptation''](#snowflake-papers-on-efficient-model-adaptation) .
One major advantage of pre-trained models is the promise that they can transfer to downstream tasks *effortlessly*. The model adaptation cost is considered in two orthogonal dimensions: *sample-efficiency* and *parameter-efficiency*, as illustrated in Figure 2. The bottom-left corner and top-right corner is the most inexpensive and expensive adaptation strategy, respectively. One may interpolate and make combinations in the 2D space to get different model adaptation methods with different costs. To efficiently adapt large vision models of gradually increasing size, we see an emerging need for efficient model adaptation. Interested in contributing your smart, efficient adaptation algorithms and seeing how they differ from existing papers? please check out Section :snowflake:[``Papers on Efficient Model Adaptation''](#snowflake-papers-on-efficient-model-adaptation) .

<table>
<tr>
<td width="50%">
<ul>
<li><b>Sample-efficiency: Zero-, Few-, and Full-shot. </b> Due to the high cost of annotating data, it is often desired to provide a small number of labeled image-label pairs in downstream datasets. Transferable models should be able to reach high performance in this data-limited scenario..</li>
<li><b>Parameter-efficiency: Frozen Model Inference, Prompting Tuning, Linear Probing vs Full Model Fine-tuning..</b> A smaller number of trainable parameter in model adaptation typically means a small training cost in a new task. </li>
<li><b>Parameter-efficiency: Frozen Model Inference, Prompting Tuning, Linear Probing vs Full Model Fine-tuning..</b> A smaller number of trainable parameters in model adaptation typically means a small training cost in a new task. </li>
</ul>

</td>
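
To make the parameter-efficiency axis above concrete, here is a minimal PyTorch sketch (not part of the README or this PR) contrasting linear probing with full fine-tuning; it assumes a torchvision ResNet-50 as the pre-trained backbone and a hypothetical 20-class downstream task.

```python
# Minimal sketch: linear probing vs. full fine-tuning of a pre-trained backbone.
# Assumes torchvision's ResNet-50 ImageNet weights; the 20-class head is hypothetical.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 20  # hypothetical downstream concept set

# Linear probing: freeze every pre-trained weight, train only a new linear head.
probe = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in probe.parameters():
    p.requires_grad = False
probe.fc = nn.Linear(probe.fc.in_features, NUM_CLASSES)  # new head is trainable by default

# Full fine-tuning: every parameter is updated on the downstream task.
finetune = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
finetune.fc = nn.Linear(finetune.fc.in_features, NUM_CLASSES)

def num_trainable(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f"linear probe  : {num_trainable(probe):,} trainable parameters")    # ~4e4
print(f"full fine-tune: {num_trainable(finetune):,} trainable parameters")  # ~2.6e7
```

Counting trainable parameters makes the cost gap on this axis explicit; prompt tuning sits between these two extremes, and frozen-model inference trains nothing at all.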
@@ -98,7 +98,7 @@

### :loudspeaker: News

* [09/2023] 🔥 Discover the fascinating journey of "[Multimodal Foundation Models: From Specialists to General-Purpose Assistants](https://arxiv.org/abs/2309.10020)" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! This is based on our [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/2023/), where you could find videos and slides of the core chapters. For its preceding paper, please check out [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263)
* [09/2023] 🔥 Discover the fascinating journey of "[Multimodal Foundation Models: From Specialists to General-Purpose Assistants](https://arxiv.org/abs/2309.10020)" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! This is based on our [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/2023/), where you can find videos and slides of the core chapters. For its preceding paper, please check out [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263)

<img src="images/mfm_evolution.jpeg" width=60%/>

@@ -512,7 +512,7 @@ Open-vocabulary Object Detection via Vision and Language Knowledge Distillation.
<p>
<font size=3><b>Class-agnostic Object Detection with Multi-modal Transformer.</b></font>
<br>
<font size=2>Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer and Ming-Hsuan Yang.</font>
<font size=2>Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, and Ming-Hsuan Yang.</font>
<br>
<font size=2>ECCV 2022.</font>
<a href='https://arxiv.org/abs/2111.11430'>[paper]</a> <a href='https://github.com/mmaaz60/mvits_for_class_agnostic_od'>[code]</a>
@@ -926,7 +926,7 @@ NeurIPS 2023 (Spotlight). [[paper](https://arxiv.org/abs/2306.09347)] [[code](ht
</p>

## :orange_book: Grounded Image Generation in the Wild
:new: This is a new research topic: grounded image generation based on any open-set concept, include text and visual prompt. All the text-to-image pre-trained generation models allow open-set prompting at the image-level, and thus belong to ``Grounded Image Generation in the Wild'' by default. This paper collection focuses on more fine-grained controlability in the image generation, such as specifying new concept at the the level of bounding box, masks, edge/depth maps etc.
:new: This is a new research topic: grounded image generation based on any open-set concept, including text and visual prompt. All the text-to-image pre-trained generation models allow open-set prompting at the image level and thus belong to ``Grounded Image Generation in the Wild'' by default. This paper collection focuses on more fine-grained controllability in image generation, such as specifying new concepts at the level of the bounding boxes, masks, edge/depth maps, etc.

<p>
<font size=3><b>GLIGEN: Open-Set Grounded Text-to-Image Generation.</b></font>
@@ -996,7 +996,7 @@ NeurIPS 2023 (Spotlight). [[paper](https://arxiv.org/abs/2306.09347)] [[code](ht
</p>

## :orange_book: Large Multimodal Models
:new: This is a new research topic: build general-purpose multimodal assistants based on large language models (LLM). One prominent example is OpenAI Multimodal GPT-4. A comphrensive list paper list is compiled at [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models). Our collection maintains the a brief list for the completenes of CVinW.
:new: This is a new research topic: building general-purpose multimodal assistants based on large language models (LLMs). One prominent example is OpenAI Multimodal GPT-4. A comprehensive paper list is compiled at [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models). Our collection maintains a brief list for the completeness of CVinW.


<p>
@@ -1665,17 +1665,17 @@ $\colorbox{powderblue}{Prompt}$ &nbsp; $\colorbox{tomato}{Adapter}$

# :beers: Acknowledgements

We thank all the authors above for their great works! Related Reading List
We thank all the authors above for their great work! Related Reading List

- [[Awesome Detection Transformer]](https://github.com/IDEACVR/awesome-detection-transformer)
- [[Awesome Prompting Papers in Computer Vision]](https://github.com/ttengwang/Awesome_Prompting_Papers_in_Computer_Vision)

If you find this repository useful, please consider giving a star :star: and cite the related papers :beer::
If you find this repository useful, please consider giving a star :star: and citing the related papers :beer::

```
@article{li2022elevater,
title={ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models},
author={Li, Chunyuan and Liu, Haotian and Li, Liunian Harold and Zhang, Pengchuan and Aneja, Jyoti and Yang, Jianwei and Jin, Ping and Hu, Houdong and Liu, Zicheng and Lee, Yong Jae and Gao, Jianfeng},
author={Li, Chunyuan and Liu, Haotian, and Li, Liunian Harold and Zhang, Pengchuan, and Aneja, Jyoti and Yang, Jianwei and Jin, Ping and Hu, Houdong and Liu, Zicheng and Lee, Yong Jae and Gao, Jianfeng},
journal={Neural Information Processing Systems},
year={2022}
}
@@ -1689,7 +1689,7 @@ If you find this repository useful, please consider giving a star :star: and c

@article{gan2022vision,
title={Vision-language pre-training: Basics, recent advances, and future trends},
author={Gan, Zhe and Li, Linjie and Li, Chunyuan and Wang, Lijuan and Liu, Zicheng and Gao, Jianfeng},
author={Gan, Zhe and Li, Linjie and Li, Chunyuan, and Wang, Lijuan and Liu, Zicheng and Gao, Jianfeng},
journal={Foundations and Trends{\textregistered} in Computer Graphics and Vision},
volume={14},
number={3--4},