Monad-Cube/awesome-visual-grounding

2D Visual Grounding

Semi-Supervised Learning

CVPR 2023 🎐RefTeacher: A Strong Baseline for Semi-Supervised Referring Expression Comprehension🎐
ACM MM 2023 🎐Semi-Supervised Panoptic Narrative Grounding🎐
Arxiv 2024 🎐ACTRESS: Active Retraining for Semi-supervised Visual Grounding🎐

Weakly-Supervised Learning

CVPR 2023 🎐RefCLIP: A Universal Teacher for Weakly Supervised Referring Expression Comprehension🎐

ICCV 2021 🎐MDETR: Modulated Detection for End-to-End Multi-Modal Understanding🎐

ECCV 2024 🎐Exploring Phrase-Level Grounding with Text-to-Image Diffusion Model🎐

ECCV 2024 🎐SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding🎐

3D Visual Grounding

CVPR 2020 🎐ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language🎐
ECCV 2020 Oral 🎐ReferIt3D: Neural Listeners for Fine-Grained 3D Object Identification in Real-World Scenes🎐

ICCV 2023 🎐3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment🎐
ECCV 2024 🎐Unifying 3D Vision-Language Understanding via Promptable Queries🎐

CVPR 2023 🎐NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations🎐
CVPR 2024 🎐Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding🎐
CVPR 2024 🎐Naturally Supervised 3D Visual Grounding with Language-Regularized Concept Learners🎐
CVPR 2024 🎐LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning🎐
CVPR 2024 🎐SIG3D: Situational Awareness Matters in 3D Vision Language Reasoning🎐 [arxiv] [github]
ECCV 2024 🎐Empowering 3D Visual Grounding with Reasoning Capabilities🎐
Arxiv 2024.09 🎐LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness🎐
CoRL 2024 🎐VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding🎐 [arxiv] [github]
Arxiv 2024.10 🎐Robin3D: Improving 3D Large Language Model via Robust Instruction Tuning🎐 [arxiv] [github]

AAAI 2023 Oral 🎐Language-Assisted 3D Feature Learning for Semantic Scene Understanding🎐
CVPR 2024 🎐G3-LQ: Marrying Hyperbolic Alignment with Explicit Semantic-Geometric Modeling for 3D Visual Grounding🎐

ICCV 2021 🎐Free-form Description Guided 3D Visual Graph Network for Object Grounding in Point Cloud🎐

ACM MM 2021 🎐TransRefer3D: Entity-and-Relation Aware Transformer for Fine-Grained 3D Visual Grounding🎐
ICCV 2021 🎐3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds🎐
CoRL 2021 🎐LanguageRefer: Spatial-Language Model for 3D Visual Grounding🎐
NeurIPS 2023 🎐Exploiting Contextual Objects and Relations for 3D Visual Grounding🎐
CVPR 2024 🎐MiKASA: Multi-Key-Anchor & Scene-Aware Transformer for 3D Visual Grounding🎐
Arxiv 2024.08 🎐PD-TPE: Parallel Decoder with Text-guided Position Encoding for 3D Visual Grounding🎐

CVPR 2022 Oral 🎐3D-SPS: Single-Stage 3D Visual Grounding via Referred Point Progressive Selection🎐
ECCV 2022 🎐BUTD-DETR: Bottom Up Top Down Detection Transformers for Language Grounding in Images and Point Clouds🎐
EMNLP 2023 🎐3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding🎐
CVPR 2023 🎐EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding🎐

ICCV 2021 🎐SAT: 2D Semantics Assisted Training for 3D Visual Grounding🎐
CVPR 2022 🎐X-Trans2Cap: Cross-Modal Knowledge Transfer using Transformer for 3D Dense Captioning🎐
NeurIPS 2022 🎐Look Around and Refer: 2D Synthetic Semantics Knowledge Distillation for 3D Visual Grounding🎐
CVPR 2024 🎐Towards CLIP-driven Language-free 3D Visual Grounding via 2D-3D Relational Enhancement and Consistency🎐
AAAI 2024 🎐Mono3DVG: 3D Visual Grounding in Monocular Images🎐 [arxiv] [github]

CVPR 2022 🎐Multi-View Transformer for 3D Visual Grounding🎐
ICCV 2023 🎐ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding with GPT and Prototype Guidance🎐

AAAI 2021 🎐Text-Guided Graph Neural Networks for Referring 3D Instance Segmentation🎐
ICCV 2021 🎐InstanceRefer: Cooperative Holistic Understanding for Visual Grounding on Point Clouds through Instance Multi-level Contextual Referring🎐
ICCV 2023 Workshop 🎐Three Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding🎐
ECCV 2024 🎐Multi-branch Collaborative Learning Network for 3D Visual Grounding🎐
ACM MM 2024 🎐RefMask3D: Language-Guided Transformer for 3D Referring Segmentation🎐
ACM MM 2024 🎐3D-GRES: Generalized 3D Referring Expression Segmentation🎐

NeurIPS 2022 🎐Language Conditioned Spatial Relation Reasoning for 3D Object Grounding🎐
CVPR 2024 🎐Multi-Attribute Interactions Matter for 3D Visual Grounding🎐

ICCV 2023 🎐WS-3DVG: Distilling Coarse-to-Fine Semantic Matching Knowledge for Weakly Supervised 3D Visual Grounding🎐

ACM MM 2023 🎐Dense Object Grounding in 3D Scenes🎐

IJCAI 2024 🎐3D Vision and Language Pretraining with Large-Scale Synthetic Data🎐

Arxiv 2024 🎐Task-oriented Sequential Grounding in 3D Scenes🎐

Arxiv 2022.07 🎐Toward Fine-Grained 3D Visual Grounding through Referring Textual Phrases🎐
