UOUO: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models

1University of California San Diego, 2University of Illinois Urbana-Champaign
EMNLP 2024 main

*Indicates Equal Contribution

Access to UOUO data is currently limited. Please contact mw34@illinois.edu to request access.

UOUO data curation pipeline. The snowflake icon denotes frozen weights; the fire icon denotes tunable weights.

Abstract

Smaller-scale Vision-Language Models (VLMs) often claim to perform on par with larger models in general-domain visual grounding and question-answering benchmarks while offering advantages in computational efficiency and storage. However, their ability to handle rare objects, which fall into the long tail of data distributions, is less understood. To rigorously evaluate this aspect, we introduce the "Uncontextualized Uncommon Objects" (UOUO) benchmark. This benchmark focuses on systematically testing VLMs with both large and small parameter counts on rare and specialized objects. Our comprehensive analysis reveals that while smaller VLMs maintain competitive performance on common datasets, they significantly underperform on tasks involving uncommon objects. We also propose an advanced, scalable pipeline for data collection and cleaning, ensuring the UOUO benchmark provides high-quality, challenging instances. These findings highlight the need to consider long-tail distributions when assessing the true capabilities of VLMs.

Poster

BibTeX

@inproceedings{pi-etal-2024-uouo,
    title = "{UOUO}: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models",
    author = "Pi, Xinyu  and
      Wu, Mingyuan  and
      Jiang, Jize  and
      Zheng, Haozhen  and
      Tian, Beitong  and
      Zhai, ChengXiang  and
      Nahrstedt, Klara  and
      Hu, Zhiting",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.369/",
    doi = "10.18653/v1/2024.emnlp-main.369",
    pages = "6432--6441",
    abstract = "Smaller-scale Vision-Language Models (VLMs) often claim to perform on par with larger models in general-domain visual grounding and question-answering benchmarks while offering advantages in computational efficiency and storage. However, their ability to handle rare objects, which fall into the long tail of data distributions, is less understood. To rigorously evaluate this aspect, we introduce the {\textquotedblleft}Uncontextualized Uncommon Objects{\textquotedblright} (UOUO) benchmark. This benchmark focuses on systematically testing VLMs with both large and small parameter counts on rare and specialized objects. Our comprehensive analysis reveals that while smaller VLMs maintain competitive performance on common datasets, they significantly underperform on tasks involving uncommon objects. We also propose an advanced, scalable pipeline for data collection and cleaning, ensuring the UOUO benchmark provides high-quality, challenging instances. These findings highlight the need to consider long-tail distributions when assessing the true capabilities of VLMs. Code and project details for UOUO can be found at https://zoezheng126.github.io/UOUO-Website/."
}