NEMO

Can Multimodal LLMs Identify Attribute-Modified Objects?

The University of Tokyo, National Institute of Informatics

Introduction

Multimodal Large Language Models (MLLMs) have made notable advances in visual understanding, yet their ability to recognize objects modified by specific attributes remains an open question. To address this, we explore MLLMs' reasoning capabilities in object recognition, ranging from commonsense to beyond-commonsense scenarios. We introduce a novel benchmark, NEMO, which comprises 900 images of origiNal fruits and their corresponding attributE-MOdified versions, along with 2,700 questions spanning open-ended, multiple-choice, and unsolvable types. We assess 26 recent open-source and commercial models using our benchmark. The findings highlight pronounced performance gaps in recognizing objects in NEMO and reveal distinct answer preferences across different models. Although stronger vision encoders improve performance, MLLMs still lag behind standalone vision encoders. Interestingly, scaling up the model size does not consistently yield better outcomes, as deeper analysis reveals that larger LLMs can weaken vision encoders during fine-tuning. These insights shed light on critical limitations in current MLLMs and suggest potential pathways toward developing more versatile and resilient multimodal models.
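The sketch below illustrates how the three question types described above might be posed to an MLLM under evaluation. The file names (nemo_questions.json, the images/ directory), record fields, and helper functions are hypothetical, not the released NEMO format or evaluation code.

import json
import base64
from pathlib import Path

def encode_image(path: Path) -> str:
    # Read an image and return it as a base64 string for a chat-style API payload.
    return base64.b64encode(path.read_bytes()).decode("utf-8")

def build_prompt(question: dict) -> str:
    # Format a NEMO-style question: multiple-choice questions list their options,
    # while open-ended and unsolvable questions are asked directly.
    if question["type"] == "multiple-choice":
        options = "\n".join(f"{k}. {v}" for k, v in question["options"].items())
        return f"{question['text']}\n{options}\nAnswer with the option letter."
    return question["text"]

# Hypothetical annotation file: one record per image/question pair.
for record in json.loads(Path("nemo_questions.json").read_text()):
    image_b64 = encode_image(Path("images") / record["image"])
    prompt = build_prompt(record)
    # Send `prompt` and `image_b64` to the MLLM under evaluation, then compare
    # its answer against record["answer"] to compute accuracy per question type.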

Highlights


Challenges in consistently identifying original and attribute-modified objects are illustrated in (a) and (b): while LLaVA-NeXT-Qwen-32B and GPT-4o correctly recognize the original "mango", both fail on its blue variant. Panels (c) and (d) show the average accuracy of LLaVA-NeXT-Qwen-32B and GPT-4o across objects in NEMO, comparing original and attribute-modified versions. Panel (e) provides a comparative overview of average scores for representative MLLMs on original (upper) and attribute-modified (lower) objects.

Citation

@misc{li2024nemo,
      title={NEMO: Can Multimodal LLMs Identify Attribute-Modified Objects?}, 
      author={Jiaxuan Li and Junwen Mo and MinhDuc Vo and Akihiro Sugimoto and Hideki Nakayama},
      year={2024},
      eprint={2411.17794},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17794}, 
}