đź§­ EgoOrientBench

Is 'Right' Right? Enhancing Object Orientation Understanding in Multimodal Large Language Models through Egocentric Instruction Tuning

Ji Hyeok Jung1, Eun Tae Kim1, Seo Yeon Kim1, Joo Ho Lee1, Bumsoo Kim2,*, Buru Chang3,*

1Sogang University, 2Chung-Ang University, 3Korea University

{ji9759, untae0122, ksy02031, jhleecs}@sogang.ac.kr   bumsoo@cau.ac.kr   buru_chang@korea.ac.kr

Abstract

Multimodal large language models (MLLMs) act as essential interfaces, connecting humans with AI technologies in multimodal applications. However, current MLLMs struggle to accurately interpret object orientation in images because orientation annotations in their training data are inconsistent, which hinders the development of a coherent understanding of orientation.

To overcome this, we propose egocentric instruction tuning, which aligns an MLLM's orientation understanding with the user's perspective using a consistent annotation standard derived from the user's viewpoint. We first generate egocentric instruction data that leverages MLLMs' ability to recognize object details and applies prior knowledge for orientation understanding. Using this data, we perform instruction tuning to enhance the model's capability for accurate orientation interpretation.

In addition, we introduce EgoOrientBench, a benchmark that evaluates MLLMs' orientation understanding across three tasks using images collected from diverse domains. Experimental results show that our tuning strategy significantly improves orientation understanding without compromising overall MLLM performance.

Figure 1: Examples of Ego-Oriented VQA and confusing annotation samples (panels A, B, and C).

Benchmark: đź§­ EgoOrientBench

EgoOrientBench evaluates MLLMs' ability to understand object orientation across three tasks, using images collected from diverse domains (including PACS, ImageNet, and others). It assesses how well a model's orientation understanding aligns with the human egocentric perspective, which is crucial for applications such as robotics and autonomous driving where orientation errors matter.

Figure 2: VQA example
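
To make the evaluation concrete, the sketch below shows how a single multiple-choice orientation question might be scored. It is a minimal illustration only: the sample schema and the ask_mllm stub are assumptions made for this page, not the official EgoOrientBench data format or evaluation code.

# Minimal, hypothetical sketch of scoring an MLLM on one egocentric orientation
# question. The sample schema and ask_mllm() are placeholders, not the official
# EgoOrientBench loader or evaluation script.

sample = {
    "image": "examples/car.jpg",   # path to an image showing a single object
    "question": "From the viewer's perspective, which way is the car facing?",
    "choices": ["front", "back", "left", "right",
                "front-left", "front-right", "back-left", "back-right"],
    "answer": "left",              # egocentric ground-truth orientation label
}

def ask_mllm(image_path: str, prompt: str) -> str:
    """Placeholder for a call to any multimodal LLM; swap in a real client here."""
    return "left"  # dummy response so the sketch runs end to end

prompt = f"{sample['question']} Answer with one of: {', '.join(sample['choices'])}."
prediction = ask_mllm(sample["image"], prompt)
is_correct = prediction.strip().lower() == sample["answer"]
print(f"prediction={prediction!r}, correct={is_correct}")

Under this simplified view, benchmark accuracy would be the mean of is_correct over all items; see the paper for the actual task definitions and evaluation protocol.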

How to Cite

@misc{jung2025rightrightenhancingobject,
  title={Is 'Right' Right? Enhancing Object Orientation Understanding in Multimodal Large Language Models through Egocentric Instruction Tuning},
  author={Ji Hyeok Jung and Eun Tae Kim and Seo Yeon Kim and Joo Ho Lee and Bumsoo Kim and Buru Chang},
  year={2025},
  eprint={2411.16761},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.16761}
}