VRU-Accident: A Vision-Language Benchmark for Video Question Answering and Dense Captioning for Accident Scene Understanding

Younggun Kim, Ahmed S. Abdelrahman, Mohamed Abdel-Aty
University of Central Florida
*Corresponding Author

Vulnerable Road Users (VRUs), such as pedestrians and cyclists, face high risk in traffic environments, making it essential to understand the causes and preventability of accidents involving them. Against this backdrop, Multimodal Large Language Models (MLLMs) have shown promise in complex scene understanding for applications such as autonomous driving and accident analysis. However, no benchmark has been specifically designed to evaluate MLLMs in safety-critical situations involving VRUs, where fine-grained reasoning, causal inference, and prevention-focused understanding are required. We present VRU-Accident, the first vision-language benchmark focused on real-world traffic accidents involving VRUs, supporting both video question answering and dense captioning to assess MLLMs’ capabilities in accident scene understanding.

Abstract

We present VRU-Accident, a large-scale vision-language benchmark designed to evaluate multimodal large language models (MLLMs) in high-risk traffic scenarios involving Vulnerable Road Users (VRUs) such as pedestrians and cyclists. VRU-Accident comprises 1K real-world dashcam accident videos annotated with 6K multiple-choice question-answer pairs spanning six safety-critical categories (24K candidate options and 3.4K unique answer choices in total), as well as 1K dense scene descriptions. Unlike prior work, our benchmark focuses explicitly on VRU-vehicle accidents, providing rich, fine-grained annotations that capture both the spatial-temporal dynamics and the causal semantics of accidents. To assess the current landscape of MLLMs, we conduct a comprehensive evaluation of 17 state-of-the-art models on both the multiple-choice VQA task and the dense captioning task. Our findings reveal that while MLLMs perform reasonably well on visually grounded attributes, they face significant challenges in reasoning about and describing accident causes, types, and preventability.
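
For reference, the sketch below shows one way a multiple-choice VQA evaluation over a benchmark of this form could be scored in Python. The annotation schema (field names such as question_id, category, and answer_idx) and the file name vqa_annotations.json are illustrative assumptions, not the released VRU-Accident format.

import json
from collections import defaultdict

def evaluate_mcq(annotation_path, predictions):
    """Compute overall and per-category multiple-choice accuracy.

    The field names used here (question_id, category, answer_idx) are
    assumed for illustration and may differ from the released schema.
    `predictions` maps a question id to the predicted option index.
    """
    with open(annotation_path) as f:
        records = json.load(f)

    correct, total = defaultdict(int), defaultdict(int)
    for rec in records:
        category = rec["category"]  # one of the six safety-critical categories
        total[category] += 1
        if predictions.get(rec["question_id"]) == rec["answer_idx"]:
            correct[category] += 1

    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    return overall, per_category

# Example usage with dummy predictions:
# overall, per_cat = evaluate_mcq("vqa_annotations.json", {"q_0001": 2})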

VQA Statistics

Performance of MLLMs on the VQA Task

[Figure: VQA results of the evaluated MLLMs]

Performance of MLLMs on the Dense Captioning Task

[Figure: dense captioning results of the evaluated MLLMs]

Qualitative Visualization of the Best-Performing MLLM

[Figure: qualitative results of the best-performing MLLM]

Qualitative Examples of VQA and Dense Captioning Tasks on the VRU-Accident Benchmark

Citation

@misc{kim2025vruaccidentvisionlanguagebenchmarkvideo,
      title={VRU-Accident: A Vision-Language Benchmark for Video Question Answering and Dense Captioning for Accident Scene Understanding},
      author={Younggun Kim and Ahmed S. Abdelrahman and Mohamed Abdel-Aty},
      year={2025},
      eprint={2507.09815},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.09815},
}