We present VRU-Accident, a large-scale vision-language benchmark designed to evaluate multimodal large language models (MLLMs) in high-risk traffic scenarios involving Vulnerable Road Users (VRUs) such as pedestrians and cyclists. VRU-Accident comprises 1K real-world dashcam accident videos, annotated with 6K multiple-choice question-answer pairs across six safety-critical categories (with 24K candidate options and 3.4K unique answer choices), as well as 1K dense scene descriptions. Unlike prior works, our benchmark focuses explicitly on VRU-vehicle accidents, providing rich, fine-grained annotations that capture both the spatial-temporal dynamics and the causal semantics of accidents. To assess the current landscape of MLLMs, we conduct a comprehensive evaluation of 17 state-of-the-art models on both the multiple-choice VQA and dense captioning tasks. Our findings reveal that while MLLMs perform reasonably well on visually grounded attributes, they face significant challenges in reasoning about and describing accident causes, types, and preventability.
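
To make the multiple-choice VQA evaluation concrete, below is a minimal sketch of how per-category and overall accuracy could be computed from question-answer annotations like those described above. The JSON layout and field names (`video_id`, `category`, `question`, `options`, `answer`) are illustrative assumptions, not the released VRU-Accident schema.

```python
import json
from collections import defaultdict


def evaluate_mcqa(annotation_path: str, predictions: dict) -> dict:
    """Compute per-category and overall accuracy for multiple-choice QA.

    Assumed (hypothetical) record layout in the annotation file:
        {"video_id": ..., "category": ..., "question": ...,
         "options": [...], "answer": "B"}
    `predictions` maps (video_id, question) -> predicted option letter.
    """
    with open(annotation_path) as f:
        records = json.load(f)

    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        key = (rec["video_id"], rec["question"])
        pred = predictions.get(key)
        cat = rec["category"]
        total[cat] += 1
        # Count a hit only when the predicted option letter matches the ground truth.
        if pred is not None and pred.strip().upper() == rec["answer"].strip().upper():
            correct[cat] += 1

    scores = {cat: correct[cat] / total[cat] for cat in total}
    scores["overall"] = sum(correct.values()) / max(sum(total.values()), 1)
    return scores
```

Reporting accuracy per category (rather than only overall) is what exposes the gap noted above between visually grounded attributes and harder categories such as accident cause and preventability.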
@misc{kim2025vruaccidentvisionlanguagebenchmarkvideo,
  title={VRU-Accident: A Vision-Language Benchmark for Video Question Answering and Dense Captioning for Accident Scene Understanding},
  author={Younggun Kim and Ahmed S. Abdelrahman and Mohamed Abdel-Aty},
  year={2025},
  eprint={2507.09815},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.09815},
}