MM-UAVBench
A comprehensive multimodal benchmark designed to evaluate the perception, cognition, and planning abilities of Multimodal Large Language Models (MLLMs) in low-altitude UAV scenarios.
Dataset Overview
MM-UAVBench focuses on assessing MLLMs' performance in UAV-specific low-altitude scenarios, with three core characteristics:
Key Features
Comprehensive Task Design
19 tasks across 3 capability dimensions (perception/cognition/planning), incorporating UAV-specific considerations, specifically multi-level cognition (object/scene/event) and planning for both aerial and ground agents.
Diverse Real-World Scenarios
- 1,549 real-world UAV video clips
- 2,873 high-resolution UAV images (avg. resolution: 1622 x 1033)
- Collected from diverse real-world low-altitude scenarios (urban/suburban/rural)
High-Quality Annotations
- 5,702 multiple-choice QA pairs in total
- 16 tasks with manual human annotations
- 3 additional tasks via rule-based transformation of manual labels
Dataset Structure
MM-UAVBench/
├── images/
│   ├── annotated/              # Annotated images (used for official benchmark evaluation)
│   └── raw/                    # Unannotated raw UAV images (open-sourced for custom annotation)
├── tasks/                      # QA annotations
├── tools/
│   ├── render_annotated.py     # Script to render labels on raw images
│   └── util.py                 # Visualization tools
└── README.md                   # Dataset usage guide
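For a quick sanity check of a local copy, the minimal sketch below walks this layout and reports per-directory file counts. It assumes the dataset has been extracted to ~/MM-UAVBench; the root path is an assumption, so adjust DATA_ROOT to your setup.

# Minimal sketch: verify the expected MM-UAVBench layout and count files.
# Assumption: the dataset root is ~/MM-UAVBench (adjust DATA_ROOT as needed).
from pathlib import Path

DATA_ROOT = Path("~/MM-UAVBench").expanduser()

for subdir in ["images/annotated", "images/raw", "tasks", "tools"]:
    path = DATA_ROOT / subdir
    if path.is_dir():
        n_files = sum(1 for p in path.rglob("*") if p.is_file())
        print(f"{subdir}: {n_files} files")
    else:
        print(f"{subdir}: missing")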
Important Notes on Image Files
- Evaluation Usage: The benchmark evaluation is conducted using the annotated images in images/annotated/.
- Raw Images for Custom Annotation: We also open-source the unannotated raw UAV images in images/raw/. You can refer to the tools/render_annotated.py script to render custom labels on these raw images.
Quick Start
Evaluate MLLMs on MM-UAVBench
MM-UAVBench is fully compatible with VLMEvalKit:
Step 1: Install Dependencies
git clone https://github.com/MM-UAVBench/MM-UAVBench.git
cd MM-UAVBench
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
Step 2: Configure Evaluation Dataset
Copy the dataset definition script into VLMEvalKit's dataset directory:
cp ~/MM-UAVBench/mmuavbench.py ~/MM-UAVBench/VLMEvalKit/vlmeval/dataset
Edit ~/MM-UAVBench/VLMEvalKit/vlmeval/dataset/__init__.py and add the following content:
from .mmuavbench import MMUAVBench_Image, MMUAVBench_Video
IMAGE_DATASET = [
# Existing datasets
MMUAVBench_Image,
]
VIDEO_DATASET = [
# Existing datasets
MMUAVBench_Video,
]
Step 3: Download Dataset
Download the dataset from Hugging Face and place it in ~/MM-UAVBench/data.
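If you prefer a scripted download, a minimal sketch using huggingface_hub is shown below. The repository ID is a placeholder, not the confirmed dataset ID; replace it with this dataset's actual ID on Hugging Face.

# Minimal sketch: download the dataset into ~/MM-UAVBench/data via huggingface_hub.
import os
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<ORG>/MM-UAVBench",                         # placeholder, replace with the real dataset ID
    repo_type="dataset",
    local_dir=os.path.expanduser("~/MM-UAVBench/data"),
)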
Set the dataset path in ~/MM-UAVBench/VLMEvalKit/.env:
LMUData="~/MM-UAVBench/data"
Step 4: Run Evaluation
Modify the model checkpoint path in ~/MM-UAVBench/VLMEvalKit/vlmeval/config.py to point to your target model.
Run the evaluation command:
python run.py \
--data MMUAVBench_Image MMUAVBench_Video \
--model Qwen3-VL-8B-Instruct \
--mode all \
--work-dir ~/MM-UAVBench/eval_results \
--verbose
Render Custom Annotations on Raw Images
To generate annotated images from raw files (using our script):
# 1. Set your MM-UAVBench root directory in render_annotated.py
# 2. Run the annotation rendering script
python tools/render_annotated.py
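If you only need a quick illustration of what label rendering involves (rather than the official script), the sketch below draws one bounding box on a raw image with Pillow. The image path, box coordinates, and label text are made-up example values, and the actual annotation format consumed by render_annotated.py may differ.

# Minimal sketch: draw a single example label on a raw image with Pillow.
# Assumptions: the file name, box coordinates, and label text are illustrative
# placeholders; see tools/render_annotated.py for the official rendering logic.
from PIL import Image, ImageDraw

img = Image.open("images/raw/example.jpg").convert("RGB")   # placeholder file name
draw = ImageDraw.Draw(img)

box = (100, 150, 400, 360)                                  # example (x1, y1, x2, y2)
draw.rectangle(box, outline="red", width=3)
draw.text((box[0], box[1] - 15), "vehicle", fill="red")     # example label text

img.save("images/annotated/example_custom.jpg")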
Citation
If you find MM-UAVBench useful in your research or applications, please consider giving the repository a star and citing:
@article{dai2025mm,
title={MM-UAVBench: How Well Do Multimodal Large Language Models See, Think, and Plan in Low-Altitude UAV Scenarios?},
author={Dai, Shiqi and Ma, Zizhi and Luo, Zhicong and Yang, Xuesong and Huang, Yibin and Zhang, Wanyue and Chen, Chi and Guo, Zonghao and Xu, Wang and Sun, Yufei and others},
journal={arXiv preprint arXiv:2512.23219},
year={2025},
url={https://arxiv.org/abs/2512.23219}
}