
MM-UAVBench

A comprehensive multimodal benchmark designed to evaluate the perception, cognition, and planning abilities of Multimodal Large Language Models (MLLMs) in low-altitude UAV scenarios.

📚 Dataset Overview

MM-UAVBench focuses on assessing MLLMs' performance in UAV-specific low-altitude scenarios, with three core characteristics:

Key Features

  1. Comprehensive Task Design

    19 tasks across 3 capability dimensions (perception/cognition/planning), incorporating UAV-specific considerations: multi-level cognition (object/scene/event) and planning for both aerial and ground agents.

  2. Diverse Real-World Scenarios

  • 1,549 real-world UAV video clips
  • 2,873 high-resolution UAV images (avg. resolution: 1622 x 1033)
  • Collected from diverse real-world low-altitude scenarios (urban/suburban/rural)

  3. High-Quality Annotations

  • 5,702 multiple-choice QA pairs in total
  • 16 tasks with manual human annotations
  • 3 additional tasks via rule-based transformation of manual labels
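
For illustration only, a multiple-choice QA record might look like the sketch below; the field names are hypothetical, and the authoritative schema is defined by the annotation files in tasks/.

# Hypothetical QA record (illustrative field names, not the actual schema --
# see the files under tasks/ for the real format).
qa_example = {
    "index": 42,
    "task": "object_counting",               # one of the 19 tasks
    "image": "images/annotated/000042.jpg",  # evaluation uses annotated images
    "question": "How many vehicles are visible on the road?",
    "options": {"A": "2", "B": "3", "C": "4", "D": "5"},
    "answer": "B",
}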

🎯 Dataset Structure

MM-UAVBench/
├── images/
│   ├── annotated/   # Annotated images (used for official benchmark evaluation)
│   └── raw/         # Unannotated raw UAV images (open-sourced for custom annotation)
├── tasks/           # QA annotations
├── tools/
│   ├── render_annotated.py  # Script to render labels on raw images
│   └── util.py              # Visualization tools
└── README.md        # Dataset usage guide

Important Notes on Image Files

  • Evaluation Usage: The benchmark evaluation is conducted using annotated images in images/annotated/.
  • Raw Images for Custom Annotation: We also open-source unannotated raw UAV images in images/raw/. You can refer to the tools/render_annotated.py script to render custom labels on these raw images.
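
As a rough illustration of what such rendering involves, the sketch below overlays one hypothetical bounding-box label on a raw image with Pillow; the actual label format and drawing logic are defined in tools/render_annotated.py.

# Minimal sketch with assumed label values; the real logic is in tools/render_annotated.py.
import os
from PIL import Image, ImageDraw

img = Image.open("images/raw/000001.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
box = (120, 80, 360, 240)                    # (x0, y0, x1, y1) in pixels, illustrative
draw.rectangle(box, outline="red", width=3)  # draw the bounding box
draw.text((box[0], box[1] - 14), "vehicle", fill="red")  # hypothetical class label
os.makedirs("images/custom", exist_ok=True)
img.save("images/custom/000001.jpg")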

🚀 Quick Start

Evaluate MLLMs on MM-UAVBench

MM-UAVBench is fully compatible with VLMEvalKit:

Step 1: Install Dependencies

git clone https://github.com/MM-UAVBench/MM-UAVBench.git 
cd MM-UAVBench
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .

Step 2: Configure Evaluation Dataset

Copy the dataset file to the VLMEvalKit directory:

cp ~/MM-UAVBench/mmuavbench.py ~/MM-UAVBench/VLMEvalKit/vlmeval/dataset

Edit ~/MM-UAVBench/VLMEvalKit/vlmeval/dataset/__init__.py and add the following content:

from .mmuavbench import MMUAVBench_Image, MMUAVBench_Video

IMAGE_DATASET = [
    # Existing datasets
    MMUAVBench_Image,
]

VIDEO_DATASET = [
    # Existing datasets
    MMUAVBench_Video,
]
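
After editing __init__.py, a quick import check (run from the VLMEvalKit directory) confirms the dataset classes are registered:

python -c "from vlmeval.dataset import MMUAVBench_Image, MMUAVBench_Video; print('MM-UAVBench registered')"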

Step 3: Download Dataset

Download the dataset from Hugging Face and put it in ~/MM-UAVBench/data.

Set the dataset path in ~/MM-UAVBench/VLMEvalKit/.env:

LMUData="~/MM-UAVBench/data"
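
Alternatively, the download can be scripted with huggingface_hub; this sketch assumes the dataset repo id daisq/MM-UAVBench shown on this page:

# Sketch: download the dataset files into ~/MM-UAVBench/data.
import os
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="daisq/MM-UAVBench",  # dataset repo id (assumed from this page)
    repo_type="dataset",
    local_dir=os.path.expanduser("~/MM-UAVBench/data"),
)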

Step 4: Run Evaluation

Modify the model checkpoint path in ~/MM-UAVBench/VLMEvalKit/vlmeval/config.py to your target model path.

Run the evaluation command:

python run.py \
    --data MMUAVBench_Image MMUAVBench_Video \
    --model Qwen3-VL-8B-Instruct \
    --mode all \
    --work-dir ~/MM-UAVBench/eval_results \
    --verbose

Render Custom Annotations on Raw Images

To generate annotated images from raw files (using our script):

# 1. Set your MM-UAVBench root directory in render_annotated.py
# 2. Run the annotation rendering script
python tools/render_annotated.py

📖 Citation

If you find MM-UAVBench useful in your research or applications, please consider giving it a star ⭐ and citing:

@article{dai2025mm,
  title={MM-UAVBench: How Well Do Multimodal Large Language Models See, Think, and Plan in Low-Altitude UAV Scenarios?},
  author={Dai, Shiqi and Ma, Zizhi and Luo, Zhicong and Yang, Xuesong and Huang, Yibin and Zhang, Wanyue and Chen, Chi and Guo, Zonghao and Xu, Wang and Sun, Yufei and others},
  journal={arXiv preprint arXiv:2512.23219},
  year={2025},
  url={https://arxiv.org/abs/2512.23219}
}