jiahaoli2077 committed · verified · Commit bcf4645 · Parent: a4a15b8

Update README.md

Files changed (1): README.md (+91 -0)
---
license: apache-2.0
task_categories:
- robotics
- video-classification
- image-to-text
tags:
- vla-arena
- robotics
- multimodal
- imitation-learning
- vision-language-action
- lerobot
- openpi
size_categories:
- 10K<n<100K
---

# VLA-Arena Dataset (L0 - Large Variant)

## About VLA-Arena

VLA-Arena is an open-source benchmark designed for the systematic evaluation of Vision-Language-Action (VLA) models. It provides a complete and unified toolchain covering scene modeling, demonstration collection, model training, and evaluation. Featuring 150+ tasks across 11 specialized suites, VLA-Arena assesses models through hierarchical difficulty levels (L0-L2) to provide comprehensive metrics for safety, generalization, and efficiency.

**Key Evaluation Domains**
VLA-Arena focuses on four critical dimensions to ensure robotic agents can operate effectively in the real world:
* **Safety**: Evaluates the ability to operate reliably in the physical world while avoiding static/dynamic obstacles and hazards.
* **Distractor**: Assesses performance stability when facing environmental unpredictability and visual clutter.
* **Extrapolation**: Tests the ability to generalize learned knowledge to novel situations, unseen objects, and new workflows.
* **Long Horizon**: Challenges agents to compose long sequences of actions to achieve complex, multi-step goals.

**Highlights**
* **End-to-End Toolchain**: From scene construction to final evaluation metrics.
* **Systematic Difficulty Scaling**: Tasks range from basic object manipulation (L0) to complex, constraint-heavy scenarios (L2).
* **Flexible Customization**: Powered by CBDDL (Constrained Behavior Domain Definition Language) for easy task definition.

**Resources**
* **Project Homepage**: [VLA-Arena Website](https://vla-arena.github.io)
* **GitHub Repository**: [PKU-Alignment/VLA-Arena](https://github.com/PKU-Alignment/VLA-Arena)
* **Documentation**: [Read the Docs](https://github.com/PKU-Alignment/VLA-Arena/tree/main/docs)

---

## Dataset Description

This dataset is the **Level 0 (L0) - Large (L)** variant of the VLA-Arena benchmark data. It contains a balanced set of human demonstrations suitable for standard training scenarios.

* **Tasks Covered**: 60 distinct tasks at Difficulty Level 0.
* **Total Trajectories**: 3,000 (50 trajectories per task).
* **Task Suites**: Covers the Safety, Distractor, Extrapolation, and Long Horizon domains.

### Format and Compatibility
This dataset strictly follows the **RLDS** (Reinforcement Learning Datasets) format.

The data structure includes standardized features for:
* **Observation**: High-resolution RGB images (256x256) and robot state vectors.
* **Action**: 7-DoF continuous control signals (end-effector pose + gripper).
* **Language**: Natural language task instructions.

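As a rough illustration, a single step under this schema could look like the sketch below. The feature keys and the state-vector dimension are assumptions in RLDS-style naming, not the dataset's exact spec:

```python
import numpy as np

# Illustrative step layout matching the features described above.
# Key names ("observation", "state", etc.) and state_dim are placeholders.
def make_example_step(state_dim: int = 8) -> dict:
    return {
        "observation": {
            # Static third-person RGB frame, re-rendered at 256 x 256.
            "image": np.zeros((256, 256, 3), dtype=np.uint8),
            # Proprioceptive robot state vector (dimension is a placeholder).
            "state": np.zeros((state_dim,), dtype=np.float32),
        },
        # 7-DoF continuous action: 6-D end-effector pose delta + 1-D gripper.
        "action": np.zeros((7,), dtype=np.float32),
        "language_instruction": "put the red block in the bowl",
    }

step = make_example_step()
print(step["observation"]["image"].shape)  # (256, 256, 3)
print(step["action"].shape)                # (7,)
```

Consumers that already read RLDS episodes (e.g. via `tensorflow_datasets`) should be able to map these fields onto their own pipelines.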
## Dataset Construction and Preprocessing

To ensure high data quality and fair comparison, the dataset underwent several rigorous construction and quality-control steps:

**1. High-Resolution Regeneration**
The demonstrations were re-rendered at 256 x 256. Because simple upscaling of the original 128 x 128 benchmark images yielded poor visual fidelity, we instead re-executed the recorded action trajectories in the simulator to capture sharper visual observations suitable for modern VLA backbones.

**2. Camera Selection and Rotation**
* **Viewpoint**: Only the static third-person camera images are used. Wrist-camera images were discarded to ensure a fair comparison across baselines.
* **Rotation**: All third-person camera images were rotated by 180 degrees at both train and test time to correct for the visual inversion observed in the simulation environment.

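The 180-degree correction amounts to flipping both spatial axes of each frame; a minimal sketch (the actual preprocessing code may differ):

```python
import numpy as np

def rotate_180(image: np.ndarray) -> np.ndarray:
    """Rotate an (H, W, C) image by 180 degrees by flipping both spatial axes."""
    return image[::-1, ::-1]

# A tiny 2x2 RGB frame to demonstrate the flip.
frame = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
rotated = rotate_180(frame)
```

Applying the function twice returns the original frame, so the same call can be used at both train and test time.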
**3. Success Filtering**
All demonstrations were replayed in the simulation environments. Any trajectory that failed to meet the task's success criteria during replay was filtered out.

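Conceptually the filter reduces to a replay predicate; in the sketch below, `replay_in_sim` is a hypothetical stand-in for the real simulator rollout, which is not part of this card:

```python
# Hypothetical sketch of the replay-based success filter described above.
def filter_successful(trajectories, replay_in_sim):
    """Keep only trajectories whose replay meets the task's success criteria."""
    return [traj for traj in trajectories if replay_in_sim(traj)]

# Toy stand-in: a trajectory "succeeds" if its recorded flag is True.
demos = [{"id": 0, "ok": True}, {"id": 1, "ok": False}, {"id": 2, "ok": True}]
kept = filter_successful(demos, lambda t: t["ok"])
print([t["id"] for t in kept])  # [0, 2]
```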
74
+ **4. Action Filtering (Iterative Optimization)**
75
+ Standard data cleaning often involves filtering out all no-operation (no-op) actions. However, we found that completely removing no-ops significantly decreased the trajectory success rate upon playback in the VLA-Arena setup. To address this, we adopted an iterative optimization strategy:
76
+ * Instead of removing all no-ops, we sequentially attempted to preserve N no-operation actions (N = 4, 8, 12, 16), specifically around critical state transition points (e.g., gripper closure and opening).
77
+ * Only trajectories that remained successful during validation playback were retained.
78
+
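The strategy above can be sketched as follows. The no-op threshold, the gripper encoding (sign of the 7th action dimension), and the replay check are illustrative assumptions, not the benchmark's exact code:

```python
import numpy as np

def is_noop(action: np.ndarray, eps: float = 1e-4) -> bool:
    """An action is a no-op if it commands (almost) no end-effector motion."""
    return bool(np.all(np.abs(action[:6]) < eps))

def gripper_transitions(actions: np.ndarray) -> set:
    """Indices where the gripper command (last dim) changes state."""
    grip = np.sign(actions[:, 6])
    return {i for i in range(1, len(grip)) if grip[i] != grip[i - 1]}

def filter_noops(actions: np.ndarray, n_keep: int) -> np.ndarray:
    """Drop no-op actions except those within n_keep steps of a gripper transition."""
    trans = gripper_transitions(actions)
    keep = []
    for i, a in enumerate(actions):
        near_transition = any(abs(i - t) <= n_keep for t in trans)
        if not is_noop(a) or near_transition:
            keep.append(a)
    return np.array(keep)

def clean_trajectory(actions, replay_ok):
    """Try N = 4, 8, 12, 16 preserved no-ops; return the first variant that
    still replays successfully, else discard the trajectory."""
    for n in (4, 8, 12, 16):
        candidate = filter_noops(actions, n)
        if replay_ok(candidate):
            return candidate
    return None

# Demo: 10 steps, gripper closes at step 5, steps 0-1 are no-ops.
demo = np.zeros((10, 7), dtype=np.float32)
demo[:, 6] = [-1] * 5 + [1] * 5
demo[2:, :6] = 0.1
print(filter_noops(demo, n_keep=4).shape)  # (9, 7)
```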
## Evaluation & Usage

This dataset is designed to be used within the VLA-Arena benchmark ecosystem. It allows for the training of models that are subsequently tested across 11 specialized suites, with difficulty levels ranging from L0 (Basic) to L2 (Advanced).

For detailed evaluation instructions, metrics, and scripts, please refer to the VLA-Arena repository.
<!--
## Citation

If you use this dataset or the VLA-Arena benchmark in your research, please cite:

```bibtex
@misc{vla-arena2025,
  title={VLA-Arena: A Comprehensive Benchmark for Vision-Language-Action Models},
  author={Jiahao Li and Borong Zhang and Jiachen Shen and Jiaming Ji and Yaodong Yang},
  journal={GitHub repository},
  year={2025}
}
```
-->