---
license: mit
---
# TeleEgo: Benchmarking Egocentric AI Assistants in the Wild

**Note**: This project is still under active development, and the benchmark will be continuously updated.
## Introduction
**TeleEgo** is a comprehensive **omni benchmark** designed for **multi-person, multi-scene, multi-task, and multimodal long-term memory reasoning** in egocentric video streams.
It reflects realistic personal-assistant scenarios in which continuous egocentric video is collected over hours or even days, requiring models to maintain long-term memory and to perform understanding and cross-memory reasoning over it. **Omni** here means that TeleEgo covers the full spectrum of **roles, scenes, tasks, modalities, and memory horizons**, offering all-round evaluation for egocentric AI assistants.
**TeleEgo provides:**
- **Omni-scale, diverse egocentric data** from 5 roles across 4 daily scenarios.
- **Multi-modal annotations**: video, narration, and speech transcripts.
- **Fine-grained QA benchmark**: 3 cognitive dimensions, 12 subcategories.
---
## Dataset Overview
- **Participants**: 5 (balanced gender)
- **Scenarios**:
- Work & Study
- Lifestyle & Routines
- Social Activities
- Outings & Culture
- **Recording**: 3 days per participant (~14.4 hours of footage each)
- **Modalities**:
- Egocentric video streams
- Speech & conversations
- Narration and event descriptions
---
## Download
```bash
# Extract the split archive (specify only the first volume; 7z finds the rest)
7z x archive.7z.001
# Or extract to a specific directory
7z x archive.7z.001 -o./extracted_data
```
## Dataset Structure
After extraction, the dataset structure is:
```
TeleEgo/
├── merged_P1_A.json   # QA annotations for Participant 1
├── merged_P2_A.json   # QA annotations for Participant 2
├── merged_P3_A.json   # QA annotations for Participant 3
├── merged_P4_A.json   # QA annotations for Participant 4
├── merged_P5_A.json   # QA annotations for Participant 5
├── merged_P1.mp4      # Video stream for Participant 1 (~46GB)
├── merged_P2.mp4      # Video stream for Participant 2 (~35GB)
├── merged_P3.mp4      # Video stream for Participant 3 (~58GB)
├── merged_P4.mp4      # Video stream for Participant 4 (~57GB)
├── merged_P5.mp4      # Video stream for Participant 5 (~38GB)
├── timeline_P1.json   # Temporal annotations for Participant 1
├── timeline_P2.json   # Temporal annotations for Participant 2
├── timeline_P3.json   # Temporal annotations for Participant 3
├── timeline_P4.json   # Temporal annotations for Participant 4
└── timeline_P5.json   # Temporal annotations for Participant 5
```
## Alternative Download Methods
If you have difficulty accessing Hugging Face, you can also download the dataset from:
**Baidu Netdisk (็พๅบฆ็ฝ็)**
```
Link: https://pan.baidu.com/s/1TSqfjqeaXdP2TWEpiy_3KA?pwd=7wmh
```
The Baidu Netdisk version contains the **uncompressed data files** (MP4 videos and JSON annotations) directly, so no extraction is needed.
## Benchmark Tasks
TeleEgo-QA evaluates models along **three main dimensions**:
1. **Memory**
- Short-term / Long-term / Ultra-long Memory
- Entity Tracking
- Temporal Comparison & Interval
2. **Understanding**
- Causal Understanding
- Intent Inference
- Multi-step Reasoning
- Cross-modal Understanding
3. **Cross-Memory Reasoning**
- Cross-temporal Causality
- Cross-entity Relation
- Temporal Chain Understanding
Each QA instance includes:
- **Question type**: single-choice, multi-choice, binary, or open-ended (see the scoring sketch below)
## Citation
If you find **TeleEgo** useful in your research, please cite:
```bibtex
@article{yan2025teleego,
title={TeleEgo: Benchmarking Egocentric AI Assistants in the Wild},
author={Yan, Jiaqi and Ren, Ruilong and Liu, Jingren and Xu, Shuning and Wang, Ling and Wang, Yiheng and Wang, Yun and Zhang, Long and Chen, Xiangyu and Sun, Changzhi and others},
journal={arXiv preprint arXiv:2510.23981},
year={2025}
}
```
## License
The code in this project is licensed under the **MIT License**.
Use of the dataset is restricted under a **research-only license**.
---
## Contact
If you have any questions, please feel free to reach out: chxy95@gmail.com.
---
TeleEgo is an omni benchmark and a step toward building personalized AI assistants with true long-term memory, reasoning, and decision-making in real-world wearable scenarios.