Add metadata, paper link, and citation to TransPhy3D (#2)
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED
---
license: apache-2.0
task_categories:
- depth-estimation
tags:
- transparency
- video-depth-estimation
- computer-vision
---

# TransPhy3D

[**Project Page**](https://daniellli.github.io/projects/DKT/) | [**Paper**](https://huggingface.co/papers/2512.23705) | [**Code**](https://github.com/Daniellli/DKT)

TransPhy3D is a synthetic video corpus of transparent and reflective scenes, consisting of 11k sequences rendered with Blender/Cycles. It provides high-quality RGB frames along with physically based depth and normal labels. The dataset was introduced in the paper "Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation".

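If you want to fetch the data programmatically, here is a minimal download sketch, assuming the dataset is hosted on the Hugging Face Hub; the `repo_id` below is a placeholder to replace with this repository's actual id.

```python
# Minimal sketch: download the dataset files locally via huggingface_hub.
# The repo_id below is a placeholder; substitute the actual id of this
# dataset repository on the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-org/TransPhy3D",  # placeholder repo id
    repo_type="dataset",
)
print(f"Dataset downloaded to: {local_dir}")
```
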
## Introduction

This dataset aims to provide the first transparent-object-oriented video dataset with perfect depth and normal labels and diverse categories and shapes. Scenes are assembled from a curated bank of category-rich static assets and shape-rich procedural assets, paired with glass, plastic, and metal materials.

## Quick Start

The dataset repository includes a demo script to load and visualize the data:

```bash
python load_demo.py --data_path test/0826_0006_materials.000000.tar --output outputs
```

The results will be saved in the `outputs/` directory as follows:

```text
outputs/
|-- output_depth.mp4
|-- output_normal.mp4
`-- output_rgb.mp4
```

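If you prefer not to use the demo script, the sketch below shows one way to inspect a shard directly. It assumes only that each sample is a standard `.tar` archive and reuses the example shard path from the command above; the member names and file formats inside are not assumed.

```python
# Minimal sketch: list the contents of one TransPhy3D shard using only the
# standard library. We make no assumptions about member names or formats;
# we simply print what the archive contains.
import tarfile

shard_path = "test/0826_0006_materials.000000.tar"  # example shard from above

with tarfile.open(shard_path) as tar:
    for member in tar.getmembers():
        print(f"{member.name}  ({member.size} bytes)")
```
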
## Data Structure

The dataset is organized as follows:

```text
|-- parametric_train            #* the shape-rich dataset
|   |-- test
|   |   |-- 1_materials.000000.tar
|   |   `-- ...
|   |-- training
|   `-- validation
|-- test                        #* TransPhy3D-Test
`-- train                       #* the category-rich dataset
```

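As a rough sketch of how you might enumerate the shards of each split under this layout (assuming a local copy of the repository; the root path below is a placeholder):

```python
# Minimal sketch: count the .tar shards in each split of a local copy of
# the dataset. The root path is a placeholder for wherever the repository
# was downloaded; the split names follow the tree above.
from pathlib import Path

root = Path("TransPhy3D")  # placeholder local path

splits = [
    "parametric_train/test",
    "parametric_train/training",
    "parametric_train/validation",
    "test",
    "train",
]
for split in splits:
    shards = sorted((root / split).glob("*.tar"))
    print(f"{split}: {len(shards)} shards")
```
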
## Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{dkt2025,
  title   = {Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation},
  author  = {Shaocong Xu and Songlin Wei and Qizhe Wei and Zheng Geng and Hong Li and Licheng Shen and Qianpu Sun and Shu Han and Bin Ma and Bohan Li and Chongjie Ye and Yuhang Zheng and Nan Wang and Saining Zhang and Hao Zhao},
  journal = {arXiv preprint arXiv:2512.23705},
  year    = {2025}
}
```