Daniellesry nielsr HF Staff committed on
Commit 3b023eb · verified · 1 Parent(s): 71bfc48

Add metadata, paper link, and citation to TransPhy3D (#2)


- Add metadata, paper link, and citation to TransPhy3D (44bc3e0caf8b36181e5e61d5d50867d0680c1cbf)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +63 -60
README.md CHANGED
@@ -1,60 +1,63 @@
- ---
- license: apache-2.0
- ---
-
-
-
-
-
- # TransPhy3D
- A dataset used in `Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation`
-
-
- ## Introduction
- this dataset aim to provide the first transparent-object oriented video dataset with perfect depth and normal label, and diverse categories and shape. The details of this dataset please refer to our paper: [DKT](https://daniellli.github.io/projects/DKT/)
-
-
-
-
-
-
- ## Quick Start
-
-
-
- ```
- python load_demo.py --data_path test/0826_0006_materials.000000.tar --output outputs
-
- ```
- the results will be saved at outputs as following:
- ```
- outputs/
- |-- output_depth.mp4
- |-- output_normal.mp4
- `-- output_rgb.mp4
- ```
-
-
-
-
-
- ## Data Structure
-
- ```
- |-- parametric_train            #* the shape-rich dataset
- |   |-- test
- |   |   |-- 1_materials.000000.tar
- |   |   `-- ...
- |   |-- training
- |   `-- validation
- |-- test                        #* TransPhy3D-Test
- `-- train                       #* the category-rich dataset
- ```
-
-
-
- ## cite
-
- ```
- tbd
- ```
 
 
 
 
+ ---
+ license: apache-2.0
+ task_categories:
+ - depth-estimation
+ tags:
+ - transparency
+ - video-depth-estimation
+ - computer-vision
+ ---
+
+ # TransPhy3D
+
+ [**Project Page**](https://daniellli.github.io/projects/DKT/) | [**Paper**](https://huggingface.co/papers/2512.23705) | [**Code**](https://github.com/Daniellli/DKT)
+
+ TransPhy3D is a synthetic video corpus of transparent and reflective scenes, consisting of 11k sequences rendered with Blender/Cycles. It provides high-quality RGB frames along with physically based depth and normal labels. The dataset was introduced in the paper "Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation".
+
+ ## Introduction
+
+ This dataset aims to provide the first transparent-object-oriented video dataset with perfect depth and normal labels and diverse categories and shapes. Scenes are assembled from a curated bank of category-rich static assets and shape-rich procedural assets paired with glass/plastic/metal materials.
+
+ ## Quick Start
+
+ The dataset repository includes a demo script to load and visualize the data:
+
+ ```bash
+ python load_demo.py --data_path test/0826_0006_materials.000000.tar --output outputs
+ ```
+
+ The results will be saved in the `outputs/` directory as follows:
+ ```text
+ outputs/
+ |-- output_depth.mp4
+ |-- output_normal.mp4
+ `-- output_rgb.mp4
+ ```
+
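+ If you want to inspect a shard without the demo script, a minimal sketch using only Python's standard `tarfile` module is shown below. The member layout inside a shard is not documented here, so the listing is purely exploratory; the shard path is the same sample used in the command above.
+
+ ```python
+ import tarfile
+
+ # Sample shard from the test split (same file as in the demo command above).
+ shard_path = "test/0826_0006_materials.000000.tar"
+
+ with tarfile.open(shard_path) as tf:
+     # Print each member packed in the shard to see what one sequence contains.
+     for member in tf.getmembers():
+         print(member.name, member.size)
+ ```
+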
+ ## Data Structure
+
+ The dataset is organized as follows:
+
+ ```text
+ |-- parametric_train            #* the shape-rich dataset
+ |   |-- test
+ |   |   |-- 1_materials.000000.tar
+ |   |   `-- ...
+ |   |-- training
+ |   `-- validation
+ |-- test                        #* TransPhy3D-Test
+ `-- train                       #* the category-rich dataset
+ ```
+
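+ As a small illustration (not part of the official tooling), the shards of each split can be enumerated with a short walk over this layout. The local root name and the assumption that every split directory holds its `.tar` shards directly are ours for the sketch, not guarantees from the dataset.
+
+ ```python
+ from pathlib import Path
+
+ # Hypothetical local root after downloading the dataset; adjust to your path.
+ root = Path("TransPhy3D")
+
+ # Split directories mirroring the tree above.
+ splits = [
+     "parametric_train/training",
+     "parametric_train/validation",
+     "parametric_train/test",
+     "train",
+     "test",
+ ]
+
+ for split in splits:
+     # Assumes each split stores its .tar shards directly in the directory.
+     shards = sorted((root / split).glob("*.tar"))
+     print(f"{split}: {len(shards)} shards")
+ ```
+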
+ ## Citation
+
+ If you use this dataset in your research, please cite the following paper:
+
+ ```bibtex
+ @article{dkt2025,
+   title   = {Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation},
+   author  = {Shaocong Xu and Songlin Wei and Qizhe Wei and Zheng Geng and Hong Li and Licheng Shen and Qianpu Sun and Shu Han and Bin Ma and Bohan Li and Chongjie Ye and Yuhang Zheng and Nan Wang and Saining Zhang and Hao Zhao},
+   journal = {arXiv preprint arXiv:2512.23705},
+   year    = {2025}
+ }
+ ```